Fix Your AI Image Quality

What Changed in Google's Nano Banana and How to Adapt


From Our Sponsor:

AI Agents Are Reading Your Docs. Are You Ready?

Last month, 48% of visitors to documentation sites across Mintlify were AI agents—not humans.

Claude Code, Cursor, and other coding agents are becoming the actual customers reading your docs. And they read everything.

This changes what good documentation means. Humans skim and forgive gaps. Agents methodically check every endpoint, read every guide, and compare you against alternatives with zero fatigue.

Your docs aren't just helping users anymore—they're your product's first interview with the machines deciding whether to recommend you.

That means:
→ Clear schema markup so agents can parse your content
→ Real benchmarks, not marketing fluff
→ Open endpoints agents can actually test
→ Honest comparisons that emphasize strengths without hype

In the agentic world, documentation becomes 10x more important. Companies that make their products machine-understandable will win distribution through AI.

Guide: Fix Your AI Image Quality

In February 2026, Google quietly swapped the default image generation model inside the Gemini app from Nano Banana Pro to a newer, lighter model called Nano Banana 2. No announcement, no popup — just a switch. If your AI-generated product images started looking a little different and you couldn't pinpoint why, that's the reason.

Nano Banana 2 runs on Gemini 3.1 Flash — a faster, cheaper architecture designed for high-volume use. It inherited many of Pro's strengths (better text handling, sharper details, stronger world knowledge), but it's still a Flash-tier model. It generates images in roughly 10–15 seconds at about half the API cost of Pro, making it excellent for rapid iteration and concept testing.

For ecommerce sellers, though, there are a few pitfalls worth understanding before you rely on it for production work.

It tends toward an "AI look." NB2 outputs can skew slightly plastic — skin that's a touch too smooth, materials that feel synthetic, the occasional extra finger. For product-on-white listings or social ads, most customers won't notice. For hero images or brand photography where realism matters, the gap between NB2 and Pro is visible.

Text rendering is improved but not bulletproof. NB2 handles labels, signs, and packaging copy better than the original Nano Banana, but Pro still delivers more precise typography. If your workflow involves readable text on packaging mockups or infographics, NB2 may produce garbled or slightly off results where Pro wouldn't.

Content filters are stricter. Google tightened the safety rails on NB2 around recognizable public figures, character modifications, anything remotely suggestive, and financial data in images. Standard product photography is unaffected, but lifestyle storytelling or creative brand content may hit walls that Pro would allow through.

The app may silently swap your model mid-session. Google's documentation confirms daily image quotas exist for every plan tier. When your Pro quota runs out, subsequent requests get routed to a lower-tier model without notification. If quality suddenly drops during a session, you may no longer be getting the model you think you're getting.

Editing within a session can degrade resolution. Iterating on the same image repeatedly can reduce output fidelity independent of which model is active. Your third edit may look worse than your first — not because of the prompt, but because of the edit pipeline itself.

Quick Reference: Two Models, Two Jobs

| | Nano Banana 2 | Nano Banana Pro |
| --- | --- | --- |
| Architecture | Gemini 3.1 Flash (lightweight) | Gemini 3.0 Pro (heavyweight) |
| Speed | ~10–15 seconds | ~30 seconds |
| API cost per image | ~$0.03 | ~$0.06 |
| Output quality | Strong (~95% of Pro) | Best available |
| Text on images | Improved over original | Most precise |
| Content filters | Stricter | More permissive |
| Consistency across edits | Weaker | Slightly stronger |
| Default in Gemini app? | Yes (since Feb 2026) | No (opt-in only) |
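The cost gap in the table compounds quickly at volume. A back-of-envelope calculation using the approximate per-image prices above (actual API pricing may differ):

```python
# Rough batch-cost comparison using the approximate per-image prices
# from the table above (~$0.03 for NB2, ~$0.06 for Pro).
NB2_COST = 0.03  # assumed per-image price for Nano Banana 2
PRO_COST = 0.06  # assumed per-image price for Nano Banana Pro

def batch_cost(n_images: int, cost_per_image: float) -> float:
    """Total cost for generating n_images at a flat per-image price."""
    return round(n_images * cost_per_image, 2)

# Exploring 200 concepts on NB2, then rendering 10 finals on Pro:
explore = batch_cost(200, NB2_COST)  # 6.0
finals = batch_cost(10, PRO_COST)    # 0.6
print(explore + finals)              # 6.6, versus 12.6 if all 210 ran on Pro
```

The mixed workflow costs roughly half of running everything on Pro, which is the economic argument behind the two-model routing described later in this guide.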

Nano Banana Pro still exists and is not being deprecated. Google's official deprecation page shows no sunset date. This appears to be a cost play — running Pro on heavy Gemini 3.0 architecture for hundreds of millions of free users isn't sustainable, so Google moved the masses to Flash and reserved Pro for paying customers.

What You'll Need

For basic use: A Google account with access to the Gemini app. A Google AI Pro or Ultra subscription if you want Pro model access.

For direct model control: A Google AI Studio account (free, gives explicit model selection) or a Gemini API key for programmatic access.

For best results: Product photos on neutral backgrounds (multiple angles), logo files (SVG or high-res PNG), brand color hex codes and font specifications, and reference images for materials, textures, or styling you want matched.

Step 1: Know Which Model You're Actually Getting

Before diagnosing any quality issue, confirm which model generated the image. In the Gemini app, check whether Fast, Thinking, or Pro mode is active. Three things can quietly change what you're getting without warning:

The default swap. The Gemini app now defaults to NB2. If you were accustomed to Pro's output and didn't notice the change, your images may feel flatter or more synthetic. That's not a bug — it's a different model.

Silent fallback on limits. When your Pro quota runs out, the app routes requests to a lower tier without telling you. If output quality drops mid-session, suspect this first.

Edit-path degradation. Repeated edits within a session can reduce resolution and fidelity regardless of the active model. If your third iteration looks worse than the first, start a fresh session.

For guaranteed clarity on which model is running, use AI Studio where you select the model explicitly.

Step 2: Choose the Right Access Path

| Access Path | What You Get | Best For |
| --- | --- | --- |
| Gemini App | Defaults to NB2. Paid users can "Redo with Pro" from the three-dot menu (⋮). | Quick ideation, conversational editing, casual use |
| Google AI Studio | Direct model picker: select gemini-3.1-flash-image-preview (NB2) or gemini-3-pro-image-preview (Pro). | Controlled production work where you need to know exactly which model is running |
| Gemini API | Explicit model ID in every request. No ambiguity, no fallback. | Automated pipelines, batch generation, app integrations |
| Adobe Firefly / Photoshop | Both NB2 and Pro available as partner models. | Design-native workflows inside Adobe's toolset |

If quality consistency matters to your work, move final production renders to AI Studio. The Gemini app is great for exploration, but AI Studio removes the guesswork.
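With the API, the model ID is pinned in the request itself, so there is nothing to fall back to. A minimal sketch of how such a request might be constructed (the endpoint shape assumes the public generativelanguage v1beta generateContent API; the model IDs are the preview names from the table above, and YOUR_API_KEY is a placeholder):

```python
import json

API_BASE = "https://generativelanguage.googleapis.com/v1beta"

def build_image_request(model_id: str, prompt: str, api_key: str) -> tuple[str, str]:
    """Build the URL and JSON body for a generateContent call.

    The model ID is baked into the URL, so there is no silent fallback:
    you get exactly the model you named, or an error.
    """
    url = f"{API_BASE}/models/{model_id}:generateContent?key={api_key}"
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]})
    return url, body

# Pin the Pro image model explicitly:
url, body = build_image_request(
    "gemini-3-pro-image-preview",
    "Studio packshot of a ceramic mug on white, soft shadow",
    "YOUR_API_KEY",  # placeholder, not a real key
)
```

Sending the request (with requests, curl, or a Google SDK) is left out here; the point is that the model name travels with every call.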

Step 3: Structure Your Workflow Around Both Models

The strongest approach isn't choosing one model — it's routing each task to the model that handles it best.

Exploration phase → Nano Banana 2. Use NB2 for all early-stage work: brainstorming compositions, testing scene concepts, iterating on angles, running through lifestyle variations. At 10 seconds per generation and half the cost, you stay in creative flow without burning premium credits.

Production phase → Nano Banana Pro. Once you have a winning concept, move to Pro for the final render. Pro handles photorealistic skin and material textures, precise text rendering on packaging and labels, and tighter adherence to complex prompt instructions noticeably better.

| Task | Model | Why |
| --- | --- | --- |
| Rough concept exploration | NB2 | Speed and cost efficiency for throwaway iterations |
| Lifestyle scene testing | NB2 | Good enough quality; volume matters more here |
| A/B test creative variants | NB2 | Need many versions fast; customers won't see the gap |
| Hero images for storefronts | Pro | Realism and polish directly affect sales here |
| Packaging with readable text | Pro | Text accuracy is Pro's clearest advantage |
| Infographics and callouts | Pro | Typography precision prevents embarrassing errors |
| Brand photography (final) | Pro | The shots that represent your brand need the best model |
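In an automated pipeline, the routing table above reduces to a small lookup. A sketch (the task keys are our own labels, and the model IDs are the preview names mentioned earlier in this guide):

```python
# Route each task type to the model the table above recommends.
# Task keys are illustrative labels; model IDs are the preview names
# from the access-path table (NB2 = Flash tier, Pro = heavyweight).
NB2 = "gemini-3.1-flash-image-preview"
PRO = "gemini-3-pro-image-preview"

ROUTING = {
    "concept_exploration": NB2,
    "lifestyle_testing":   NB2,
    "ab_variants":         NB2,
    "hero_image":          PRO,
    "packaging_text":      PRO,
    "infographic":         PRO,
    "brand_photography":   PRO,
}

def pick_model(task: str) -> str:
    """Return the recommended model ID for a task.

    Unlisted tasks default to NB2: cheap exploration is the safe
    starting point, and you can always redo the winner on Pro.
    """
    return ROUTING.get(task, NB2)
```

Defaulting unknown tasks to the cheaper model mirrors the explore-first workflow: spend premium credits only once a concept has earned them.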

Step 4: Improve Output Quality with Iterative Prompting

Both models produce significantly better results when you break complex requests into focused passes rather than cramming everything into one prompt.

Pass 1 — Composition and placement. Lock the overall scene layout, product position, and camera angle. Don't worry about details yet.

Pass 2 — Subject fidelity. With composition established, focus on preserving product identity, surface materials, and color accuracy. Upload reference images (packshot, material swatches, color palette) to anchor the model.

Pass 3 — Typography and detail work. Add packaging text, logos, labels, and fine-print last. This is where you switch to Pro if you haven't already, since text accuracy is its strongest advantage over NB2.

Use reference images aggressively. Both models accept up to 14 reference images per prompt. For ecommerce work, upload your product packshot, detail or label close-up, logo file, brand color reference, material or texture swatch, and typography specimen. The more visual anchors you provide, the less the model defaults to generic, synthetic-looking output. This single technique produces the largest quality improvement regardless of which model you're using.
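The three-pass flow can be organized as a simple plan before any prompts are sent. This is purely an organizational sketch (the pass names, fields, and example file names are ours, not a Gemini feature):

```python
from dataclasses import dataclass, field

@dataclass
class Pass:
    name: str
    prompt: str
    # Reference image paths; both models accept up to 14 per prompt.
    references: list[str] = field(default_factory=list)

def build_passes(product: str) -> list[Pass]:
    """Three focused passes instead of one overloaded prompt.

    File names below (packshot.png, etc.) are hypothetical examples.
    """
    return [
        Pass("composition",
             f"Place the {product} on a marble counter, 45-degree angle, soft daylight"),
        Pass("fidelity",
             f"Match the {product}'s exact colors and materials to the references",
             references=["packshot.png", "material_swatch.png"]),
        Pass("typography",
             "Render the label text exactly as in the reference, crisp and legible",
             references=["label_closeup.png", "logo.svg"]),
    ]
```

Each pass feeds its best output into the next as a reference, so later passes refine details without re-litigating the composition.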

Step 5: Diagnose and Fix Common Quality Drops

When output quality suddenly changes, work through this before reworking your prompts:

| Symptom | Likely Cause | Fix |
| --- | --- | --- |
| Images suddenly look flatter or more synthetic | App fell back to a lower tier after hitting daily limit | Switch to AI Studio for explicit model control, or wait for limits to reset |
| Quality degrades after several edits | Edit-path resolution loss | Start a fresh session using the best output so far as a new reference image |
| Text on packaging is garbled or illegible | Using NB2 for typography-heavy work | Switch to Pro for the text rendering pass |
| Output looks nothing like references | Too many competing instructions in one prompt | Split into multiple passes (Step 4) |
| Skin or materials look artificially smooth | NB2's Flash architecture tendency | Use Pro for final render, or add explicit texture instructions |
| Content filter blocks the generation | NB2's stricter safety rails | Try Pro (more permissive), or rephrase the prompt |
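For teams that triage a lot of generations, the table condenses into a first-pass checklist function. The symptom keys are our own shorthand and the advice paraphrases the table; a sketch only:

```python
# Symptom keys are illustrative shorthand; the fixes paraphrase the
# troubleshooting table in this guide.
FIXES = {
    "suddenly_flatter":   "Likely quota fallback: switch to AI Studio for explicit model control, or wait for limits to reset.",
    "worse_after_edits":  "Edit-path resolution loss: start a fresh session, reusing your best output as a reference image.",
    "garbled_text":       "NB2 typography limits: rerun the text pass on Pro.",
    "ignores_references": "Overloaded prompt: split into composition, fidelity, and typography passes.",
    "plastic_look":       "Flash-tier tendency: render finals on Pro, or add explicit texture instructions.",
    "filter_block":       "NB2's stricter rails: try Pro, or rephrase the prompt.",
}

def diagnose(symptom: str) -> str:
    """Map a symptom key to the table's suggested fix."""
    return FIXES.get(
        symptom,
        "Unknown symptom: confirm which model is actually active before rewriting prompts.",
    )
```

The fallback message encodes the guide's core advice: verify the model first, then touch the prompt.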

Do You Love The AI For Ecommerce Sellers Newsletter?

You can help us!

Spread the word to your colleagues or friends who you think would benefit from our weekly insights 🙂 Simply forward this issue.

In addition, we are open to sponsorships. We have more than 66,000 subscribers with 75% of our readers based in the US. To get our rate card and more info, email us at [email protected]


The Tools List:

🔎 Zocket - A generative AI model trained with local and global purchase data for the highest level of marketing performance.

⚖️ The calibration game - Get better at identifying hallucinations in LLMs.

🤖 Strut AI - Quickly capture projects, notes, drafts, and more in collaborative workspaces powered by AI.

📊 Sheet Copilot - Run tasks in Google Sheets using AI.

📕 Reconfigured - The analysts' journal for recording findings.

🪡 Needle - Enable AI search across all your data in seconds.

About The Writer:

Jo Lambadjieva is an entrepreneur and AI expert in the e-commerce industry. She is the founder and CEO of Amazing Wave, an agency specializing in AI-driven solutions for e-commerce businesses. With over 13 years of experience in digital marketing, agency work, and e-commerce, Jo has established herself as a thought leader in integrating AI technologies for business growth.

For team and agency AI training, book an intro call here.

What did you think of today’s email?