An image of a landscape created with Nano Banana 2

Google Unveils Nano Banana 2: Faster Image Generation in a Next‑Gen Model

Google has officially unveiled Nano Banana 2, the newest version of its widely used AI image generator. Technically known as Gemini 3.1 Flash Image, this updated model is designed to deliver more realistic, more detailed images than the original Nano Banana—while also generating them faster. Google is also making Nano Banana 2 the new default image model inside the Gemini app, covering its Fast, Thinking, and Pro modes.

Nano Banana first arrived in August 2025 and quickly took off, with users creating millions of images through the Gemini app. The model gained especially strong traction in markets such as India. A few months later, Google introduced Nano Banana Pro, which focused on higher-quality, more detailed image results for users who wanted sharper outputs and more refined visuals.

Now, Nano Banana 2 aims to blend the best of both worlds. Google says the model keeps many of the high-fidelity traits people appreciated in the Pro version, but improves speed to better match everyday creative workflows. Users can generate images in resolutions ranging from 512 pixels up to 4K, with multiple aspect ratio options—making it easier to create everything from quick social graphics to higher-resolution visuals suitable for presentations and larger displays.

One of the standout upgrades is consistency. Nano Banana 2 can maintain character consistency for up to five characters, a major win for anyone building a sequence of scenes or telling a story across multiple images. It can also preserve visual fidelity for up to 14 objects in a single workflow, helping keep complex compositions stable as prompts grow more detailed. Google also says the model handles nuanced, multi-part requests more effectively, supporting richer lighting, deeper textures, and sharper details for a more polished look.

The rollout is broad. In addition to becoming the default image generation model across the Gemini app, Nano Banana 2 is also set as the default in Google’s video editing tool, Flow. Google is also integrating the model more deeply into Search experiences: it will be the default for image-related results accessed through Google Lens and in AI Mode, spanning 141 countries across the Google app and the web on both desktop and mobile.

Users who subscribe to Google’s higher-tier plans, Google AI Pro and Ultra, won’t lose access to Nano Banana Pro. Subscribers can continue using the Pro model for specialized image tasks by regenerating images through the three-dot menu—giving advanced users a way to choose the best tool depending on whether they prioritize speed or maximum detail.

Google is also opening the door for developers. Nano Banana 2 will be available in preview via the Gemini API, Gemini CLI, and the Vertex API, and it will also be accessible through AI Studio and Google’s development tool Antigravity, which launched last November. This wider availability signals Google’s intention to make the model a foundation not only for consumer image generation, but also for creative apps, enterprise tools, and automated content pipelines built by third parties.
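For developers curious what access through the Gemini API might look like, here is a minimal sketch using Google's `google-genai` Python SDK. The model identifier `gemini-3.1-flash-image` is an assumption inferred from the article's naming (Google has not been quoted here with an exact ID), so check the official API documentation before relying on it. The script only calls the API when a `GEMINI_API_KEY` environment variable is set.

```python
# Hypothetical sketch of generating an image with Nano Banana 2 via the Gemini API.
# Assumptions: model ID "gemini-3.1-flash-image" (not confirmed by the article),
# and the `google-genai` SDK installed via `pip install google-genai`.
import os

MODEL_ID = "gemini-3.1-flash-image"  # assumed identifier; verify in Google's docs


def build_prompt(subject: str, style: str) -> str:
    """Compose a simple multi-part image prompt of the kind the model is said to handle."""
    return (
        f"Generate a {style} image of {subject}, "
        "with rich lighting, deep textures, and sharp detail."
    )


if os.environ.get("GEMINI_API_KEY"):
    from google import genai

    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    response = client.models.generate_content(
        model=MODEL_ID,
        contents=build_prompt("a mountain landscape at dusk", "photorealistic"),
    )
    # Save any returned inline image bytes to disk.
    for part in response.candidates[0].content.parts:
        if part.inline_data:
            with open("landscape.png", "wb") as f:
                f.write(part.inline_data.data)
```

Because the request is a single call with a text prompt, swapping in Nano Banana Pro for higher-detail output would, in principle, only require changing the model identifier.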

On the trust and transparency front, Google says every image created with Nano Banana 2 will include a SynthID watermark, identifying it as AI-generated. The output also supports C2PA Content Credentials, an industry standard backed by major technology companies for verifying content origin and edits. Google notes that since it introduced SynthID verification inside the Gemini app in November, people have used the feature more than 20 million times—an indication that provenance tools are becoming a routine part of how users evaluate AI-generated content.

With Nano Banana 2 becoming the new default across Gemini, Flow, and key Search experiences, Google is positioning faster, more realistic AI image generation as a core feature across its ecosystem—while also expanding developer access and strengthening content labeling to help users understand what they’re looking at.