Google reveals Gemini 3 Flash to speed up AI search and beefs up image generation

Gemini 3 Flash is now available globally within AI mode for Search.

[Image: Gemini 3 Flash in Search. Credit: Google]

Google has just announced a new AI model in the Gemini 3 series, one focused on speedy responses. Say hello to Gemini 3 Flash, which is claimed to offer “frontier intelligence” and aims to speed up the Google Search experience for users.

What’s the big shift?

Gemini 3 Flash will be integrated into AI Mode in Search. In case you missed it, AI Mode is the new conversational version of Search, available as its own tab alongside the vanilla “blue link” results, images, news, videos, and shopping, among others.


The whole idea behind AI Mode is to deliver answers the way an AI chatbot like Gemini or ChatGPT would, instead of showing traditional web links. Aside from giving you the answer, AI Mode also lets you ask follow-up questions in a conversational manner.

With the release of Gemini 3 Flash, AI Mode is getting a dual boost: it’s both smarter and faster, and it’s now the default model for AI Mode in Search as well as the Gemini app. “Gemini 3 Flash’s strong performance in reasoning, tool use and multimodal capabilities enable AI Mode to tackle your most complicated questions with greater precision – without compromising speed,” says Google.

Gemini 3 Flash in AI Mode is now available to all users worldwide. Interestingly, hours ahead of the launch, some keen eyes spotted its presence in Google’s Vertex AI platform for developers and the Canvas collaboration tool. One Reddit user noted that it’s much faster than the older Gemini 2.5 Flash model at creating a website from a single prompt.

The more powerful Gemini models are expanding

Aside from introducing the Gemini 3 Flash model, Google is also pushing the Gemini 3 Pro model for all users in the US. When you open AI Mode, you will be able to pick “Thinking with 3 Pro” from the model selector drop-down. Compared to Flash, this one is tailored for more complex queries that require deeper thinking, reasoning, and problem-solving chops.

Gemini 3 Flash demonstrates that speed and scale don’t have to come at the cost of intelligence. 🧠

When processing at the highest thinking level, Gemini 3 Flash is able to modulate how much it thinks. It may think longer for more complex use cases, but it also uses 30% fewer… pic.twitter.com/H4BU59wQSX

— Google (@Google) December 17, 2025

Google is also pushing its most powerful AI image model, Nano Banana Pro (Gemini 3 Pro Image), and integrating it within the Search experience. So, whether it’s generating images or editing them with text prompts, the results are going to be faster and more accurate.

“For both of these Pro models with AI creation tools, Google AI Pro and Ultra subscribers will have higher usage limits,” says the company. On the competition front, OpenAI introduced the GPT 5.2 model for ChatGPT and a next-gen image creation AI model just a few days ago.

Nadeem Sarwar
Nadeem is the Managing Editor at Digital Trends.