Google’s best Gemini AI feature could soon appear in your everyday apps

Deep Research goes beyond Google apps with the new Interactions API

[Image: an Android phone with apps. Onur Binay / Unsplash]

Google just upgraded Gemini Deep Research, its most advanced AI research agent, and this time the upgrade is not limited to Google’s own products. With the launch of the new Interactions API, Deep Research is no longer a Google-exclusive feature: developers can build it directly into third-party apps, opening the door for the everyday tools you already use to quietly gain far more powerful AI.

Gemini Deep Research is built for long, complex tasks that many chatbots tend to struggle with. Instead of answering a single question, it works like a real researcher. It plans what information it needs, searches the web, reads through results, identifies gaps, and then continues searching until it builds a complete, well-sourced answer. It uses Gemini 3 Pro as its reasoning engine, which Google says is its most factual model yet, trained to reduce hallucinations during long, multi-step tasks.
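In practice, that loop resembles the toy sketch below. Every function here is an illustrative stand-in, not Google’s implementation: the real agent uses Gemini 3 Pro to plan, live web search to gather sources, and further model passes to spot gaps.

```python
# Toy illustration of the plan-search-read-iterate loop described above.
# Every function is a stand-in for what the real agent does with a model
# and a search backend; none of this is Google's actual code.

def plan_questions(topic: str) -> list[str]:
    # The real agent asks the model to split the topic into sub-questions.
    return [f"What is known about {topic}?",
            f"What is still unclear about {topic}?"]

def search_and_read(question: str) -> str:
    # Stand-in for searching the web and reading through the top results.
    return f"notes gathered for: {question}"

def deep_research(topic: str) -> str:
    open_questions = plan_questions(topic)
    findings: list[str] = []
    while open_questions:  # keep searching until no gaps remain
        findings.append(search_and_read(open_questions.pop(0)))
        # The real agent re-plans here, queueing new sub-questions when
        # the findings so far reveal a gap in coverage.
    return "\n".join(findings)  # the real agent writes a cited report

print(deep_research("solid-state batteries"))
```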

How this upgrade reaches your apps

With the Interactions API, any developer can now plug this research agent into their own apps. Deep Research can read uploaded documents, combine them with public web data, and produce structured reports with citations. Developers can also control the structure of the final output, request tables or formatted sections, or receive results in JSON for automation. Such capabilities make the AI suitable for automated analysis tools, finance workflows, scientific research aides, or knowledge apps.
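Google’s developer documentation is the authority on the exact call shape, but as a rough sketch using the google-genai Python SDK, a Deep Research request might look like the following. The `interactions.create` surface, the `deep-research` agent name, the `status` polling, and the output fields shown here are assumptions based on the announcement, not verified documentation.

```python
# Rough sketch only: the method names, agent identifier, and response
# fields below are assumptions based on Google's announcement of the
# Interactions API, not verified docs. Requires the google-genai SDK.
import time

from google import genai

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

# Ask the Deep Research agent for a long-running, citation-backed report.
interaction = client.interactions.create(
    agent="deep-research",  # assumed agent identifier
    input="Summarize recent peer-reviewed findings on solid-state "
          "battery energy density, with sources.",
)

# Deep Research tasks run in the background, so poll until completion.
while interaction.status == "in_progress":
    time.sleep(15)
    interaction = client.interactions.get(interaction.id)

# The final output is a structured report with citations; per Google,
# developers can also request tables, formatted sections, or JSON.
print(interaction.outputs[-1].text)
```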

Because the feature is now developer-accessible, your favorite finance, study, and productivity apps can start using Deep Research behind the scenes, and you may soon see richer, deeper answers without doing the work yourself. Instead of manually checking multiple sources or switching between tabs, your apps could pull together verified, well-sourced information automatically.

Google is also preparing to ship Deep Research directly into products like Google Search, NotebookLM, Google Finance, and the Gemini app. As this tech quietly slips into the apps you already rely on, research may soon feel less like a chore and more like something your phone simply handles for you.

Manisha Priyadarshini
Manisha Priyadarshini is a tech and entertainment writer with over nine years of editorial experience.