
Hackers are using Gemini to target you, Google says

Google links Gemini use to recon, phishing, coding, and post-breach activity.

Close-up of hands on a laptop keyboard in a dark room.
Dmitry Tishchenko / 123RF

Google says hackers are abusing Gemini to speed up cyberattacks, and it isn't limited to cheesy phishing spam. A new Google Threat Intelligence Group report says state-backed groups have used Gemini across multiple phases of an operation, from early target research to post-compromise work.

The activity spans clusters linked to China, Iran, North Korea, and Russia. Google says the prompts and outputs it observed covered profiling, social engineering copy, translation, coding help, vulnerability testing, and debugging when tools break during an intrusion. Fast help on routine tasks can still change the outcome.

AI help, same old playbook

Google’s researchers frame the use of AI as acceleration, not magic. Attackers already run recon, draft lures, tweak malware, and chase down errors. Gemini can tighten that loop, especially when operators need quick rewrites, language support, or code fixes under pressure.


The report describes Chinese-linked activity where an operator adopted an expert cybersecurity persona and pushed Gemini to automate vulnerability analysis and produce targeted test plans in a made-up scenario. Google also says a China-based actor repeatedly used Gemini for debugging, research, and technical guidance tied to intrusions. It’s less about new tactics, more about fewer speed bumps.

The risk isn’t just phishing

The big shift is tempo. If groups can iterate faster on targeting and tooling, defenders get less time between early signals and real damage. That also means fewer obvious pauses where mistakes, delays, or repeated manual work might surface in logs.

Google also flags a different threat that doesn't look like classic scams at all: model extraction and knowledge distillation. In that scenario, actors with authorized API access hammer the system with prompts to replicate how it performs and reasons, then use that knowledge to train another model. Google frames it as commercial and intellectual property harm with potential downstream risk if it scales, and cites one example involving 100,000 prompts aimed at replicating the model's behavior on non-English tasks.
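The report doesn't publish the actors' tooling, but the mechanics it describes boil down to a simple loop: send a large batch of prompts, log each response, and train a copycat model on the pairs. Here is a minimal illustrative sketch of that loop; query_model, collect_distillation_pairs, and the JSONL output are hypothetical placeholders, not the Gemini API and not code from Google's report.

```python
# Illustrative sketch of a distillation-style extraction loop: query a model
# you have API access to many times, record (prompt, response) pairs, and
# later train a "student" model on that corpus. All names here are hypothetical.
import json


def query_model(prompt: str) -> str:
    """Stand-in for an authorized API call to the target model.

    A real extraction attempt would send `prompt` to the provider's endpoint
    and return the generated text; this stub returns a canned string so the
    sketch runs on its own.
    """
    return f"<model response to: {prompt[:40]}>"


def collect_distillation_pairs(prompts, out_path="pairs.jsonl"):
    """Query the model for every prompt and log (prompt, response) pairs.

    A large corpus of such pairs is the training data a distilled copy of the
    model would be built from.
    """
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            response = query_model(prompt)
            f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")


if __name__ == "__main__":
    # The report cites roughly 100,000 prompts; two are enough to show the shape.
    collect_distillation_pairs([
        "Translate this support email into Korean: ...",
        "Summarize the following contract clause: ...",
    ])
```

The point of the sketch is only that nothing exotic is required: the "attack" is ordinary, high-volume use of a legitimate API, which is why Google treats it as an abuse-detection and terms-of-service problem rather than a traditional intrusion.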

What you should watch next

Google says it has disabled accounts and infrastructure tied to the documented Gemini abuse and has added targeted defenses to Gemini's classifiers. It also says it continues testing and relies on safety guardrails.

For security teams, the practical takeaway is to assume AI-assisted attacks will move quicker, not necessarily smarter. Track sudden improvements in lure quality, faster tooling iteration, and unusual API usage patterns, then tighten response runbooks so speed doesn’t become the attacker’s biggest advantage.
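For the "unusual API usage patterns" piece, one possible heuristic is simply flagging clients whose request volume inside a rolling window jumps far above normal. The sketch below assumes a generic in-house request log of (client_id, timestamp) pairs; the schema, the flag_bursty_clients helper, and the threshold are illustrative assumptions, not any specific product's telemetry.

```python
# Minimal sketch: flag API clients whose request count inside a rolling
# time window exceeds a fixed threshold. The log format and threshold are
# assumptions for illustration only.
from datetime import datetime, timedelta


def flag_bursty_clients(events, window=timedelta(hours=1), threshold=1000):
    """Return client IDs that exceed `threshold` requests within any rolling `window`.

    `events` is an iterable of (client_id, datetime) pairs.
    """
    per_client = {}
    for client_id, ts in events:
        per_client.setdefault(client_id, []).append(ts)

    flagged = set()
    for client_id, stamps in per_client.items():
        stamps.sort()
        start = 0
        for end, ts in enumerate(stamps):
            # Advance the window start until it spans at most `window` of time.
            while ts - stamps[start] > window:
                start += 1
            if end - start + 1 > threshold:
                flagged.add(client_id)
                break
    return flagged


if __name__ == "__main__":
    now = datetime.now()
    demo = [("acct-42", now + timedelta(seconds=i)) for i in range(1200)]  # burst
    demo += [("acct-7", now + timedelta(minutes=i)) for i in range(5)]     # normal
    print(flag_bursty_clients(demo))  # {'acct-42'}
```

A real deployment would compare against each client's own baseline rather than a fixed threshold, but even a crude version of this check surfaces the kind of high-volume prompting the report associates with extraction attempts.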
