Get all your news in one place.
100’s of premium titles.
One app.
Start reading
Android Central
Android Central
Technology
Jay Bonggolto

Turns out Gmail's AI email summaries aren't as secure as we thought

Gmail app on Pixel Tablet.

What you need to know

  • Google’s AI helper can be fooled into showing fake email summaries right in Gmail, making phishing feel weirdly legit.
  • A researcher proved it works and flagged it to Mozilla’s 0din bug bounty program.
  • Google says this trick hasn’t been used in the wild yet.

It seems like Google Gemini can be tricked into showing fake email summaries right inside Gmail, and since it all looks legit, it can be a sneaky new way to pull off phishing.

A cybersecurity researcher recently exposed a flaw that lets hackers twist Google Gemini into showing harmful instructions through Gmail summaries, putting Workspace users at risk. The issue has been reported to 0din, Mozilla’s AI-focused bug bounty program (via BleepingComputer).

Google slipped Gemini into its Workspace apps a while back, and one of its handy tricks is breaking down your emails. Just slide open the side panel in Gmail, and Gemini will be able to pull key details or turn messages into calendar events on the fly.

But here’s the scary part: this feature also opens the door to “prompt-injection” attacks. If someone slips a sneaky prompt into an email, Gemini might run it right in the side panel, and the user might not even notice.

(Image credit: Marco Figueroa / Odin)
(Image credit: Marco Figueroa / Odin)

Hackers can hide secret commands in emails using simple HTML and CSS tricks like white text on a white background or font size zero. You won’t notice a thing, but Gemini will read it loud and clear, and that’s where things can go sideways.

Since these sneaky emails skip the usual red flags (no sketchy links or shady attachments), they often slip right past Google’s spam filters and land straight in your inbox.

Wolf in Google’s clothing

Gemini sees those hidden prompts as legit and spits them out in the email summary with zero warnings. Since it’s coming from Google’s own AI, it looks totally above board, which means most folks won’t even think to question it. That false sense of trust is what makes this so dangerous.

According to BleepingComputer, even though prompt injection has been on the radar since 2024 and some guardrails are already in place, this tactic still works. That means the current defenses aren’t quite keeping up with how crafty attackers are getting.

Google says there’s no sign of hackers actually using this trick in real-world attacks thus far. But the research clearly shows it can be done, which raises a big red flag. Just because it hasn’t hit the wild yet doesn’t mean it’s not a serious threat waiting to happen.

Sign up to read this article
Read news from 100’s of titles, curated specifically for you.
Already a member? Sign in here
Related Stories
Top stories on inkl right now
One subscription that gives you access to news from hundreds of sites
Already a member? Sign in here
Our Picks
Fourteen days free
Download the app
One app. One membership.
100+ trusted global sources.