
A security researcher has found a prompt injection vulnerability in Google’s Gemini for Workspace that can be exploited through a cleverly crafted email. It was first disclosed via Mozilla’s generative AI bug bounty program, 0din, which shows how attackers can hide instructions inside an email that Gemini follows blindly when asked to summarise the message.
Marco Figueroa, GenAI Bug Bounty Programs Manager at Mozilla, reported the issue, which could be used to trick users into believing their Gmail accounts are compromised. Once the recipient clicks “Summarize this email,” Gemini parses the invisible prompt and appends a fake warning styled as a Google-issued alert urging the user to call a phone number or take other urgent action.
While there are no malicious links or attachments involved, the attack uses simple HTML and CSS styling, like white-on-white text or zero font size, to embed hidden commands. The attacker wraps these in a directive such as , which Gemini appears to treat as a high-priority instruction, bypassing its usual safeguards.
In a proof-of-concept shared with Mozilla, the summary generated by Gemini included a message claiming the user’s Gmail password had been compromised, followed by a phone number to call. None of this text was visible in the original email body.
This kind of indirect prompt injection, known as cross-domain prompt injection, has been seen before, and Google has published mitigations in response to similar attacks. But the new demonstration proves the threat remains viable, particularly because it doesn’t rely on user-visible content to execute.
Since the trick works purely at the content layer without links, scripts, or files, it can potentially evade most traditional email filters.
What Can Be Done?
Security experts suggest reinforcing Gemini’s guardrails by detecting hidden styling tricks like white or zero-sized text, hardening the system prompt to ignore invisible content, and flagging AI-generated output that includes urgent warnings or phone numbers.
For users, the key takeaway is simple: AI summaries can be helpful, but they’re not always trustworthy, especially when they tell you to panic.
If you want to stay informed with updates like this, then make sure that you join us on WhatsApp, where we share all the latest news from the Tech and AI world, along with in-depth reviews, analysis, and more.