
Although a researcher was able to demonstrate that Google Gemini could be tricked into giving users fake information like leading them to malicious websites, Google has said it doesn’t consider this ASCII smuggling attack a true security bug and it has no plans to issue a fix for the flaw.
As reported by BleepingComputer, the company dismissed the findings as being more of an issue of social engineering attacks than an actual security vulnerability.
Since Gemini is so closely integrated with Google Workspace, this vulnerability is a high risk issue as this attack could be used to embed hidden text in Calendar invites or emails to instruct the AI assistant in unseen Calendar invite tiles, overwrite organizer details or hidden meeting descriptions or links.
ASCII smuggling is an attack style that uses special characters from the Tags Unicode block to introduce payloads that are invisible to users, but can still be detected and processed by large language models. – Essentially, this means that it hides letters, numbers or other characters to introduce malicious code to the AI assistant that users can’t see. LLMs have been vulnerable to ASCII smuggling attacks – and similar methods – for quite some time, however, the threat is now higher because agentic AI tools, like Gemini, have both widespread access to sensitive user data and can perform autonomous tasks.
According to the researchers “If users have LLMs connected to their inboxes, an email with hidden commands can instruct them [the AI] to search the inbox for sensitive items, send contact details and then turn a standard phishing attempt into an autonomous data extraction tool.” LLMs that have been told to browse websites could also potentially stumble onto hidden payloads in product descriptions and feed them with malicious URLs to feed back to users.
There are other techniques that use similar methods to manipulate the gap between what users see and what machines read including CSS manipulation and GUI limitations. The security researcher involved in this research found that Gemini, like Grok and DeepSeek, is vulnerable to ASCII smuggling attacks while Claude, ChatGPT and Microsoft Copilot are safe from such threats by implementing some form of input sanitization.
We reached out to Google for comment about the research and will update this story if and when we hear back.