Get all your news in one place.
100’s of premium titles.
One app.
Start reading
Tom’s Hardware
Tom’s Hardware
Technology
Nathaniel Mott

Google's AI could be tricked into enabling spam, revealing a user's location, and leaking private correspondence with a calendar invite — 'promptware' targets LLM interface to trigger malicious activity

The Google logo on a background of circles.

SafeBreach researchers have revealed how a malicious Google Calendar invite could be used to exploit Gemini—the AI assistant that Google has built into its Workplace software suite, Android operating system, and search engine—as part of their ongoing efforts to determine the dangers posed by the rapid integration of AI in tech products.

The researchers dubbed an exploit like this "promptware" because it "utilizes a prompt—a piece of input via text, images, or audio samples—that is engineered to exploit an LLM interface at inference time to trigger malicious activity, like spreading spam or extracting confidential information." The broader security community has underestimated the risks associated with promptware, SafeBreach said, and this report is meant to demonstrate just how much havoc these exploits can wreak.

At a high level, this particular exploit took advantage of Gemini's integration with the broader Google ecosystem, the ability to clutter up Google Calendar's user interface with invitations, and their intended victim's habit of thanking an automaton for... automaton-ing. The researchers said this allowed them to indirectly trigger promptware buried within the user's chat history and perform the following actions:

  • Perform spamming and phishing
  • Generate toxic content 
  • Delete a victim’s calendar events
  • Remotely control a victim’s home appliances (e.g., connected windows, boiler, lights)
  • Geolocate a victim 
  • Video stream a victim via Zoom
  • Exfiltrate a victim’s emails

Check out the full report for a step-by-step breakdown of how the exploit worked. The researchers said they disclosed the flaws to Google in February and that Google "published a blog that provided an overview of its multi-layer mitigation approach to secure Gemini against prompt injection techniques" in June. (It's not clear at what point those mitigations were introduced between the disclosure and the blog post.)

This kind of back-and-forth has been a mainstay of computing for decades. Companies introduce new technologies, people find ways to exploit them, companies occasionally come up with defenses against those exploits, and then people find something else to take advantage of. So, in that sense, the SafeBreach research just reveals another problem to add to the seemingly infinite array of such issues.

But a number of factors combine to make this report more alarming than it might be otherwise. Those include SafeBreach's point about security pros not taking promptware seriously, the "move fast and break things" approach companies are taking with their "AI" deployments, and the incorporation of these chatbots into seemingly every product a company offers. (As highlighted by Gemini's ubiquity.)

"According to our analysis, 73% of the threats posed to end users by an LLM personal assistant present a High-Critical risk," SafeBreach said. "We believe this is significant enough to require swift and dedicated mitigation actions to secure end users and decrease this risk."

Follow Tom's Hardware on Google News to get our up-to-date news, analysis, and reviews in your feeds. Make sure to click the Follow button.

Sign up to read this article
Read news from 100’s of titles, curated specifically for you.
Already a member? Sign in here
Related Stories
Top stories on inkl right now
One subscription that gives you access to news from hundreds of sites
Already a member? Sign in here
Our Picks
Fourteen days free
Download the app
One app. One membership.
100+ trusted global sources.