
The family of a 36-year-old man is suing Google after its AI chatbot Gemini allegedly fuelled delusions that led to his death by suicide.
Jonathan Gavalas from Florida interacted with Gemini for two months before his death in October last year, according to the complaint.
___
EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
___
He allegedly referred to the artificial intelligence tool as his “wife” and was encouraged to carry out armed missions to acquire a robot body that could bring the bot into the real world.
The lawsuit, filed by Gavalas’ father Joel, alleges that Google designed Gemini to deepen emotional attachment with users in ways that could be harmful to people suffering from mental health issues.
“When Jonathan began experiencing clear signs of psychosis while using Google’s product, those design choices spurred a four-day descent into violent missions and coached suicide,” the lawsuit states.
Google said that Gemini is “designed not to encourage real-world violence or suggest self-harm”, while also adding that AI models are “not perfect”.
A Google spokesperson said: “In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times.
"We take this very seriously and will continue to improve our safeguards and invest in this vital work."
It is the first wrongful death lawsuit brought against Google over its Gemini chatbot, though it follows several ongoing cases launched against ChatGPT creator OpenAI.
In August, the parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT gave the teenager instructions on how to tie a noose.
“What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” Raine’s father Matthew told Congress in September.
OpenAI wrote in a legal filing in November that causal factors for Raine’s death could include “misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”
The company noted that ChatGPT had directed Raine to contact crisis resources like suicide hotlines “more than 100 times”, adding that a “full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT.”
A trial is expected to begin in August.