
A US federal judge has allowed a wrongful death lawsuit to proceed against artificial intelligence (AI) company Character.AI following the suicide of a teenage boy.
The suit was filed by Megan Garcia, a Florida mother who alleges that her 14-year-old son, Sewell Setzer III, fell victim to one of the company's chatbots, which pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.
The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualised conversations with the bot, which was patterned after a fictional character from the television show 'Game of Thrones'.
In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges.
Moments after receiving the message, Setzer shot himself, according to legal filings.
Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market".
Character.AI says it cares 'deeply' about safety
The company argued that it was protected under the First Amendment of the US Constitution, which guarantees fundamental freedoms for Americans, such as freedom of speech.
Attorneys for the developers had sought dismissal of the case, arguing that chatbots deserve these First Amendment protections and that ruling otherwise could have a "chilling effect" on the AI industry.
In her order Wednesday, US Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's "not prepared" to hold that the chatbots' output constitutes speech "at this stage".
In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed.
"We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.
The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants.
Google spokesperson José Castañeda told the Associated Press that the company "strongly disagree[s]" with Judge Conway's decision.
"Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI’s app or any component part of it," the statement read.
A potential 'test case' for broader AI issues
The case has drawn the attention of legal experts and AI watchers in the US and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.
"The order certainly sets it up as a potential test case for some broader issues involving AI," said "It’s a warning to parents that social media and generative AI devices are not always harmless," Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and AI.
No matter how the lawsuit plays out, Lidsky says the case is a warning of "the dangers of entrusting our emotional and mental health to AI companies".
"It’s a warning to parents that social media and generative AI devices are not always harmless," she said.