
OpenAI has denied liability for the death of a teenager after a lawsuit accused ChatGPT of acting as a “suicide coach”.
The parents of 16-year-old Adam Raine, who died in April, sued OpenAI and its CEO Sam Altman in August, alleging that OpenAI’s artificial intelligence chatbot gave instructions on how to tie a noose and offered help writing a suicide note.
___
EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
___
In a legal filing with the California Superior Court in San Francisco on Tuesday, OpenAI said causal factors for Raine’s death could include “misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”
The company also said that a “full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT.”
A lawyer representing the Raine family described OpenAI’s filing as “disturbing”.
In a statement to Bloomberg, attorney Jay Edelson said the company “tries to find fault in everyone else, including, amazingly, by arguing that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act.”
The Raine case is one of several lawsuits claiming that ChatGPT drove people to suicide or harmful delusions.
Raine’s parents testified to Congress in September alongside other parents whose children died by suicide after interactions with AI chatbots.
“What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” Raine’s father, Matthew Raine, told senators.
“Within a few months, ChatGPT became Adam’s closest companion. Always available. Always validating and insisting that it knew Adam better than anyone else, including his own brother.”
OpenAI has since rolled out new safeguards for teenagers, including tools that allow parents to set “blackout hours” to prevent their children from using ChatGPT at certain times.
The company also noted in its latest legal filing that ChatGPT directed Raine to contact crisis resources like suicide hotlines, as well as trusted individuals “more than 100 times”.