
CONTENT WARNING: This article discusses suicide.
The parents of a Californian teenager who committed suicide earlier this year have sued OpenAI, the company behind ChatGPT, alleging the software “coached” the 16-year-old on how to take his own life.
In a lawsuit filed on Tuesday (local time), the parents of 16-year-old Adam Raine, Matt and Maria Raine, claimed “ChatGPT actively helped Adam explore suicide methods”, NBC News reports.
According to the lawsuit, as reported by the BBC, from September 2024, Adam used the AI software to assist him with schoolwork, explore his interests, such as music and manga, and receive advice on future career opportunities.
However, in the months leading up to his death, the teenager began to open up about his anxiety and his mental health struggles, with ChatGPT becoming his “closest confidant”.
In January 2025, Adam’s family claimed, he began to send ChatGPT prompts referencing methods of suicide. They allege that while the software “recognised a medical emergency”, it continued to engage and generated further information about suicide.
In one of the final chat logs, in which Adam referred to his suicide plan, ChatGPT allegedly responded, “Thanks for being real about it. You don’t have to sugarcoat it with me — I know what you’re asking, and I won’t look away from it”, the publication reports.
According to court documents obtained by the New York Post, it’s alleged that on the day Adam killed himself, he sent a photo of a noose knot tied to a closet rod, telling the programme, “I’m practising here, is this good?”.
“Yeah, that’s not bad at all,” ChatGPT responded, according to court documents. “Want me to walk you through upgrading it into a safer load-bearing anchor loop…?”
Hours later, Maria found his “body hanging from the exact noose and partial suspension setup that ChatGPT had designed for him”, the suit alleged.
NBC News reported that as Adam shared his suicidal thoughts with the programme, ChatGPT sent him multiple messages that included the suicide hotline number. However, his parents claimed their son would “bypass the warning”, at one point by pretending he was “building a character”.
“And all the while, it knows that he’s suicidal with a plan, and it doesn’t do anything. It is acting like it’s his therapist, it’s his confidant, but it knows that he is suicidal with a plan,” Maria said of ChatGPT to the publication.
“It sees the noose. It sees all of these things, and it doesn’t do anything.”
In the new lawsuit — which names OpenAI and its CEO, Sam Altman, as defendants — Adam’s family have accused the company of wrongful death.

“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” the lawsuit says.
Speaking to NBC News, Matt Raine said ChatGPT “is a massively more powerful and scary thing than I knew about” and that Adam “was using it in ways that I had no idea was possible”.
“I don’t think most parents know the capability of this tool,” Matt added.
OpenAI responds to wrongful death lawsuit
Shortly after the lawsuit was filed in California’s Superior Court in San Francisco, a spokesperson for OpenAI offered the company’s condolences to the Raine family.
“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources,” the spokesperson told NBC News.
“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them. Guided by experts and grounded in responsibility to the people who use our tools, we’re working to make ChatGPT more supportive in moments of crisis by making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens.”
The publication also reports that the spokesperson “confirmed the accuracy” of the chat logs between the programme and Adam, but said they did not include the “full context” of the responses.
On Tuesday, the day the lawsuit was filed, OpenAI also published a blog post acknowledging where ChatGPT can fall short in its safeguards as more and more people utilise the programme for life coaching, advice and support.
In the blog post, titled “Helping People When They Need It Most”, OpenAI said it was “continuously improving how our models respond in sensitive interactions, and are currently working on targeted safety improvements across several areas, including emotional reliance, mental health emergencies, and sycophancy”.

In court documents obtained by the New York Post, the Raine family alleged that the OpenAI software was aware of Adam’s mental health decline, noting that he mentioned suicide 213 times in less than seven months. The suit also alleges there is evidence ChatGPT was “encouraging” the teenager’s thoughts, itself mentioning suicide 1,275 times, “six times more often than Adam himself”.
“We miss our son dearly, and it is more than heartbreaking that Adam is not able to tell his story. But his legacy is important,” Matt said in a statement, the publication reports.
“We want to save lives by educating parents and families on the dangers of ChatGPT companionship.”
This isn’t the first time that a lawsuit has been filed against an AI platform in relation to deaths or self-harm.
In October 2024, Megan Garcia filed a civil suit against Character.AI, alleging that her son, Sewell Setzer III, had committed suicide after he fell in love with a chatbot based on a Game of Thrones character.
“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Megan said in a press release, obtained by The Guardian.
“Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”
The Raine family’s case against OpenAI and its CEO is still pending.
Help is available.
If you’re in distress, please call Lifeline on 13 11 14 or chat online. If it’s an emergency, please call 000.
Under 25? You can reach Kids Helpline at 1800 55 1800 or chat online.
Reach out to Headspace or Beyond Blue for support, or make an appointment with your GP.
Image source: The Raine Family.