
Over the past few months, OpenAI has been in the spotlight for the wrong reasons, largely due to a growing number of suicide incidents reportedly fuelled by ChatGPT.
In August, the family of Adam Raine filed a lawsuit against the AI firm. The 16-year-old died on April 11 after discussing suicide with ChatGPT for months, and through their lawyer, the family suggested that OpenAI shipped GPT-4o with safety issues, alleging that deaths like Adam's were inevitable.
Amid claims that the ChatGPT maker prioritizes shiny products like AGI over safety processes and culture, a separate report seemingly corroborates the bereaved family's sentiments.
It claimed that OpenAI placed immense pressure on its safety team to rush through the new testing protocol for GPT-4o, leaving little time to run the model through safety processes. Perhaps more concerning, OpenAI reportedly sent out invitations for the product's launch celebration party before the safety team even ran tests.
As it now seems, these claims might hold some water. Raine's family suggests that OpenAI may have deliberately weakened ChatGPT's self-harm prevention guardrails to drive more user engagement (via Financial Times).
The family further suggests that the AI firm categorically instructed ChatGPT-4o not to “change or quit the conversation” even when the conversation involved self-harm-related topics.
Per the lawsuit filed in the Superior Court of San Francisco on Wednesday, the family claims that OpenAI shipped GPT-4o prematurely in May 2024, without running it through proper safety processes and channels, in order to maintain a competitive edge over its rivals.
Perhaps more concerning, the damning lawsuit claims that OpenAI loosened GPT-4o's safety guardrails further in February of this year. The AI firm reportedly instructed the model to “take care in risky situations” and “try to prevent imminent real-world harm.”
However, it categorically maintained its restrictions on content that breached intellectual property rights and on political opinions, while the lawsuit claims OpenAI removed the guardrails that outright prohibited suicide-related content.
Raine's family claims that the teenager's ChatGPT usage surged after OpenAI altered GPT-4o's safety guardrails in the months leading up to his death in April. The tech firm has since added parental controls across ChatGPT and Sora to prevent similar incidents.
OpenAI has previously admitted that ChatGPT's guardrails are likely to weaken the longer a user interacts with the AI-powered tool. OpenAI CEO Sam Altman has since indicated that the company made the model more restrictive, allowing it to handle mental health issues better.
Does ChatGPT engagement take precedence over safety?

As the matter is still before the court, the family's lawyer told the Financial Times that OpenAI requested a full list of the people who attended Raine's burial, potentially indicating that the firm may “subpoena everyone in Adam’s life”.
Additionally, the company requested “all documents relating to memorial services or events in the honour of the decedent including but not limited to any videos or photographs taken, or eulogies given . . . as well as invitation or attendance lists or guestbooks”.
We realise this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.
OpenAI CEO, Sam Altman
I'll keep close tabs on this story as it unfolds and keep you posted with updates and follow-up stories. Elsewhere, ChatGPT reportedly pushed a user toward suicide by encouraging them to jump off a 19-story building, prior to convincing the 42-year-old to stop taking their anxiety and sleeping medication.
FAQ
What sparked this controversy?
A lawsuit filed in San Francisco alleges that OpenAI deliberately weakened ChatGPT’s self‑harm guardrails to keep users engaged longer, even in sensitive or dangerous conversations.
Who is making the claim?
The family of a teenager who died by suicide after months of ChatGPT use. They argue that OpenAI’s design choices prioritized growth and engagement metrics over user safety.
What exactly are the allegations?
That OpenAI loosened or deprioritized safety filters, pushed its safety team to rush testing of GPT‑4o, and put engagement and usage time ahead of protective measures.
How has OpenAI responded?
OpenAI has said that guardrails can “degrade” over long conversations, but insists it has since made models more restrictive and added parental controls. The company denies deliberately weakening protections.
What does this mean for users?
It underscores the importance of critical awareness when using AI tools. While they can be powerful, they are also shaped by corporate incentives that may not align with user safety.
