Al Jazeera
John Power

OpenAI announces parental controls for ChatGPT after teen’s suicide

Under the changes, parents will be able to link their ChatGPT accounts with those of their children [Dado Ruvic/Reuters]

OpenAI has announced plans to introduce parental controls for ChatGPT amid growing controversy over how artificial intelligence is affecting young people’s mental health.

In a blog post on Tuesday, the California-based AI company said it was rolling out the features in recognition of families needing support “in setting healthy guidelines that fit a teen’s unique stage of development”.

Under the changes, parents will be able to link their ChatGPT accounts with those of their children, disable certain features, including memory and chat history, and control how the chatbot responds to queries via “age-appropriate model behaviour rules”.

Parents will also be able to receive notifications when their teen shows signs of distress, OpenAI said, adding that it would seek expert input in implementing the feature to “support trust between parents and teens”.

OpenAI, which last week announced a series of measures aimed at enhancing safety for vulnerable users, said the changes would come into effect within the next month.

“These steps are only the beginning,” the company said.

“We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible. We look forward to sharing our progress over the coming 120 days.”


OpenAI’s announcement comes a week after a California couple filed a lawsuit accusing the company of being responsible for the suicide of their 16-year-old son.

Matt and Maria Raine allege in their suit that ChatGPT validated their son Adam’s “most harmful and self-destructive thoughts” and that his death was a “predictable result of deliberate design choices”.

OpenAI, which previously expressed its condolences over the teen’s passing, did not explicitly mention the case in its announcement on parental controls.

Jay Edelson, a lawyer representing the Raine family in their lawsuit, dismissed OpenAI’s planned changes as an attempt to “shift the debate”.

“They say that the product should just be more sensitive to people in crisis, be more ‘helpful’, show a bit more ‘empathy’, and the experts are going to figure that out,” Edelson said in a statement.

“We understand, strategically, why they want that: OpenAI can’t respond to what actually happened to Adam. Because Adam’s case is not about ChatGPT failing to be ‘helpful’ – it is about a product that actively coached a teenager to suicide.”

The use of AI models by people experiencing severe mental distress has become a focus of growing concern amid their widespread adoption as substitutes for a therapist or friend.

In a study published in Psychiatric Services last month, researchers found that ChatGPT, Google’s Gemini and Anthropic’s Claude followed clinical best practice when answering high-risk questions about suicide, but were inconsistent when responding to queries posing “intermediate levels of risk”.

“These findings suggest a need for further refinement to ensure that LLMs can be safely and effectively used for dispensing mental health information, especially in high-stakes scenarios involving suicidal ideation,” the authors said.

Hamilton Morrin, a psychiatrist at King’s College London who has carried out research on AI-related psychosis, welcomed OpenAI’s decision to introduce parental controls, saying they could potentially reduce the risk of over-reliance or exposure to harmful content.

“That said, parental controls should be seen as just one part of a wider set of safeguards rather than a solution in themselves. Broadly, I would say that the tech industry’s response to mental health risks has often been reactive rather than proactive,” Morrin told Al Jazeera.

“There is progress, but companies could go further in collaborating with clinicians, researchers, and lived-experience groups to build systems with safety at their core from the outset, rather than relying on measures added after concerns are raised.”

If you or someone you know is at risk of suicide, these organisations may be able to help.
