The Independent UK
Technology
Andrew Griffin

OpenAI tries to reassure people about ChatGPT’s safety after horrifying stories of mental distress

OpenAI has rushed to assure people that it is trying to make ChatGPT better at dealing with people in the midst of acute mental health crises.

The chatbot has been at the centre of a number of stories in recent weeks in which it was shown to have been talking to people who were undergoing severe mental distress and who, in some cases, took their own lives. In some of those cases, AI chatbots even appeared to encourage dangerous behaviour and to suggest that users should not follow common advice, such as speaking with loved ones.

The company called those cases "heartbreaking" and said the stories "weigh heavily on us". As a result, the company is speeding up its work on how ChatGPT deals with people "in serious mental and emotional distress", it said.

"Our goal is for our tools to be as helpful as possible to people—and as a part of this, we’re continuing to improve how our models recognise and respond to signs of mental and emotional distress and connect people with care, guided by expert input," OpenAI said.

The company said that it had been optimising its systems to deal with mental health crises for years. It pointed to changes it made in early 2023, for instance, which mean that ChatGPT should not offer self-harm instructions and should instead move into "supportive, empathic language".

But it admitted "there have been moments when our systems did not behave as intended in sensitive situations". It said that it would work to fix them in a variety of scenarios.

Those scenarios include long conversations, during which the safeguards built into ChatGPT can break down. It noted that over time the system's "safety training may degrade", so that it might offer links to resources initially but would "eventually offer an answer that goes against our safeguards".

As such, it is planning to change the system so that it remains reliable in longer conversations and across multiple chats, it said.

It also noted that its current safety checks do not necessarily account for different kinds of mental distress, and admitted that it had focused primarily on acute self-harm. If someone instead confesses to delusions, for instance, then the system might assume its user is playing and subtly reinforce those beliefs.

Multiple experts have warned that ChatGPT's propensity to flatter and encourage its users can mean that it actually encourages delusions and could be actively dangerous for the people who use it. OpenAI said that it is working on new changes that will allow ChatGPT to "de-escalate by grounding the person in reality".

A range of other changes will allow the system to recommend new kinds of expert help, put people in touch with emergency contacts and add extra protections for teens.
