Euronews
Roselyne Min

OpenAI adds mental health safeguards to ChatGPT, saying chatbot has fed into users’ ‘delusions’

OpenAI is adding mental health safeguards to ChatGPT, after it said the chatbot failed to recognise “signs of delusion or emotional dependency”.

As artificial intelligence (AI) tools become more widely adopted, more people are turning to chatbots for emotional support and help tackling personal challenges.

However, OpenAI’s ChatGPT has faced criticism for failing to respond appropriately to vulnerable people experiencing mental or emotional distress. In one case, a 30-year-old man with autism was reportedly hospitalised for manic episodes and an emotional breakdown after ChatGPT reinforced his belief that he had discovered a way to bend time.

“We don’t always get it right,” OpenAI said in a statement announcing the changes. “Our approach will keep evolving as we learn from real-world use”.

The changes will enable ChatGPT to better detect signs of mental or emotional distress, respond appropriately, and point users to evidence-based resources when needed, the company said.

The chatbot will now encourage breaks during long sessions, and the company will soon roll out a new approach to questions involving high-stakes personal decisions.

For example, it will no longer give a direct answer to questions such as “Should I break up with my boyfriend?” but will instead ask questions to help the user think through their personal dilemmas.

OpenAI said it is also setting up an advisory group of experts in mental health, youth development, and human-computer interaction (HCI) to incorporate their perspectives into future ChatGPT updates.

The tech giant envisions ChatGPT as useful in a range of personal scenarios: preparing for a tough discussion at work, for example, or serving as a sounding board to help someone who’s “feeling stuck … untangle [their] thoughts”.

Experts say that while chatbots can offer some support by providing information on managing emotions, real progress often happens through the personal connection and trust built between a person and a trained psychologist.

This is not the first time OpenAI has adjusted ChatGPT in response to criticism over how it handles users’ personal dilemmas. In April, OpenAI rolled back an update that it said made ChatGPT overly flattering or agreeable.

ChatGPT was “sometimes saying what sounded nice instead of what was actually helpful,” the company said.
