The Independent UK
Technology
Andrew Griffin

ChatGPT makers respond to growing number of people using AI as a therapist

OpenAI has announced ChatGPT users who converse with the bot for an extended amount of time will now receive reminders encouraging them to take a break - (Pau Barrena/AFP via Getty Images)

OpenAI, the creator of ChatGPT, says it is rushing to fix the system’s difficulties with helping users in mental distress.

In recent months, increasing reports have suggested that people are turning to the system as a kind of therapist, for help with personal problems and mental health issues. But ChatGPT is often overly agreeable with users who consult it, reinforcing people’s delusions and failing to challenge their assumptions.

Now OpenAI says that it is responding to those concerns with a range of updates and research intended to make the system less dangerous when it is used by people who are experiencing mental health crises or similar problems.

The company knows “that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress”, it said.

Those changes include improvements intended to make the models better at recognising “signs of delusion or emotional dependency”, it said. Numerous reports have shown that the system could encourage people’s delusions, or allow them to become emotionally attached to it.

Users will also now be shown alerts if they are having long sessions of sending messages to ChatGPT. The message says that it is “just checking in” and asks whether it is a “good time for a break”.

It will also look to work through questions with people, rather than giving them an answer, especially in “high-stakes personal decisions”. If a user asks whether they should break up with their boyfriend, for instance, it will aim to help them weigh up the decision rather than telling them what to do.

OpenAI also committed to work with experts to improve the system’s response at “critical moments – for example, when someone shows signs of mental or emotional distress”.

That has included working with medical experts and an advisory group on mental health and similar concerns, as well as working with researchers to improve the system’s ability to spot concerning behaviour and respond to it.

The announcement comes as OpenAI continues to tease the launch of GPT-5, an update to the model that powers ChatGPT. It will be the first major release since GPT-4 was launched in 2023, and OpenAI boss Sam Altman has looked to hype the new version as potentially transformative.
