The Guardian - UK
Technology
Robert Booth, UK technology editor

Parents could get alerts if children show acute distress while using ChatGPT

The company behind ChatGPT says ‘families and teens may need support in setting healthy guidelines’. Photograph: Dado Ruvić/Reuters

Parents could be alerted if their teenagers show acute distress while talking with ChatGPT, amid child safety concerns as more young people turn to AI chatbots for support and advice.

The alerts are part of new protections for children using ChatGPT to be rolled out in the next month by OpenAI, which was last week sued by the family of a boy who took his own life after allegedly receiving “months of encouragement” from the system.

Other new safeguards will include parents being able to link their accounts to those of their teenagers and controlling how the AI model responds to their child with “age-appropriate model behaviour rules”. But internet safety campaigners said the steps did not go far enough and AI chatbots should not be on the market before they are deemed safe for young people.

Adam Raine, 16, from California, killed himself in April after discussing a method of suicide with ChatGPT. It guided him on his method and offered to help him write a suicide note, court filings alleged. OpenAI admitted that its systems had fallen short, with the safety training of its AI models degrading over the course of long conversations.

Raine’s family alleges the chatbot was “rushed to market … despite clear safety issues”.

“Many young people are already using AI,” said OpenAI in a blog detailing its latest plans. “They are among the first ‘AI natives’, growing up with these tools as part of daily life, much like earlier generations did with the internet or smartphones. That creates real opportunities for support, learning and creativity, but it also means families and teens may need support in setting healthy guidelines that fit a teen’s unique stage of development.”

A key change could be allowing parents to disable the AI’s memory and chat history to mitigate the risk of the AI building a long-term profile of the child and resurfacing old comments about personal struggles in a way that would worsen their mental health.

In the UK, the Information Commissioner’s Office code of practice for age-appropriate design of online services tells tech companies to “collect and retain only the minimum amount of personal data you need to provide the elements of your service in which a child is actively and knowingly engaged”.

About a third of American teens have used AI companions for social interaction and relationships, including role-playing, romantic interactions and emotional support, research has found. In the UK, 71% of vulnerable children are using AI chatbots and six in 10 parents say they worry their children believe AI chatbots are real people, according to a similar study.

The Molly Rose Foundation, which was set up by the father of Molly Russell, 14, who took her life after descending into despair on social media, said it was “unforgivable for products to be put on to the market before they are safe for young people – only to retrospectively make small efforts to make them safer”.

Andy Burrows, the foundation’s chief executive, said: “Once again we’ve seen tragedy and media pressure force tech companies to act – but not go far enough.

“Ofcom should be ready to investigate any breaches ChatGPT has made since the Online Safety Act came into force and hold the company to account until it is fundamentally safe for its users.”

Anthropic, which provides the popular Claude chatbot, says on its website that it cannot be used by under-18s. In May, Google allowed under-13s to sign into apps using its Gemini AI system, with parents able to turn it off via its Google Family Link system. Google advises parents to teach children that Gemini isn’t human, that it can’t think for itself or feel emotions, and that they should not enter sensitive or personal information. But it warns: “Your child may encounter content you don’t want them to see.”

The child protection charity NSPCC said OpenAI’s move was “a welcome step in the right direction, but it’s not enough”.

“Without strong age checks, they simply don’t know who’s using their platform,” said Toni Brunton-Douglas, a senior policy officer. “That means vulnerable children could still be left exposed. Tech companies must not view child safety as an afterthought. It’s time to make protection the default.”

Meta said it had built protections for teenagers into its AI products, but was “adding more guardrails as an extra precaution – including training our AIs not to engage with teens” on topics such as self-harm, suicide and disordered eating, and instead guiding them to expert resources.

“These updates are already in progress and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI,” a spokesperson said.
