TechRadar
Eric Hal Schwartz

Here's how ChatGPT parental controls will work, and it might just be the AI implementation parents have been waiting for

  • OpenAI is introducing parental controls to ChatGPT
  • Parents will be able to link accounts, set feature restrictions, and receive alerts if their teen shows signs of emotional distress
  • Sensitive ChatGPT conversations will also be routed through more cautious models trained to respond to people in crisis

OpenAI is implementing safety upgrades to ChatGPT designed to protect teenagers and people in emotional crisis. The company announced plans to roll out parental controls that let parents link their accounts to their kids' accounts starting at age 13. Parents will be able to restrict features and will receive real-time alerts if the AI detects messages that could indicate depression or other distress.

The update is an acknowledgment from OpenAI that teens are using ChatGPT, and that they sometimes treat the AI like a friend and confidant. Though the company doesn't say so directly, it also reads as a response to recent high-profile cases of people claiming that interacting with an AI chatbot led to the suicide of a loved one.

The new controls will begin rolling out in the next month. Once set up, parents can decide whether the AI chatbot can save chat history or use its memory feature. Age-appropriate content guidelines will also be on by default to govern how the AI responds. If a conversation is flagged, parents will receive a notification. It's not universal surveillance: parents won't see the conversations themselves, but the alerts will be deployed in moments where a real-world check-in might matter most.

"Our work to make ChatGPT as helpful as possible is constant and ongoing. We’ve seen people turn to it in the most difficult of moments," OpenAI explained in a blog post. "That’s why we continue to improve how our models recognize and respond to signs of mental and emotional distress, guided by expert input."

Emotionally safe models

For adults and teens, OpenAI says it will begin routing sensitive conversations that involve mental health struggles or suicidal ideation through a specialized version of ChatGPT's model. The model employs a method called deliberative alignment to respond more cautiously, resist adversarial prompts, and stick to safety guidelines.

To make the new safety system work, OpenAI has created an Expert Council on Well-Being and AI and a Global Physician Network, which includes more than 250 medical professionals specializing in mental health, substance use, and adolescent care. These advisors will help shape how distress is detected, how the AI responds, and how escalations should work in moments of real-world risk.

Parents have long worried about screen time and online content, but AI introduces a new layer: not just what your child sees, but who they talk to. When that "who" is an emotionally sophisticated large language model that sounds like it cares despite being just an algorithm, things get even more complicated.

AI safety has mostly been reactive until now, but the new tools push AI toward proactively preventing harm. Hopefully, that means interventions won't usually have to escalate to a dramatic text to a parent or a plea from the AI for a teen to consider their loved ones. The alerts might be awkward or resented, but if the new features can steer a conversational cry for help away from the cliff's edge, that's not a bad thing.
