Tom’s Guide
Technology
Amanda Caswell

OpenAI just changed ChatGPT for teens — as a mom and AI editor, here’s what it means to me

Woman using ChatGPT AI on a laptop.

As a mother of three grade school children, I’m watching my kids grow up in a world that changes almost as fast as they do. In my work, I spend my days testing, re-testing and reviewing AI, so the news of OpenAI’s new safety protections for ChatGPT users under 18 gave me pause. I both exhaled with relief and braced myself for what’s to come as AI continues to evolve.

OpenAI changes for teens (minimum 13 years of age)

(Image credit: Future)

The new measures laid out for ChatGPT’s teen users are as follows:

  • An age-prediction system: This tool will attempt to determine whether a user is under 18 and, if so, route them to a version of ChatGPT designed with stricter safety rules.
  • Default restrictions when age is uncertain: If a user’s age cannot be determined by the system, it will still treat them as underage, erring on the side of caution.
  • Parental controls: Parents will now be able to link their own account to their teen’s account to apply settings such as disabling or limiting features (e.g., memory, chat history), setting “blackout hours” during which ChatGPT is unavailable to the teen, and receiving notifications if the system detects signs of acute distress in a teen’s conversations.
  • Conversation restrictions: Teen users’ conversations will be restricted: graphic sexual content will be blocked, along with flirtatious chats and discussions of self-harm and suicide. In rare, extreme cases, law enforcement may be contacted if parents cannot be reached and there is a risk of imminent harm.

What ChatGPT for teens says about the future of our children and AI

(Image credit: Twin Design/Shutterstock)

The implications of stronger safeguards go so much further than tech policy decisions. For me, as a mom, they strike at something deep: the balancing act between giving my children freedom to explore, learn on their own and find help when they need it — and protecting them from harm, especially invisible harm, in digital spaces.

Although chatbots have been around for a few years now, they remain fairly new territory as they become more integrated into our lives. We know these tools are not perfect; unexpected outcomes and gaps can still occur, raising the stakes for vulnerable teens. Lawsuits have already been filed (for example, in the case of a 16-year-old whose family alleges ChatGPT contributed to his taking his own life).

These safety measures are a start, and they show that someone is finally thinking about the fragility of adolescence and the unpredictability of teen mental health, implementing oversight without being overbearing.

For me, the question is: how will this be implemented in a way that respects both freedom and protection? How will “age prediction” avoid reinforcing biases? How often will false positives or negatives occur? Will teens feel safe speaking with ChatGPT if they worry the chatbot might alert an adult?

What I hope to see with ChatGPT for teens

(Image credit: Shutterstock)

From a mom’s perspective, here is what I think is important for making these changes meaningful:

  • Transparency and involvement: Clear explanations to parents and teenagers about what these new controls do and when they kick in. Ideally, tools that let teens understand what information is being used (e.g., how the age-prediction system works).
  • Flexibility and age gradations: Not all teens are the same. A 13-year-old and a 17-year-old have different levels of maturity and different needs. The controls should allow nuance, perhaps granting more autonomy as teens get older and prove responsible.
  • Supportive, not punitive: The goal should be making help available, especially in terms of mental health resources. Human oversight when needed, rather than simply restricting or shutting down conversations. If a teen, any teen, not just mine, is distressed, I want the system to guide them to help, not make them feel judged or punished.
  • Safe defaults, strong guardrails: The default experiences should be safe for younger users, but there should also be robust guardrails so that harm is minimized. This means ongoing testing, oversight and responding to unintended effects.
  • Parental education & dialogue: Parents need to be part of the process, knowing how to use the controls, how to talk with their kids about what they do with AI, and building trust so that kids feel safe sharing what they experience online.

The takeaway

I have great relationships with those on the research and development teams in big tech. And I truly believe tech companies like OpenAI have a moral obligation to build the safety net before tragedy strikes (again), not merely in response to it. It’s good that OpenAI is now putting forward tools to protect teenagers, and that it is acknowledging that privacy, freedom and safety are not always aligned.

As parents, our role remains essential. Beyond using parental controls, we need to keep open lines of communication with our kids, teach them critical thinking about what they see and hear (even from AI) and help them understand that when tech fails, it’s okay to reach out to real people, including family, therapists and trusted adults.

