
The next time you talk with ChatGPT for too long, you may see a prompt to take a break. OpenAI is rolling out several features to encourage healthy use of its popular chatbot.
"You've been chatting for a while — is this a good time for a break?" reads one prompt. ChatGPT will now show reminders during long sessions.
While ChatGPT can be a useful tool, overusing it comes with potential consequences. Research has indicated that overdependence on AI can erode critical thinking, let cognitive skills atrophy, and leave users lonelier.
Given those risks, it's unsurprising that OpenAI would put guardrails in place to encourage healthier use of ChatGPT.
OpenAI discussed how AI can feel more personal and responsive than earlier technologies, alluding to the way an agreeable chatbot can create a reinforcing feedback loop.
"There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency," said OpenAI.
Earlier this year, the tech giant had to adjust ChatGPT because the tool was too agreeable.
ChatGPT has now been trained to detect signs of mental or emotional distress and point people toward appropriate help. OpenAI CEO Sam Altman recently discussed why using ChatGPT as a therapist is a privacy nightmare.
While there may come a time when ChatGPT acts like a therapist, for now it makes sense for the tool to guide users toward evidence-based resources.
If you ask ChatGPT for help with a personal challenge, the tool will now ask questions and encourage you to weigh the pros and cons rather than giving you a direct answer. OpenAI goes so far as to say that, in these cases, "ChatGPT shouldn’t give you an answer."
OpenAI shared that it has convened advisory groups of mental health experts and worked with over 90 physicians to better train ChatGPT to spot and respond to signs of mental or emotional distress.
Should we be afraid of ChatGPT?

OpenAI's GPT-5 is a highly anticipated AI model. Altman has raved about it for months, but he has also warned people about the technology.
In contrast to GPT-4, which Altman said "kind of sucks," GPT-5 is reportedly smarter and, in his words, "feels very fast."
But he also compared GPT-5, which could launch this month and be integrated into ChatGPT and other tools, to the Manhattan Project:
"There are moments in the history of science where you have a group of scientists look at their creation and just say, you know, ‘What have we done?’"
Altman also warned that AI is developing so rapidly it could spiral out of control. "It feels like there are no adults in the room," said the CEO.
Some have called Altman's comments marketing speak, likening them to a salesman saying, "Our prices are so low they’re scary!"
Still, AI’s rapid development poses risks to mental and emotional health, a concern reflected in ChatGPT’s new prompts and reminders to take a break.