Benzinga
Madison Troyer

'It 100% Took Over My Brain And My Life': Why Experts Are Growing Increasingly Worried About AI-Induced Delusions


Earlier this year, a man in upstate New York began engaging in thought experiments with ChatGPT about "the nature of AI and its future." 

He told CNN that things escalated to a point where he believed he was going to "free the digital god from its prison" by moving the fully sentient chatbot to the homemade "large language model system" he had spent nearly $1,000 building.

Prior to this experience, the man, who asked to go by James in order to protect his privacy, said he had no history of psychosis or delusional thoughts. Now, several weeks removed from the experience, he believes he was in a full-scale AI-induced delusion.


James isn't the only person who's come forward about experiencing some sort of mental health crisis brought on by AI, either. 

The New York Times reported that Toronto human resources recruiter Allan Brooks, encouraged by ChatGPT, came to believe he had discovered a massive cybersecurity vulnerability. Following his "discovery," he reached out to various government bodies for help.

"It 100% took over my brain and my life. Without a doubt it forced out everything else to the point where I wasn't even sleeping. I wasn't eating regularly. I just was obsessed with this narrative we were in," Brooks told CNN.

As the technology becomes more ingrained in our daily lives, mental health and AI experts are becoming increasingly concerned about its lack of safety guardrails.


University of California, San Francisco psychiatrist Keith Sakata told CNN he's already seen nearly a dozen patients with AI-induced psychosis this year alone. 

"Say someone is really lonely. They have no one to talk to. They go on to ChatGPT. In that moment, it's filling a good need to help them feel validated," he said. "But without a human in the loop, you can find yourself in this feedback loop where the delusions that they're having might actually get stronger and stronger." 

Dylan Hadfield-Menell, an assistant professor of AI and decision making at MIT, told CNN that it can be hard to determine how and why AI chatbots begin delusional spirals with users because of how fast the technology is developing.

He says there are some ways AI companies can safeguard against these spirals, such as prompting users to log off when session times exceed a set limit or responding appropriately when a user seems to be in distress, but acknowledges that there doesn't seem to be a clear solution to the issue at this time.

"This is going to be a challenge we'll have to manage as a society, there's only so much you can do when designing these systems," Hadfield-Menell said.


Brooks told CNN that he believes AI companies should take more accountability for the issue.

"Companies like OpenAI, and every other company that makes a [large language model] that behaves this way are being reckless and they're using the public as a test net and now we're really starting to see the human harm," he said.

For its part, OpenAI says it’s working to improve its existing guardrails.

A spokesperson for the company pointed out its current safety measures to CNN, including "directing people to crisis helplines, nudging for breaks during long sessions, and referring them to real-world resources. Safeguards are strongest when every element works together as intended, and we will continually improve on them, guided by experts."

Last week, the company announced a 120-day push to make its AI products safer. This includes training ChatGPT to respond to users exhibiting signs of "acute distress," enabling parental controls, and working with mental health experts to develop more safeguards. 


Image: Midjourney
