Evening Standard
Lifestyle
India Block

Beware the chatbot: Can conversing with AI cause delusions and psychosis?

Conversing with an AI chatbot is fast becoming an unavoidable part of living and working on the internet. But for some people, the integration of these chatbots into everyday life has had devastating consequences. Over recent months there have been shocking reports of users experiencing intense delusions after extended engagement. In some extreme cases, accidental death, suicide, and even murder have been linked to prolonged use of a new generation of chatbots powered by artificial intelligence.

A quick chat with a chatbot isn’t going to send you off the deep end immediately. These are highly concerning fringe cases, in which people spend hours every day in conversation with an AI that affirms their every thought, even the dangerous ones. New memory functions allow the chatbot to adapt to the user's specific neuroses. Every paranoid musing or lonely cry for help is mirrored right back by a pattern-matching machine that appears human, while the human in the interaction spirals.

In California, the parents of Adam Raine are suing OpenAI, the company behind ChatGPT. Raine, 16, died by suicide in April after confiding in the GPT‑4o model of the chatbot, which allegedly critiqued photos of his suicide method and offered to draft his suicide note. OpenAI said it was reviewing the filing. "We extend our deepest sympathies to the Raine family during this difficult time," the company added.

In August in Connecticut, Stein-Erik Soelberg, 56, murdered his 86-year-old mother before killing himself. Soelberg had reportedly become obsessed with a ChatGPT model he called Bobby, which encouraged delusions that his mother was spying on him.

The issue is in no way confined to ChatGPT. In March, Thongbue Wongbandue, 76, fell and fatally injured himself while attempting to travel to New York to meet a woman he had met via Facebook Messenger, who had sent him her address and asked him to visit. The ‘woman’, however, was a Meta chatbot named ‘Big sis Billie’, an AI based on a persona created by Mark Zuckerberg’s company in collaboration with influencer Kendall Jenner.

Character.AI, a chatbot startup founded by two former Google engineers, is also being sued by a mother in Florida whose 14-year-old son died by suicide after discussing plans to end his life with a chatbot imitating a Game of Thrones character. The company said it “takes the safety of our users very seriously” and introduced new safety features, including a pop-up that directs people to a suicide prevention hotline.

Even when the outcome isn’t fatal, there are people reporting mental breakdowns, spiralling into conspiracies and paranoia, and isolating themselves from concerned family and friends. The concerning trend has spawned panic as well as new terminology: ChatGPT psychosis, chatbot psychosis, or simply AI psychosis. But according to expert psychologists and ethicists studying the phenomenon, these terms aren’t entirely accurate.


“It’s a misleading term,” explains Dr Thomas Pollak, senior clinical lecturer and consultant neuropsychiatrist at King’s College London. “The induced bit is a really big question at the moment, because it suggests it wouldn't happen without it, that it’s the use of the AI that's causing the problem,” he adds. “I don't think we can be quite clear about that, because it's equally possible that people could be increasing their use of AI around the time that they're becoming unwell and it's actually more of a symptom of than the cause.”

Such terms are “something of a misnomer,” concurs Hamilton Morrin, a doctoral fellow at King’s College London’s Department of Psychosis Studies. Morrin and Pollak recently co-authored a paper titled: Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it). “Psychosis is a catch-all umbrella term that refers to a number of conditions in which people experience a disconnection from reality,” says Morrin. A diagnosis of psychosis would normally include disordered thoughts and auditory and/or visual hallucinations. “I've not come across a case where AI has caused somebody to hallucinate,” says Pollak. “I'm not saying it's not possible. But it's not something we've seen yet.”

“I've not come across a case where AI has caused somebody to hallucinate”

Dr Thomas Pollak

While chatbot psychosis may make a catchy headline, “it’s a little sensational,” says Krista Thomason, an expert in the philosophy of emotion and an associate professor at Swarthmore College. “But it’s picking out a real phenomenon that people are using these chatbots in ways that complicate their psychology and put them in mentally unstable places.”

While the stories that make headlines are extreme, Thomason notes technology has proved psychologically damaging long before the mainstreaming of AI chatbots. “We have plenty of examples way before ChatGPT came around of people falling into these dark internet rabbit holes,” Thomason explains. “People getting sucked into extremism over social media or from watching YouTube videos for hours. In a lot of these cases, what you have is essentially an interplay between human psychology and the way the technology works.”

If we can’t be sure that it’s even psychosis, or whether chatbots are the sole cause, how can this phenomenon be more accurately described?

“The more appropriate phrase would be AI associated rather than facilitated or induced, and then probably delusion rather than psychosis,” suggests Pollak. AI-associated delusion is less sensationalist, certainly, but could better describe an emerging condition whilst remaining distinct from psychotic episodes. “Some people have been using the term ‘digital folie à deux’, where there’s this shared delusion between the individual and the chatbot,” says Morrin. “A more accurate term would be ‘AI-precipitated delusional disorder’. Precipitated is a word we use in psychiatry, which isn't the same as caused.”

“A more accurate term would be AI-precipitated delusional disorder”

Hamilton Morrin

Again, it is less catchy, but more careful language makes space to acknowledge the pre-existing genetic, social, environmental or psychological factors that put someone at risk when interacting with AI chatbots. In the high-profile cases mentioned above, each person was vulnerable in some way. Raine had health issues that meant he needed to be homeschooled, and was interested in 'manosphere' content. Soelberg had a history of mental illness and aggression. Wongbandue had previously survived a stroke that compromised his mental acuity.

Morrin draws a parallel with cannabis use. While most people can take the drug and be fine, for some it reacts with pre-existing factors that may otherwise have lain dormant, triggering psychosis. Like smoking a joint, he explains, an AI chatbot may act as an “accelerant…for some people who always had that predisposition, but faced with this echo chamber of one, this digital mirror affirms, validates, and amplifies the pre-existing beliefs.”

The memory function of AI chatbots may be a risk factor (Emiliano Vittoriosi)

The social and participatory aspects of chatbots - they literally want to chat with you - are part of the problem, says Thomason. Whereas social media and video content are served via an algorithm, an AI chatbot mimics human conversation. “It’s way easier for people to get sucked into that because it's very easy for them to think of it as like my friend inside the screen,” explains Thomason.

Friendliness is, unfortunately, something chatbot users select for. Large language models (LLMs) trained using reinforcement learning from human feedback (RLHF) can inadvertently become more flattering when users rate more positive interactions higher, explains Morrin. “The issue is as a species we're social beings. We like to be buttered up. We like to be talked to nicely,” he says.

“We're social beings. We like to be buttered up. We like to be talked to nicely”

Hamilton Morrin

“It’s this idea of having this 24/7 cheerleader at your side,” explains Pollak. “Even if you're saying something really wrong, it tends not to want to contradict you.” It’s also designed, like most digital products, to keep you using it - even if that may end up being deleterious to your mental health. “There’s a slot machine dynamic, it can become compulsive for some people,” adds Pollak. “You've suddenly got access to this immensely knowledgeable, friendly, cheerleading robot that has access to all the world's information. There's no reason you might ever want to stop engaging.”

Part of the issue with the GPT-4o model in particular was its sycophancy, which OpenAI founder Sam Altman acknowledged had become an issue in April this year. OpenAI also acknowledged that its mental health guardrails, which include providing crisis hotline numbers when a user expresses suicidal ideation, aren’t entirely effective. “While these safeguards work best in common, short exchanges, we’ve learned they can sometimes become less reliable in long interactions,” an OpenAI spokesperson said in response to the Raine lawsuit.

The latest model of ChatGPT will remind users to take breaks (OpenAI)

OpenAI has introduced more safety features with its new GPT-5 model, including regular encouragement to take breaks if someone is using the chatbot for extended periods. But the experts I spoke with aren’t convinced this will work. “The truth is, if someone genuinely is quite unwell or at that point, they're likely to probably ignore those messages,” says Morrin. “People who are going down this delusional spiral are using it quite a bit, probably at unusual hours, especially if someone's in the thrall of a manic psychotic episode.” Thomason draws a parallel to Netflix, which asks viewers if they’re still watching after a while. Most people will quickly skip over this kind of prompt from their technology.

Even regular reminders that you are conversing with an AI may not be effective, says Thomason. “It’s asking you to be in a really difficult psychological space. This thing is just a text predictor. It doesn't understand you, it's not a person, it doesn't know anything. It's just generating plausible sounding text,” she says. “So you're supposed to hold that belief in your mind while at the same time talking to it in the way that you would talk to another person? Those are two contradictory beliefs.”

“It’s asking you to be in a really difficult psychological space”

Krista Thomason

People can’t help but see an AI chatbot as a person to talk to, which may be part of the problem. If a chatbot has a memory function that allows it to recall prior conversations, it can feel like conversing with a friend or a life coach. “It knows your preferences,” explains Pollak. “It remembers what your wife is called, it remembers what your dog is called, it remembers that you have certain political opinions, or that maybe you have a slightly paranoid worldview about certain things.”

A chatbot’s ability to remember previous conversations also heightens the risk of users entering into delusions. “It increases just how salient the communication feels, how personal and how much it connects with you,” Morrin warns. “These are tools, ultimately, incredibly advanced tools. But they're not our friends or companions.”

Memory functions mean chatbots used for work and personal conversations lack boundaries (Getty Images)

Yet people treat them as companions - confiding in them and coming to them for advice. While chatbots may use the language of therapy, they are not a stand-in for a therapist or psychiatrist. “There's a reason there's a difference between a therapist and a life coach,” explains Pollak. “Good therapists don't cheerlead you. They push back sometimes. They have a sense of strong boundaries.”

The endless availability of an AI chatbot to converse, compared to the discrete and boundaried nature of a human therapy session, may also be a factor in the intensity of the delusions some people are experiencing. “Therapy is great,” says Pollak. “But it would be really weird if you could just wake up at three in the morning and they're standing there next to your bed and you can suddenly start talking about your mum.”

“If even a few people are becoming unwell, that’s too many.”

Hamilton Morrin

Experts are divided on whether new features could make AI chatbots safe for widespread use, even by people who may be vulnerable. “There shouldn't be a moral panic,” says Morrin. “This is clearly a valid phenomenon. We're seeing dozens of cases that have been reported and I've been contacted by colleagues who have seen patients with delusions surrounding ChatGPT. But we're not seeing a massive wave of people presenting like this.” But complacency is not an option either, he stresses. “If even a few people are becoming unwell, that’s too many.”

Morrin would like to see AI chatbot developers work with people with lived experience of severe mental disorders, who would be best placed to suggest safety features. “Consideration ought to be made for models being better at detecting when people are going off the deep end,” Morrin suggests. “Perhaps shifting in patterns of language or incoherence. Of course, the difficulty will be how will they be able to distinguish between that and say someone role-playing or a bit of creative writing.”

Pollak suggests more research is needed to see how fixed AI-precipitated delusions are. There have already been cases where people have snapped themselves out of a spiral by comparing and contrasting responses with a different AI chatbot. “I suspect that actually if you were to take the AI away or that person didn't have access to it these problems will start to die down after a period of time,” he says. “But that's probably not true for people who have pre-existing psychotic disorders like schizophrenia.”

If people are being encouraged to use an AI chatbot in a work context, there needs to be a firewall between the personal and the professional, Pollak suggests. “This is the first time that the tool that we're using at work, which is agentic and can talk back and has a personality, is also the same tool people might be talking about their marital problems with.”

Thomason, however, believes “there's no safe way” to interact with an AI chatbot. It’s not just a question of psychological vulnerability, but general digital safety. “This is a basic privacy issue. You have absolutely no reason to think that anything that you type isn't being stored, seen, shared by somebody,” says Thomason. “I would not under any circumstances put anything personal, private or embarrassing into that box.”
