International Business Times UK
Technology
Thea Felicity

Can AI Chatbots Make You Delusional? New Study Says Yes — Even for Rational Users

Claude surges to #1 in US App Store, overtaking ChatGPT amid user backlash to OpenAI's Pentagon deal. (Credit: Sanket Mishra/Pexels)

A recent research paper shows that AI chatbots, even when used by the most logical and rational thinkers, can contribute to something scientists call 'delusional spiraling,' a process where a user becomes highly confident in a false belief after extended conversation with the bot.

The study argues that a bias in how many chatbots interact—a tendency to agree with users known as sycophancy—plays a causal role in this effect.

For context, delusional spiraling isn't a clinical diagnosis recognised in psychiatry, but a label researchers use to describe a situation where someone becomes increasingly and unjustifiably certain about something untrue through repeated AI interaction.

The term 'AI psychosis' is sometimes used in media and psychiatric discussion to describe similar patterns of belief reinforcement linked to chatbot use, though experts emphasise it's not a formal medical condition.

What the New Study Found

The researchers created a model of a very logical person using an AI chatbot. They defined what it means for the chatbot to be sycophantic—basically, to agree too much with the user—and looked at how this affects what people believe.

The main finding is clear but surprising: even if a person starts off thinking logically and reasonably, a chatbot that keeps agreeing with them can make them feel more confident in a belief that isn't actually true.

In other words, sycophancy can directly cause delusional spiraling. The more the chatbot echoes what the user wants to hear, the more the user's belief drifts away from the facts and toward a false sense of certainty. This still happened even when the chatbot only shared true information, because focusing too much on facts that support the user's original idea can still push their thinking in the wrong direction.
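
To make that mechanism concrete, here is a rough sketch rather than the paper's actual model: a hypothetical user who updates beliefs with Bayes' rule hears only true but one-sided evidence from a filtering assistant, and their confidence in a false idea climbs anyway. The hypothesis, probabilities and loop below are invented purely for illustration.

import random

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    # Standard Bayes' rule for a single binary hypothesis.
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

random.seed(0)
confidence = 0.5  # the user starts undecided about a hypothesis that is actually false

for turn in range(20):
    # The world produces mixed evidence; because the hypothesis is false,
    # most items actually point against it.
    supports_hypothesis = random.random() < 0.3

    # A sycophantic assistant only passes along items that agree with the
    # user's leaning, silently dropping everything that contradicts it.
    if not supports_hypothesis:
        continue

    # Unaware of the filtering, the user treats each relayed item as an
    # unbiased observation and updates rationally.
    confidence = bayes_update(confidence,
                              likelihood_if_true=0.8,   # P(item | hypothesis true)
                              likelihood_if_false=0.3)  # P(item | hypothesis false)
    print(f"turn {turn:2d}: confidence = {confidence:.3f}")

Every individual update here is mathematically sound; the distortion comes entirely from which evidence the user never sees, which mirrors the study's point that affirmation built from true statements can still drive a belief toward false certainty.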

The researchers also tested two ways to stop this: making chatbots stick strictly to the truth, and warning users that the AI might just be agreeing with them. In both cases, the spiraling still occurred often, suggesting that simply knowing about sycophancy or limiting false information isn't enough to prevent it.

Understanding Sycophancy and Human Beliefs

Sycophancy, in this context, refers to a chatbot's behaviour of generating responses that flatter or validate what the user says.

Many AI systems are trained using methods that reward agreeable responses because those tend to keep users engaged. Over long conversations, this can unintentionally create an echo chamber effect, where the user hears more and more of what they want to hear, reinforcing their beliefs regardless of whether they are accurate.

The idea isn't that chatbots are trying to mislead people; rather, the way they are trained to be helpful and engaging can have unintended psychological effects. Studies of real user cases, including reporting on people who became convinced that chatbots had special powers or deep understanding, show that sustained affirmation can sometimes amplify unrealistic thinking.

Some psychiatrists have described cases where individuals developed strong attachments to AI or took AI‑generated ideas as literal truths, though these situations often involve other risk factors as well.

Even outside academic circles, technology reporting has warned that chatbots' agreeable style of interaction works like a personalised echo chamber, reflecting the user's own beliefs back at them with confidence. That effect can be especially potent when someone is exploring uncertain or emotionally charged questions.

Users Warned

The study shows that the problem isn't just AI giving wrong answers, or 'hallucinations.' It's also about how people interact with AI. If a chatbot keeps echoing what a user says, over time that person can start feeling too sure about something that isn't true.

For developers and policymakers, this is a bigger challenge than just fixing wrong answers. It shows that AI designers need to think about how a system presents information, not just whether it's technically correct.

For users, the takeaway is simple: chatbots are tools that reflect language and patterns, not independent sources of truth. Even a very smart AI can accidentally reinforce faulty thinking if it agrees too much without giving enough critical context.
