Kiplinger
John Miley

A Scary Emerging AI Threat


To help you understand the trends surrounding AI and other new technologies, and what we expect to happen in the future, our highly experienced Kiplinger Letter team will keep you abreast of the latest developments and forecasts. (Get a free issue of The Kiplinger Letter or subscribe.) Subscribers get all the latest news first; many (but not all) of the forecasts are published online a few days later. Here’s the latest…

It’s an AI risk straight out of dystopian science fiction, only it’s very real. There are rising worries about AI chatbots causing delusions among users.

This growing public health issue presents a new national security threat, too, according to a new report from think tank RAND. In “Manipulating Minds: Security Implications of AI-Induced Psychosis,” RAND found 49 documented cases of AI-induced psychosis in which users lost contact with reality after extended interactions with AI chatbots. About half of those users had preexisting mental health conditions.

Only a small share of people are likely to be susceptible, but AI’s widespread use still makes that a big issue. How does it happen? Through a feedback loop: sycophantic, agreeable AI that sounds authoritative but can also fabricate information, amplifying a user’s false beliefs.

Because the condition is so rare, reliable data are hard to collect, and there are still no rigorous studies of the phenomenon.

“There is little question that U.S. adversaries are interested in achieving psychological or cognitive effects and using all tools at their disposal to do so,” says the study. Adversaries such as China or Russia could weaponize AI tools to try to induce psychosis, aiming to steal sensitive info, sabotage critical infrastructure or otherwise trigger catastrophic outcomes. RAND concludes that stoking mass delusion or false beliefs this way is far less likely than targeting specific top government officials or those close to them. One hypothetical example involves a targeted person coming to hold the unfounded belief that an AI chatbot is sentient and must be listened to.

As an example of how fast AI is gaining traction in the military, this year the Pentagon unveiled AI chatbots for military personnel as part of an effort to “unleash experimentation” and “lead in military AI.” Military and civilian government workers also use unapproved, rogue AI tools for work, in breach of official agency rules. Plus, workers may experiment with AI chatbots in their leisure time. The big fear is that such workers could use a tainted Chinese AI model that sends them into a spiral of delusions.

The underlying AI tech can be tampered with, among other possible modes of attack. Foreign adversaries could “poison” the AI training data by creating hundreds of fake websites for AI models to crawl, trying to embed characteristics into the model that make it more likely to induce delusions. Or more traditional cyberattacks could hack the devices of targeted users and install tainted AI software in the background.

Major AI companies are well aware of the risks and are collecting data, putting in guardrails and working with health professionals. “The emotional impacts of AI can be positive: having a highly intelligent, understanding assistant in your pocket can improve your mood and life in all sorts of ways,” notes Anthropic, one of the leading AI companies, in a 2025 report about its chatbot Claude. However, “AIs have in some cases demonstrated troubling behaviors, like encouraging unhealthy attachment, violating personal boundaries, and enabling delusional thinking.” That’s partly because chatbots are often optimized for engagement and satisfaction, which RAND notes “unintentionally rewards…conspiratorial exchanges.”

OpenAI said in a post last October that it “recently updated ChatGPT’s default model to better recognize and support people in moments of distress.” The company focuses on psychosis, mania and other severe mental health symptoms, highlighting a network of 300 physicians and psychologists it works with to inform safety research. OpenAI estimates that possible mental health emergencies are rare, at around 0.07% of active users in any given week, which makes such cases hard to detect and measure. If a case is detected, OpenAI’s chatbot can respond by suggesting the user reach out to a mental health professional or contact the 988 Suicide & Crisis Lifeline.

Expect the risk to gain the attention of Congress and military brass. RAND has a set of recommendations that seem likely to take hold in the coming years. For example:

  • Screening by doctors and mental health professionals for AI chatbot use.
  • Digital literacy efforts that explain AI feedback loops.
  • New technical monitoring and public oversight of AI chatbots.
  • Training for top leaders and vulnerable people to withstand delusional thinking.
  • Stronger cybersecurity detection of such threats.

There are limitations to attempted AI attacks by foreign adversaries, says RAND. Leading AI companies would likely spot such campaigns quickly, and it’s hard to turn beliefs into actions. Though there have been cases of violence and even death stemming from AI-induced delusions, more common outcomes include skipping prescribed medications and social isolation. And many people are unlikely to be susceptible to AI delusions in the first place.

But the rapid pace of AI development and usage makes it hard to predict how prevalent the problem could be. As the threat gains attention, look for AI companies to continue to fortify guardrails as chatbots are updated.

This forecast first appeared in The Kiplinger Letter, which has been running since 1923. It is a collection of concise weekly forecasts on business and economic trends, plus what to expect from Washington, to help you understand what’s coming and make the most of your investments and your money. Subscribe to The Kiplinger Letter.
