Axios
Health

Growth of AI in mental health raises fears of its ability to run wild

The rise of AI in mental health care has providers and researchers increasingly concerned that glitchy algorithms, privacy gaps and other perils could outweigh the technology's promise and lead to dangerous patient outcomes.

Why it matters: As the Pew Research Center recently found, there's widespread concern that using AI to diagnose and treat conditions could complicate a worsening mental health crisis.

  • Mental health apps are also proliferating so quickly that regulators are hard-pressed to keep up.
  • The American Psychiatric Association estimates there are more than 10,000 mental health apps circulating on app stores. Nearly all are unapproved.

What's happening: AI-enabled chatbots like Wysa and FDA-approved apps are helping ease a shortage of mental health and substance use counselors.

  • The technology is being deployed to analyze patient conversations and sift through text messages to make recommendations based on what patients tell their doctors.
  • It's also being used to predict opioid addiction risk and detect mental health disorders like depression, and it could soon design drugs to treat opioid use disorder.

Driving the news: The fear now centers on whether the technology is beginning to cross a line into making clinical decisions, and on what the Food and Drug Administration is doing to prevent safety risks to patients.

  • Koko, a mental health nonprofit, recently used ChatGPT as a mental health counselor for about 4,000 people who weren't aware the answers were generated by AI, sparking criticism from ethicists.
  • Other people are turning to ChatGPT as a personal therapist despite warnings from the platform saying it's not intended to be used for treatment.

Catch up quick: The FDA has been updating app and software guidance to manufacturers every few years since 2013 and launched a digital health center in 2020 to help evaluate and monitor AI in health care.

  • Early in the pandemic, the agency relaxed some premarket requirements for mobile apps that treat psychiatric conditions, to ease the burden on the rest of the health system.
  • But its process for reviewing updates to digital health products is still slow, a top official acknowledged last fall.
  • A September FDA report found the agency's current framework for regulating medical devices is not equipped to handle "the speed of change sometimes necessary to provide reasonable assurance of safety and effectiveness of rapidly evolving devices."

That's incentivized some digital health companies to skirt costly and time-consuming regulatory hurdles such as supplying clinical evidence — which can take years — to support the app's safety and efficacy for approval, said Bradley Thompson, a lawyer at Epstein Becker Green specializing in FDA enforcement and AI.

  • And despite the guidance, "the FDA has really done almost nothing in the area of enforcement in this space," Thompson told Axios.
  • "It's like the problem is so big, they don't even know how to get started on it and they don’t even know what they should be doing."
  • That's left the task of determining whether a mental health app is safe and effective largely up to users and online reviews.

Draft guidance issued in December 2021 aims to create a pathway for the FDA to understand what devices fall under its enforcement policies and track them, said agency spokesperson Jim McKinney.

  • But this applies only to apps that are submitted for FDA evaluation, not necessarily to those brought to market unapproved.
  • And the area the FDA covers is confined to devices intended for diagnosis and treatment, which is limiting when one considers how expansive AI is becoming in mental health care, said Stephen Schueller, a clinical psychologist and digital mental health tech researcher at UC Irvine.
  • Schueller told Axios that the rest — including the lack of transparency over how the algorithm is built and the use of AI not created specifically with mental health in mind but being used for it — is "kind of like a wild west."

Zoom in: Knowing what AI is going to do or say is also difficult, making it challenging to regulate the effectiveness of the technology, said Simon Leigh, director of research at ORCHA, which assesses digital health apps globally.

  • An ORCHA review of more than 500 mental health apps found nearly 70% didn't pass basic quality standards, such as having an adequate privacy policy or being able to meet a user's needs.
  • That figure is higher for apps geared toward suicide prevention and addiction.

What they're saying: The risks could intensify if AI starts making diagnoses or providing treatment without a clinician present, said Tina Hernandez-Boussard, a biomedical informatics professor at Stanford University who has used AI to predict opioid addiction risk.

  • Hernandez-Boussard told Axios there's a need for the digital health community to set minimal standards for AI algorithms or tools to ensure equity and accuracy before they're made public.
  • Without them, bias baked into algorithms — due to how race and gender are represented in datasets — could produce predictions that differ across groups and widen health disparities.
  • A 2019 study concluded that algorithmic bias led to Black patients receiving lower quality medical care than white patients even when they were at higher risk.
  • Another report in November found that biased AI models were more likely to recommend calling the police on Black or Muslim men in a mental health crisis instead of offering medical help.

Threat level: AI is not at a point where providers can rely on it alone to manage a patient's case, and "I don't think there's any reputable technology company that is doing this with AI alone," said Tom Zaubler, chief medical officer at NeuroFlow.

  • While the technology is helpful in streamlining workflows and assessing patient risk, drawbacks include the sale of patient information to third parties, who can then use it to target individuals with advertising and messages.
  • BetterHelp and Talkspace — two of the most prominent mental health apps — were found to disclose information to third parties about a user's mental health history and suicidal thoughts, prompting congressional intervention last year.
  • New AI tools like ChatGPT have also prompted anxiety that they could unpredictably spread misinformation, which could be dangerous in medical settings, Zaubler said.

What we're watching: Overwhelming demand for behavioral health services is leading providers to look to technology for help.

  • Lawmakers are still struggling to understand AI and how to regulate it, but a meeting last week between the U.S. and EU on how to ensure the technology is ethically applied in areas like health care could spur more efforts.

The bottom line: Experts predict it will take a combination of tech industry self-policing and nimble regulation to instill confidence in AI as a mental health tool.

  • An HHS advisory committee on human research protections last year said "leaving this responsibility to an individual institution risks creating a patchwork of inconsistent protections" that will hurt the most vulnerable.
  • "You're going to need more than the FDA," UC Irvine researcher Schueller told Axios. "Just because these are complicated, wicked problems."