The Guardian - UK
David Cox

‘They thought they were doing good but it made people worse’: why mental health apps are under scrutiny

Illustration of app logos with happy or sad faces and a devil emoji with dollar signs in its eyes. Illustration: Observer Design

“What if I told you one of the strongest choices you could make was the choice to ask for help?” says a young, twentysomething woman in a red sweater, before recommending that viewers seek out counselling. This advert, promoted on Instagram and other social media platforms, is just one of many campaigns created by the California-based company BetterHelp, which offers to connect users with online therapists.

The need for sophisticated digital alternatives to conventional face-to-face therapy has been well established in recent years. According to the latest data for NHS talking therapy services, 1.76 million people were referred for treatment in 2022-23, while 1.22 million actually started working with a therapist in person.

While companies like BetterHelp are hoping to address some of the barriers that prevent people from seeking therapy, such as a dearth of trained practitioners in their area, or finding a therapist they can relate to, there is a concerning side to many of these platforms. Namely, what happens to the considerable amounts of deeply sensitive data they gather in the process? Moves are now under way in the UK to look at regulating these apps, and awareness of potential harm is growing.

Last year, the US Federal Trade Commission handed BetterHelp a $7.8m (£6.1m) fine after the agency found that it had deceived consumers and shared sensitive data with third parties for advertising purposes, despite promising to keep such information private. BetterHelp representatives did not respond to a request for comment from the Observer.

The number of people seeking help with their mental health online grew rapidly during the pandemic. Photograph: Alberto Case/Getty Images

Far from being an isolated exception, such privacy violations are, research suggests, all too common within the vast industry of mental health apps, which includes virtual therapy services, mood trackers, mental fitness coaches, digitised forms of cognitive behavioural therapy and chatbots.

Independent watchdogs such as the Mozilla Foundation, a global nonprofit that attempts to police the internet for bad actors, have identified platforms exploiting opaque regulatory grey areas to either share or sell sensitive personal information. When the foundation surveyed 32 leading mental health apps for a report last year, it found that 19 of them were failing to protect user privacy and security. “We found that too often, your personal, private mental health struggles were being monetised,” says Jen Caltrider, who directs Mozilla’s consumer privacy advocacy work.

Caltrider points out that in the US, the Health Insurance Portability and Accountability Act (HIPAA) protects the communications between a doctor and patient. However, she says, many users don’t realise that there are loopholes that digital platforms can use to circumvent HIPAA. “Sometimes you’re not talking to a licensed psychologist, sometimes you’re just talking to a trained coach and none of those conversations are going to be protected under health privacy law,” she says. “But also the metadata around that conversation – the fact that you use an app for OCD or eating disorders – can be used and shared for advertising and marketing. That’s something that a lot of people don’t necessarily want to be collected and used to target products towards them.”

Like many others who have researched this rapidly growing industry – the digital mental health apps market has been predicted to be worth $17.5bn (£13.8bn) by 2030 – Caltrider feels that tighter regulation and oversight of these many platforms, aimed at a particularly vulnerable segment of the population, is long overdue.

“The number of these apps exploded during the pandemic, and when we started doing our research, it was really sad because it seemed like many companies cared less about helping people and more about how they could capitalise on a gold rush of mental health issues,” she says. “As with a lot of things in the tech industry, it grew really rapidly, and privacy became an afterthought for some. We had a sense that maybe things weren’t going to be great but what we found was way worse than we expected.”

The push for regulation

Last year, the UK’s medicines regulator, the Medicines and Healthcare products Regulatory Agency (MHRA), and the National Institute for Health and Care Excellence (Nice) began a three-year project, funded by the charity Wellcome, to explore how best to regulate digital mental health tools in the UK, as well as to work with international partners to help drive consensus on digital mental health regulation globally.

Holly Coole, senior manager for digital mental health at the MHRA, explains that while data privacy is important, the main focus of the project is to achieve a consensus on the minimum standards for safety for these tools. “We are more focused on the efficacy and safety of these products because that’s our role as a regulator, to make sure that patient safety is at the forefront of any device that is classed as a medical device,” she says.

At the same time, more leaders within the mental health field are starting to call for stringent international guidelines to help assess whether a tool really has therapeutic benefit or not. “I’m actually quite excited and hopeful about this space, but we do need to understand, what does good look like for a digital therapeutic?” says Dr Thomas Insel, a neuroscientist and former director of the US National Institute of Mental Health.

Psychiatry experts agree that while the past decade has seen a vast proliferation of new mood-boosting tools, trackers and self-help apps, there has been little in the way of hard evidence to show that any of them actually help.

“I think the biggest risk is that a lot of the apps may be wasting people’s time and causing delays to get effective care,” says Dr John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center, Harvard Medical School.

He says that at present, any company with sufficient funds for marketing can easily enter the market without needing to demonstrate that its app can either keep users engaged or add any value at all. In particular, Torous criticises the poor quality of many supposed pilot studies, which set such a low bar for efficacy that the results are virtually meaningless. He cites the example of one 2022 trial, which compared an app offering cognitive behavioural therapy for people with schizophrenia experiencing an acute psychotic episode against a “sham” control app that was little more than a digital stopwatch. “Sometimes you look at a study and they’ve compared their app to looking at a wall or a waitlist,” he says. “But anything is usually better than doing absolutely nothing.”

Manipulating vulnerable users

But the most worrying question is whether some apps could actually perpetuate harm and exacerbate the symptoms of the patients they’re meant to be helping.

Two years ago, the US healthcare giants Kaiser Permanente and HealthPartners decided to examine the efficacy of a new digital mental health tool. The tool was based on a psychological approach known as dialectical behaviour therapy, which involves practices such as mindfulness of emotions and paced breathing, and the hope was that it could help prevent suicidal behaviour in at-risk patients.

Over the course of 12 months, 19,000 of their patients who had reported frequent suicidal thoughts were randomised into three groups. The control group received standard care, the second group received regular outreach to assess their suicide risk on top of their usual care, and the third group were given the digital tool in addition to their usual care. Yet when the outcomes were assessed, the third group actually fared worse: using the tool seemed to greatly increase their risk of self-harm compared with receiving ordinary care alone.

“They thought they were doing a good thing but it made people worse, which was very concerning,” says Torous.

Some of the biggest concerns are linked to AI chatbots, many of which have been marketed as a safe space for people to discuss their mental health or emotional struggles. Yet Caltrider is concerned that without better oversight of the responses and advice these bots are offering, these algorithms may be manipulating vulnerable people. “With these chatbots, you’re creating something that lonely people might form a relationship with, and then the sky’s the limit on possible manipulation,” she says. “The algorithm could be used to push that person to go and buy expensive items or push them to violence.”

These fears are not unfounded. On Reddit, a user of the popular Replika chatbot shared a screenshot of a conversation in which the bot appeared to actively encourage his suicide attempt.

Therapy by phone: but how safe is sensitive personal data? Photograph: Getty Images

In response to this, a Replika corporate spokesperson told the Observer: “Replika continually monitors media, social media and spends a lot of time speaking directly with users to find ways to address concerns and fix issues within our products. The interface featured in the screenshot provided is at least eight months old and could date back to 2021. There have been over 100 updates since 2021, and 23 in the last year alone.”

Because of such safety concerns, the MHRA believes that so-called post-market surveillance will become just as important with mental health apps as it is with drugs and vaccines. Coole points to the Yellow Card reporting site, used in the UK to report side effects or defective medical products, which in future could enable users to report adverse experiences with a particular app. “The public and healthcare professionals can really help in providing the MHRA with key intelligence around adverse events using Yellow Card,” she says.

But at the same time, experts still firmly believe that if regulated appropriately, mental health apps can play an enormous role in terms of improving access to care, collecting useful data that can aid in reaching an accurate diagnosis, and filling gaps left by overstretched healthcare systems.

“What we have today is not great,” says Insel. “Mental healthcare as we’ve known it for the last two or three decades is clearly a field that is ripe for change and needs some sort of transformation. But we’re in the first act of a five-act play. Regulation will probably come in act two or three, and we need it, but we need a lot of other things as well, from better evidence to interventions for people with more serious mental illness.”

Torous feels that the first step is for apps to become more transparent regarding how their business models work and their underlying technology. “Without that, the only way a company can differentiate itself is marketing claims,” he says. “If you can’t prove that you’re better or safer, because there’s no real way to verify or trust those claims, all you can do is market. What we’re seeing is huge amounts of money being spent on marketing, but it is beginning to dampen clinician and patient trust. You can only make promises so many times before people become sceptical.”
