Everybody Loves Your Money
Brandon Marcus

8 Crisis Hotlines That Were Caught Sharing Private Data

Image Source: 123rf.com

In a world where mental health conversations are finally being taken seriously, crisis hotlines have become sacred spaces—lifelines for people in their darkest, most vulnerable moments. These services are built on trust, privacy, and the promise of confidential support.

But what happens when that trust is broken? In recent years, troubling revelations have surfaced about some crisis hotlines misusing the very data they pledged to protect. The betrayal runs deep, not just as a breach of data, but as a violation of humanity.

1. Crisis Text Line and Its For-Profit Data Arm

Crisis Text Line was once praised for revolutionizing mental health access through texting, offering 24/7 support to people in emotional distress. However, it faced widespread backlash after reports emerged that it had shared anonymized user data with a for-profit company it helped create. That company used the data to train artificial intelligence systems for corporate clients, sparking fears about how these deeply emotional conversations were being mined. Although the hotline insisted the data was scrubbed of identifying details, the very act of transferring such sensitive information blurred ethical boundaries. Following public outrage, the organization cut ties with the company, but the damage to its reputation was already done.

2. National Eating Disorders Association Helpline’s Use of AI

The National Eating Disorders Association (NEDA) made headlines after it replaced its human-operated helpline with an AI chatbot. Soon after, users began reporting that the bot was giving harmful advice, and there were allegations that user interactions were being collected and analyzed without proper consent. Critics accused NEDA of prioritizing automation and efficiency over real human connection and data integrity. Mental health advocates pointed out that individuals seeking help for eating disorders are especially vulnerable and require the highest levels of privacy. Amid mounting pressure, NEDA suspended the AI feature, but questions about the treatment of user data persisted.

3. Trevor Project’s Data Partnerships Under Scrutiny

The Trevor Project, known for supporting LGBTQ+ youth in crisis, faced criticism when it was revealed that it partnered with Palantir, a company known for its work in surveillance and data analytics. The partnership raised eyebrows due to Palantir’s controversial ties with government agencies, including law enforcement and immigration services. Although The Trevor Project maintained that user data was protected, many supporters feared that vulnerable queer youth might unknowingly be exposed to broader data ecosystems. The backlash intensified concerns about how nonprofits balance operational efficiency with ethical responsibility. Trust, especially among marginalized communities, is difficult to rebuild once compromised.

4. BetterHelp’s Marketing Missteps

While not a traditional crisis hotline, BetterHelp offers online counseling services and has marketed itself as a mental health lifeline. The company found itself in hot water after investigations revealed it shared user data with platforms like Facebook and Snapchat for advertising purposes. Users were often unaware that their sign-up information, including IP addresses and usage details, could be leveraged for marketing. Even more concerning, some of the data allegedly came from intake forms where people described their mental health concerns. In response to public scrutiny, BetterHelp updated its privacy policy, but many mental health professionals condemned the practice as a gross violation of client confidentiality.

5. 7 Cups’ Use of Peer Chat Logs

7 Cups, an anonymous emotional support service, was found to be collecting and storing chat logs from peer-to-peer support conversations. While the platform claimed this was for quality assurance and training purposes, many users were unaware that their deeply personal messages were being preserved. Critics argued that this level of surveillance undermined the platform’s claim of being a safe, judgment-free space. The lack of transparency around how long these records were kept or who had access to them added to growing skepticism. Mental health advocates stressed that even peer support must be treated with the same confidentiality standards as professional counseling.

Image Source: 123rf.com

6. iPrevail’s Behavioral Data Sharing

iPrevail, a mental wellness platform that offers interactive therapy modules and peer support, has been scrutinized for how it handles user behavioral data. Reports indicated that the company collected detailed information about users’ emotional states and decision-making patterns. This data was then shared with undisclosed third-party partners, prompting serious concerns about informed consent. Critics said that while the platform provided disclaimers, the fine print was often buried and difficult for users in crisis to parse. The episode reignited debates about whether digital mental health platforms should be regulated like healthcare providers.

7. Talkspace’s Record-Keeping Controversy

Talkspace, another popular online therapy provider, came under fire after former employees alleged that the company stored transcripts of therapy sessions and mined them for business purposes. Though Talkspace claimed that all data use complied with privacy regulations, insiders described a culture that prioritized growth metrics over patient care. The possibility that therapy conversations—intended to be sacrosanct—were being analyzed for profit drew widespread condemnation. Some mental health professionals began advising patients to avoid the platform altogether. The company later revised its policies, but public trust remained shaky.

8. Woebot’s Ambiguous Data Practices

Woebot is an AI-powered mental health chatbot that offers therapeutic support through automated, machine-learning-driven conversations. Though the platform emphasizes that it is not a replacement for a licensed therapist, it does collect significant amounts of user data to refine its services. Questions arose because it was unclear how long data was stored, whether it could be sold, and how thoroughly the information was actually anonymized. The bot’s interactions often touch on sensitive emotional topics, making the lack of clarity around data usage especially concerning. Mental health experts emphasized that even machine-driven therapy must adhere to rigorous ethical standards.

A Fractured Sense of Trust

When someone reaches out to a crisis hotline, they are not just seeking help—they are offering their most vulnerable truths in exchange for compassion, safety, and confidentiality. These revelations about data-sharing and privacy violations have shaken public trust in institutions that once felt untouchable. In the age of AI and analytics, ethical lines can blur fast, but the mental health space must remain a sanctuary. Trust is the foundation of healing, and once it’s compromised, it’s hard to restore.

What are your thoughts on these violations? Share your perspective or leave a comment to continue the conversation.

Read More

10 Privacy Policies That Let Apps Record You Anyway

8 Government Databases That Contain Your Personal Info Right Now

