
According to new court filings, Meta Platforms Inc. (NASDAQ:META) allegedly hid "causal" findings of social media harm and shut down an internal study on the mental health effects of its platforms, Facebook and Instagram.
Facebook, Instagram's Impact On Mental Health
According to a report by Reuters, internal research by Meta indicated that Facebook and Instagram negatively impacted users’ mental health.
This discovery emerged from unredacted court filings in a class action lawsuit by U.S. school districts against Meta and other social media platforms.
‘Project Mercury’
The research, part of a 2020 project named “Project Mercury,” involved collaboration with Nielsen to assess the effects of deactivating Facebook and Instagram.
Findings revealed that users who stopped using these platforms reported lower levels of depression and anxiety. Despite these results, Meta halted further research, internally attributing the negative findings to the prevailing media narrative surrounding the company.
Internally, staff confirmed the validity of the research to Nick Clegg, Meta’s former president of global affairs, according to the report. Nevertheless, Meta told Congress it could not quantify the harm to teenage girls. Meta spokesperson Andy Stone said the study was stopped because of flawed methodology and emphasized the company’s commitment to improving product safety.
What Are The Allegations?
The lawsuit, filed by the law firm Motley Rice on behalf of U.S. school districts, accuses Meta, Alphabet Inc.'s (NASDAQ:GOOG) (NASDAQ:GOOGL) Google, TikTok, and Snap Inc. (NYSE:SNAP) of concealing the risks of their products. Allegations include encouraging underage use, failing to address child abuse content, and prioritizing growth over safety.
The allegations against Meta come amid ongoing scrutiny of social media's impact on mental health. Earlier this year, Meta CEO Mark Zuckerberg argued that social media is not inherently harmful, suggesting that its impact depends on how it is used.
Meta has faced criticism for not doing enough to protect young users from online exploitation. In response, the company has made efforts to enhance safety tools and remove harmful accounts.
The company's AI chatbot guidelines have also been under scrutiny, particularly regarding how they handle sensitive issues like child exploitation.
Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
Photo courtesy: Shutterstock