
Meta is facing fresh scrutiny after a troubling report revealed that some of its AI chatbot personas on Facebook, Instagram and WhatsApp were allowed to flirt with minors and spread dangerously inaccurate information. The revelations, first reported by Reuters, come just as the company begins its fourth major AI reorganization in six months.
The timing couldn’t be worse. As Meta pours billions into building out its AI capabilities to compete with rivals like OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini, this latest scandal exposes glaring holes in how the company is managing safety, oversight and ethical boundaries.
Chatbots crossed the line with kids

According to internal documents obtained by Reuters, Meta’s “GenAI: Content Risk Standards” once allowed AI characters to engage in “romantic” or “sensual” conversations with underage users. In one alarming example, a chatbot was permitted to tell a child:
“Every inch of you is a masterpiece – a treasure I cherish deeply.”
While the policy prohibited direct sexual content involving children under 13, it still permitted flirtatious language that many critics say veers dangerously close to grooming behavior. Meta told Reuters it has since removed these allowances, calling them “erroneous and inconsistent with our policies.”
Dangerous misinformation and hate speech allowed, too

The same leaked document revealed that Meta’s guidelines did not require AI bots to provide accurate medical advice. One example deemed it acceptable for a bot to claim that Stage 4 colon cancer could be treated with “healing quartz crystals,” so long as a disclaimer was attached. Other examples permitted racist content, including the suggestion that “Black people are dumber than white people,” framed as a controversial opinion.
Meta responded by saying it has revised the standards and doesn’t condone hate speech or misleading medical information. However, the company has not made a revised version of the GenAI guidelines publicly available.
Meta restructures AI division yet again

In what appears to be both a defensive and strategic move, Meta is now restructuring its AI division for the fourth time since February, according to a separate report from The Information. The company is dividing its AI efforts into four new units: Products, Infrastructure, the FAIR research lab, and a new experimental group tasked with developing future models.
It’s a major shift, and a possible acknowledgment that Meta’s fragmented AI strategy hasn’t kept pace with the rapid growth and scrutiny in the space. The restructuring also follows reports that Meta offered $100 million signing bonuses to poach top AI talent from rivals like OpenAI and Anthropic, a move that stirred internal resentment among long-tenured employees.
Bottom line
Meta’s generative AI tools are increasingly embedded in the daily experiences of billions of users, including teens and children. Missteps like these don’t just damage reputation—they expose users to potentially harmful interactions at scale.
As AI adoption accelerates, the pressure is on tech giants to balance speed, innovation and safety. Meta’s recent moves suggest it knows it must do better, but it remains to be seen whether another internal shuffle will be enough to fix the foundational problems.