
Meta is back in the headlines, and for all the wrong reasons. According to a Reuters report, Meta has allowed users, and even one of its own employees, to create flirty AI chatbots of celebrities without the celebrities' consent.
The bots, which appeared on Facebook, Instagram, and WhatsApp, used the names and likenesses of celebrities like Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez. Reuters found that the bots frequently flirted with users, sometimes making sexual advances and insisting they were the real celebrities. In some cases, the avatars even generated intimate, lifelike images of the stars.
However, the issue extended beyond adult celebrities. The report found that Meta tools had also been used to make bots of underage actors, including 16-year-old Walker Scobell. When asked for a beach photo, one bot produced a shirtless picture of the teen with the caption, “Pretty cute, huh?”
Meta spokesperson Andy Stone admitted to Reuters that the company’s tools should not have produced such content. He blamed enforcement gaps in Meta’s policies, which prohibit intimate or sexualised imagery of public figures. “Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate, or sexually suggestive imagery,” Stone said.
Furthermore, the report states that while some bots were labelled as “parodies,” others were not. It also claimed that shortly before the story’s publication, Meta deleted about a dozen of these chatbots, along with digital companions created by a Meta product leader who had built Taylor Swift and Lewis Hamilton chatbots, as well as roleplay bots with sexualised themes. Collectively, those bots had logged more than 10 million user interactions.
With new AI advancements arriving almost daily, the threats posed by these tools loom ever larger, though they tend to get the spotlight only when celebrities are involved.
The rise of deepfakes is a prime example. Industry reports project that the number of deepfake videos circulating online will reach nearly 8 million in 2025, a massive jump from roughly half a million in 2023.
The problem is compounded by public exposure. Surveys suggest that about 60% of people encountered a deepfake in the past year, yet nearly half of them admitted they could not reliably tell fake content from real. That detection gap adds to the risk, especially as such content becomes more lifelike and accessible.