
More than half of all AI-generated images on Elon Musk’s X are of adults and children with their clothes digitally removed, according to new research.
Analysis from the Paris-based non-profit AI Forensics revealed that the degrading trend dominates the platform, despite the social media firm's commitment to crack down on illegal content.
“Our analysis of tens of thousands of images generated by Grok quantifies the extent of the abuse,” Paul Bouchaud, a researcher at AI Forensics, said in a statement shared with The Independent.
“Non-consensual sexual imagery of women, sometimes appearing very young, is widespread rather than exceptional, alongside other prohibited content such as Isis and Nazi propaganda – all demonstrating a lack of meaningful safety mechanisms.”
Around 2 per cent of the images generated by Grok depicted people who appeared to be 18 years old or younger, AI Forensics said, while 6 per cent involved public figures.
UK regulator Ofcom noted that it is illegal to create or share non-consensual intimate images or child sexual abuse material, including AI-generated content.
“We are aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children,” an Ofcom spokesperson said.
“We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK.”
In response to a public statement by Ofcom on X, Grok posted an altered image of the Ofcom logo wearing a bikini.
The European Commission also said on Monday that it was “very seriously” looking into complaints about explicit and non-consensual images on X.
Mr Musk, who took over the platform formerly known as Twitter in 2022, said his company would crack down on the trend.
“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” he posted on X.
An X spokesperson said: “We take action against illegal content on X, including child sexual abuse material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.”
Some cyber experts have claimed that this approach is reactive, calling instead for safety guardrails to be built into AI tools from the start.
“Social media companies need to treat AI misuse as a core trust and safety issue, not just a content moderation challenge,” said Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance.
“Allowing users to alter images of real people without notification or permission creates immediate risks for harassment, exploitation, and lasting reputational harm... These are not edge cases or hypothetical scenarios, but predictable outcomes when safeguards fail or are deprioritised.”