The Independent UK
Technology
Anthony Cuthbertson

‘Abusive’ AI undressing trend is taking over X thanks to Elon Musk’s Grok, analysis reveals

A portrait of Elon Musk and a person holding a telephone displaying the xAI logo, in Kerlouan, Brittany, France, on 18 February 2025 - (AFP/Getty)

More than half of all AI-generated images on Elon Musk’s X are of adults and children with their clothes digitally removed, according to new research.

Analysis from the Paris-based non-profit AI Forensics revealed that the degrading trend is dominating the platform, despite the social media firm committing to crack down on illegal content.

“Our analysis of tens of thousands of images generated by Grok quantifies the extent of the abuse,” Paul Bouchaud, a researcher at AI Forensics, said in a statement shared with The Independent.

“Non-consensual sexual imagery of women, sometimes appearing very young, is widespread rather than exceptional, alongside other prohibited content such as Isis and Nazi propaganda – all demonstrating a lack of meaningful safety mechanisms.”

Around 2 per cent of the images generated by Grok depicted people who appeared to be 18 years old or younger, AI Forensics said, while 6 per cent involved public figures.

UK regulator Ofcom noted that it is illegal to create or share non-consensual intimate images or child sexual abuse material, including AI-generated content.

“We are aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children,” an Ofcom spokesperson said.

“We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK.”

In response to a public statement by Ofcom on X, Grok posted an altered image of the Ofcom logo in a bikini.

The European Commission also said on Monday that it was “very seriously” looking into complaints about explicit and non-consensual images on X.

Mr Musk, who took over the platform formerly known as Twitter in 2022, said his company would crack down on the trend.

“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” he posted on X.

An X spokesperson said: “We take action against illegal content on X, including child sexual abuse material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.”

Some cyber experts have claimed that this approach is reactive, calling instead for safety guardrails to be built into AI tools from the start.

“Social media companies need to treat AI misuse as a core trust and safety issue, not just a content moderation challenge,” said Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance.

“Allowing users to alter images of real people without notification or permission creates immediate risks for harassment, exploitation, and lasting reputational harm... These are not edge cases or hypothetical scenarios, but predictable outcomes when safeguards fail or are deprioritised.”
