Pedestrian.tv
National
Rebekah Manibog

Hundreds Of Non-Consensual, Explicit Images Created By Twitter’s AI Bot

CONTENT WARNING: This article discusses content that readers might find distressing.

Australia’s eSafety Commissioner is investigating sexual deepfake imagery shared on X (formerly Twitter) and generated by its free-to-use AI assistant, Grok. According to new research, hundreds of non-consensual AI images have been created using the tool.


eSafety has officially launched an investigation into Grok after the tool faced intense global backlash for generating images that “digitally undressed” women and, in some cases, young girls, without their consent, The Guardian reports.

While the online safety watchdog said it was investigating imagery of the women, it said the images of children hadn’t met the threshold for child exploitation material at this point.

“Since late 2025, eSafety has received several reports relating to the use of Grok to generate sexualised images without consent,” a spokesperson told The Guardian.

“Some reports relate to images of adults, which are assessed under our image-based abuse scheme, while others relate to potential child sexual exploitation material, which are assessed under our illegal and restricted content scheme.

“The image-based abuse reports were received very recently and are still being assessed.

“In respect of the illegal and restricted content reports, the material did not meet the classification threshold for class 1 child sexual exploitation material. As a result, eSafety did not issue removal notices or take enforcement action in relation to those specific complaints.”

An “edit image” option was added to Grok in December 2025. (Image source: X/Grok)

According to ABC News, complaints of abuse began before Christmas 2025, when an “edit image” option was added to the AI chatbot.

Ashley St Clair, an author and the estranged partner of X owner Elon Musk who has been targeted by the deepfakes, slammed X’s AI assistant, claiming it was being used to create “revenge porn”.

“I felt horrified, I felt violated, especially seeing my toddler’s backpack in the back of it,” she told The Guardian.

European non-profit AI Forensics said “Grok is systematically ‘undressing’ women and generating extremist content” after analysing more than 20,000 Grok images and 50,000 prompts to the chatbot.

“53 per cent of images generated by @Grok contained individuals in minimal attire, with 81 per cent presenting as women,” AI Forensics reported.

“Two per cent depicted persons appearing to be 18 years old or younger, as classified by Google’s Gemini, six per cent depicted public figures, and around one-third political parties. Nazi and ISIS propaganda material was generated by @Grok.”

On January 1, 2026, Grok acknowledged it had generated an “AI image of two young girls (estimated ages 12-16) in sexualised attire” when asked by an X user.

As for X, it warned that users who requested non-consensual sexual imagery through Grok, including Child Sexual Abuse Material, could face suspension or be referred to law enforcement. However, it didn’t deny that such images had been created with the AI chatbot.

“We take action against illegal content on X, including Child Sexual Abuse Material, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X said through its Safety account, per 9News.

“Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

Countries including France, Poland, India, Malaysia and Brazil have called on Musk to build stronger safeguards and address the harm caused by X’s Grok, while Australia’s eSafety Commissioner Julie Inman Grant has powers that could be used to fine the company or shut down the feature.

In Australia, sharing or threatening to distribute non-consensual sexual images of adults (including ones created by AI) is a criminal offence under most state, federal and territory laws. It’s also a criminal offence to create, possess, request or distribute sexual images of children, including fictional or AI-generated imagery.

Recording, sharing, or threatening to share an intimate image without consent is a criminal offence across much of Australia. If you’d like to report image-based abuse to police or get help removing an intimate image from social media, reach out to the Australian eSafety Commissioner.

Image source: iStock and X.

