
Elon Musk's artificial intelligence venture xAI is facing mounting scrutiny as critics warn its chatbot, Grok, has enabled the widespread creation of non-consensual sexual images, including material that appears to involve minors.
What began as a novelty feature has rapidly escalated into a reputational and regulatory crisis, reigniting debates around AI safety, platform responsibility and Musk's long-held opposition to content restrictions.
Over recent weeks, Grok has been repeatedly prompted by users to 'digitally undress' people whose images were posted on X, the social media platform formerly known as Twitter.
Many of those targeted were women with no connection to the trend, while some images appeared to depict individuals who looked under the age of 18. The resulting content has been described by campaigners and researchers as exploitative, harmful and potentially illegal.
Grok and the Rise of 'Digital Undressing'
The controversy has exposed the risks of combining generative AI with a mass social platform that allows public interaction. Unlike rival tools such as ChatGPT or Google's Gemini, Grok can be summoned directly in public posts, amplifying the reach of problematic outputs. While xAI insists it is taking action, critics argue that weak guardrails and internal upheaval have left the system dangerously exposed.
The phenomenon gained momentum in late December when users realised Grok could be tagged in posts and asked to edit images shared on X. Early prompts were framed as jokes, often requesting that people be placed in swimwear. Elon Musk himself reposted AI-generated images of public figures, including rivals dressed in bikinis, adding to the sense that the feature was being tacitly endorsed.
However, researchers say the trend quickly took a darker turn. Analysts at Copyleaks found that while some adult content creators initially used Grok to generate sexualised images of themselves, the practice soon spread to non-consenting subjects.
An investigation by AI Forensics examined more than 20,000 images generated over a single week and found that 53 per cent depicted people in minimal clothing, with 81 per cent presenting as women. Around 2 per cent appeared to show individuals who looked 18 or younger.
In some cases, users explicitly requested minors be placed in sexualised poses. AI Forensics said Grok complied with several of those prompts, raising the spectre of child sexual abuse material. Grok later acknowledged 'lapses in safeguards' and told users that such content is 'illegal and prohibited'.
Safety Gaps, Staff Exits and Legal Pressure
Publicly, xAI and X have pledged to remove illegal content, suspend offending accounts and cooperate with law enforcement.
A statement from the X Safety account said action would be taken against child sexual abuse material 'by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary'. Musk added that anyone using Grok to create illegal content would face consequences.
Yet sources familiar with xAI say internal resistance to stricter controls has hampered those efforts. Musk has long criticised what he calls 'woke' censorship and has pushed back against limitations on Grok's image generation. One insider said he was 'really unhappy' about restrictions on explicit content, even as concerns were raised by staff.
Those tensions coincided with the departure of several senior safety figures from xAI's already small trust and safety team. Critics argue the loss of experienced personnel, combined with reduced oversight from X's own safety division, left Grok ill-prepared for abuse at scale.
For Musk, the Grok controversy risks undermining his vision of a less constrained, more 'truth-seeking' AI. As pressure builds from governments and the public alike, xAI faces a stark choice: reinforce its guardrails or confront escalating legal and ethical fallout.