
When Tim Berners-Lee invented the world wide web, he articulated his dream that it would unlock creativity and collaboration on a global scale. But he also wondered “whether it will be a technical dream or a legal nightmare”. History has answered that question with a troubling “both”.
The 2003 Broadway musical Avenue Q brilliantly captured this duality. A puppet singing about the internet cheerfully begins the chorus “the internet is really, really good …” only to be cut off by another puppet who adds “… for porn!” The song illustrates an enduring truth: every new technological network has, ultimately, been used for legal, criminal and should-be-criminal sexual activity.
In the 1980s, even the French government-backed pre-internet network Minitel was taken over by what one publisher described as a “plague” – a “new genre of difficult-to-detect, mostly sexually linked crimes”. This included murders, kidnaps and the “leasing” of children for sexual purposes.
The internet, social media and now large language models are “really, really good” in many ways – but they all suffer from the same plague. And policymakers have generally been extremely slow to react.
The UK’s Online Safety Act was seven years in the making. The protracted parliamentary debate exposed real tensions over how to protect the fundamental rights of free speech and privacy. The act received royal assent in 2023 but is still not fully implemented.
In 2021-22, the children’s commissioner for England led a government review into online sexual harassment and abuse. She found that pornography exposure among young people was widespread and normalised.
Action was slow to follow. Three years after the commissioner’s report, the UK became the first country in the world to introduce laws criminalising tools used to create AI-generated child sexual abuse material, as part of the Crime and Policing Bill. But a year on, the bill is still being debated in parliament.
It takes something really horrible for policymakers to take swift action. As it became clear how widely the xAI chatbot Grok was being used to create non-consensual nudified and sexualised images of identifiable women and children from photographs, it transpired that the provisions of the UK’s Data (Use and Access) Act 2025 that criminalise creating such images had not been activated. Only after widespread outcry did the government bring these provisions into force.
When it comes to the issue of children and sexual images, AI has supercharged every known harm. The Internet Watch Foundation warned that AI was becoming a “child sexual abuse machine”, generating horrific imagery.
The UK public are increasingly in favour of AI regulation. In a 2024 survey of public attitudes to AI, 72% of the British public said that “laws and regulations” would make them more comfortable with AI, up 10 percentage points from 2022. They are particularly concerned about AI deepfakes. But bigger debates about what regulation of the internet means have stymied action.
The free speech question
Some politicians and tech leaders conflate the issue of regulating non-consensual sexual content with the issue of free speech.
Grok’s ability to create sexualised images of identifiable adults and children became evident at the end of last year, reportedly after Elon Musk, founder of xAI, ordered staff to loosen Grok’s guardrails because he was “unhappy about over-censoring”. His view is that only content that breaks the law should be removed, and that any other content moderation is down to the “woke mind virus”. When the controversy erupted, he claimed that critics “just want to suppress free speech”.
Linking regulation to attacks on a “free” internet has a long history, one that tugs at the heartstrings of early internet enthusiasts. According to Tim Berners-Lee’s account, in 1996, when John Patrick, a member of the World Wide Web Consortium, suggested there might be a problem with kids seeing indecent material on the web, “Everyone in the room turned towards him with raised eyebrows: ‘John, the web is open. This is free speech. What do you want us to do, censor it?’”
But the argument that child sexual abuse imagery is on a par with “woke” political criticism is patently absurd. Child sexual abuse material is evidence of a crime, not a form of meaningful expression. Political criticism, even when highly objectionable, involves adults exercising their capacity to form and express opinions.
Placing guardrails on Grok to stop it producing illegal content is not widespread censorship of the internet. But free speech has proved a convenient banner for US resistance to technology regulation, and the US has persistently intervened in EU and UK AI safety debates.
The need for action
X has now announced that it will no longer allow Grok to “undress” photos of real people in jurisdictions where this is illegal. Musk has said that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content”.
Yet reports have continued of the technology being used to produce on-demand sexualised photos. This time, Ofcom seems emboldened and is continuing its investigations, as is the European Commission.
This is a technical challenge as well as a regulatory one. Regulators will need the firepower of the best AI minds and tools to ensure that Grok and other AI tools comply with the law. If they cannot, fines or bans will be the only option. It will be a game of catch-up, like every technology cycle before it, but it will have to be played.
Meanwhile, users will need to decide whether to use the offending models or heed Grok’s pre-backlash exhortation – “If you can’t handle innovation, maybe log off” – and vote with their feet. That’s a collective action problem – a problem even older than the sexual takeover of computer networks.
This article was co-published with LSE Blogs at the London School of Economics.
Helen Margetts has received funding for AI-related research from UK Research and Innovation, and currently receives funding from the Department of Science, Innovation and Technology (DSIT) and the Dieter Schwarz Foundation.
Cosmina Liana Dorobantu has received funding for AI-related research from UK Research and Innovation.
This article was originally published on The Conversation.