The Independent UK
Nicole Wootton-Cane

Reports of AI-generated child sexual abuse imagery soar by 154% in a year

Experts have warned of a “rapid, frightening advancement” in the ability to artificially generate child sexual abuse material (CSAM), as new data showed reports have surged by more than 150 per cent in one year.

The Internet Watch Foundation (IWF) said it had received 491 reports that contained realistic AI-generated CSAM in 2025, up from 193 in 2024.

In its annual report, released on Thursday, the charity said during 2025 it had seen AI CSAM take on “new forms”, including on AI companion sites and adverts on mainstream social media sites.

The IWF said video content is a particular concern, with analysts reporting 260 times more AI-generated child sexual abuse videos (3,443) in 2025 than in 2024, when they saw just 13.

Videos generated from text prompts and images are on the rise with new technology, as well as videos generated by nudifying bots.

AI-generated imagery was more likely than non-AI imagery to be assessed as the most serious category of content (Category A), the report said, although most (47 per cent) of the AI-generated images assessed as criminal over the past two years have been Category C.

“AI-generated child sexual abuse material represents a significant and evolving risk, as advances in technology make it easier to produce realistic and harmful content at scale,” the foundation said.

“Such imagery can contribute to the ongoing exploitation of children, cause lasting harm to victims, and place increasing pressure on safeguarding, legal and regulatory systems. Imagery can be found on both the clear web and dark web.”

It added AI imagery is often created using real victims of abuse and can still cause “profound and enduring” harm for those children depicted.


“The harm caused by AI-driven sexual imagery of children is compounded by the ways in which this content is created,” it added.

“It often draws on real children’s faces or bodies, either directly within the images or indirectly through the data used to train AI systems. Highly realistic material can be generated by modifying existing child sexual abuse content or by using simple prompts to create new abusive imagery within seconds, enabling rapid and large-scale production.”

The Online Safety Act, which came into force in March last year, requires social media companies to find and remove content such as child sexual abuse material.

But critics say the Act does not go far enough. Ian Russell, whose daughter Molly took her own life aged 14 in November 2017 after viewing harmful content on social media, said the introduction of the new powers “should have been a watershed moment” but that children and families have been let down by a “lack of ambition”.

The UK government has also announced plans to allow designated authorities to test and scrutinise AI models to ensure they cannot be used to generate sexual imagery of children.

But the IWF called on tech companies to take responsibility for making sure the products they develop have safety baked into their design.

“While a welcomed step, there is no legal requirement for companies to conduct or share pre-deployment safety testing of AI systems,” it said.

“We continue to call on companies to make sure the products they build and make available to the global public are safe by design.”

A government spokesperson said: "We thank the Internet Watch Foundation for their vital work. UK law is clear - creating, possessing or distributing child sexual abuse material is illegal, including AI-generated content, and platforms must proactively identify and remove it under the Online Safety Act. We're going further - making it illegal to possess, create or distribute AI tools designed to generate this content, and to possess AI 'paedophile manuals' teaching others how to use AI to abuse children.

"We will use every power available to hunt down perpetrators, shut these networks down, and protect every child."

If you are experiencing feelings of distress, or are struggling to cope, you can speak to the Samaritans, in confidence, on 116 123 (UK and ROI), email jo@samaritans.org, or visit the Samaritans website to find details of your nearest branch.

If you are based in the USA, and you or someone you know needs mental health assistance right now, call or text 988, or visit 988lifeline.org to access online chat from the 988 Suicide and Crisis Lifeline. This is a free, confidential crisis hotline that is available to everyone 24 hours a day, seven days a week.

If you are in another country, you can go to www.befrienders.org to find a helpline near you.
