- The Internet Watch Foundation (IWF) has reported a 'frightening' pace of advancement in AI technology, driving a surge in AI-generated child sexual abuse material (CSAM), with reports rising by 154 per cent in a year.
- In 2025, the IWF received 491 reports of realistic AI-generated CSAM, a significant rise from 193 in 2024, with AI-generated video content showing an even more dramatic increase from 13 to 3,443 instances.
- AI-generated imagery is often assessed as the most serious category (Category A) of abuse content, and the IWF warns that this material can be created using real children's faces or bodies, causing profound and enduring harm to victims.
- While the Online Safety Act requires social media companies to remove such content, critics like Ian Russell argue it lacks ambition, and the IWF calls for tech companies to ensure 'safety by design' in their products, noting there is no legal requirement for pre-deployment safety testing of AI systems.
- The UK government has announced plans to make it illegal to possess, create, or distribute AI tools designed to generate CSAM and to possess AI 'paedophile manuals', reaffirming its commitment to prosecuting perpetrators and protecting children.
Reports of AI-generated child sexual abuse imagery soar by 154% in a year