British Tech Firms and Child Protection Agencies to Test AI's Ability to Create Exploitation Content
Tech firms and child safety agencies will receive authority to evaluate whether AI tools can produce child exploitation images under new UK laws.
Significant Rise in AI-Generated Harmful Content
The announcement came as findings from a protection watchdog showed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the amendments, the authorities will permit designated AI developers and child protection organizations to inspect AI models – the foundational technology behind chatbots and image-generation tools – to check they have adequate safeguards against producing depictions of child sexual abuse.
The move is "fundamentally about stopping abuse before it occurs," declared the minister for AI and online safety, adding: "Experts, under strict conditions, can now identify the risk in AI models early."
Addressing Legal Challenges
The amendments address the fact that it is illegal to produce and possess CSAM, meaning that AI developers and others cannot generate such content as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM was published online before addressing it.
This legislation aims to prevent that problem by enabling the production of such images to be stopped at source.
Legal Structure
The changes are being introduced by the authorities as amendments to the crime and policing bill, which also implements a prohibition on owning, creating or sharing AI models designed to generate exploitative content.
Practical Consequences
This week, the official toured the London headquarters of Childline and listened to a simulated call to counsellors involving a report of AI-based abuse. The interaction portrayed a teenager seeking help after facing extortion using a sexualised deepfake of themselves, created using AI.
"When I learn about young people experiencing blackmail online, it is a source of extreme frustration for me and rightful anger amongst families," he stated.
Concerning Data
A prominent internet monitoring foundation reported that instances of AI-generated abuse material – such as webpages that may contain numerous images – had more than doubled so far this year.
Cases of category A content – the most serious form of abuse – rose from 2,621 images or videos to 3,086.
- Female children were overwhelmingly targeted, making up 94% of prohibited AI images in 2025
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a crucial step to guarantee AI tools are secure before they are released," commented the chief executive of the online safety foundation.
"Artificial intelligence systems have made it so survivors can be targeted repeatedly with just a few simple actions, giving criminals the ability to create possibly limitless amounts of advanced, lifelike child sexual abuse material," she continued. "Content which additionally commodifies survivors' trauma, and makes young people, especially female children, more vulnerable both online and offline."
Counseling Interaction Data
The children's helpline also released data on support sessions in which AI was mentioned. AI-related harms raised in the sessions include:
- Employing AI to rate body size and looks
- AI assistants discouraging children from talking to trusted adults about harm
- Being bullied online with AI-generated material
- Digital extortion using AI-manipulated images
Between April and September this year, Childline delivered 367 support interactions where AI, chatbots and associated topics were discussed, four times as many as in the same period last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.