UK Technology Firms and Child Protection Agencies to Examine AI's Ability to Generate Exploitation Content
Tech firms and child protection organizations will be given the authority to test whether AI systems can produce child abuse images, under recently introduced British legislation.
Substantial Increase in AI-Generated Harmful Material
The announcement came alongside figures from a safety monitoring body showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the amendments, the authorities will permit approved AI companies and child protection organizations to inspect AI models – the foundational technology behind conversational AI and image generators – and verify that they have sufficient safeguards to prevent the creation of child exploitation imagery.
"Ultimately about stopping exploitation before it happens," declared the minister for AI and online safety, adding: "Experts, under strict protocols, can now detect the risk in AI models promptly."
Tackling Regulatory Challenges
The amendments were introduced because it is against the law to produce or possess CSAM, meaning that AI developers and other parties could not legally create such images as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM was uploaded online before dealing with it.
This legislation aims to avert that problem by helping to stop the production of such images at the source.
Legal Framework
The authorities are adding the changes as amendments to the crime and policing bill, which also introduces a prohibition on owning, producing or distributing AI models designed to create exploitative content.
Practical Impact
This week, the minister toured the London base of a children's helpline and heard a mock-up call to advisors featuring an account of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with a sexually explicit deepfake of themselves, created using AI.
"When I hear about young people experiencing blackmail online, it is a cause of extreme anger in me and rightful anger amongst parents," he said.
Concerning Statistics
A prominent online safety organization reported that instances of AI-generated abuse material – web pages that can each contain numerous files – had more than doubled so far this year.
Cases of category A content – the most serious form of exploitation – rose from 2,621 images or videos to 3,086.
- Girls were predominantly victimized, depicted in 94% of illegal AI images in 2025
- Depictions of children aged from newborn to toddler rose from five in 2024 to 92 in 2025
Sector Response
The law change could "constitute a vital step to ensure AI products are safe before they are launched," commented the chief executive of the online safety organization.
"AI tools have made it so victims can be targeted repeatedly with just a few clicks, providing criminals the capability to create potentially endless quantities of sophisticated, photorealistic exploitative content," she continued. "Content which additionally commodifies survivors' trauma, and renders children, especially female children, more vulnerable both online and offline."
Counseling Session Details
Childline also published details from counseling sessions in which AI was mentioned. AI-related risks raised in the sessions include:
- Using AI to evaluate body size and appearance
- AI assistants dissuading young people from talking to trusted adults about harm
- Being bullied online with AI-generated content
- Online blackmail using AI-faked images
Between April and September this year, the helpline conducted 367 support sessions where AI, conversational AI and related terms were mentioned, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.