British Technology Firms and Child Protection Officials to Test AI's Ability to Create Abuse Content

Tech firms and child protection organizations will be granted permission to assess whether AI systems can generate child exploitation material under recently introduced British legislation.

Significant Increase in AI-Generated Illegal Material

The declaration coincided with findings from a protection monitoring body showing that reports of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, the government will allow approved AI developers and child protection groups to examine AI models – the underlying systems for conversational AI and image generators – and verify they have adequate safeguards to stop them from creating images of child sexual abuse.

The measure is "ultimately about stopping exploitation before it occurs," stated Kanishka Narayan, who added: "Specialists, under rigorous protocols, can now identify the risk in AI models early."

Addressing Regulatory Challenges

The changes have been implemented because it is against the law to create and possess CSAM, meaning that AI developers and other parties could not generate such content as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was published online before acting on it.

This law is designed to avert that issue by helping to stop the creation of such material at its source.

Legal Framework

The government is introducing the amendments as modifications to criminal justice legislation, which will also establish a ban on possessing, creating or distributing AI models developed to create exploitative content.

Real-World Consequences

Recently, the minister visited the London headquarters of a children's helpline and listened to a mock-up call to counsellors featuring an account of AI-based abuse. The call depicted an adolescent seeking help after facing extortion over a sexualised deepfake of themselves, created using AI.

"When I hear about children facing blackmail online, it causes extreme frustration in me and justified anger amongst families," he stated.

Alarming Statistics

A leading online safety organization reported that instances of AI-generated abuse content – such as online pages that may contain numerous files – had significantly increased so far this year.

Cases of category A content – the most serious form of exploitation – increased from 2,621 visual files to 3,086.

  • Female children were overwhelmingly victimized, accounting for 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025

Industry Response

The law change could "represent a crucial step to guarantee AI products are secure before they are released," stated the head of the internet monitoring organization.

"Artificial intelligence systems have made it so survivors can be targeted repeatedly with just a few simple actions, giving criminals the capability to make potentially limitless amounts of sophisticated, lifelike exploitative content," she continued. "Material which further exploits victims' suffering, and makes children, particularly female children, less safe both online and offline."

Counseling Session Data

Childline also published details of counselling interactions where AI has been mentioned. AI-related risks discussed in the sessions include:

  • Using AI to evaluate weight, physique and looks
  • AI assistants discouraging young people from talking to safe adults about abuse
  • Being bullied online with AI-generated content
  • Digital blackmail using AI-faked pictures

Between April and September this year, Childline delivered 367 support interactions where AI, chatbots and associated terms were discussed, significantly more than in the equivalent timeframe last year.

Fifty percent of the references to AI in the 2025 interactions were connected with mental health and wellbeing, encompassing the use of chatbots for support and AI therapy applications.

Brian White

A seasoned political journalist with a focus on UK policy and international affairs, bringing over a decade of experience.