British Technology Companies and Child Safety Agencies to Test AI's Capability to Generate Exploitation Images

Tech firms and child safety agencies will be granted authority to assess whether artificial intelligence systems can generate child abuse images under new UK legislation.

Significant Increase in AI-Generated Illegal Material

The announcement came alongside findings from a protection monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the changes, authorities will permit approved AI companies and child protection organizations to inspect AI models – the foundational technology behind chatbots and visual AI tools – and verify that they have adequate safeguards to stop them from producing depictions of child exploitation.

The measure is "ultimately about preventing exploitation before it occurs," stated Kanishka Narayan, adding: "Specialists, under rigorous protocols, can now detect the danger in AI models promptly."

Tackling Legal Challenges

The changes address a legal obstacle: because it is against the law to create and possess CSAM, AI developers and others could not generate such images as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM was uploaded online before addressing it.

This law is designed to avert that issue by helping to halt the creation of such images at their origin.

Legal Structure

The government is introducing the amendments as revisions to the crime and policing bill, which also establishes a prohibition on possessing, creating or distributing AI systems developed to create exploitative content.

Real-World Impact

Recently, the minister visited the London headquarters of Childline, where he heard a simulated call to advisors involving a report of AI-based exploitation. The call portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of themselves.

"When I learn about young people facing extortion online, it is a source of intense anger for me and of justified concern for parents," he said.

Alarming Statistics

A leading online safety foundation reported that instances of AI-generated abuse material – where a single reported web page may include multiple images – had risen significantly so far this year.

Cases of the most severe material – the gravest form of exploitation – rose from 2,621 visual files to 3,086.

  • Female children were overwhelmingly victimized, accounting for 94% of illegal AI images in 2025
  • Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "represent a vital step to ensure AI tools are secure before they are launched," commented the chief executive of the online safety organization.

"Artificial intelligence systems have made it possible for victims to be victimised repeatedly with just a few simple actions, giving criminals the capability to produce potentially endless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Material which further exploits victims' trauma, and renders children, particularly female children, less safe both on and offline."

Counseling Session Data

Childline also published details of support sessions in which AI was mentioned. AI-related harms discussed in the conversations include:

  • Using AI to evaluate weight, body and looks
  • Chatbots discouraging young people from consulting trusted adults about abuse
  • Facing harassment online with AI-generated material
  • Online blackmail using AI-manipulated pictures

Between April and September this year, Childline delivered 367 support interactions in which AI, chatbots and associated topics were discussed – four times as many as in the same period last year.

Fifty percent of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapeutic applications.

Joann Johnson

Experienced journalist specializing in Central European affairs and political commentary.