British Technology Firms and Child Safety Officials to Examine AI's Capability to Generate Exploitation Content

Technology companies and child safety agencies will be granted permission to assess whether artificial intelligence systems can generate child abuse material under recently introduced UK laws.

Substantial Increase in AI-Generated Harmful Content

The declaration coincided with findings from a protection monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the amendments, the government will permit designated AI companies and child protection organizations to inspect AI models – the underlying technology behind chatbots and image generators – and verify they have adequate safeguards to stop them from creating images of child exploitation.

The change is "ultimately about preventing abuse before it occurs," stated Kanishka Narayan. "Experts, under rigorous protocols, can now detect the danger in AI systems promptly."

Addressing Regulatory Obstacles

The changes have been introduced because it is illegal to produce and possess child sexual abuse material (CSAM), meaning that AI developers and others cannot generate such content as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was published online before acting on it.

This law is aimed at preventing that problem by helping to stop the creation of those materials at their origin.

Legislative Structure

The amendments are being added by the government as modifications to the criminal justice legislation, which is also implementing a ban on owning, producing or sharing AI models designed to generate child sexual abuse material.

Real-World Consequences

Recently, the minister toured the London headquarters of a children's helpline and heard a mock-up call to advisors involving a report of AI-based exploitation. The call depicted a teenager seeking help after facing extortion using an explicit AI-generated image of themselves.

"When I learn about children experiencing extortion online, it is a source of intense anger in me and rightful concern amongst families," he stated.

Alarming Data

A leading internet monitoring organization stated that cases of AI-generated abuse content – such as online pages that may contain numerous images – had significantly increased so far this year.

Instances of the most severe content – the gravest form of exploitation – increased from 2,621 visual files to 3,086.

  • Female children were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
  • Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025

Industry Response

The law change could "represent a crucial step to guarantee AI tools are safe before they are released," stated the chief executive of the online safety foundation.

"AI tools have made it so survivors can be victimised repeatedly with just a few clicks, providing offenders the ability to make potentially limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Material which additionally commodifies victims' suffering, and makes children, particularly girls, more vulnerable on- and offline."

Support Interaction Data

The children's helpline also published data on support interactions in which AI was mentioned. AI-related harms discussed in the sessions include:

  • Employing AI to rate body size and appearance
  • Chatbots dissuading children from consulting safe adults about harm
  • Being bullied online with AI-generated material
  • Digital extortion using AI-manipulated pictures

Between April and September this year, Childline conducted 367 counselling interactions in which AI, chatbots and associated topics were mentioned, four times as many as in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.

Alexis Anthony

A passionate writer and performance coach dedicated to helping others unlock their full potential through actionable advice.