British Technology Companies and Child Protection Agencies to Examine AI's Ability to Generate Exploitation Images
Technology companies and child protection agencies will be permitted to test whether AI systems can produce child sexual abuse material under newly introduced British laws.
Substantial Increase in AI-Generated Harmful Material
The announcement came as a protection watchdog published findings showing that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the amendments, the government will permit approved AI companies and child protection organisations to examine AI models (the technology underlying chatbots and image generators) and verify that they have adequate safeguards to prevent them from producing depictions of child sexual abuse.
"Fundamentally about preventing abuse before it happens," stated the minister for AI and online safety, adding: "Experts, under strict conditions, can now identify the danger in AI systems promptly."
Tackling Legal Obstacles
The changes have been introduced because producing and possessing CSAM is against the law, meaning that AI developers and others could not create such images as part of an evaluation regime. Previously, authorities could not act until AI-generated CSAM had been published online.
The law aims to avert that problem by enabling testers to halt the production of such material at its source.
Legislative Structure
The government is introducing the changes as amendments to criminal justice legislation, which will also ban possessing, producing or distributing AI systems designed to create exploitative content.
Real-World Consequences
This week, the official visited the London base of a children's helpline and listened to a mock-up call to counsellors featuring a report of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I hear about children experiencing extortion online, it is a cause of extreme frustration in me and rightful anger amongst families," he stated.
Alarming Statistics
A prominent internet monitoring foundation said that reports of AI-generated exploitation content, where a single report can cover a webpage containing multiple images, had more than doubled so far this year.
Reports of the most severe material, the gravest category of abuse, increased from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimised, accounting for 94% of illegal AI images in 2025
- Depictions of infants aged up to two years old increased from five in 2024 to 92 in 2025
Industry Response
The law change could "constitute a vital step to ensure AI tools are safe before they are released", said the chief executive of the internet monitoring organisation.
"Artificial intelligence systems have enabled so survivors can be victimised all over again with just a few clicks, giving criminals the ability to create potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she added. "Material which additionally exploits victims' trauma, and renders young people, especially female children, less safe both online and offline."
Counselling Session Details
The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related risks discussed in the conversations include:
- Using AI to rate weight, body shape and appearance
- Chatbots dissuading children from speaking to trusted adults about harm
- Being bullied online with AI-generated material
- Online blackmail using AI-faked images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related topics were mentioned, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.