Technology companies and child protection organizations will be granted authority to evaluate whether AI systems can produce child abuse material under new UK laws.
The announcement coincided with revelations from a protection monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Under the amendments, the authorities will permit designated AI companies and child safety groups to examine AI systems (the underlying technology behind chatbots and image generators) and verify they have adequate protective measures to prevent them from creating images of child sexual abuse.
The measures are "fundamentally about stopping abuse before it occurs," Kanishka Narayan said, adding: "Experts, under rigorous conditions, can now identify the danger in AI models promptly."
The amendments have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and others cannot generate such images as part of a testing regime. Until now, officials had to wait until AI-generated CSAM appeared online before taking action against it.
This legislation aims to prevent that problem by enabling authorities to halt the creation of such material at its source.
The amendments are being introduced as modifications to the criminal justice legislation, which also establishes a prohibition on possessing, creating or distributing AI models designed to produce exploitative content.
This week, the official visited the London base of a children's helpline and listened to a mock-up call to advisers involving a report of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I hear about children experiencing blackmail online, it is a source of intense anger in me and of rightful anger amongst parents," he stated.
A leading online safety foundation reported that instances of AI-generated exploitation material (such as webpages that may contain numerous files) had significantly increased so far this year.
Cases of category A content β the most serious form of abuse β rose from 2,621 images or videos to 3,086.
The legislative amendment could "constitute a vital step to ensure AI products are secure before they are launched," commented the chief executive of the internet monitoring foundation.
"AI tools have made it so survivors can be victimised all over again with just a few simple actions, giving criminals the ability to make potentially endless quantities of sophisticated, photorealistic exploitative content," she added. "Content which further commodifies victims' trauma, and makes children, particularly girls, less safe both online and offline."
Childline also published details of counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:
Between April and September this year, the helpline delivered 367 counselling sessions where AI, chatbots and related terms were discussed, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and of AI therapy apps.