A tech enthusiast and web developer with over 10 years of experience in helping beginners build their first websites affordably.
Tech firms and child safety agencies will receive permission to evaluate whether artificial intelligence systems can generate child abuse images under recently introduced British laws.
The announcement came as a safety watchdog published findings showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Under the changes, designated AI developers and child safety groups will be permitted to examine AI models – the systems underlying chatbots and image-generation tools – and check that they have adequate safeguards to stop them producing images of child sexual abuse.
The measures are "fundamentally about stopping abuse before it happens," said Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now identify the risk in AI models promptly."
The changes have been introduced because it is against the law to produce and possess CSAM, meaning that AI developers and others cannot create such images as part of an evaluation process. Previously, officials could act only after AI-generated CSAM had been uploaded online.
This law is aimed at averting that problem by enabling experts to stop the creation of such material at source.
The amendments are being introduced by the authorities as modifications to the crime and policing bill, which is also implementing a ban on owning, creating or distributing AI models designed to create exploitative content.
This week, the minister toured the London base of a children's helpline and listened in on a simulated call to advisers involving a report of AI-based exploitation. The mock call depicted an adolescent seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I hear about young people experiencing blackmail online, it is a source of intense anger in me and justified anger amongst families," he said.
A leading online safety organization reported that cases of AI-generated exploitation content – such as web pages that may contain numerous images – had significantly increased so far this year.
Instances of category A content – the most serious form of abuse – rose from 2,621 visual files to 3,086.
The legislative amendment could "constitute a vital step to guarantee AI products are safe before they are released," commented the head of the online safety foundation.
"Artificial intelligence systems have made it so victims can be victimised repeatedly with just a few clicks, giving offenders the capability to create potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Content which additionally exploits survivors' suffering, and makes young people, particularly girls, less safe both online and offline."
Childline also released details of counselling sessions in which AI was mentioned.
Between April and September this year, the helpline conducted 367 support sessions in which AI, chatbots and related topics were discussed, four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for emotional support and AI therapy apps.
Ruth Martin