UK Tech Firms and Child Safety Agencies to Test AI's Capability to Create Abuse Images

Tech firms and child safety agencies will be granted permission to evaluate whether AI tools can generate child exploitation images under recently introduced British legislation.

Significant Increase in AI-Generated Harmful Material

The announcement came as a safety monitoring body published findings showing that reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Structure

Under the changes, the authorities will permit approved AI developers and child protection groups to examine AI systems – the underlying systems behind chatbots and image generators – to ensure they have sufficient protective measures to prevent them from producing depictions of child sexual abuse.

"Ultimately about preventing abuse before it occurs," declared Kanishka Narayan, noting: "Experts, under rigorous protocols, can now detect the risk in AI models early."

Tackling Regulatory Challenges

The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and other parties could not generate such images as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it. This law aims to avert that problem by helping to stop the creation of those images at source.

Legislative Structure

The changes are being added by the government as amendments to the crime and policing bill, which also establishes a prohibition on possessing, creating or sharing AI systems designed to generate child sexual abuse material.

Practical Consequences

Recently, the minister visited the London headquarters of a children's helpline and listened to a mock-up call to counsellors featuring a report of AI-based abuse.
The call portrayed an adolescent seeking help after facing extortion using an explicit deepfake of himself, constructed using AI.

"When I hear about children facing blackmail online, it is a source of extreme frustration to me and rightful anger amongst parents," he stated.

Alarming Data

A leading internet monitoring organization reported that instances of AI-generated exploitation content – such as online pages that may include numerous files – had more than doubled so far this year. Cases of the most severe material – the gravest form of abuse – rose from 2,621 images or videos to 3,086.

Girls were overwhelmingly targeted, accounting for 94% of illegal AI images in 2025.

Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025.

Industry Response

The legislative amendment could "represent a crucial step to ensure AI products are secure before they are released," commented the chief executive of the online safety organization.

"AI tools have made it so that victims can be victimised all over again with just a few clicks, giving offenders the capability to create potentially endless quantities of advanced, lifelike exploitative content," she continued. "Content which further commodifies victims' trauma, and makes young people, especially girls, more vulnerable both online and offline."

Counseling Session Information

The children's helpline also published details of support sessions where AI was mentioned. AI-related risks raised in the sessions include:

Employing AI to evaluate body size, physique and appearance
Chatbots dissuading children from consulting safe guardians about harm
Being bullied online with AI-generated material
Digital extortion using AI-manipulated images

Between April and September this year, Childline conducted 367 support sessions where AI, chatbots and associated topics were discussed, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to psychological wellbeing, including the use of chatbots for support and AI therapy apps.