New law could help tackle AI-generated child abuse at source, says watchdog

Wednesday, 12 November 2025 00:52

By Mickey Carroll, science and technology reporter

Groups tackling AI-generated child sexual abuse material could be given more powers to protect children online under a proposed new law.

Organisations such as the Internet Watch Foundation (IWF), as well as AI developers themselves, would be able to test AI models' ability to create such content without breaking the law.

That would mean they could tackle the problem at the source, rather than having to wait for illegal content to appear before they deal with it, according to Kerry Smith, chief executive of the IWF.

The IWF monitors and removes child abuse images online, taking down hundreds of thousands every year.

Ms Smith called the proposed law a "vital step to make sure AI products are safe before they are released".

How would the law work?

The changes are due to be tabled today as an amendment to the Crime and Policing Bill.

The government said designated bodies could include AI developers and child protection organisations, and it will bring in a group of experts to ensure testing is carried out "safely and securely".

The new rules would also mean AI models can be checked to make sure they don't produce extreme pornography or non-consensual intimate images.

"These new laws will ensure AI systems can be made safe at the source, preventing vulnerabilities that could put children at risk," said Technology Secretary Liz Kendall.

"By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought."

AI abuse material on the rise

The announcement came as new data was published by the IWF showing reports of AI-generated child sexual abuse material have more than doubled in the past year.

According to the data, the severity of material has intensified over that time.

The most serious category A content - images involving penetrative sexual activity, sexual activity with an animal, or sadism - has risen from 2,621 to 3,086 items, accounting for 56% of all illegal material, compared with 41% last year.


The data showed girls were the most common targets, accounting for 94% of illegal AI images in 2025.

The NSPCC called for the new laws to go further and make this kind of testing compulsory for AI companies.

"It's encouraging to see new legislation that pushes the AI industry to take greater responsibility for scrutinising their models and preventing the creation of child sexual abuse material on their platforms," said Rani Govender, policy manager for child safety online at the charity.

"But to make a real difference for children, this cannot be optional.

"Government must ensure that there is a mandatory duty for AI developers to use this provision so that safeguarding against child sexual abuse is an essential part of product design."


(c) Sky News 2025: New law could help tackle AI-generated child abuse at source, says watchdog
