Paedophiles using AI to 'de-age' celebrities, 'nudify' clothed children, and depict sexual abuse scenarios

The Internet Watch Foundation warned that AI-generated child sexual abuse images "threaten to overwhelm" the Internet. Warning: this article contains information readers may find distressing.

The "worst nightmares" about artificial intelligence (AI) are coming true, a watchdog has warned, as it revealed it had found thousands of AI-generated child sex abuse images on a dark web forum.

The Internet Watch Foundation (IWF) announced on Wednesday (25 October) that in September alone, nearly 3,000 child sexual abuse images had been shared on the online platform. More than half depicted primary school-aged children, and more than one in five depicted the most serious kind of imagery, including rape, torture, and bestiality.


The majority of the illegal images, created by AI, were so realistic that even trained analysts would find it difficult to distinguish them from photographs. The charity warned this could distract police from protecting real victims, as officers will need to spend time differentiating between real and artificially generated children.

Some of the images found were of real children, with paedophiles inputting existing images of sexual abuse into AI models, which then produce new depictions of them. In other cases, offenders were using AI to "nudify" pictures of fully-clothed children.

The developing technology was also being used to "de-age" celebrities, meaning they would appear as children, and the paedophiles would then depict them in sexual abuse scenarios.

Paedophiles are using AI to 'de-age' celebrities and 'nudify' clothed children. Credit: Getty Images

The IWF warned in July that "astoundingly realistic" images of child sex abuse were being found online, and now, in the space of three months, it said the situation has escalated to the point where these images threaten to "overwhelm" the Internet.


Susie Hargreaves, chief executive of the IWF, said: "Our worst nightmares have come true. We are seeing criminals deliberately training their AI on real victims’ images. Children who have been raped in the past are now being incorporated into new scenarios because someone, somewhere, wants to see it.

“This is not a hypothetical situation. We are seeing this happening now. The numbers are rising, and the sophistication and realism of this imagery is reaching new levels. If we don’t get a grip on this threat, this material threatens to overwhelm the Internet.”

Meanwhile, Ian Critchley, National Police Chiefs’ Council lead for child protection in the UK, said that the "rapid" increase in online child sexual abuse offending over the past five years serves to "normalise" the rape and abuse of real-life children.

This means more paedophiles could start turning to real-life offending, putting children in increasing danger, a concern Baroness Finlay previously raised with NationalWorld.


Mr Critchley added that there are "new methods and ways of offending being discovered on a regular basis", with some perpetrators "making their own imagery to their own specifications" and others producing AI imagery of children "for commercial gain".

"We continue to work at pace to ensure that industry prevents these appalling images being created, shared, and distributed on their platforms, and to ensure we identify and bring to justice the abhorrent offenders who seek to abuse children," he said.

"It is also why the Online Safety Act is the most important piece of legislation in many years; to ensure the safety of all children from abusive and harmful material, an increasing number of which is AI-generated."

The Online Safety Act aims to keep children safer online and hold social media bosses more responsible for the content published on their platforms. However, campaigners have said the legislation does not go far enough in cracking down on AI, particularly as offending in this realm is developing so quickly.


A Home Office spokesperson said: "Online child sexual abuse is one of the key challenges of our age, and the rise in AI-generated child sexual abuse material is deeply concerning. We are working at pace with partners across the globe to tackle this issue, including the Internet Watch Foundation.

"Last month, the Home Secretary announced a joint commitment with the US Government to work together to innovate and explore development of new solutions to fight the spread of this sickening imagery."
