AI-generated child sexual abuse images which are ‘astoundingly realistic’ found online, warns watchdog

The Internet Watch Foundation has warned that AI-created images could normalise child sex abuse - and make it harder for authorities to identify and protect real victims

Artificial Intelligence is already being used to generate child sexual abuse material online, a watchdog has warned.

The Internet Watch Foundation (IWF) said it has found “astoundingly realistic” AI-created images of children online - which it warned could distract from protecting real victims, as authorities will need to spend time differentiating between real and artificial children.


There is also the danger, the organisation continued, that these types of images contribute to the normalisation of child sexual abuse - and could lead to offenders moving to real life. “We know there is a link between viewing child sexual abuse imagery and going on to commit contact offences against children,” a spokesperson said.

In its report, the IWF confirmed that it had found seven URLs containing AI-generated child sex abuse material. It could not yet say in which countries the URLs were hosted, but said the images contained both Category A and B material – some of the most severe kinds of abuse – with children as young as three years old depicted.

Analysts also discovered an online “manual” written by offenders, which aimed to help other criminals create more realistic results when using AI.

AI is already being used to generate child sexual abuse material online, a watchdog has warned. Credit: Mark Hall / NationalWorld

Currently, the number of these types of images being identified online is small, but the IWF warned the potential exists for criminals to produce “unprecedented quantities of life-like child sexual abuse imagery”. It voiced concerns that the rapidly advancing nature of AI - alongside its increasing accessibility - could mean the problem escalates to the point where the law is unable to keep up.


IWF chief executive Susie Hargreaves has, as a result, called on Prime Minister Rishi Sunak to treat the issue as a “top priority” when Britain hosts the world’s first global summit on AI this autumn. She said: “AI is getting more sophisticated all the time. We are sounding the alarm and saying the PM needs to treat the serious threat it poses as the top priority when he hosts the first global AI summit later this year.

“For members of the public, some of this material would be utterly indistinguishable from a real image of a child being sexually abused - and having more of this material online makes the internet a more dangerous place.”

She added that the continued abuse of this technology “could have profoundly dark consequences”, with “more and more people exposed to this harmful content”, and warned that a failure to act would be “devastating” for the safety of children online.

Rishi Sunak recently announced that the UK will host a global summit on safety in artificial intelligence in the autumn. Credit: Getty Images

Dan Sexton, chief technical officer at the IWF, said that if AI imagery becomes “indistinguishable” from real imagery, one of his main worries is that the organisation’s analysts - who are responsible for finding and removing child sexual abuse material on the internet - “could waste precious time” attempting to protect children that do not exist.


“This would mean real victims could fall through the cracks, and opportunities to prevent real-life abuse could be missed,” he said.

Meanwhile, a government spokesperson told Sky News: “AI-generated child sexual exploitation and abuse content is illegal, regardless of whether it depicts a real child or not, meaning tech companies will be required to proactively identify content and remove it under the Online Safety Bill, which is designed to keep pace with emerging technologies like AI.

“The Online Safety Bill will require companies to take proactive action in tackling all forms of online child sexual abuse including grooming, live-streaming, child sexual abuse material and prohibited images of children - or face huge fines.”

NationalWorld previously spoke to Baroness Ilora Finlay, who warned that paedophiles were using the Metaverse to virtually sexually assault children - and that this could push them closer towards “real life abuse”. She explained: “It is now possible for an individual to order a virtual reality experience of abusing the image of a child whom they know.


“And since the intention of virtual reality is to trick the human nervous system into experiencing perceptual and bodily reactions, while such a virtual assault may not involve physical touching, the psychological, neurological, and emotional experience can be similar to a physical assault.”

She warned that this fuels an escalating compulsion: “Once the offender has engaged with virtual reality abuse material, there is no desire to go back to 2D material. [Instead], offenders report that they want more. In the case of VR, that would be moving to real life abuse.”

The National Crime Agency also previously warned that the sexual abuse of children in online spaces is “increasing in scale, severity, and complexity”, as it estimated there are more than half a million people in the UK who pose a sexual risk to children.

Speaking to NationalWorld, a spokesperson said the industry is “detecting and reporting an increasing number of illegal images” each year, meaning “the threat to children is more severe than it has ever been”.
