AI-Generated Child Sexual Abuse Material 'Nightmare' Is Now A Reality: Report

By Hera Rizwan

In its latest report, the U.K.-based Internet Watch Foundation (IWF) has flagged a flood of AI-generated images of child sexual abuse on the internet. The watchdog has urged governments and technology providers to act quickly before this "nightmare" overwhelms law enforcement investigators with an expanding pool of potential victims.

The report highlights that criminals are leveraging downloadable open-source generative AI models capable of producing images, with highly alarming consequences. The technology is now being employed to generate new images featuring previously victimised children, and some offenders are even beginning to offer monthly subscription services for AI-generated child sexual abuse material (CSAM).


What are the key findings of the report?

Back in June, the IWF reported the discovery of seven URLs in the public domain that appeared to contain AI-generated content. In a recent investigation into a dark web forum associated with CSAM, the IWF has uncovered thousands of AI-generated images that are deemed illegal under UK law, shedding light on the extent of AI's application in this context. The dark web is a section of the internet that can only be accessed with a specialised browser.

Here's what the report found:

- A dark web forum dedicated to CSAM, assessed by the IWF, had a total of 20,254 AI-generated images shared within a single month. Of these, 11,108 images were selected for evaluation by IWF analysts because they were explicitly criminal in nature. The remaining 9,146 AI-generated images either did not depict children or depicted children in situations that were evidently non-criminal.

- The watchdog's report also presents substantial evidence that AI-generated CSAM has elevated the risk of re-victimising known survivors of child sexual abuse, and of exposing famous children as well as children associated with the perpetrators.

- According to the report, most AI-generated CSAM found is now realistic enough to be treated as ‘real’ CSAM. "The most convincing AI CSAM is visually indistinguishable from real CSAM, even for trained analysts," the report read.

- The disturbing AI-generated images include depictions of the rape of babies and toddlers, famous preteen children being abused, and BDSM content featuring teenagers. The images likely depicted children aged between 7 and 13 years old, and 99.6 percent of them were female.

- The report added that the technology was also being used to create images of celebrities who have been “de-aged” and subsequently portrayed in explicit situations involving minors. Other examples of CSAM included using AI tools to “nudify” pictures of clothed children found on the internet.

- Perpetrators frequently made reference to "Stable Diffusion", an AI model provided by the UK-based company Stability AI, as observed by the IWF.


"Worst nightmare" is coming true

Susie Hargreaves, the chief executive of the IWF, said that the watchdog’s “worst nightmares have come true”. According to her, the foundation had earlier warned that AI imagery could soon become indistinguishable from real pictures of children suffering sexual abuse, and that we could start to see this imagery proliferating in much greater numbers. "We have now passed that point," she said.

Hargreaves highlighted that this dangerous trend will reopen emotional wounds for victims: not only must they grapple with the distressing knowledge that their abuse might be circulating in the shadows of the internet, but they also face the added threat of encountering fresh images depicting their abuse in previously unimaginable and horrifying ways.

The watchdog has also urged governments to take appropriate steps. Dan Sexton, the watchdog group's chief technology officer, warned that unless action is taken, the deluge of deepfake child sexual abuse images could overwhelm investigators, making it challenging to differentiate between actual children and virtual characters and potentially impeding rescue efforts. Additionally, criminals may exploit these images to groom and coerce new victims.


© BOOM Live