Image-generating AI is driving an increase in images depicting child abuse, complicating the already difficult task of identifying victims and combating actual abuse.
Image-generating AI has led to an alarming increase in lifelike images of child sexual abuse, The Washington Post reports, citing research by Thorn, a nonprofit child safety group.
Investigators worry that the volume of fake images is hampering the search for victims and the fight against actual child abuse. Existing tracking systems could be confused by the fake images, overwhelming officials, who would have to determine which images are real and which are fake on top of their normal investigative work.
Yiota Souras, director of legal affairs at the National Center for Missing and Exploited Children, confirms a sharp increase in AI-generated abuse images. The mass production of these images could also be used to entice real children into the behavior depicted, she says.
AI arms race in pedophile forums
Generative AI tools have sparked a full-blown arms race in pedophile forums, according to the report. Thousands of images have been discovered on Dark Web forums, where participants also share instructions for creating such images using open-source models.
In a survey of one forum with more than 3,000 participants, about 80 percent said they had used or planned to use an AI image generator to create child abuse images, said Avi Jager, director of child protection and human exploitation at ActiveFence, an organization that helps social media and streaming sites prevent child pornography on their platforms.
Child advocates and US justice officials say AI-generated images should be punishable by law, but no rulings have yet determined the level of punishment. Because the images do not depict real children, it is unclear whether they violate federal child-protection laws.
In the future, systems that can reliably distinguish AI-generated images from real photos could help if integrated into online detection systems. Another approach is to label AI images directly in the software that creates them.
In early June, the FBI warned about a scheme that uses generative AI to create additional, explicit images from existing ones, such as photos taken from social media. The US investigative agency has seen an increase in victims of this so-called "sextortion" since April 2023.