AI image generators are being trained on explicit photos of children, study finds

On today’s episode of things that need to be immediately filed under “OH, HELL NO!” in the public Zeitgeist, a new report from the Associated Press found that “AI image-generators are being trained on explicit photos of children.”

“Hidden inside the foundation of popular artificial intelligence image-generators are thousands of images of child sexual abuse, according to a new report that urges companies to take action to address a harmful flaw in the technology they built,” the AP reports. “Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children as well as transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world.”

The report from the Stanford Internet Observatory states, “Generative Machine Learning models have been well documented as being able to produce explicit adult content, including child sexual abuse material (CSAM) as well as to alter benign imagery of a clothed victim to produce nude or explicit content.”

According to the AP, the Stanford University-based watchdog group found “more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that’s been used to train leading AI image-makers such as Stable Diffusion.” The group “worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement. It said roughly 1,000 of the images it found were externally validated.”

LAION, the non-profit Large-scale Artificial Intelligence Open Network, responded Wednesday night, the very day the Stanford Internet Observatory released its report.

In a statement to the Associated Press, LAION said it “has a zero tolerance policy for illegal content and in an abundance of caution, we have taken down the LAION datasets to ensure they are safe before republishing them.”

LAION’s index includes some 5.8 billion images, of which the vile photos of abuse are just a fraction.

But that fraction, the Stanford group warns, is likely influencing “the ability of AI tools to generate harmful outputs and reinforcing the prior abuse of real victims who appear multiple times,” the AP reports.

David Thiel, the Stanford Internet Observatory’s chief technologist and the author of the alarming report, cites a competitive field crowded with AI projects being “effectively rushed to market,” and says the problem won’t be an easy one to fix.

“Taking an entire internet-wide scrape and making that dataset to train models is something that should have been confined to a research operation, if anything, and is not something that should have been open-sourced without a lot more rigorous attention,” he stated.

While this development is almost too horrible for a sane mind to wrap itself around, at least one user on X said, “No one should be surprised by this.

“Appalled, absolutely,” the user said, “but not surprised.”

And, sadly, the user is right.

Still, people didn’t want to believe it.

Melissa Fine
