An internet watchdog is sounding the alarm over a growing trend of sex offenders collaborating online to use open source artificial intelligence to generate child pornography.
“There is a technical community within the offender space, particularly on dark web forums, where they discuss this technology,” Dan Sexton, chief technology officer at the Internet Watch Foundation (IWF), told the Guardian in a report last week. “They share images, they share (AI) models. They share guides and tips.”
Sexton’s organization has found that offenders are increasingly turning to open source AI models to create illegal child sexual abuse material (CSAM) and distribute it online. Unlike closed AI models such as OpenAI’s DALL-E or Google’s Imagen, open source AI technology can be downloaded and adjusted by users, according to the report. Sexton said the ability to use such technology has become widespread among offenders, who are turning to the dark web to create and distribute realistic images.
“We believe that the content we saw was actually generated using open source software, which was downloaded and run locally on users’ computers and then modified. And that’s a much more difficult problem to resolve,” Sexton said. “It has learned what child sexual abuse material is and how to create it.”
Sexton said online discussions that take place on the dark web include images of famous children and publicly available images of children. In some cases, images of child abuse victims are used to create entirely new content.
“All of these ideas are concerning and we have seen discussions about them,” Sexton said.
Christopher Alexander, chief analytics officer at Pioneer Development Group, told Fox News Digital that one of the new dangers of this technology is that it could be used to introduce more people to CSAM. On the other hand, AI could be used to help search the web for missing people, even using “age progressions and other factors that could help locate trafficked children.”
“So generative AI is a problem, and AI and machine learning are tools to combat it, even just by doing detection,” Alexander said.
Meanwhile, Jonathan D. Askonas, assistant professor of politics and fellow at the Center for the Study of Statesmanship at the Catholic University of America, told Fox News Digital that “lawmakers must act now to strengthen laws against the production, distribution, and possession of AI-based CSAM, and to fill the gaps of the previous era.”
The IWF, which searches the web for CSAM and helps coordinate its removal, could find itself overwhelmed by reports of such content in the AI era, Sexton said, noting that the proliferation of this type of material was already widespread online.
“Online child sexual abuse is already, as we believe, a public health epidemic,” Sexton said, according to The Guardian. “So it’s not going to make the problem better. It’s just going to potentially make it worse.”
Ziven Havens, policy director of the Bull Moose Project, told Fox News Digital that it would be up to Congress to act to protect both children and the Internet.
“Using already available footage of real victims of abuse, AI CSAM differs very little from non-AI CSAM. It is just as morally corrupt and disgusting. The extreme dangers created by this technology will have massive implications for the well-being of the Internet,” Havens said. “Where these companies fail, Congress must step in aggressively and act to protect both children and the Internet as a whole.”