An internet watchdog has spoken out about the alarming trend of pedophiles collaborating online to use open source artificial intelligence to create child sexual abuse material.
Dan Sexton, the chief technology officer of the Internet Watch Foundation, said that there is a “technical community in the perpetrator space, particularly dark web forums where this technology is discussed,” adding that these people “share images, share [AI] models. They pass on instructions and tips.”
The Guardian reported the warning about these online developments earlier this month, after the chairman of the UK government’s AI task force, Ian Hogarth, expressed concern about child abuse material and how open source models were being used to make “some of the most vile things ever.”
Open source technology can be downloaded and modified by users. This is not the case with OpenAI’s Dall-E or Google’s Imagen, which the general public cannot access or modify, the report said.
Sexton has suggested that pedophiles interested in child abuse material have used the dark web to create and distribute realistic images.
“We believe the content we saw was actually generated using open source software that was downloaded and run locally on people’s computers and then modified. And that’s a much harder problem to fix,” Sexton said. “It was taught what child sexual abuse material was and taught how to create it.”
He noted that online discussions taking place in the dark corners of the internet involve publicly available images of children as well as celebrity children. There are even cases in which victims of child sexual abuse are used as a basis for creating new content, Fox News reported.
“All of these ideas are concerning and we have seen discussions about them,” Sexton added.
A major concern about AI-generated child sexual abuse material is that it could expose a large group of people to the images. Christopher Alexander, chief analytics officer at Pioneer Development Group, said AI could also be used to search for missing people, possibly using “age histories and other factors that could help find trafficked children.”
“So generative AI is a problem, AI and machine learning are a tool to combat it, even if it’s just detection,” Alexander said.
There have been calls for the government to curb the development of such technology before it gets out of control.
Jonathan D. Askonas, assistant professor of politics and fellow at the Center for the Study of Statesmanship at the Catholic University of America, said: “Lawmakers must act now to strengthen laws against the production, distribution and possession of AI-generated CSAM [child sex abuse material] and to close loopholes from the past.”
While the IWF actively searches the internet for child abuse material and assists in its removal, it could quickly become overwhelmed if tips about AI-generated material start coming in. Sexton said distribution of such material is already widespread in the digital world.
“Online child sexual abuse is, in our view, already a public health epidemic,” Sexton said. “So that’s not going to improve the problem. It will potentially just make it worse.”