AI-driven child exploitation imagery poses a grave online threat; UK watchdog issues urgent warning
A new report warns that the internet could see a surge in child sexual abuse imagery unless controls are placed on AI tools capable of generating fake photos.

Highlights
- There is a growing problem of AI-generated child abuse images on the internet
- AI technology challenges law enforcement efforts
- OpenAI's DALL-E and closed AI models are better at blocking misuse
The Internet Watch Foundation (IWF), a UK-based watchdog agency, warns that child sexual abuse images are spreading across the internet, and the problem could get much worse.
The cause for concern is AI technology that can create fake photos, making it difficult for law enforcement to keep up. In South Korea, one offender was sentenced to two and a half years in prison for using AI to produce such images.
The IWF also found demand for imagery of previously abused children, with offenders using victims' real content to generate new material, a deeply concerning development.
Even children are using similar tools to play pranks. The report sheds light on a troubling side of AI advancement: anyone can describe what they want and have AI create it, sometimes with harmful consequences.
The IWF's chief technology officer, Dan Sexton, said:
"We're not talking about the harm it might do. This is happening right now and it needs to be addressed right now."
The perils of AI-generated deepfake photos
The rise of fake child sexual abuse images created with AI could make it harder for authorities to identify and help real children who are in danger.
Alarmingly, offenders are taking existing pictures of victims and generating new deepfake images from them, giving wrongdoers a tool to manipulate and re-victimise survivors with these fabricated images.
Sexton's charity, which fights online child sexual abuse, began receiving reports of abusive AI-generated images this year. Its investigators examined the dark web, hidden parts of the internet that can be accessed only with tools that conceal a user's identity.
The IWF's report aims to raise awareness of AI-generated abuse imagery and calls on governments, particularly in the European Union, to update laws to combat it, including measures to stop the reuse of old abuse photos and thereby protect past victims.
Closed AI models
Last year, a new generation of AI image generators amazed the public with their ability to produce all manner of images. Producers of child abuse imagery, however, tend to avoid the most popular generators, because those models include safeguards designed to block the creation of such harmful content.
Providers that retain full control over their models, such as OpenAI with DALL-E, have been more effective at preventing misuse, according to Sexton.