While the progress of AI has brought a wave of useful tools, it’s also amplifying some of the darkest corners of the internet.
A new report from the IWF (Internet Watch Foundation) revealed a sharp increase in AI-generated child sexual abuse material (CSAM) online in 2025, highlighting how generative AI is being misused at scale. This isn’t just a small jump either; it’s evidence of how easily this kind of content can now be created and distributed.
Why this news is alarming
According to the IWF, over 8,000 AI-generated images and videos of abusive content were identified in 2025, a 14% increase year on year. But what’s more concerning is the rise of video content: the report noted a more than 260-fold increase in AI-generated videos, many of which fall into the most severe category of abuse.
In fact, around 65% of the videos analyzed were classified as the most extreme type. This underscores just how serious the problem has become.
How AI is lowering the barrier to harmful content
The biggest change isn’t even the volume — it’s the accessibility.
Experts say generative AI tools are making it significantly easier to create realistic abuse material. Some of these systems can generate lifelike images and videos, manipulate existing photos, and produce content at scale with minimal effort. That combination allows bad actors to create and distribute harmful material faster and more cheaply than ever before.

In the past, dangerous content such as this was typically associated with the dark web. But the report highlighted how much of this AI-generated material is now being found on the open web, rather than being limited to hidden corners of the internet. This makes detection harder, moderation more complex, and the risk of exposure greater.
Why there’s no easy fix
AI-generated content introduces a new layer of difficulty for law enforcement and platforms. Since the material can be entirely synthetic or derived from real images, tracing its origin or identifying victims becomes much harder. Removing this type of content is another hurdle. With the rapid evolution of AI tools, safeguards are often playing catch-up.
