
AI boosted one of the worst forms of abusive content on the internet

Faces in a grid generated by Gemini Nano Banana.

While the progress of AI has brought a wave of useful tools, it’s also amplifying some of the darkest corners of the internet.

A new report from the Internet Watch Foundation (IWF) revealed a sharp increase in AI-generated child sexual abuse material (CSAM) online in 2025, highlighting how generative AI is being misused at scale. This isn’t just a small jump either; it’s a sign of how quickly this kind of content can now be created and distributed.

Why this news is alarming

According to the IWF, more than 8,000 AI-generated images and videos of abusive content were identified in 2025, a 14% increase year on year. What’s more concerning is the rise of video content: the report cited a more than 260-fold increase in AI-generated videos, many of which fall into the most severe category of abuse.


In fact, around 65% of the videos analyzed were classified as the most extreme type. This underscores just how serious the problem has become.

How AI is lowering the barrier to harmful content

The biggest change isn’t even the volume — it’s the accessibility.

Experts say generative AI tools are making it significantly easier to create realistic abuse material. Some of these systems can generate lifelike images and videos, manipulate existing photos, and produce content at scale with minimal effort. That combination allows bad actors to create and distribute harmful material faster and more cheaply than ever before.

In the past, dangerous content such as this was typically associated with the dark web. But the report highlighted that much of this AI-generated material is now being found on the open web rather than confined to hidden corners of the internet, which makes detection harder, complicates moderation, and increases the risk of exposure.

Why there’s no easy fix

AI-generated content introduces a new layer of difficulty for law enforcement and platforms. Since the material can be entirely synthetic or derived from real images, tracing its origin or identifying victims becomes much harder. Removing this type of content is another hurdle: with the rapid evolution of AI tools, safeguards often end up playing catch-up.

Vikhyaat Vivek
Next-gen AI breakthrough promises chatbots that can read the room better
Researchers are teaching AI chatbots to read between the lines

Have you ever asked a chatbot something and felt like it completely missed your point? You say something with a bit of nuance, and the AI misses the subtlety entirely. That is exactly the problem researchers are trying to solve.

Even though, for many users, an emotional connection with AI can feel deeper than human conversation, most AI systems today still treat a sentence as a single block of sentiment. If you mix praise and criticism, the nuance often gets lost.

ChatGPT is not getting an erotic mode, after all
OpenAI pulls back as “adult mode” runs into bigger concerns

If you were expecting ChatGPT to get an “erotic mode,” that idea is officially off the table. According to the Financial Times, OpenAI’s spicy mode is on hold “indefinitely.”

Inside OpenAI's struggle to bring the adult mode to life

Turns out, if you ask an AI to play an expert, it gets less reliable
Asking AI to pretend it's an expert can backfire, but researchers may have found a fix.

You’ve probably seen the tip floating around: tell AI to act like an expert in a field, and you’ll get better answers. It’s popular advice, and it does work, sometimes. However, a new study suggests that using AI personas may not be as effective as we thought.

Researchers from the University of California tested 12 different personas across six language models. The personas ranged from math and coding experts to creative writers and safety monitors. The goal was to find out how well AI performs when it is instructed to act as an expert.
