AI has turbocharged the worst form of abusive content on the internet and guardians can’t keep up

AI is making child abuse content explode online

Faces in a grid generated by Gemini Nano Banana.

Artificial intelligence has undoubtedly brought plenty of useful tools to the internet. But it has also handed one of the most horrific forms of abuse a grim new boost. Recent reporting and watchdog findings point to the same ugly pattern: generative AI is helping offenders create child sexual abuse imagery at far greater scale.

That imagery is becoming increasingly realistic, and it is arriving in formats that platforms, regulators, and child-safety groups are finding harder to deal with.

How AI is making the scale worse and content more extreme


Back in February, Reuters revealed that actionable reports of AI-generated child sexual abuse imagery had more than doubled over the past two years, while the Internet Watch Foundation later said it had identified 8,029 AI-generated images and videos of child sexual abuse in 2025 alone. This grim picture was also laid out in a Bloomberg report on how generative AI is changing the child sexual abuse material (CSAM) landscape in the US.

Investigators aren’t just dealing with AI-generated pornographic images and videos anymore; they are also finding manipulated images of real children and chatbot conversations in which offenders allegedly seek grooming advice or role-play sexual abuse. Meanwhile, law enforcement is burning time trying to determine whether a child in an image is real, digitally altered, or entirely fake.

Real cases are getting more disturbing

The report points to a Minnesota case involving William Michael Haslach, a school lunch monitor and traffic guard accused of using AI tools to digitally undress children in photos he had taken at work. Federal agents identified more than 90 victims and found nearly 800 AI-generated abuse images on his devices. The case illustrates how offenders are increasingly using everyday photos, often pulled from social media, to create explicit material.

Investigators are drowning in volume and bad leads

The scale is getting ugly fast. Bloomberg reports that the National Center for Missing & Exploited Children (NCMEC) received 1.5 million AI-linked CSAM reports in 2025, up from 67,000 a year earlier and 4,700 in 2023. At the same time, investigators say automated moderation systems are generating a flood of junk tips, swamping already overstretched task forces. And every wrong call burns time that could have gone toward a child facing immediate harm.
