ChatGPT and Gemini makers under probe over AI chatbot risk for kids

The FTC has asked OpenAI, Google, and more to reveal how they test the safety of AI chatbots.

ChatGPT on a laptop.
Nadeem Sarwar / Digital Trends

It seems the moment of reckoning for AI chatbots is here. After numerous reports detailing problematic behavior and deadly incidents involving children's and teens' interactions with AI chatbots, the US government is finally intervening. The Federal Trade Commission (FTC) has today asked the makers of popular AI chatbots to detail exactly how they test and assess the suitability of these "AI companions for children."

What’s happening?

Highlighting how the likes of ChatGPT, Gemini, and Meta AI can mimic human communication and personal relationships, the agency notes that these chatbots nudge teens and children into building trust and relationships with them. The FTC now seeks to understand how the companies behind these tools evaluate safety and limit negative impacts on young audiences.

In a letter addressed to the tech giants developing AI chatbots, the FTC has asked them about the intended audience of their AI companions, the risks they pose, and how the data is handled. The agency has also sought clarification on how these companies “monetize user engagement; process user inputs; share user data with third parties; generate outputs; measure, test, and monitor for negative impacts before and after deployment; develop and approve characters, whether company- or user-created.”


The agency is asking Meta, Alphabet (Google's parent company), Instagram, Snap, xAI, and OpenAI to answer its queries about AI chatbots and whether they comply with the Children's Online Privacy Protection Act Rule. "The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children," FTC Chairman Andrew N. Ferguson said in a statement.

There’s more action brewing

The FTC's probe is a big step toward holding AI companies accountable for the safety of their chatbots. Earlier this month, an investigation by the non-profit Common Sense Media concluded that Google's Gemini chatbot is a high-risk tool for kids and teens. In its tests, Gemini was seen serving content related to sex, drugs, and alcohol, as well as unsafe mental health advice, to young users. A few weeks earlier, Meta's AI chatbot was spotted supporting suicide plans.

Elsewhere, the state of California has passed a bill, SB 243, that aims to regulate AI chatbots. The bill advanced with bipartisan support and would require AI companies to build safety protocols and be held accountable if their chatbots harm users. It also mandates that "AI companion" chatbots issue recurring warnings about their risks and publish annual transparency disclosures.

Rattled by recent incidents in which lives have been lost under the influence of AI chatbots, OpenAI says ChatGPT will soon get parental controls and a warning system that alerts guardians when their young wards show signs of serious distress. Meta has also made changes so that its AI chatbots avoid discussing sensitive topics.

Nadeem Sarwar
Nadeem is the Managing Editor at Digital Trends.