ChatGPT and Gemini makers under probe over AI chatbot risk for kids

The FTC has asked OpenAI, Google, and more to reveal how they test the safety of AI chatbots.

ChatGPT on a laptop.
Nadeem Sarwar / Digital Trends

A moment of reckoning for AI chatbots seems to have arrived. After numerous reports detailing problematic behavior and deadly incidents involving children's and teens' interactions with AI chatbots, the US government is finally intervening. The Federal Trade Commission (FTC) today asked the makers of popular AI chatbots to detail exactly how they test and assess the suitability of these "AI companions for children."

What’s happening?

Highlighting how the likes of ChatGPT, Gemini, and Meta AI can mimic human-like communication and personal relationships, the agency notes that these chatbots nudge teens and children into building trust and relationships with them. The FTC now seeks to understand how the companies behind these tools evaluate safety and limit negative impacts on young audiences.

In a letter addressed to the tech giants developing AI chatbots, the FTC has asked them about the intended audience of their AI companions, the risks they pose, and how the data is handled. The agency has also sought clarification on how these companies “monetize user engagement; process user inputs; share user data with third parties; generate outputs; measure, test, and monitor for negative impacts before and after deployment; develop and approve characters, whether company- or user-created.”


The agency wants Meta, Alphabet (Google's parent company), Instagram, Snap, xAI, and OpenAI to answer its queries about AI chatbots and whether they comply with the Children's Online Privacy Protection Act Rule. "The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children," FTC Chairman Andrew N. Ferguson said in a statement.

There’s more action brewing

The FTC’s problem is a big step forward towards seeking accountability from AI companies regarding the safety of AI chatbots. Earlier this month, an investigation by non-profit Common Sense Media revealed that Google’s Gemini chatbot is a high-risk tool for kids and teens. In the tests, Gemini was seen doling out content related to sex, drugs, alcohol, and unsafe mental health suggestions to young users. Meta’s AI chatbot was spotted supporting suicide plans a few weeks ago.

Elsewhere, the state of California passed a bill that aims to regulate AI chatbots. The SB 243 bill advanced with bipartisan support; it would require AI companies to build safety protocols and hold them accountable if their chatbots harm users. The bill also requires "AI companion" chatbots to issue recurring warnings about their risks and to publish annual transparency disclosures.

Rattled by recent incidents in which lives have been lost under the influence of AI chatbots, OpenAI says ChatGPT will soon get parental controls and a warning system that alerts guardians when their young wards show signs of serious distress. Meta has also made changes so its AI chatbots avoid talking about sensitive topics.

Nadeem Sarwar
Nadeem is the Managing Editor at Digital Trends.