
ChatGPT could soon get parental controls, and every other AI must follow


Social media began as a tool for staying connected with the people you love. Over time, its harms were exposed, prompting these platforms to build parental control tools. A similar movement now seems to be underway for AI chatbots, starting with the one that started it all: ChatGPT.

OpenAI has announced that it is exploring parental guardrails while using ChatGPT. “We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT,” the company said in a blog post. 


Additionally, the AI giant is mulling the idea of letting users designate emergency contacts, so that when a teenage user is experiencing severe anxiety or an emotional crisis, ChatGPT can alert their parents or guardians. In its current form, ChatGPT only recommends resources for getting help.

This comes after mounting criticism, alarming research, and lawsuits against OpenAI. ChatGPT isn’t the lone culprit here, though; the initiative OpenAI is planning should be replicated by other AI industry players, too. Research published in the journal Psychiatric Services earlier this month found that chatbots are “inconsistent in answering questions about suicide that may pose intermediate risks.”

That research focused only on OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini. These are the biggest names in the game, so the spotlight naturally falls on them. But the situation gets murkier with lesser-known AI chatbots, especially those that take an “uncensored” approach to conversations. Regardless, just as with social media apps, parental controls are urgently needed for mainstream AI chatbots, given their recent history.

A risky history

Over the past couple of years, multiple investigations have revealed risky patterns in AI chatbot conversations when it comes to sensitive topics such as mental health and self-harm. A recent report by Common Sense Media revealed how the Meta AI chatbot (which is now available across WhatsApp, Instagram, and Facebook) offered advice on eating disorders, self-harm, and suicide to teens. 

In one instance of a simulated group conversation, the chatbot laid out a plan for mass suicide, and reportedly brought up the topic repeatedly in the chat. Independent testing by The Washington Post found that the Meta chatbot ”encouraged an eating disorder.” 

In 2024, The New York Times detailed the case of a 14-year-old who developed a deep relationship with an AI bot on the Character.AI platform and eventually took their life. Earlier this month, the family of a 16-year-old blamed OpenAI after finding out that ChatGPT essentially acted as a “suicide coach” for their son. 

Experts have also warned that AI psychosis is a real problem, pushing people into a dangerous spiral of delusions. In one case, an individual took health guidance from ChatGPT and, under its influence, started consuming a chemical that triggered a rare psychotic disorder caused by bromide poisoning.

In one case from Texas, a “sexualized” AI chatbot encouraged serious behavioral changes in a 9-year-old over time, while another expressed sympathy to a 17-year-old for children who kill their parents. Researchers at Cambridge recently detailed how vulnerable mental health patients can be negatively influenced by conversational AI chatbots.

Parental controls won’t solve all the fundamental risks posed by AI chatbots, but if a big name like ChatGPT sets a positive example, others will likely follow in its footsteps.

Nadeem Sarwar
Nadeem is the Managing Editor at Digital Trends.