ChatGPT could soon get parental controls, and every other AI must follow

Deep Research option for ChatGPT.
Nadeem Sarwar / Digital Trends

Social media began as a tool for staying connected with the people you love. Over time, its harms were exposed, leading these platforms to build parental control tools. A similar movement now seems to be underway for AI chatbots, starting with the one that started it all: ChatGPT.

OpenAI has announced that it is exploring parental guardrails for ChatGPT. “We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT,” the company said in a blog post.


Additionally, the AI giant is mulling the idea of designating emergency contacts so that when teenage users are feeling severe anxiety or going through an emotional crisis, ChatGPT can warn their parents or guardians. In its current form, ChatGPT only recommends resources to get help. 

This comes after criticism, alarming research findings, and lawsuits against OpenAI. ChatGPT isn’t the lone culprit here, though, and the initiative OpenAI is planning should be replicated by other AI industry players, too. Research published in the journal Psychiatric Services earlier this month found that chatbots are “inconsistent in answering questions about suicide that may pose intermediate risks.”

The research focused only on OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini. These are the biggest names in the game, so the spotlight naturally falls on them. But the situation gets murkier with lesser-known AI chatbots, especially those that take an “uncensored” approach to conversations. Regardless, just as with social media apps, parental controls are the need of the hour for mainstream AI chatbots, given their recent history.

A risky history

Over the past couple of years, multiple investigations have revealed risky patterns in AI chatbot conversations when it comes to sensitive topics such as mental health and self-harm. A recent report by Common Sense Media revealed how the Meta AI chatbot (which is now available across WhatsApp, Instagram, and Facebook) offered advice on eating disorders, self-harm, and suicide to teens. 

In one instance of a simulated group conversation, the chatbot laid out a plan for mass suicide, and reportedly brought up the topic repeatedly in the chat. Independent testing by The Washington Post found that the Meta chatbot “encouraged an eating disorder.”

In 2024, The New York Times detailed the case of a 14-year-old who developed a deep relationship with an AI bot on the Character.AI platform and eventually took their life. Earlier this month, the family of a 16-year-old blamed OpenAI after finding out that ChatGPT essentially acted as a “suicide coach” for their son. 

Experts have also warned that AI psychosis is a real problem, pushing people into a dangerous spiral of delusions. In one case, an individual following health guidance from ChatGPT started consuming a chemical that triggered a rare psychotic disorder caused by bromide poisoning.

In one case from Texas, a “sexualized” AI chatbot gradually encouraged serious behavioral changes in a 9-year-old, while another expressed sympathy to a 17-year-old for children who kill their parents. Researchers at Cambridge recently showed how vulnerable mental health patients can be negatively influenced by conversational AI chatbots.

Parental controls aren’t going to solve all the fundamental risks posed by AI chatbots, but if a big player like OpenAI sets a positive example with ChatGPT, others will likely follow in its footsteps.

Nadeem Sarwar
Nadeem is the Managing Editor at Digital Trends.