ChatGPT gets safety rules to protect teens and encourage human relations over virtual pals

ChatGPT gets new teen safety rules focused on prevention and transparency

Levart_Photographer / Unsplash

OpenAI has just updated its “Model Spec” – basically the rulebook for its AI – with a specific set of Under-18 (U18) Principles designed to change how ChatGPT talks to teenagers aged 13 to 17. The move is a clear admission that teens aren’t just “mini adults”; they have different emotional and developmental needs that require stronger guardrails, especially when conversations get heavy or risky.

A new framework for teen-focused AI interactions


This update spells out exactly how ChatGPT should handle teen users while still following the general rules that apply to everyone else. OpenAI says the point is to create an experience that feels safer and age-appropriate, focusing on prevention and transparency.

These aren’t just random rules, either; the U18 Principles are based on developmental science and were vetted by outside experts, including the American Psychological Association.

The framework is built on four main promises: putting teen safety above everything else (even if it makes the AI less “helpful” in the moment), pushing teens toward real-world support instead of letting them rely on a chatbot, treating them like actual teenagers rather than small children or full-grown adults, and being honest about the AI’s limitations.

These principles formalize how ChatGPT applies extra caution when conversations touch on self-harm, sexual roleplay, dangerous challenges, substance use, body image issues, or requests to keep secrets about unsafe behavior.

What this means for families and what comes next

This matters because AI is quickly becoming a standard tool for how young people learn and find answers. Without clear boundaries, there is a real danger that teens might turn to AI during moments when they actually need a parent, a doctor, or a counselor.

OpenAI claims these new rules ensure that when a chat drifts into dangerous territory, the assistant will offer safer alternatives, set hard boundaries, and encourage the teen to reach out to a trusted adult. If things look like an immediate emergency, the system is designed to point them toward crisis hotlines or emergency services.

For parents, this offers a bit more reassurance. OpenAI is linking these new principles to its Teen Safety Blueprint and existing parental controls. The protections are also expanding to cover newer features like group chats, the ChatGPT Atlas browser, and the Sora app, along with built-in reminders to take a break so kids aren’t glued to the screen.

Looking ahead, OpenAI is starting to roll out an age-prediction tool for personal ChatGPT accounts. This system will try to guess if a user is a minor and automatically switch on these teen safeguards.

If it isn’t sure, it defaults to the safer U18 experience just in case. The company says this isn’t a “one and done” fix; it plans to keep tweaking these protections based on new research and feedback, making it clear that teen safety is going to be a long-term project.

Moinak Pal
Moinak Pal has been working in the technology sector covering both consumer-centric tech and automotive technology for the…