
Over a million users are emotionally attached to ChatGPT, but there’s an even darker side

ChatGPT will lend an ear, but it won't try to be your shoulder.


What’s happened? OpenAI has changed how ChatGPT handles delicate conversations when people turn to it for emotional support. The company has updated its Model Spec and its default model, GPT-5, covering sensitive conversations around psychosis and mania, self-harm and suicide, and emotional reliance on the assistant.

This is important because: AI-driven emotional reliance is a real phenomenon in which users form one-sided attachments to chatbots.

  • OpenAI estimates that about 0.15% of weekly active users show signs of emotional attachment to the model, and 0.03% of messages point to the same risk.
  • That same 0.15% figure applies to conversations indicating suicidal planning or intent, while 0.05% of messages show suicidal ideation.
  • Scaled to ChatGPT’s massive user base, that’s over a million people forming emotional ties with AI.
  • OpenAI reports big improvements after the update: undesirable responses in these domains fell by 65–80%, and emotional-reliance-related bad outputs dropped by about 80%.
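The "over a million people" claim follows directly from those percentages. A quick back-of-the-envelope check, assuming roughly 800 million weekly active users (a figure OpenAI has cited publicly; the article itself does not state a user count):

```python
# Scale OpenAI's reported 0.15% rate to an assumed user base.
# ASSUMPTION: ~800 million weekly active users (not stated in the article).
weekly_active_users = 800_000_000
emotional_attachment_rate = 0.0015  # 0.15% of weekly active users

affected = weekly_active_users * emotional_attachment_rate
print(f"{affected:,.0f}")  # 1,200,000 — comfortably "over a million"
```

The same 0.15% rate applied to conversations indicating suicidal planning or intent yields a similarly sized group, which is why small-sounding percentages translate into large absolute numbers at this scale.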

How it’s improving: The updated version introduces new rules around mental health safety and real-world relationships, ensuring the AI responds compassionately without pretending to be a therapist or a friend.

  • OpenAI worked with more than 170 mental-health experts to reshape model behavior, add safety tooling, and expand guidance.
  • GPT-5 can now detect signs of mania, delusions, or suicidal intent, and responds safely by acknowledging feelings while gently directing users to real-world help.
  • A new rule ensures ChatGPT doesn’t act like a companion or encourage emotional dependence; it reinforces human connection instead.
  • The model can now prioritize trusted tools or expert resources when those align with user safety.

Why should I care? Emotional and ethical questions don’t just concern adults forming attachments with chatbots; they also touch on how AI interacts with kids who may not fully understand its impact.

  • If you have ever confided in ChatGPT during a rough patch, this update is about ensuring your emotional safety.
  • Now, ChatGPT will be more attuned to emotional cues and help users find real-world help instead of replacing it.
Manisha Priyadarshini
Manisha Priyadarshini is a tech and entertainment writer with over nine years of editorial experience.
Wowed by computer-use AI agents? Research says they’re “digital disasters” even for routine tasks
Researchers tested 10 agents and models and found high rates of undesirable actions and real digital damage

AI agents built to run everyday computer tasks have a serious context problem, according to new research from UC Riverside.

The team tested 10 agents and models from major developers, including OpenAI, Anthropic, Meta, Alibaba, and DeepSeek. On average, the agents took undesirable or potentially harmful actions 80% of the time and caused damage 41% of the time.

Bombshell OpenAI lawsuit claims your ChatGPT convos were shared with Google and Meta
A class action says OpenAI let Google and Meta trackers collect sensitive user data

A new ChatGPT privacy lawsuit claims OpenAI shared user prompts and identifying information with Google and Meta tracking tools without proper consent.

The class action, filed in California and reported by Futurism, says data tied to ChatGPT users, including chat queries, emails, and user IDs, flowed through tools such as Meta Pixel and Google Analytics. The suit alleges this violated California privacy law and federal wiretap rules.

Dell expands AI PC lineup with new slim Dell 14s and 16s laptops
Your next Dell laptop could last all day without charging

Dell has introduced the new Dell 14s and Dell 16s laptops, expanding its AI-focused Copilot+ PC lineup with slimmer designs, updated Intel processors, and improved battery life. The company is positioning both laptops as premium productivity machines that combine AI features, portability, and multimedia capabilities in a thinner form factor.

The new laptops are powered by Intel Core Ultra Series 3 processors, going up to the Intel Core Ultra 9 386H chipset. Dell says both systems include on-device AI acceleration with up to 50 TOPS NPU performance, allowing AI-related tasks to run locally without relying entirely on cloud processing. AMD Ryzen AI 400 Series variants are also expected to arrive later this month.
