ChatGPT could ask for ID, says OpenAI chief

It's also rolling out parental controls and an automated age-prediction system.

ChatGPT on a phone.
Matheus Bertelli / Pexels

OpenAI says it plans to introduce parental controls for ChatGPT before the end of this month.

The company has also revealed it’s developing an automated age-prediction system designed to determine whether a user is under 18 and, if so, offer an age-appropriate experience with the popular AI-powered chatbot.


In cases where the system is unable to confidently predict a user’s age, OpenAI could ask for ID so that it can offer the most suitable experience.

The plan was shared this week in a post by OpenAI CEO Sam Altman, who noted that ChatGPT is intended for people 13 years and older.

Altman said that a user’s age will be predicted based on how people use ChatGPT. “If there is doubt, we’ll play it safe and default to the under-18 experience,” the CEO said. “In some cases or countries, we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.”

Altman said he wanted users to engage with ChatGPT in the way they want, “within very broad bounds of safety.”

Elaborating on the issue, the CEO noted that the default version of ChatGPT is not particularly flirtatious, but said that if a user asks for such behavior, the chatbot will respond accordingly. 

Altman also said that the default version should not provide instructions on how someone can take their own life, but added that if an adult user is asking for help writing a fictional story that depicts a suicide, then “the model should help with that request.” 

“‘Treat our adult users like adults’ is how we talk about this internally; extending freedom as far as possible without causing harm or undermining anyone else’s freedom,” Altman wrote.

But he said that where a user is identified as being under 18, flirtatious talk and discussion of suicide will be excluded across the board.

Altman added that if a user under 18 expresses suicidal thoughts to ChatGPT, “we will attempt to contact the user’s parents and if unable, will contact the authorities in case of imminent harm.”

OpenAI’s move toward parental controls and age verification follows a high-profile lawsuit filed against the company by a family alleging that ChatGPT acted as a “suicide coach” and contributed to the suicide of their teenage son, Adam Raine, who reportedly received detailed advice about suicide methods over many interactions with OpenAI’s chatbot.

It also comes amid growing scrutiny by the public and regulators over the risks AI chatbots pose to vulnerable minors in areas such as mental health harms and exposure to inappropriate content.

Trevor Mogg
Contributing Editor
Not so many moons ago, Trevor moved from one tea-loving island nation that drives on the left (Britain) to another (Japan)…