
OpenAI wants to hire someone to handle ChatGPT risks that can’t be predicted

The role focuses on predicting, testing, and reducing real-world AI harms

Image credit: Levart_Photographer / Unsplash

OpenAI is betting big on a role designed to stop AI risks before they spiral. The company has posted a new senior role called Head of Preparedness, a position focused on identifying and reducing the most serious dangers that could emerge from advanced AI chatbots. Along with the responsibility comes a headline-grabbing compensation package of $555,000 plus equity.

In a public post announcing the opening, Sam Altman called it “a critical role at an important time,” noting that while AI models are now capable of “many great things,” they are also “starting to present some real challenges.”

We are hiring a Head of Preparedness. This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges. The potential impact of models on mental health was something we…

— Sam Altman (@sama) December 27, 2025

What the Head of Preparedness will actually do

The person holding this position will focus on extreme but realistic AI risks, including misuse, cybersecurity threats, biological concerns, and broader societal harm. Altman said OpenAI now needs a “more nuanced understanding” of how growing capabilities could be abused, without blocking the benefits.


He also did not sugarcoat the job. “This will be a stressful job,” Altman wrote, adding that whoever takes it on will be jumping “into the deep end pretty much immediately.”

The hire comes at a sensitive moment for OpenAI, which has faced growing regulatory scrutiny over AI safety in the past year. That pressure has intensified amid allegations linking ChatGPT interactions to several suicide cases, raising broader concerns about AI’s impact on mental health.

In one case, parents of a 16-year-old sued OpenAI after alleging the chatbot encouraged their son to plan his own suicide, prompting the company to roll out new safety measures for users under 18.

A separate lawsuit claims ChatGPT fueled paranoid delusions in a case that ended in murder and suicide. In response, OpenAI says it is working on better ways to detect distress, de-escalate conversations, and direct users to real-world support.

OpenAI’s safety push comes at a time when millions report emotional reliance on ChatGPT and regulators are probing risks for children, underscoring why preparedness matters beyond just engineering.

Manisha Priyadarshini
Manisha Priyadarshini is a tech and entertainment writer with over nine years of editorial experience.