OpenAI gets called out for opposing a proposed AI safety bill

Ex-OpenAI employees William Saunders and Daniel Kokotajlo have written a letter to California Gov. Gavin Newsom arguing that the company’s opposition to a state bill that would impose strict safety guidelines and protocols on future AI development is disappointing but not surprising.

“We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing,” Saunders and Kokotajlo wrote. “But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems.”

The two argue that further development without sufficient guardrails “poses foreseeable risks of catastrophic harm to the public,” whether that’s “unprecedented cyberattacks or assisting in the creation of biological weapons.”

The duo also called out what they see as OpenAI CEO Sam Altman’s hypocrisy on the matter of regulation, pointing to his recent congressional testimony calling for oversight of the AI industry while noting that “when actual regulation is on the table, he opposes it.”

Per a 2023 survey by the MITRE Corporation and the Harris Poll, only 39% of respondents believed that today’s AI technology is “safe and secure.”

The bill in question, SB-1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would, “among other things, require that a developer, before beginning to initially train a covered model … comply with various requirements, including implementing the capability to promptly enact a full shutdown … and implement a written and separate safety and security protocol.” Such safeguards are not an abstract concern: OpenAI has suffered multiple data leaks and system intrusions in recent years.

OpenAI strongly disagrees with what it calls the researchers’ “mischaracterization of our position on SB 1047,” a spokesperson told Business Insider. The company argues instead that “a federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the US to lead the development of global standards,” as OpenAI Chief Strategy Officer Jason Kwon wrote in a February letter to California state Sen. Scott Wiener.

Saunders and Kokotajlo counter that OpenAI’s push for federal regulation is not in good faith. “We cannot wait for Congress to act — they’ve explicitly said that they aren’t willing to pass meaningful AI regulation,” the pair wrote. “If they ever do, it can preempt CA legislation.”

The bill has found support from a surprising source as well: xAI CEO Elon Musk. “This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” he wrote on X on Monday. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk.” Musk, who recently announced the construction of “the most powerful AI training cluster in the world” in Memphis, Tennessee, had previously threatened to move the headquarters of his X (formerly Twitter) and SpaceX companies to Texas to escape industry regulation in California.

Update: This post has been updated to include the comments from Elon Musk.

Andrew Tarantola
Former Computing Writer
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…