Microsoft already has its legal crosshairs set on DeepSeek


The home page chat interface of DeepSeek AI. Nadeem Sarwar / Digital Trends

Microsoft, a primary investor in OpenAI, is now exploring whether the Chinese company DeepSeek used nefarious methods to train its reasoning models. According to Bloomberg Law, the company now believes DeepSeek violated OpenAI's terms of service by using its application programming interface (API) to train its recently announced R1 model.


The news comes not long after White House AI and crypto czar David Sacks told Fox News in an interview on Tuesday that it was “possible” DeepSeek “stole intellectual property from the United States.”

“There’s substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI’s models,” Sacks told the outlet.

The AI industry has been abuzz over DeepSeek's claim that it trained its AI models quickly and cost-effectively, reportedly spending just $5.6 million over one year. One possible explanation for that efficiency is that the company used another company's model as its baseline.

DeepSeek may have used a process called distillation, in which a smaller “student” model is trained to reproduce the outputs of a larger “teacher” model. That would help explain the company's low training costs and its ability to get by with less powerful Nvidia H800 chips. DeepSeek may now be pressed to show that it did nothing unlawful in developing its models.
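The article doesn't detail how any such distillation would have been carried out; purely as an illustration of the general technique, the core idea is to train the student to match the teacher's softened output probabilities, commonly scored with a KL-divergence loss. The sketch below (function names and the temperature value are illustrative, not anything attributed to DeepSeek or OpenAI) shows that loss in NumPy:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the
    # distribution so the student also learns from near-miss classes.
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence from the teacher's softened distribution p to the
    # student's distribution q: sum(p * log(p / q)). Zero when the
    # student exactly matches the teacher; positive otherwise.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * np.log(p / q)))
```

In practice this loss is minimized over many training examples, so the student needs only the teacher's outputs, not its weights or training data, which is why API access alone can be enough to attempt distillation.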

Before this recent development, industry experts had speculated that DeepSeek likely trained its models using reverse engineering, which analyzes existing models to identify their patterns and biases in order to improve future ones. Reverse engineering is a common, and generally legal, practice among open-source developers.

Security researchers working with Microsoft have already pieced together evidence that DeepSeek may have exfiltrated a considerable amount of data through OpenAI's API during the fall of 2024. Microsoft reportedly made OpenAI aware of the activity at the time. The R1 model was announced last week, bringing attention to the Chinese AI company and associated parties.

DeepSeek has also been lauded as an open-source AI model that anyone can build on, which is where much of the excitement around the platform comes from, along with its favorable comparisons to top tools such as ChatGPT and Google Gemini. OpenAI is not an open-source service, though anyone can sign up to access its API. The company's terms of service make clear, however, that its output cannot be used to train other AI models, TechCrunch noted.

An OpenAI spokesperson told Reuters that, regulations aside, attempts by companies abroad to copy the models of well-known U.S. firms have become a common occurrence.

“We engage in counter-measures to protect our IP, including a careful process for which frontier capabilities to include in released models, and believe as we go forward that it is critically important that we are working closely with the U.S. government to best protect the most capable models from efforts by adversaries and competitors to take U.S. technology,” the spokesperson said.

Fionna Agomuoh
Fionna Agomuoh is a Computing Writer at Digital Trends. She covers a range of topics in the computing space, including…