Microsoft wants you to know Copilot AI is not just for entertainment


Microsoft appears to be trying to clear up an awkward contradiction around its Copilot AI after one of its own documents made the service sound a lot less useful than the company’s marketing would suggest.

Users recently noticed Microsoft’s Copilot terms of use included a warning that the service is for “entertainment purposes only,” adding that it can make mistakes, may not work as intended, and should not be relied on for important advice. The same section also added that users must use Copilot at their own risk, which raised many eyebrows, given how aggressively Microsoft has been pitching Copilot as a productivity tool across Windows, Microsoft 365, and enterprise software.

How is Microsoft defending this?

According to Microsoft, the wording used in the document contains legacy language dating back to Copilot’s earlier life as a Bing-based search companion. In a statement to Windows Latest, the company said the “entertainment purposes” phrasing no longer reflects how Copilot is used today and will be updated in the next revision of the terms.


Copilot has changed a lot since the Bing Chat era, and Microsoft now positions it as far more than a casual chatbot. But this isn’t the whole story.

Why the contradiction is still hard to ignore

A legal disclaimer saying “don’t rely on Copilot for important advice” is not unusual in the AI world, but pairing that with “for entertainment purposes only” landed differently when attached to a product Microsoft wants people to use for documents, presentations, workplace workflows, and Windows tasks.

Microsoft doesn’t suddenly think Copilot is useless. But given the user backlash and low adoption rates, it is clear that Copilot is shifting from an “AI-everywhere” push to a more focused approach, and the company does not want users to think Copilot is just for entertainment anymore. Still, it’s a good reminder that even the brands selling AI the hardest feel the need to tell users not to trust it too much.

Vikhyaat Vivek
Vikhyaat Vivek is a tech journalist and reviewer with seven years of experience covering consumer hardware, with a focus on…
Microsoft says it’s prepping a fix for Outlook bug that blanked out documents
The bug hit Office documents opened from OneDrive and SharePoint links in classic Outlook

Microsoft says it has started releasing a service-side update for a classic Outlook bug that caused Office files to load blank, show repair prompts, or trigger corruption warnings.

The Outlook documents problem affects Word, Excel, and PowerPoint files opened through OneDrive and SharePoint links in classic Outlook. For workers who live in Microsoft 365, the bigger headache isn’t only the blank screen. It’s the uncertainty Outlook creates when a file that may be fine suddenly looks broken.

This PC is big enough to live in and has its own AC for cooling the giant internals
A room-sized RGB PC makes one thing clear: even imaginary giant components still need serious cooling.

A Chinese creator has built a walk-in PC that turns desktop cooling into a human-scale spectacle. The fish-tank-style tower has enough room for a person, a compact desk, and a gaming setup, making the creator look like one of the tiny figures builders sometimes place inside flashy cases.

The build comes from TechTuber Soda Baka, who shared the project on Bilibili. It scales up familiar PC modding cues, including wall-sized fan housings, a huge graphics card prop, chunky cooler parts, and plenty of RGB lighting.

AI chatbots continue feeding into our worst delusions, finds worrying report on ChatGPT and Grok
AI companions may be making mental health crises even worse

AI chatbots were meant to answer your questions, summarize information, and even help you with your emails. But the darker problem is what happens when people start trusting them like actual companions. A new report highlights several cases where users say chatbot conversations are feeding into their delusional thinking.

ChatGPT and Grok were both named repeatedly in the report. The BBC spoke to 14 people who spiraled into delusions while using AI, including one case where a Grok user believed people from xAI were coming to kill him, and another where a ChatGPT user’s wife said his personality changed before he attacked her.
