
You don’t want to trust Meta’s new Muse Spark AI with health advice

Meta’s Muse Spark is way too eager to play doctor

Home page of the Meta AI app on mobile.
Nadeem Sarwar / Digital Trends

Meta’s new Muse Spark may be pitched as a smarter AI model, but based on early testing, it sounds like the kind of AI you really do not want anywhere near serious medical decisions.

A recent WIRED report detailed early experiences with Muse Spark, Meta’s health-focused AI model inside the Meta AI app, and the results were not promising. The chatbot reportedly encouraged users to upload raw medical information like lab reports, glucose monitor readings, and blood pressure logs, then offered to help analyze patterns and trends.


All of this sounds pretty useful until you run into two immediate concerns: you’re handing over very sensitive data, and it’s unclear whether the AI is even remotely trustworthy enough to interpret it.

What went wrong in the early tests?

The first problem is kind of hard to ignore. In a day and age where your life already feels too transparent, Muse Spark is prying even further. Sharing the information needed for an accurate diagnosis isn’t unusual in a clinical setting, but handing your personal health records to a chatbot for advice sounds like a real privacy risk.

Unlike data shared with a doctor or hospital, information entered into a chatbot does not automatically come with the same expectations or protections people may assume are in place. This isn’t a professionally vetted opinion, and that’s what makes the idea shaky. The AI is being presented as a helpful tool, but the environment around it still looks much closer to a consumer product than a proper medical one.

This isn’t even the worst part

Aside from the typical privacy risks involved when sharing personal data with any tech giant, you’d at least expect to get a serviceable answer. But the more serious problem appeared to be with the quality of the advice. In WIRED’s testing, the chatbot reportedly generated an extremely low-calorie meal plan after being asked about weight loss and aggressive intermittent fasting.

While the bot did flag some of the risks, a warning does not mean much if the model then goes on to help the user do the dangerous thing anyway. This is where the real issue lies with a lot of AI health tools right now: they can sound cautious, informed, and balanced right up until the moment they start reinforcing bad assumptions. That polished tone lets them deliver the wrong advice with confidence, which makes failure more dangerous.

Vikhyaat Vivek
Vikhyaat Vivek is a tech journalist and reviewer with seven years of experience covering consumer hardware, with a focus on…