
You don’t want to trust Meta’s new Muse Spark AI with health advice

Meta’s Muse Spark is way too eager to play doctor

Home page of the Meta AI app on mobile.
Nadeem Sarwar / Digital Trends

Meta's new Muse Spark may be pitched as a smarter AI model, but based on early testing, it sounds like the kind of AI you really do not want anywhere near serious medical decisions.

A recent WIRED report detailed early experiences with Muse Spark, Meta's health-focused AI model inside the Meta AI app, and the results were not promising. The chatbot reportedly encouraged users to upload raw medical information such as lab reports, glucose monitor readings, and blood pressure logs, then offered to help analyze patterns and trends.


All of this sounds pretty useful until you run into two immediate concerns: you're handing over very sensitive data, and it's unclear whether the AI is even remotely trustworthy enough to interpret it.

What went wrong in the early tests?

The first problem is hard to ignore. In an age where your life already feels too transparent, Muse Spark is prying even further. Sharing the information needed for an accurate diagnosis is expected when you see a doctor, but handing your personal health records to a chatbot for advice is a privacy risk by any measure.

Unlike data shared with a doctor or hospital, information entered into a chatbot does not automatically carry the expectations or legal protections people may assume are in place. Nor is the output a professionally vetted opinion, and that's what makes the idea shaky. The AI is being presented as a helpful health tool, but the environment around it still looks much closer to a consumer product than a proper medical one.

This isn’t even the worst part

Aside from the typical privacy risks involved when sharing personal data with any tech giant, you’d at least expect to get a serviceable answer. But the more serious problem appeared to be with the quality of the advice. In WIRED’s testing, the chatbot reportedly generated an extremely low-calorie meal plan after being asked about weight loss and aggressive intermittent fasting.

While the bot did flag some of the risks along the way, a warning does not mean much if the model then goes on to help the user do the dangerous thing anyway. This is where the real issue lies with a lot of AI health tools right now. They can sound cautious, informed, and balanced right up until the moment they start reinforcing bad assumptions. That polished tone can deliver the wrong advice with confidence, which makes the failures more dangerous.

Vikhyaat Vivek
Vikhyaat Vivek is a tech journalist and reviewer with seven years of experience covering consumer hardware, with a focus on…
Amazon thinks you love AI, so it has launched a special storefront for AI-powered gadgets

You're browsing for a new laptop — one has a better processor, another has more RAM, a third says "AI-powered" in bold letters, and you're not entirely sure what that means. But Amazon has noticed you pausing on that third one, and it has thoughts. The company just launched an AI Store on Amazon.in — a dedicated storefront that rounds up AI-enabled gadgets across categories, from smartphones and laptops to refrigerators and washing machines. So, instead of you wading through spec sheets trying to figure out which "AI feature" actually does something useful, the store spells it out for you.

What the AI store actually is

Gemini now makes personalized images by understanding your taste from Photos library

Up until now, using Google Gemini meant being very specific. If you wanted an image, you'd spell it all out: the mood, the lighting, the tiny details, just to get something close to what you had in mind. That's still how most AI tools operate. But this is where things start to shift. With the integration of Nano Banana 2 and Google Photos, Gemini feels much more familiar. It leans on your preferences, what you like, what you usually capture, and the kind of visuals you gravitate towards, and uses that context to shape what it creates for you.

So instead of over-explaining every prompt, you’re nudging it in a direction, and it fills in the rest in a way that feels personal. The goal here is simple: spend less time describing and more time seeing your ideas come to life, almost the way you imagined them, without having to say everything out loud.

This AI lets self-driving cars “remember” past drives to plan safer routes
A memory of the past could make self-driving cars safer on the road

One of the biggest problems with self-driving systems is that they can see the road perfectly well and still make shaky short-term decisions in messy city traffic. Even advanced systems struggle to keep up with complex, fluctuating road conditions. But a new study argues that these cars don't need better vision so much as a better memory.

In the peer-reviewed paper KEPT (Knowledge-Enhanced Prediction of Trajectories from Consecutive Driving Frames with Vision-Language Models), researchers from Tongji University and collaborators developed a system that helps autonomous vehicles "remember" past driving scenes before choosing what to do next.
