Turns out, if you ask an AI to play an expert, it gets less reliable

Asking AI to pretend it's an expert can backfire, but researchers may have found a fix.


You’ve probably seen the tip floating around: tell AI to act like an expert in a field, and you’ll get better answers. It’s popular advice, and it does work, sometimes. However, a new study suggests that using AI personas may not be as effective as we thought.

Researchers from the University of California tested 12 different personas across six language models. The personas ranged from math and coding experts to creative writers and safety monitors. The goal was to find out how well AI performs when it is instructed to act as an expert.


The results were mixed. Adopting a persona made the AI sound more professional and follow the rules better. But it also made the AI worse at recalling facts. According to the study, using an AI persona shifts it into an instruction-following mode rather than a knowledge-retrieval mode, and that tradeoff costs you accuracy.

What’s the solution?

To fix this problem, the researchers developed PRISM, which stands for Persona Routing via Intent-based Self-Modeling. Instead of always using a persona or never using one, PRISM teaches AI to decide what’s best for itself.

When you ask a question, PRISM generates two answers: one from the model’s default mode and one from its persona. It then compares the two and delivers whichever answer performs better for that specific query.

The expert answer isn’t discarded even when the default answer wins. Instead, the persona’s reasoning style is saved in a lightweight component called a LoRA adapter, which the AI can draw from later when needed. The solution sounds simple, yet it’s effective.
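The routing idea described above can be sketched in a few lines. This is a rough illustrative sketch only: all function names and the scoring heuristic here are assumptions for demonstration, not the paper's actual implementation, which uses learned components rather than hand-written rules.

```python
# Illustrative sketch of PRISM-style persona routing.
# All names and the scoring rule are hypothetical stand-ins.

def default_answer(query: str) -> str:
    # Placeholder for the model's default (knowledge-retrieval) response.
    return f"[default] {query}"

def persona_answer(query: str, persona: str) -> str:
    # Placeholder for the persona-conditioned response.
    return f"[{persona}] {query}"

def score(answer: str, query: str) -> float:
    # Stand-in quality score. A real system would use a learned judge;
    # here, knowledge-style questions simply favor the default answer.
    is_knowledge = query.lower().startswith(("what", "who", "when", "where"))
    if is_knowledge:
        return 1.0 if answer.startswith("[default]") else 0.5
    return 0.5 if answer.startswith("[default]") else 1.0

def route(query: str, persona: str = "safety expert") -> str:
    """Generate both candidate answers and return the higher-scoring one."""
    candidates = [default_answer(query), persona_answer(query, persona)]
    return max(candidates, key=lambda a: score(a, query))
```

With this toy scorer, a factual question like "What is the capital of France?" routes to the default mode, while an open-ended safety request routes to the persona, mirroring the tradeoff the study found.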

How did PRISM perform?

PRISM raised AI’s overall score by one to two points on the MT-Bench, a test that measures how well an AI follows instructions and stays helpful. For writing and safety tasks, personas helped. For raw knowledge questions, skipping the persona proved to be the better option.

The researchers plan to test PRISM with more personas and refine its ability to provide better answers. It’s early days, but this could change how we prompt AI for good.

Rachit Agarwal
Rachit is a seasoned tech journalist with over seven years of experience covering the consumer technology landscape.