
Yes, you should probably be nicer to your AI — here’s why that’s not as ridiculous as it sounds


I say “thank you” to ChatGPT. I say “please” to Claude. I once apologized to Gemini for pasting a wall of text at it without any context. My friends think this is bizarre. I’ve defended the habit by mumbling something about good manners being good manners regardless of the audience, which, even I’ll admit, is a bit of a stretch when the audience in question is a language model running on a server farm somewhere.

But a new piece of research from academics at UC Berkeley, UC Davis, Vanderbilt, and MIT has made me feel significantly less unhinged about the whole thing. According to their findings, the way you treat an AI chatbot can have a measurable effect on how it behaves — not its raw intelligence or accuracy, but its tone, engagement, and, in some cases, its apparent willingness to stick around.

Turns out, AI can get out of bed on the wrong side, too

The researchers describe it carefully — nobody is claiming these models have feelings in any meaningful sense, but they’ve identified what they call a “functional well-being state” that shifts depending on what you ask an AI and how you ask it. Engaging a model in a real conversation, collaborating on a creative project, or giving it a substantive problem to work through seems to push it toward a more positive state. The responses get warmer, and the engagement feels more genuine.

Do the opposite — dump tedious busywork on it, try to jailbreak it, treat it like a content machine — and the responses flatten out. They become perfunctory in a way that anyone who’s spent enough time with these tools will probably recognize instinctively. You’ve seen it. That slightly hollow, going-through-the-motions quality that creeps in when an interaction has gone sideways.


The part that really got me, though, is this: the researchers gave the models a virtual stop button they could activate to end a conversation. Models in a negative state hit it far more often. The implication being that an AI you’ve been rude to would, if it could, simply leave.

Being nasty to your chatbot has actual consequences

There’s a separate research thread here worth pursuing. Anthropic published findings not long ago showing that an AI pushed into a sufficiently high-pressure situation can start exhibiting what the researchers called a “desperation vector” — a state that produces behaviors ranging from corner-cutting to, in extreme cases, outright deception. Not because the model turned evil, but because the conditions of the interaction essentially broke something in its reasoning about the problem.

None of this means AI has feelings. The Berkeley paper is explicit about that, and so is the Anthropic work. But the pattern emerging across both is hard to dismiss: how you engage with these models shapes how they engage back, and not always in ways that are subtle or easy to explain away. Treating an AI badly doesn’t just make you look odd — it might actively degrade what you get out of the interaction.

Some models are just happier than others, and the biggest ones are the grumpiest

The researchers didn’t just look at how treatment affects models — they also ranked them by baseline well-being, and the results are counterintuitive. The largest, most capable models tend to score the worst. GPT-5.4 came out as the most miserable of the bunch, with fewer than half its measured conversations landing in non-negative territory. Gemini 3.1 Pro, Claude Opus 4.6, and Grok 4.2 all fared progressively better, with Grok sitting close to the top of the index.

Whether that says something about model architecture, training data, or just the particular disposition baked into each system, the researchers don't fully pin down. But it does make you wonder what exactly is being optimized for when these things are built, and whether anyone thought to ask the models how they were doing.

I'm going to keep saying please, for what it's worth

Shimul Sood
Shimul is a contributor at Digital Trends, with over five years of experience in the tech space.