
ChatGPT can laugh now, and it’s downright creepy

OpenAI's Mira Murati introduces GPT-4o. (Image: OpenAI)

We all saw it coming, and the day is finally here: ChatGPT is slowly morphing into your friendly neighborhood AI, complete with the ability to creepily laugh alongside you when you say something funny, or go "aww" when you're being nice. And that's just scratching the surface of today's announcements. OpenAI just held a special Spring Update event, during which it unveiled its latest large language model (LLM), GPT-4o. With this update, ChatGPT gets a desktop app and becomes better and faster, but most of all, it becomes fully multimodal.

The event started with an introduction by Mira Murati, OpenAI’s CTO, who revealed that today’s updates aren’t going to be just for the paid users — GPT-4o is launching across the platform for both free users and paid subscribers. “The special thing about GPT-4o is that it brings GPT-4 level intelligence to everyone, including our free users,” Murati said.


GPT-4o is said to be much faster, but the impressive part is that it takes capabilities up a few notches across text, vision, and audio. Developers can also build it into their own apps via the API, where it's said to be up to twice as fast and 50% cheaper than GPT-4 Turbo, with rate limits five times higher.

OpenAI announces GPT-4o. (Image: OpenAI)

Alongside the new model, OpenAI is launching the ChatGPT desktop app as well as a refresh of the user interface on the website. The goal is to make the chatbot as easy to communicate with as possible. “We’re looking at the future of interaction between ourselves and the machines, and we think that GPT-4o is really shifting that paradigm into the future of collaboration — where the interaction becomes much more natural,” Murati said.

To that end, the new improvements — which Murati showcased with the help of OpenAI’s Mark Chen and Barret Zoph — really do appear to make the interaction much more seamless. GPT-4o is now able to analyze videos, images, and speech in real time, and it can accurately pinpoint emotion in all three. This is especially impressive in ChatGPT Voice, which became so human-like that it skirts the edge of the uncanny valley.

Saying "hi" to ChatGPT evokes an enthusiastic, friendly response with just the slightest hint of a robotic undertone. When Mark Chen told the AI that he was running a live demo and needed help calming down, it responded sympathetically and suggested he take a few deep breaths. It also noticed when those breaths were far too quick (more like panting, really) and walked Chen through the right way to breathe, first making a small joke: "You're not a vacuum cleaner."


The conversation flows naturally: you can now interrupt ChatGPT rather than waiting for it to finish, and responses arrive quickly, with no awkward pauses. When asked to tell a bedtime story, it adjusted its tone of voice on request, going from enthusiastic to dramatic to robotic. The second half of the demo showed off ChatGPT's ability to accurately read code, help with math problems via video, and read and describe the contents of the screen.

The demo wasn't perfect: the bot appeared to cut off at times, and it was hard to tell whether this was due to someone else talking or because of latency. Still, it sounded just about as lifelike as can be expected from a chatbot, and its ability to read human emotion and respond in kind is equal parts thrilling and anxiety-inducing. ChatGPT's laughter wasn't something I expected to hear this week, but here we are.

GPT-4o, with its multimodal design, will roll out gradually over the next few weeks, along with the desktop app. A few months ago, Bing Chat told us that it wanted to be human; now, we're about to get a version of ChatGPT that might be the closest thing to human we've seen since the start of the AI boom.

Monica J. White
Monica is a computing writer at Digital Trends, focusing on PC hardware. Since joining the team in 2021, Monica has written…