Nvidia DLSS 5 might be the future of graphics, and I still want a giant “Off” button

Neural rendering is cool and all, but "yassifying" game characters... less so

[Image: Nvidia DLSS 5 comparison in Resident Evil. Credit: NVIDIA]

For years, photo-realism was seen as the ultimate goal for next-gen games. Ray-tracing was a solid step forward. And then came super-resolution and super-sampling upgrades. Yet, when Nvidia showcased its next great advancement for video game visuals, the fifth-gen Deep Learning Super Sampling, it stirred a furor. Interestingly, DLSS 5 is not just another version of DLSS with a few cleaner edges and a better performance story.

Nvidia is pitching it as a real-time neural rendering model that can add more photoreal lighting and material detail to a game frame, which is a much bigger shift than plain upscaling. That’s a bold technical swing, and a risky aesthetic one. It sounds impressive, and to be fair, part of it genuinely is. If DLSS 5 works as intended, it could help games look richer without developers brute-forcing every lighting effect the traditional way.


Announced at GTC, DLSS 5 is set to release in the fall of 2026 as Nvidia's biggest graphics leap since real-time ray tracing. But the first reaction wasn't applause; it was memes about "AI faces," "AI slop," and "yassified" characters. While Nvidia insists we're all wrong, it still raises the question: do we actually need this?

What does DLSS 5 even do, and is it actually useful?

Nvidia says DLSS 5 takes each frame rendered by the game, plus motion data, to generate more photoreal lighting and materials in real time. On paper, it should better handle things like skin, hair, and fabric. The company is also positioning it as part of a broader neural rendering future, rather than a one-off gimmick. For photoreal games chasing more realistic lighting, this is a compelling pitch.

This isn’t meant to be a blind, one-click beauty filter either. Developers are supposed to get full control over intensity, color grading, and masking. DLSS 5 also integrates through Nvidia Streamline, meaning studios can decide exactly where the effect applies (and where it doesn’t).
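To make the "intensity and masking" controls concrete, here is a minimal, entirely hypothetical sketch of how a per-pixel blend between the original render and a neural-enhanced frame could work. This is my own illustration of the concept, not Nvidia's actual DLSS 5 or Streamline API; the function name, parameters, and blending math are all assumptions.

```python
import numpy as np

def apply_neural_pass(frame, enhanced, intensity=1.0, mask=None):
    """Blend a neural 'enhanced' frame into the original render.

    frame, enhanced: float arrays in [0, 1], shape (H, W, 3).
    intensity: global strength of the effect (0 = off, 1 = full).
    mask: optional per-pixel weight (H, W), e.g. 0 over character faces
          so a studio can exempt them from the effect entirely.
    """
    frame = np.asarray(frame, dtype=np.float64)
    enhanced = np.asarray(enhanced, dtype=np.float64)
    weight = np.full(frame.shape[:2], intensity, dtype=np.float64)
    if mask is not None:
        weight = weight * np.asarray(mask, dtype=np.float64)
    # Linear blend per pixel: masked-out regions keep the original render.
    return frame * (1.0 - weight[..., None]) + enhanced * weight[..., None]

# A 2x2 test image: top row masked out, bottom row fully affected.
frame = np.zeros((2, 2, 3))      # original render (all black)
enhanced = np.ones((2, 2, 3))    # neural output (all white)
mask = np.array([[0.0, 0.0], [1.0, 1.0]])
out = apply_neural_pass(frame, enhanced, intensity=0.5, mask=mask)
print(out[0, 0, 0], out[1, 0, 0])  # 0.0 0.5
```

If the shipping feature really does expose knobs like these, the masked row above is the "giant Off button" this article is asking for: wherever the weight is zero, the original art direction passes through untouched.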

There is a fair pro-DLSS 5 argument here. Traditional rendering is expensive, especially when developers want cinematic lighting without sacrificing frame rates. A tool that can bridge some of that gap could absolutely benefit players, particularly in big-budget, realistic single-player games.

If it’s so advanced, why does it keep getting called an AI filter?

It didn't help that on the sidelines of GTC, Nvidia chief Jensen Huang said gamers are getting DLSS 5 completely wrong. But if that's the case, why is the criticism so nearly unanimous? Because it is not just people yelling "AI bad" on autopilot.

A big reason the "AI filter" label stuck is that some of the public explanations make DLSS 5 sound closer to smart image reinterpretation than to something deeply aware of a game's full 3D scene. According to Nvidia's Jacob Freeman, the system takes the rendered frame and motion vectors as inputs, while keeping the underlying geometry unchanged.

That is exactly why critics are uneasy. If DLSS 5 is mainly working from a 2D frame plus motion information, then it is still guessing. And this guesswork is how you end up with that uncanny, over-baked look people immediately noticed in early demos.

Once a GPU feature starts changing facial tone, lighting mood, or the overall feel of a scene, people stop seeing it as a harmless enhancement and start seeing it as aesthetic interference.

Death of artistic intent?

This is the biggest question hanging over DLSS 5. Nvidia CEO Jensen Huang has defended the tech aggressively, emphasizing that developers get full control of intensity, grading, and masking. That all sounds reassuring in theory, but my eyes say otherwise.

In the demo, DLSS 5 noticeably shifts color grading and contrast in ways that make you question whether developers actually opted into those changes.

Resident Evil Requiem has one of the most jarring showcases of this tech, with Grace getting what looks like subtle makeup applied to her eyes and lips. Other examples, like Starfield, reinforce this oddly generic look, one that adds "detail" without necessarily adding to the immersion.

Going by various videos and posts online, both gamers and some developers were put off by the beauty-filter effect in character faces. And while Nvidia claims developers will have full control, some were blindsided by the announcement altogether, including people working at major studios like Capcom. One developer at Ubisoft even said, “We found out at the same time as the public.”

When the key selling point becomes “look how much the AI changed this,” it is hard to blame people for asking whether the original art direction is being preserved or overwritten.

Are gamers overreacting or spotting a real problem early?

The community response has been messy, but it is not baseless. Reddit threads are full of people calling DLSS 5 "AI slop," with valid complaints about the tech wiping out moody lighting, homogenizing visual style, and making games look plasticky or uncanny. These blunt reactions also point to a real fear: that a single AI model could give two very different games the same glossy, Nvidia-approved look.

Are we supposed to actually believe DLSS 5 gives developers control to maintain a game’s “unique aesthetic” when the examples they show completely change the artistic style of some characters?

“Ah yes, this completely different looking person is what I wanted all along!” https://t.co/vSWDw51A29

— Hardware Unboxed (@HardwareUnboxed) March 16, 2026

My take is simple: DLSS 5 is not automatically doomed, and it is not fair to dismiss the tech as worthless. But Nvidia is asking players to trust an AI layer with something more important than frame rate, which is a game's visual identity. That is a much harder sell.

Until DLSS 5 proves that it can enhance games without making them feel AI-treated, the criticism is not just valid, it is necessary.

Vikhyaat Vivek