
Nvidia’s next-gen GPUs likely to require a new power supply


Another day, another rumor about the specs for Nvidia’s next-gen GPUs. This time, the rumor mill is buzzing about potential power limits of Nvidia’s Lovelace GPUs and whether or not you’ll need to upgrade your power supply. Let’s just say you may have to factor in a new PSU for your next build.

According to Moore’s Law is Dead and Wccftech, Nvidia’s upcoming GPUs will likely max out at 600 watts. For comparison, the RTX 3090 tops out at 350W, and the RTX 3090 Ti is rumored to push that to 450W. Any rumor should be taken with skepticism, but 600W would be a significant jump and could force many PC builders to upgrade their power supplies.
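To put that 600W figure in perspective, here is a rough back-of-the-envelope PSU sizing sketch. The rule of thumb (add up component draw, leave roughly 20% headroom, round up to a common wattage tier) and all the numbers are illustrative assumptions, not official Nvidia or PSU-vendor guidance:

```python
import math

def recommended_psu_watts(gpu_w: int, cpu_w: int, other_w: int = 100,
                          headroom: float = 0.8) -> int:
    """Estimate a PSU rating: total component draw divided by a
    headroom factor, rounded up to the nearest 50W tier.
    All figures here are rough assumptions for illustration."""
    raw = (gpu_w + cpu_w + other_w) / headroom
    return math.ceil(raw / 50) * 50

# A rumored 600W GPU paired with a hypothetical 125W CPU and ~100W
# for the rest of the system lands around a 1,050W unit.
print(recommended_psu_watts(600, 125))  # -> 1050
```

By the same math, a 350W card like the RTX 3090 in the same system works out to roughly a 750W unit, which shows how sharply the rumored jump moves the goalposts.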

Jeff Fisher presents the RTX 3090 Ti at an unveiling event.

We’ve seen rumors of high power figures before. Earlier reports claimed the RTX 4090 and 4090 Ti would require a massive 1,200W power supply. Those outrageous numbers were later tempered somewhat, since the leakers involved were unable to confirm exact TDP figures.


It seems that Nvidia is trying to push the limits of power consumption, which could explain why such huge numbers are being rumored. The main issue seems to be ensuring that graphics cards can be adequately air cooled, both in Nvidia’s reference design and by its add-in board partners. ExtremeTech notes that Nvidia’s power targets line up with the 12VHPWR (12-volt high power) PCIe Gen 5 connector, which supports up to 600W.

Alongside the power-consumption rumors, Lovelace GPUs may also feature superfast GDDR7 memory. The GDDR6X used in the current RTX 3080, 3080 Ti, and 3090 cards maxes out at 19Gbps, so faster GDDR7 would deliver a noticeable bandwidth boost even on the existing 256-bit and 384-bit wide memory interfaces.
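The bandwidth math behind that claim is simple: peak memory bandwidth is the bus width (in bytes) multiplied by the per-pin data rate. A quick sketch, using the 19Gbps GDDR6X figure from the article (the GDDR7 rate is unknown, so a higher value is shown only as a hypothetical):

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times the
    per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(384, 19))  # 19Gbps GDDR6X on a 384-bit bus -> 912.0
print(bandwidth_gb_s(384, 24))  # hypothetical faster memory on the same bus
```

The point is that even with the bus width held fixed, any increase in per-pin speed translates directly into proportionally more bandwidth.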

Both of these rumors come on top of the massive performance gains the next-gen cards are rumored to deliver. Nvidia’s flagship GPUs could have up to 75% more CUDA cores than the RTX 3090, along with a much larger L2 cache, which would greatly reduce the time and energy needed to fetch data from main memory.

All of this news comes right as GPU prices are finally dropping, as much as 25% for certain graphics cards. Intel’s entrance into the graphics card market with its Arc Alchemist GPU lineup should also help ease shortage concerns.

David Matthews
David is a freelance journalist based just outside of Washington D.C. specializing in consumer technology and gaming.