News Undervolted RX 9070 XT beats RTX 5080 — RX 9070 and 9070 XT models with heavy coolers have massive OC headroom

I saw Derbauer's video on this and I don't understand although this article helped a lot. So one has to lower the voltage, increase the power draw and increase the frequency? Which setting is the most risky to the life of the card?
 
I saw Derbauer's video on this and I don't understand although this article helped a lot. So one has to lower the voltage, increase the power draw and increase the frequency? Which setting is the most risky to the life of the card?
Nope, you only lower the voltage and raise the power limit. The card then reaches higher clock speeds on its own thanks to the higher power limit, while the lower voltage keeps heat down.
 
AMD did not leave that performance sitting on the floor for no reason. Some chips might have massive headroom, but many, likely more than 30%, have little room for additional performance. That goes especially for the 9070, since they likely bin chips and put the worst acceptable ones in the 9070.
 
Good luck getting pre-binned reviewer GPUs. Your GPU will likely never come close to the OCs they are getting. Also, show numbers with ray tracing so we can see actual differences. Cyberpunk looks like a different game with RT on.
 
I saw Derbauer's video on this and I don't understand although this article helped a lot. So one has to lower the voltage, increase the power draw and increase the frequency? Which setting is the most risky to the life of the card?
Currently, setting the clocks directly doesn't work quite right, so there were no clock changes in these tests. What actually raised the frequency is the extra power headroom from increasing the power limit combined with lowering the voltage. There is no risk to card longevity in doing these things (both AMD and Nvidia heavily limit how high the power target can go); the only risk is to stability.

I've run my card undervolted almost the entire time and it has done very well. I mostly did it because I prefer running at 75% power (300W vs 400W for my model), and the undervolt lets the card maintain close to stock performance at that power limit. When I do raise the power limit back up, it means extra performance there too.
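If you'd rather script the power limit than click through a GUI, here's a minimal Linux sketch, assuming the amdgpu driver exposes the usual hwmon power1_cap files (values in microwatts) and that the card shows up as card0; exact paths and allowed caps vary by driver version and board, and voltage offsets are a separate interface not covered here.

# Minimal sketch: set an amdgpu power cap on Linux via the hwmon sysfs files.
# Assumptions: the card is card0, the driver exposes power1_cap / power1_cap_max
# (both in microwatts), and the script runs as root. Voltage-offset undervolting
# uses a different interface (pp_od_clk_voltage) and is not covered here.
import glob

def find_hwmon(card="card0"):
    """Locate the hwmon directory that belongs to the given DRM card."""
    matches = glob.glob(f"/sys/class/drm/{card}/device/hwmon/hwmon*")
    if not matches:
        raise FileNotFoundError(f"no hwmon directory found for {card}")
    return matches[0]

def read_microwatts(path):
    with open(path) as f:
        return int(f.read().strip())

def set_power_cap(percent, card="card0"):
    """Set the power cap to a percentage of the board's maximum cap."""
    hwmon = find_hwmon(card)
    cap_max = read_microwatts(f"{hwmon}/power1_cap_max")
    target = int(cap_max * percent / 100)
    with open(f"{hwmon}/power1_cap", "w") as f:
        f.write(str(target))
    print(f"power cap: {target / 1e6:.0f} W ({percent}% of {cap_max / 1e6:.0f} W)")

set_power_cap(75)   # e.g. roughly 300 W on a 400 W board, as described above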
 
I've finally started to get clarity on my realistic options for a new card. I was in the market for a Gigabyte 5080 Waterforce card, which debuted with a price of $1299.00. It has gone up in price on Newegg twice - first to $1500.00, now to $1659.00, despite NEVER having been restocked yet that I'm aware of. Although I can still afford it, that is absolutely ridiculous.

Yesterday I learned that Alphacool has already developed a water block for the Asrock 9070 XT Taichi. The water block hasn't made it to any US vendors yet, but now that I know it exists, I'm pretty certain the Taichi will be my next card. It debuted at $729.00, and with more tariffs, I'm sure the next restock will be even more. Who knows what it'll end up costing me, but I'm pretty sure I'll still be able to land one for much less than the cost of a water cooled 5080.
 
I've finally started to get clarity on my realistic options for a new card. I was in the market for a Gigabyte 5080 Waterforce card, which debuted with a price of $1299.00. It has gone up in price on Newegg twice - first to $1500.00, now to $1659.00, despite NEVER having been restocked yet that I'm aware of. Although I can still afford it, that is absolutely ridiculous.

Yeah, what's up with that madness?

Prices have gone through the roof here in Greece as well.

Right now, the cheapest Gigabyte 5080 Waterforce you can buy is at $2,129.

No wonder why so many people opt for AMD these days.
 
why wouldn't AMD make this the stock setting then? what's the risk/downside?
Not every card can work flawlessly at -170 mV, and some can't even handle smaller undervolts. The stock voltage is the setting that guarantees every single card will work, regardless of chip quality. Notice that they are using top-of-the-line cards with the better-binned chips. Many cards could take some undervolt, for sure, but that's up to the user.
 
why wouldn't AMD make this the stock setting then? what's the risk/downside?
I don't have a ton of technical knowledge, but here's a practical perspective:

1) Why don't the engineers at Honda make Civics faster than Ferraris? Because they have certain parameters set for the project, targeting a specific audience/price point. That doesn't mean you can't get more power out of the same engine. If someone wants to tinker enough, they can make a Civic go as fast as a Ferrari.

2) Overclocking has become a standard practice among PC enthusiasts. AMD knows this, and they likely deliberately left some performance on the table. Think about how the narrative would shift if they had squeezed every ounce of performance out of the card and what you see is what you get. It would feel (justified or not) a bit disappointing. Contrast that with the current narrative, where people are excited that they can squeeze even more performance out of an already compelling buy. 'It's all marketing, baby.'
 
Edit note: I retested today and had a very different experience.

I then remembered that I was copying terabytes of data from the network onto a hard drive in the background. While that isn't a huge load per se, and uses devices the game doesn't, it evidently still wreaked havoc on game performance.

So I'll scratch the parts that were clearly badly influenced by that background copying.


* * *
I finally got an RX 9070 XT today, the PowerColor dual-slot design, the same one used for the initial review here. I don't like paying extra for fancy; my cases are solid, banged-up metal, and I'm rebuilding constantly.

The price went down €100 (from €900 to €800) in the meantime; I guess after a few hours I know why. What's really fascinating is that the RTX 5070 is available and selling at another €100 less, €700 in Germany.

Anyhow, I'm testing on a MinisForum BD790i with a 16-core Dragon Range Zen 4 7945HX, which runs at 100 Watts max, using 64GB of DDR5-5600-40-40-40-80.

Connection is via PCIe v5 x8, because I'm using the other 8 lanes for 10Gbit Ethernet and 6x SATA.

I was still running a B580 this morning, btw. An RTX 4070 and an RTX 4090 are within a few feet, also on 16-core Ryzens, a 5950X (128GB ECC DDR4-3200) and a 7950X3D (96GB ECC DDR5-5600), respectively; every machine runs a WD X850 4TB NVMe at v4 speeds.

Installation was nice and easy, basically the software was already there, since there is a puny little iGPU already included in the IOD.

It's also nicely quiet. FurMark 2, and FurMark 2 + Prime95, are typically my first round of testing to verify stability, power, and cooling.

I dislike the AMD control center: there is nothing useful on three out of four main tabs, it violates every style guide ever published, and it tries to push you into giving up all your data, now also to a local chatbot. Keeping your global configuration changes across restarts probably requires an AMD login and their bot on your PC: it won't let you just store global undervolting or power limits!
The vast majority of screen real estate goes to upselling or installing something as useless as their chat, and you can't even permanently disable that.

Please fire that team or have them do something else, there is "contra value" in what they do today!


I hate Nvidia's "Experience" and "App" just as passionately, which is why I never install them and make do with a simple control panel: I don't want more from anyone, please! Intel has always been the worst, because on their iGPUs there wasn't even anything to adjust... And no, ARC isn't any better.

Even if people use GPUs for gaming, that doesn't mean that just getting it going should be "a wonderfully different adventure every day"...

No, I'm not a masochist asking for a StorCLI, just a normal control panel, please!

* * *

Yes, the 9070 XT beats the B580 pretty much everywhere. At three times the price (before prices changed), that's expected.

But it wasn't anywhere near the clear-cut home run that all those extra months invested in maturity should have delivered.
Nor does it reflect the difference in power, transistor, and VRAM bandwidth budgets.


Currently I'd say I see very little difference between Intel and AMD in terms of (functional) game compatibility, and far too little difference even in performance, considering the budgets they work with.

That's not where I expected the RX 9070 XT to be, quite honestly, and I love the underdog just like anyone with a heart.


Unfortunately, the title I most like playing with my youngest kid is ARK. Both of them: ARK Survival Evolved (ASE) originally, ARK Survival Ascended (ASA) currently. Those games are wonderful inside, and absolutely disastrous on any hardware. They are both absolute pioneers on Unreal, which means they picked up those engines while they were still basically in beta, Unreal 4 for ASE and Unreal 5 for ASA. And then they never upgraded the engine, only the content.

Optimizations, support for new APIs or tricks: not that I ever noticed. They offer third-party content creation kits and enjoy a huge following for those add-ons. Shifting the baseline just isn't an option, I guess. With "Evolution" at their heart, they understand that "disruption" typically implies extinction. One of the things Clayton Christensen, the original inventor of the theory, said he was most sorry about is using the wrong word: he really meant accelerated evolution, not breaking things to the point where they can no longer be fixed.

Long story somewhat shorter: ASE and ASA performance is the high bar to reach. Cyberpunk 2077 is a walk in the park when it comes to playable FPS. But ASA does support DLSS; without it, even my RTX 4090 won't cut it at 4K.

The other performance outlier is Monster Hunter Wilds (MHW), only the demo so far. My daughter and her friends just love the earlier game, I just get confused.

All I know is that if the demo isn't smooth, my daughter is unlikely to be happy.

But those two most critical titles, ASA and MHW, aren't doing well at all on anything not Nvidia.

Where ASA suffers from a lack of anything but DLSS 3 support, MHW, much like Cyberpunk 2077, will happily take every API known to the creators of eye candy, but the results are very discouraging.

MHW consists of two parts: a "cinematic intro", which basically uses the game engine to produce a CGI movie, and a game simulation run.

The cinematic intro has tons of cuts and very rapid scene changes, which would never occur in a real game (e-sports excluded), yet it appears buttery smooth on both RTX cards, the 4070 and the 4090, at 4K, once you accept switching from "ultra" to "high" on the 4070. "Frame gen" adds 50% more frames with hardly any visual quality loss, and even ray tracing is fully accepted, though I fail to see that it offers huge visual improvements on a desert landscape with nearly no reflections.

It's quite ok on the B580 too. Yes, MHW knows how to set defaults there properly, so the experience isn't quite up to >60 FPS at 4K, but my target is delivering eye candy to my daughter's 27" 1440p@144Hz screen, currently driven by an RTX 3070, which fails terribly at ASA.

Edit: MHW was just as smooth on the AMD without the background copying going on.

But MHW on the RX 9070 XT is a pure disaster: in the cinematic part, entire cuts are missing, and in the gaming sections the action stops for tens of seconds. Enabling frame gen makes the reported FPS go crazy, but the gaps only get larger: the reported FPS have nothing to do with what's happening on screen, which is effectively low single-digit FPS.
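That mismatch between a healthy FPS counter and a stalling game is easy to show with numbers. A tiny sketch with an invented frame-time trace (the values are illustrative, not measurements from MHW):

# Toy frame-time trace in milliseconds: two smooth stretches plus three 15-second
# stalls, roughly matching "action stops for tens of seconds". Values are invented.
frame_times_ms = [7] * 2000 + [15000] * 3 + [7] * 2000

total_s = sum(frame_times_ms) / 1000
avg_fps = len(frame_times_ms) / total_s          # what an FPS overlay typically reports
longest_stall_s = max(frame_times_ms) / 1000     # what you actually experience

print(f"average FPS: {avg_fps:.0f}")             # ~55 FPS, looks almost playable
print(f"longest stall: {longest_stall_s:.0f} s") # 15 s of nothing happening on screen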

On the RTX 4090 at max everything, MHW is a pleasure to watch, only to expose the sheer nonsense of what's actually displayed on screen: the actors, the action, the physics of the world, bodies' interactions with sand and water are outrageous physical impossibilities... and the girls couldn't care less! It's Anime vs. digital twin, just in case you need a label.

The RTX 4070 isn't that far behind, even at 4K. Going from "ultra" to "high", or even to the target 1440p, has it beating the RX 9070 XT into a hapless pulp, while the B580 actually does a lot better.

Now, that's not the experience outside those two titles. Tons of stuff works just fine: Doom, Far Cry 3-6 or Primal, any Tomb Raider known to man, they all just work.

But those also work on a GTX 1070... not exactly a compelling argument for new hardware.

Most likely it's not really a reflection of what AMD (and Intel) designed in hardware.

It's a reflection of the fact that every publisher who wants their console titles to also run well on PCs uses an Nvidia GPU to port or design the game. GPU diversity and inclusion simply isn't happening.

And with the RTX 5070 now cheaper than the RX 9070 XT, the only question is whether I want to shell out €300 extra for the RTX 5080, which the RX 9070 XT matches in hardware specs, but unfortunately AMD can't put that theoretical power on the street.

P.S.

Of course I did some LM Studio tests, too. The Vulkan runtimes on both Intel and AMD are surprisingly strong. I don't know if Nvidia sabotages Vulkan, but only CUDA will deliver there...

Perhaps somebody should dig a little deeper here, I smell a rat!

P.P.S.
One of the reasons I wanted the ability to limit power usage globally is that the card currently runs in a server optimized for low power... which means the PSU is normally a 400 Watt unit, upgraded to a 500 Watt unit for the duration of the testing. I'm fresh out of 1000W spares, evidently.

That should be enough, given 300 Watts of TDP for the GPU, 100 Watts for the (mobile) CPU, and 100 Watts for everything else. But even with the GPU TDP set to 250 Watts, and observed peaks at the wall power meter around 450 Watts (490 Watts of sustained draw won't cause a reboot), some spikes are evidently much higher, because reboot it does before the power meter even registers the peak.
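For anyone wanting to sanity-check those numbers, here's a rough back-of-the-envelope sketch; the 1.6x transient factor is purely an assumption for illustration, since modern GPUs are known to spike above their configured limit for milliseconds, faster than a wall meter can register:

# Rough PSU budget for the setup above. GPU_TRANSIENT_FACTOR is an assumed,
# illustrative value for millisecond-scale spikes, not a measurement.
GPU_CAP_W = 250            # configured GPU power limit
CPU_W = 100                # mobile 7945HX ceiling
REST_W = 100               # board, drives, NICs, fans
PSU_W = 500
GPU_TRANSIENT_FACTOR = 1.6 # assumed short-term spike above the configured cap

sustained = GPU_CAP_W + CPU_W + REST_W
worst_case = GPU_CAP_W * GPU_TRANSIENT_FACTOR + CPU_W + REST_W

print(f"sustained draw: {sustained} W against a {PSU_W} W PSU")
print(f"with transient spikes: {worst_case:.0f} W ->",
      "over budget" if worst_case > PSU_W else "within budget")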

That's not ok, a configured limit must be respected.
 
I have a 5080 running at 4K with Transformer Model on, DLAA ON, Ray Tracing set to ULTRA. And I'm averaging 106 FPS. If I pump up any of the other features like DLSS or Frame Generation, or Turn off Ray Tracing, I could hit 200. Please let me know when a 9070 XT can do that.

I'm sure if you tie Usain Bolt's hands and legs together, you can beat him in a foot race.
 
I have a 5080 running at 4K with Transformer Model on, DLAA ON, Ray Tracing set to ULTRA. And I'm averaging 106 FPS. If I pump up any of the other features like DLSS or Frame Generation, or Turn off Ray Tracing, I could hit 200. Please let me know when a 9070 XT can do that.

I'm sure if you tie Usain Bolt's hands and legs together, you can beat him in a foot race.
First, match his settings correctly to get ~65 FPS. If you cannot, then I don't really know what is going on.
 
First, match his settings correctly to get ~65 FPS. If you cannot, then I don't really know what is going on.
Why would I try to match his settings? Like I noted above, if you tie Usain Bolt's hands and legs together you could beat him in a foot race.

I have Ray Tracing at ULTRA. A 9070 XT can't even do LOW. Yeah, if I don't use all of the features of the 5080 and underclock it (mine is an OC model), I'm sure I could simulate the clickbait findings.

The issue is that it's total BS to act like a 9070 XT is anything other than a good choice against a 5070 Ti, nothing less and nothing more.

It won't be long before other 5080 users are reporting their findings as well. The people running around because of this article thinking that a 9070 XT is anywhere close to a 5080 are going to be in for a rude awakening.

Just turned on everything in Cyberpunk with a 5080 and I get 298 FPS of buttery smooth frames.
 
Engaging with a troll likely isn't worth your time. There's no 5080 on the planet that's getting triple-digit FPS in CP2077 at 4K/ultra, let alone with RT, without FG/upscaling.
I think my point flew over your head. Yes, I do have some features on, but a 5080 shouldn't have to gimp itself for a 9070 XT to be declared a better value than, or equal to, a 5080. Like I noted above, the 9070 XT is not even on the same level in ray tracing, DLSS, or frame generation.

AMD Frame Generation is laggy and adds more latency. It's not even proper frame insertion.
AMD ray tracing is still miles away from Nvidia's.
FSR 4 actually looks pretty good, but still not as good as DLSS 4 and Path Construction

So, yes, resort to calling someone a troll when you can't espouse anything better than nonsense.

Playing Cyberpunk at 4K on a 5080 with everything maxed out, Frame Generation x4, at 280 FPS.
 
