News AMD RX 7000-series' high idle power draw finally appears to be fixed with the latest 23.12.1 drivers

Page 2 - Seeking answers? Join the Tom's Hardware community: where nearly two million members share solutions and discuss the latest tech.
Status
Not open for further replies.
XFX 7900 XTX
Innocn 27M2V, 4K 160Hz over HDMI, HDR with FreeSync on
Driver 23.12.1 only
No custom settings
Idle as in staring at the desktop with MSI Afterburner open to see the power usage
MSI Afterburner causes some amount of GPU usage depending on what page you are on and what skin you have enabled. 4K 165Hz is also a lot. I have a 3080 and 3 monitors (1440p 165Hz, 4K 60Hz, 4K 120Hz), and no matter the settings, my idle with all of them at their highest refresh rate is 91 W. The only way to get my idle wattage down is to disconnect one monitor, which can be any of them, to get down to 34.5 W idle. All of the wattage reporting is done through HWiNFO with a clean boot. I have not been able to get below 33 W at idle with my 3080 regardless of the number of monitors and their refresh rates. I tried my 1440p monitor at 720p 60Hz by itself and I still get 33 W at idle on the card.
 
  • Like
Reactions: -Fran- and bit_user
MSI Afterburner causes some amount of GPU usage depending on what page you are on and what skin you have enabled. 4K 165Hz is also a lot. I have a 3080 and 3 monitors (1440p 165Hz, 4K 60Hz, 4K 120Hz), and no matter the settings, my idle with all of them at their highest refresh rate is 91 W. The only way to get my idle wattage down is to disconnect one monitor, which can be any of them, to get down to 34.5 W idle. All of the wattage reporting is done through HWiNFO with a clean boot. I have not been able to get below 33 W at idle with my 3080 regardless of the number of monitors and their refresh rates. I tried my 1440p monitor at 720p 60Hz by itself and I still get 33 W at idle on the card.
The odd thing is the special 23.10.23.03 driver, which enabled ray tracing in Ratchet & Clank, was able to idle at 60 W. So far that's the lowest I've ever had.
 
Last edited:
Thanks for the news.
@JarredWaltonGPU, do you think an RX 7900 GRE review would ever be possible?
Not a mainstream card, but a quieter RX 6950 XT equivalent is a silent hit that didn't get the limelight.
 
Thanks for the news.
@JarredWaltonGPU, do you think an RX 7900 GRE review would ever be possible?
Not a mainstream card, but a quieter RX 6950 XT equivalent is a silent hit that didn't get the limelight.
It's basically China-only AFAIK, so getting one for the US market is both hard and unlikely unless we want to try and order one ourselves (which ends up being VERY expensive due to shipping, plus potential price gouging). I see one on AliExpress for $980, but that's not exactly a reputable website in my book, given all the scam listings that turn up there.
 
It's basically China-only AFAIK, so getting one for the US market is both hard and unlikely
These factors also make it irrelevant for most of your readers. I wouldn't bother with it, personally.

It is slightly odd, in that it reportedly has nearly the same power budget as the 7800 XT (260 W vs. 263 W), according to Wikipedia. It has more shaders, resulting in a 23% higher boost TFLOPS, but then has base TFLOPS that's 6% lower than the RX 7800 XT. Also, memory bandwidth is just 92% of the RX 7800 XT's, due to running the same number of channels at a lower frequency.

In the end, I'd guess it performs similarly to the RX 7800 XT, if possibly even a bit worse. However, if they'd used the same GDDR6 frequency as the RX 7900 XT(X) and given it a bit higher power budget, it could've slotted nicely between the RX 7800 XT and RX 7900 XT.
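For anyone wanting to sanity-check those figures, the boost-TFLOPS gap falls out of the shader counts and clocks (a back-of-envelope sketch; the ×4 factor assumes RDNA3's dual-issue FMA, and the 2245/2430 MHz boost clocks are the Wikipedia figures):

```python
def tflops(shaders: int, clock_ghz: float) -> float:
    # shaders * 2 FLOPs per FMA * 2-way dual-issue (RDNA3)
    return shaders * 4 * clock_ghz / 1000

gre = tflops(5120, 2.245)  # RX 7900 GRE at its listed boost clock
xt = tflops(3840, 2.430)   # RX 7800 XT at its listed boost clock
print(f"GRE {gre:.2f} vs 7800 XT {xt:.2f} TFLOPS -> {gre / xt - 1:+.0%}")
```

Running it prints roughly 45.98 vs. 37.32 TFLOPS, i.e. the ~23% boost advantage discussed above.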
 
Last edited:
It's basically China-only AFAIK, so getting one for the US market is both hard and unlikely unless we want to try and order one ourselves (which ends up being VERY expensive due to shipping, plus potential price gouging). I see one on AliExpress for $980, but that's not exactly a reputable website in my book, given all the scam listings that turn up there.
It's actually pretty available in the EU; I guess AMD segmented it away from the US.
That's sad, because from what little information is available, it looks like it matches the best of AMD's last gen while being more power efficient, and it's around 600-700€ in some shops, which means RX 6950 XT performance at RX 6800 XT price.

It seems a compelling mid-range card but since I generally tend to trust your reviews more than other places I would have really appreciated your take on the card.
 
  • Like
Reactions: P1nky and Order 66
It's actually pretty available in the EU; I guess AMD segmented it away from the US.
Does it seem like these could be grey-market imports (i.e. not officially sold in the EU - just imported informally)?

it looks like it matches the best of AMD's last gen while being more power efficient
I don't see how it could possibly be that fast.

Either the specs in Wikipedia are way off, or the benchmarks you're basing that on are heavily biased towards things that RDNA3 is good at. See my above post for rationale.

The specs I used are here:
 
Last edited:
Does it seem like these could be grey-market imports (i.e. not officially sold in the EU - just imported informally)?


I don't see how it could possibly be that fast.

Either the specs in Wikipedia are way off, or the info you're basing that on is heavily biased towards things that RDNA3 is good at. See my above post for rationale.

The specs I used are here:
If you include the boost values of the GRE it seems to be a bit faster than the 7800 XT. The boost values are shown in the chart below their base values in italics.
 
  • Like
Reactions: Order 66
If you include the boost values of the GRE it seems to be a bit faster than the 7800 XT. The boost values are shown in the chart below their base values in italics.
I saw the boost values. In my prior post, I already mentioned that they're about 23% faster than the 7800 XT.

However, it's supposed to be limited to just 3W higher than the 7800 XT and its memory is slower. Not only that, but it has the same amount of L3 cache.

So, again, I really don't see how it could hang with the 6950 XT, unless the Wikipedia specs are off or the benchmark @Zarax saw just happens to hit its sweet spot.
 
  • Like
Reactions: Order 66
I saw the boost values. In my prior post, I already mentioned that they're about 23% faster than the 7800 XT.

However, it's supposed to be limited to just 3W higher than the 7800 XT and its memory is slower. Not only that, but it has the same amount of L3 cache.

So, again, I really don't see how it could hang with the 6950 XT, unless the Wikipedia specs are off.
Well let's look at the specifics of the comparison.

Advantage to the 7900 GRE vs. 7800 XT:

Much larger die: 57.7×10⁹ transistors, 529 mm² vs. 28.1×10⁹ transistors, 346 mm²
Many more CUs: 80 vs. 60 (shader config 5120:320:192:80:160:80 vs. 3840:240:96:60:120:60)
Substantially higher texture fill rate: 718.4 GT/s vs 583.2 GT/s
Nearly double the pixel fill rate: 431.0 GP/s vs 233.2 GP/s
Substantially higher half-, single-, and double-precision TFLOPS: 91.96 / 45.98 / 1.437 vs. 74.65 / 37.32 / 1.166

Advantage to the 7800 XT vs. 7900 GRE:

Faster Infinity Cache bandwidth: 2708 GB/s vs. 2250 GB/s
Faster memory bandwidth: 624 GB/s vs. 576 GB/s
Slightly higher power budget: 263 W vs. 260 W

I believe that the GRE would be substantially faster when memory bandwidth is less of a factor, which, with games, is often the case.
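Those trade-offs are easy to line up side by side (a sketch using only the figures listed above; GRE/XT ratios above 1.00 favor the 7900 GRE):

```python
# Ratios of the spec figures listed above (RX 7900 GRE relative to RX 7800 XT).
specs = {
    "pixel fill (GP/s)":     (431.0, 233.2),
    "texture fill (GT/s)":   (718.4, 583.2),
    "FP32 boost (TFLOPS)":   (45.98, 37.32),
    "VRAM bandwidth (GB/s)": (576.0, 624.0),
    "Infinity Cache (GB/s)": (2250.0, 2708.0),
}
for name, (gre, xt) in specs.items():
    print(f"{name:23s} GRE/XT = {gre / xt:.2f}")
```

So the GRE wins everywhere except VRAM bandwidth (0.92×) and cache bandwidth (0.83×), which is exactly where the disagreement in this thread lies.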
 
  • Like
Reactions: Order 66
Much larger die: 57.7×10⁹ transistors, 529 mm² vs. 28.1×10⁹ transistors, 346 mm²
Because it's the same die as the other 7900's, but much of it is disabled. So, size is misleading.

Many more CUs:
Yes, 5120 vs. 3840. I know that.

Substantially higher texture fill rate: 718.4 GT/s vs 583.2 GT/s
Nearly double the pixel fill rate: 431.0 GP/s vs 233.2 GP/s
Maybe going to L3 cache, but it lacks the main memory bandwidth for that.

Substantially higher half-, single-, and double-precision TFLOPS: 91.96 / 45.98 / 1.437 vs. 74.65 / 37.32 / 1.166
Again, I saw that. That's how I computed the 23% boost advantage for the GRE, but that's boost - not sustained!

Also, you just need to look at single-precision. That's what games use, and the others are proportional to it.

I believe that the GRE would be substantially faster when memory bandwidth is less of a factor, which, with games, is often the case.
I already basically said as much, in post #30.

...except, you overlooked the part where the base TFLOPS of the GRE is lower!! So, under sustained load, the GRE should bog down more than the 7800 XT.
 
Because it's the same die as the other 7900's, but much of it is disabled. So, size is misleading.


Yes, 5120 vs. 3840. I know that.


Maybe going to L3 cache, but it lacks the main memory bandwidth for that.


Again, I saw that. That's how I computed the 23% boost advantage for the GRE, but that's boost - not sustained!

Also, you just need to look at single-precision. That's what games use, and the others are proportional to it.


I already basically said as much, in post #30.

...except, you overlooked the part where the base TFLOPS of the GRE is lower!! So, under sustained load, the GRE should bog down more than the 7800 XT.
Almost every graphics card I have ever used will stay within about 60 MHz of its boost clocks. I would expect no less from either the 7800 XT or the 7900 GRE. Base clocks for graphics cards are almost never seen anymore; unlike CPUs, they tend to stick to about 90% of their rated boost clocks. If I had to guess, the GRE would be somewhere in the range of 10-20% faster than the 7800 XT. If we take their single-precision TFLOPS into consideration, the 7800 XT has ~81.2% of the single-precision TFLOPS of the 7900 GRE, comparing boost clock to boost clock.
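That ~81.2% figure checks out against the boost numbers quoted earlier in the thread:

```python
# Boost FP32 TFLOPS from the Wikipedia specs discussed above.
xt, gre = 37.32, 45.98
print(f"7800 XT / 7900 GRE = {xt / gre:.1%}")  # -> 81.2%
```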
 
Almost every graphics card I have ever used will stay within about 60 MHz of its boost clocks. I would expect no less from either the 7800 XT or the 7900 GRE. Base clocks for graphics cards are almost never seen anymore; unlike CPUs, they tend to stick to about 90% of their rated boost clocks. If I had to guess, the GRE would be somewhere in the range of 10-20% faster than the 7800 XT. If we take their single-precision TFLOPS into consideration, the 7800 XT has ~81.2% of the single-precision TFLOPS of the 7900 GRE, comparing boost clock to boost clock.
I suspect that the use of a Navi 31 GCD with a lot of it disabled, plus reasonable boost clocks, is the primary cause of the lowered TBP, along with the slower VRAM. So compute will probably end up ~23% faster theoretical and more like ~20% real-world in compute-limited scenarios. Having less VRAM bandwidth shouldn't hurt too much either, as that's the whole point of the L3 caches.

Here's the thing: In my testing, the RX 7900 XT is about 25% faster than the 7800 XT. That's with 25% more VRAM and L3 cache, 28% more bandwidth, and 38% more compute. What that says to me is that the VRAM and cache aspect may be a bit more of a factor than expected. Cutting bandwidth even further but then boosting compute probably won't do as much as if bandwidth was slightly higher along with the boosted compute. It's probably still going to be at least 10% faster at 1440p, but 20% faster will be limited to games that are almost completely compute bound, and those aren't super common — and there will also be edge cases where a game is almost completely bandwidth bound and will see negative scaling vs. 7800 XT.
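That scaling argument is easy to see by lining the numbers up (a sketch; the 7900 XT TFLOPS and bandwidth values are assumed from public spec sheets, and the 25% is the measured average quoted above):

```python
# Spec uplifts of the RX 7900 XT over the RX 7800 XT, next to the ~25%
# real-world gain from the testing described above. Illustration only.
compute = 51.6 / 37.3 - 1    # boost FP32 TFLOPS (assumed spec-sheet values)
bandwidth = 800 / 624 - 1    # VRAM bandwidth in GB/s (assumed values)
observed = 0.25              # measured average uplift quoted above
print(f"compute {compute:+.0%}, bandwidth {bandwidth:+.0%}, "
      f"observed {observed:+.0%}")
```

The observed gain tracking bandwidth (+28%) far more closely than compute (+38%) is the basis for expecting the GRE's extra shaders to be partly wasted on its narrower memory subsystem.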

If you could find these for $550~$600, they might be worth a thought. Anything more than that and the 7900 XT or 7800 XT are the better options.

These are still a bit odd for the configuration. Probably just any Navi 31 GCD that either only has four functional links for the MCDs, or it has fewer than 84 functional CUs. Given AMD set the CU count at 80, it's probably more about the MCD links than the CUs. Makes me wonder how many Navi 31 GCDs exist that only have three functional MCD links, or fewer than 80 functional CUs. Those might show up in future (also limited availability) GRE parts, like a 7800 GRE or 7700 GRE... in about a year. LOL
 
Almost every graphics card I have ever used will stay within about 60 MHz of its boost clocks. I would expect no less from either the 7800 XT or the 7900 GRE.
Which cards? What resolution? Freesync/GSync - up to what max refresh rate? What titles? What CPU? So many variables! You really can't assert that!

Base clocks for graphics cards are almost never seen anymore; unlike CPUs, they tend to stick to about 90% of their rated boost clocks.
CPUs tend to stay well above their base clocks, also!

If I had to guess, the GRE would be somewhere in the range of 10-20% faster than the 7800 XT.
But you don't!

If we take their single-precision TFLOPS into consideration, the 7800 XT has ~81.2% of the single-precision TFLOPS of the 7900 GRE, comparing boost clock to boost clock.
Yes. That has been said over and over again. But you don't have an RDNA3 card, and you've provided no evidence about their stock power consumption and clocking behavior. So, without any sort of evidence, boost performance gets a huge asterisk.

I really don't understand why you insist on making this huge speculative excursion. You simply lack the data and really haven't said anything I didn't already point out. We should just try to find some benchmarks from China or Europe - which shouldn't be that hard to find, if the card is as popular there as @Zarax says.

Here. NotebookCheck is a very solid reviewer, in general. I can't speak to their GPU reviews, specifically.

In their overall performance rating, the RX 7800 XT is 8% slower, the RX 7900 XT is 16% faster, and the RX 6950 XT is 10% faster. So, there goes that theory!

That's just a benchmark summary. If you want to see a review of a full system that included it, you might be able to glean some more details:

As that review points out, it has the same number of shaders and memory bandwidth as the RX 6950 XT, which is probably why it draws such comparisons. However, a glaring difference is that it has just half as much L3 as the 6950 has Infinity Cache (which is really more like L2). We've seen from other comparisons between these generations that RDNA3's L3 appears to be at a competitive disadvantage to its predecessor's Infinity Cache. Furthermore, the RX 6950 XT has a much bigger power budget to work with, though a caveat is that they're made on different manufacturing nodes.

P.S. I found an explanation of its name:

'For those unfamiliar with the 7900 GRE, AMD announced it about a month ago. The "GRE" stands for "Golden Rabbit Edition," a super odd name for a gaming graphics card. The name choice stems from the fact that 2023 is the year of the rabbit in Chinese culture.'

Source: https://www.techspot.com/review/2721-amd-radeon-7900-gre/

Consistent with the NotebookCheck review, that review also pegs it squarely below the RX 6950 XT.
 
Last edited:
In the end, I'd guess it performs similarly to the RX 7800 XT, if possibly even a bit worse. However, if they'd used the same GDDR6 frequency as the RX 7900 XT(X) and given it a bit higher power budget, it could've slotted nicely between the RX 7800 XT and RX 7900 XT.
No reason to guess! Steve from HUB still does writeups for Techspot and they got a GRE for HUB so it's in their tables:
 
Which cards? What resolution? Freesync/GSync - up to what max refresh rate? What titles? What CPU? So many variables! You really can't assert that!
I have 1080p, 1440p, and 4K monitors, with or without G-Sync, at 1-120Hz on all of them (165Hz on the 1440p). I have hundreds of games and 2 different AMD CPUs. As a general statement, I would say I am covered.
CPUs tend to stay well above their base clocks, also!
But nowhere near their advertised peak clocks, comparatively.
But you don't!
I may not have to, but I did. Jarred seemed to agree 10% was not out of the question for 1440p, but I do not want to speak for him.
I really don't understand why you insist on making this huge speculative excursion. You simply lack the data and really haven't said anything I didn't already point out. We should just try to find some benchmarks from China or Europe - which shouldn't be that hard to find, if the card is as popular there as @Zarax says.
I made this speculative excursion because you seemed to believe it wouldn't even reach the 7800 XT's performance, let alone exceed it, when, looking at its specs, it seems fairly clear to me that it is at least somewhat faster than the 7800 XT. Speculating about performance is something I consider fun. I do not lack data at all; my speculation is based on the hard performance metrics from your Wikipedia link.
 
I may not have to, but I did. Jarred seemed to agree 10% was not out of the question for 1440p, but I do not want to speak for him.
See my updated post for benchmarks. I linked two sites - one of them very good - that went through fairly comprehensive benchmarks. There never was a need to waste time speculating.

I made this speculative excursion because you seemed to believe it wouldn't even reach the 7800 XT's performance, let alone exceed it,
What I said was:

"In the end, I'd guess it performs similarly to the RX 7800 XT, if possibly even a bit worse."

And it turns out not to have been too far off! The gap between the 7800 XT and 7900 GRE is half as big as the gap between the latter and the 7900 XT!
 
  • Like
Reactions: helper800
It's basically China-only AFAIK, so getting one for the US market is both hard and unlikely unless we want to try and order one ourselves (which ends up being VERY expensive due to shipping, plus potential price gouging). I see one on AliExpress for $980, but that's not exactly a reputable website in my book, given all the scam listings that turn up there.
It's most definitely not China-only. It's quite popular in OEM machines in Europe, and I think there was at least one separate listing somewhere already, but I can't find it right now. Not really watching for it, honestly; I'm not in the market for a new GPU right now... and the card is worse than my current one anyway.
 
So, guess what! I was wrong; AMD's "Power Problem" isn't actually fixed. I probably had some wrong refresh rates set when doing the testing.

TLDR: 4K 60Hz has much lower power draw. Depending on the GPU, some can run at 4K 144Hz without spiking the power draw more than ~35%. Retesting is ongoing, but it's messy, and now I'm wondering if using a different DisplayPort connection might make a difference on some cards.

Full details: I'm doing some testing of a recent game launch, and I noticed RX 7800 XT power draw was back to around 30W idle power. I thought it was because of all the GPU swaps, but it's not. Even a full driver cleaning didn't make a difference. What did make a difference: Setting the refresh rate lower. And this is where it gets fun.

RX 7800 XT idle power use drops significantly with a 120Hz or lower refresh rate (again, at 4K). RX 7700 XT behaved in a similar fashion: 120Hz was like 13W, 144Hz was 28W. And then RX 7600 simply didn't like anything more than 4K 60Hz. It drew 6W at 60Hz, but jumped to 15~18W at anything above that level (from 82Hz to 144Hz).

It's not just RX 7000-series GPUs, either. I haven't tested them all, and in fact only have results for the RX 6800 XT at the moment, but at 4K 144Hz, or anything above 60Hz, it draws about 40W idle. At 60Hz, though, it drops to 8~10W power draw.

@jeffy9987 And this would potentially explain what you're seeing. I haven't checked power draw on a higher refresh rate display (I have a 4K 240Hz display on a different PC... except it uses DSC to get there and that may or may not be a factor. Lots of variables, in other words), but if you have a 7900-class card, it's entirely possible that the line is at 144Hz, so your use of 165Hz is "too high" for the lowered power draw.

Which honestly still feels like there's some driver BS that needs to be fixed. I would think all of the current RX 7000-series GPUs should have the same video output hardware, so if the 7900-class cards can have lower idle power use at 144Hz, but the Navi 32 cards need 120Hz or lower, and Navi 33 needs 60Hz... that's weird. Is the GPU compute somehow factoring into how much power is needed? Because the lower spec cards seem to need to kick into a higher power state earlier than the high-spec cards. 🤷‍♂️

Anyway, I'm adding an update to the text and am in progress on testing other workloads (4K video playback, both via VLC and via YouTube). There will be a future article at some point. This is my TED Talk mea culpa.
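For anyone who wants to repeat this kind of idle testing on a Linux box, the amdgpu driver exposes board power through hwmon; here's a minimal polling sketch (the sysfs paths and sensor file names vary per system and kernel, so treat them as assumptions to verify on your own machine):

```python
import glob
import time

# amdgpu reports average board power in microwatts via hwmon. Some
# kernel/card combinations expose power1_input instead of power1_average.
def find_power_file():
    candidates = glob.glob(
        "/sys/class/drm/card*/device/hwmon/hwmon*/power1_average"
    ) + glob.glob("/sys/class/drm/card*/device/hwmon/hwmon*/power1_input")
    return candidates[0] if candidates else None

def log_idle_power(samples=10, interval=1.0):
    path = find_power_file()
    if path is None:
        raise RuntimeError("no amdgpu hwmon power sensor found")
    for _ in range(samples):
        with open(path) as f:
            watts = int(f.read().strip()) / 1e6  # microwatts -> watts
        print(f"{watts:.1f} W")
        time.sleep(interval)
```

Logging a minute of samples at each refresh rate (60/120/144/165Hz) would make the power-state thresholds described above easy to spot.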
 
See my updated post for benchmarks. I linked two sites - one of them very good - that went through fairly comprehensive benchmarks. There never was a need to waste time speculating.


What I said was:
"In the end, I'd guess it performs similarly to the RX 7800 XT, if possibly even a bit worse."​

And it turns out not to have been too far off! The gap between the 7800 XT and 7900 GRE is half as big as the gap between the latter and the 7900 XT!
You guessed the 7900 GRE to be at some performance level below or exactly at the 7800 XT's performance. Similar means identical, interchangeable, or the same, not some amount above or below the reference. I guessed 10-20% faster. Lo and behold, we were both wrong in our speculation, as it was about 4-7% faster. We were both about as far off as each other.
 