MSI Radeon RX 6950 XT Gaming X Trio Review: Power Hungry

What a terrible model... The Sapphire is way way better, like always. MSI didn't even try to make this card decent; more like just fulfilling commitments or quotas.

Also, I wonder if toying around with the VRAM speed would yield better results than the core in terms of extra FPS and especially power for the RX 6950 XT. I have the RX 6900 XT and I know for sure it does, but mine is capped at ~2000 MHz, since it starts artifacting heavily above that speed. I run it stock, but I wanted to test its limit, heh.
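For reference, that ~2000 MHz reading lines up with the card's stock 16 Gbps effective rate. A quick sketch of the conversion, assuming the overclocking tool reports the base memory clock of 8n-prefetch GDDR6 (as Radeon Software does for these cards):

```python
# Convert a reported GDDR6 memory clock (MHz) to effective data rate
# and total bandwidth. The factor of 8 assumes the tool shows the base
# clock of 8n-prefetch GDDR6; bus width defaults to the 256-bit bus on
# the RX 6900 XT / 6950 XT.

def gddr6_bandwidth(mclk_mhz: float, bus_width_bits: int = 256) -> tuple[float, float]:
    """Return (effective Gbps per pin, total bandwidth in GB/s)."""
    gbps = mclk_mhz * 8 / 1000       # e.g. 2000 MHz -> 16 Gbps effective
    gbs = gbps * bus_width_bits / 8  # pins * rate -> GB/s across the bus
    return gbps, gbs

print(gddr6_bandwidth(2000))  # (16.0, 512.0) -- RX 6900 XT stock
```

So artifacting above ~2000 MHz means the card is already sitting at its rated 16 Gbps with little real headroom.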

Regards.
 

King_V

Illustrious
Ambassador
Editing nitpick:
We are not showing professional application performance with the MSI card, as it was basically the same story as we saw with our initial MSI RX 6950 XT content creation results.
That MSI that's linked should say Sapphire.

Still, given how little benefit pushing the extreme limits of power consumption gave MSI, I'm really curious how much, or rather how little, performance might be lost by backing down the power and clocks. I know nobody buys a top-of-the-line card in order to be power-efficient, but I wonder if we have a situation here similar to the underclocking runs for the Vega 56.
 
I poked around a bit at VRAM speeds when I was doing the Sapphire review. Ultimately, I didn't say much about it, but even though you can push clocks higher, I don't think you get the gains that I'd expect. There's something goofy with the VRAM speeds on these 18Gbps modules where you often don't get anywhere near the theoretical boost in performance relative to the existing 16Gbps cards. I suspect memory timings (which you can't directly see on the GDDR6) are somehow at play.
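The theoretical gap those 18 Gbps modules should open over the 16 Gbps parts is easy to put a number on; quick arithmetic (a sketch, assuming both run the same 256-bit bus, and noting that real gains depend on timings the GDDR6 modules don't expose):

```python
# Theoretical bandwidth of 18 Gbps vs. 16 Gbps GDDR6 on a 256-bit bus.
bus_bytes = 256 / 8          # bus width in bytes per transfer
bw_16 = 16 * bus_bytes       # 16 Gbps modules -> 512 GB/s
bw_18 = 18 * bus_bytes       # 18 Gbps modules -> 576 GB/s
gain = bw_18 / bw_16 - 1     # fractional improvement

print(f"{bw_16:.0f} GB/s -> {bw_18:.0f} GB/s (+{gain:.1%})")  # +12.5%
```

That +12.5% on paper is what the observed results consistently fall well short of.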

For example, and I know this is a specific use case, but the cryptocurrency mining speed of the RX 6950 XT was consistently far lower than the RX 6900 XT, regardless of what I tried. You can get ~65 MH/s out of the RX 6900 XT after tuning, but the best I ever managed on the RX 6950 XT was about 54 MH/s. "Stock" (factory) performance with a tweak to the maximum GPU clock did better than any attempted memory overclock.
 
That is so weirdly interesting... I wonder if the higher clocks were at the expense of way way looser timings?

Regards.
 

Sleepy_Hollowed

Distinguished
Wow, what on earth is this card for? An expensive, inefficient space heater?
Since even messing with voltages does not seem to improve performance, this is an absolute head-scratcher; probably a spec-hunter card only.

That being said, other versions of this card will get some real usage with AMD's FSR 2.0, and it will make absolute sense even with zero tensor cores. Losing ~7 frames per second to Nvidia is negligible at those resolutions, and with comparable quality as well.

Let's see if next gen and DLSS 3.0 change that, but DLSS 2.x doesn't.