Feature: GPU Benchmarks Hierarchy and Best Graphics Cards

Hey everyone,

I know the threads for the GPU Benchmarks Hierarchy and Best Graphics Cards both got closed out a while back. For "reasons," I am not able to get a new thread to show up directly on the article, but I'm going to follow this thread and put a link in both of those articles. Want to talk to me about GPU stuff, or complain about how I do things (respectfully, please)? This is the place! Both articles have been updated, again, as of January 6 and now include data on the RTX 4070 Ti.

And on a related note, I'm rather disappointed in how GPU prices have been trending. All the old RTX 30-series cards have gone up in price over the past two months, and RX 6000-series cards are trending up as well. Collusion, or just everyone buying cards up during the holidays? I'm not sure, but I do hope things change for the better in the coming months. (And no, I'm not actually optimistic that will happen.)
 
Thanks Jarred!

Other than hating the "let's push the prices up" trend from Nvidia, AMD's passiveness when it comes to actually competing instead of just following Nvidia's pricing as closely as it can, and Intel fumbling the ball really hard... I have not much to complain or comment about :ROFLMAO:

Regards.
 
Hey Jarred,

Thank you for this. I, for one, really enjoy all of your reviews and GPU-related stuff. There aren't many sites like Tom's that do such comprehensive reviews and are open to chatting with the community about opinions (right or wrong), or about how the whole test bench and process works. For me, it's very transparent and without bias (although some other forum users may disagree with me there - each to their own).

Keep up the good work, and I look forward to reading and contributing to this thread as it progresses.

Cheers!
 

CharlesOCT

It looks like maybe the 3060 Ti is showing higher performance than the 6700 XT in the first comparison chart of the 'best graphics cards' article, but in the hierarchy chart it shows lower performance? I could be reading this incorrectly too. Thanks for these articles!

Best graphics cards 2022, first chart on the page (the year may still need changing in a couple more spots, FYI):
GeForce RTX 3060 Ti: $389 ($400), 98.5 (rasterization), 55.0 (ray tracing)
Radeon RX 6700 XT: $370 ($480), 83.3 (rasterization), 38.8 (ray tracing)

hierarchy:
GeForce RTX 3060 Ti: 61.9% (91.5 fps), 80.1% (149.7 fps), 48.7% (69.7 fps)
Radeon RX 6700 XT: 67.5% (99.8 fps), 86.7% (162.1 fps), 51.3% (73.4 fps), 34.9% (40.5 fps)
 
There's a note about this in the Best GPUs article somewhere, I'm pretty sure. But...

The hierarchy only uses the geometric mean of the average fps for the eight rasterization games, or the five ray tracing games, broken down into individual resolutions. The table in the best GPU article uses the geometric mean of the 1080p medium, 1080p ultra, and 1440p ultra results for rasterization, and for DXR it uses the geomean of the 1080p medium and 1080p ultra results. It makes the overall scores slightly different — and I should note that the A380 doesn't have a 1080p ultra DXR result, which means that value skews up... but it's so slow that I didn't worry about it.

And if anyone is curious about not including the 4K rast or 1440p/4K DXR results, it's because I don't test those on the slower GPUs where it really doesn't make sense. For an overall ranking, then, I need to use a formula where all GPUs are basically level on testing. I may actually retest the A380 and include 1080p ultra DXR testing, just to clear things up. (I'd also need to test the RX 6500 XT/6400 at 1080p ultra DXR, if those will even run.)
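If anyone wants to check the table math themselves, it boils down to a geometric mean. Here's a rough Python sketch of the idea (illustrative only, since the real numbers live in a spreadsheet), using the 3060 Ti fps values quoted above:

```python
from math import prod

def geomean(values):
    """Geometric mean: the nth root of the product of n values."""
    return prod(values) ** (1 / len(values))

# RTX 3060 Ti average fps from the hierarchy excerpt above:
# 1080p medium, 1080p ultra, and 1440p ultra rasterization results.
rast_score = geomean([149.7, 91.5, 69.7])
print(round(rast_score, 1))  # ~98.5, which is the value shown in the Best GPUs table

# The DXR score works the same way, just over the 1080p medium
# and 1080p ultra ray tracing results instead.
```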
 

CharlesOCT

Thanks for explaining how you're computing the "Best GPUs" performance score. Still, my math doesn't come up with the same results as the article for the 6700 XT?

RTX 3060 Ti: geomean(91.5, 149.7, 69.7) = 98.5 (matches the Best GPU article)
6700 XT: geomean(99.8, 162.1, 73.4) = 105.9 (does not match the article's 83.3)
 
Not many people check my math! You found an error. Apparently I only thought I was using the geomean of 1080p and 1440p for rasterization, but the actual formula also had 4K in there. Which screws things up as noted above, since not all GPUs are tested at 4K. The 3060 Ti wasn't tested at 4K, but the 6700 XT was, so the score was off on the 6700 XT (and several others as well). I've fixed the formula in Excel and updated the table. It should show up correctly now (or in a few minutes at least).

This was also screwing up the top table for cards like the 6650 XT (no 4K result, so it ranked "higher" than the 6700 XT). The overall ranking uses the Geomean of 1080p+1440p rast (three values) + 25% of 1080p DXR (two values). I've put in an estimated 60% score for the A380 at 1080p ultra DXR as well, just to mostly remove that skew (the A750/A770 both scored 62% of the 1080p medium result at 1080p ultra). So the A380 is still fudged a bit, but it doesn't really make a difference as it would only affect the score on the slowest GPU in the charts a bit, and it's already quite a bit slower than the next GPU up the list. (It's basically tied with the 6500 XT in performance, just 3% slower on the overall score, but it has 50% more memory and cost $20 less, which is why it got the nod.)
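For anyone following along, here's one way to read that overall ranking formula as code (a rough Python sketch, not the actual spreadsheet; the fps values below are placeholders rather than real test data):

```python
from math import prod

def geomean(values):
    return prod(values) ** (1 / len(values))

def overall_score(rast_fps, dxr_fps):
    """One reading of the overall ranking described above:
    rasterization geomean plus 25% of the DXR geomean.

    rast_fps: [1080p medium, 1080p ultra, 1440p ultra] average fps
    dxr_fps:  [1080p medium DXR, 1080p ultra DXR] average fps
    """
    return geomean(rast_fps) + 0.25 * geomean(dxr_fps)

# Placeholder example (hypothetical fps values, not actual benchmark results):
print(round(overall_score([150.0, 90.0, 70.0], [60.0, 40.0]), 1))
```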

Cheers!
 
Hi Jarred. I have been looking at the data and, if it's not inconvenient, I wanted to ask for further explanation of what is going on with Microsoft Flight Simulator and AMD cards, especially the new ones and the one that I own, the RX 6700 10GB. The numbers for the 7900 XTX and 7900 XT seem odd to me; how is it that they perform worse than the previous generation? And why is the RX 6700 so much closer to the 6650 XT? Is there some sort of memory bandwidth bottleneck? Thanks!
 
MSFS can be very CPU limited, and apparently it's a bit less CPU limited on certain 6000-series cards compared to the new 7900 cards.

As for the 6700 10GB vs. 6650 XT, in raw compute and bandwidth they're pretty similar. The 6700 10GB has 2304 shaders clocked at ~2450 MHz and 10GB VRAM clocked at 16Gbps on a 160-bit interface. The 6650 XT has 2048 shaders clocked at ~2635 MHz and 8GB VRAM clocked at 17.5Gbps on a 128-bit interface.

Do the math and it's 11.3 TFLOPS versus 10.8 TFLOPS, and 320 GB/s versus 280 GB/s bandwidth. Assuming MSFS doesn't hit bandwidth as hard as compute, that could mean less than a 5% difference in performance. Best-case, if a game hits bandwidth harder, the 6700 10GB could be up to ~15% faster is all.
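For reference, here's that back-of-the-envelope math as a quick Python sketch (clocks are the approximate boost clocks quoted above):

```python
def fp32_tflops(shaders, clock_mhz):
    """FP32 throughput: shaders x 2 ops per clock (FMA) x clock speed."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

def bandwidth_gb_s(mem_gbps, bus_width_bits):
    """Memory bandwidth: per-pin data rate x bus width, in GB/s."""
    return mem_gbps * bus_width_bits / 8

# RX 6700 10GB: ~11.3 TFLOPS, 320 GB/s
print(round(fp32_tflops(2304, 2450), 1), round(bandwidth_gb_s(16.0, 160), 1))

# RX 6650 XT: ~10.8 TFLOPS, 280 GB/s
print(round(fp32_tflops(2048, 2635), 1), round(bandwidth_gb_s(17.5, 128), 1))
```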
 
I just had a thought that I'd like to validate: how much RAM is actually necessary for any given GPU tier? Would that fit into a GPU hierarchy? I would initially say it doesn't, but it may be an interesting tidbit?

Context, or train of thought: most games are targeting consoles, there's nothing PC enthusiasts can do about it, and there are two key differences on consoles where it could be interesting to see how they affect PC gaming performance (and in turn, GPU performance in games). One is unified memory (there's no "double access" between RAM and VRAM), so would more RAM help alleviate the caching situation for GPUs with less VRAM? And what about RAM latency for any given GPU tier? The other is disk access and how it relates to GPU performance in newer games (thinking Spider-Man, Forspoken, etc.), especially in open areas, to see whether or not it would alleviate some of the "texture streaming" (using quotes, yes) problems and the stutters that usually happen in such areas.

Well, more like food for thought, perhaps?

Regards.
 
Thank you for the quick and thorough answer. Cheers!
 


Hi Jarred,

There is something quite off about this chart. It indicates that the higher-end 6000-series cards are far faster than they actually are. For example, it places the 6950 XT as faster than the 7900 XTX and 4090, and the 6800 XT as equivalent to the 3090 Ti at 1080p. This is clearly way off based on Tom's own benchmark tests:

https://www.tomshardware.com/reviews/nvidia-geforce-rtx-4070-ti-review-a-costly-70-class-gpu/6

and every other major review from Techspot, Hardware Unboxed, Gamer's Nexus, etc.

It's also quite off for 1440p as well and even 4K, though not as badly.

This really needs updating - a lot of people use this chart as a guideline for buying cards and something is very wrong with it currently.
 
It's important to note that the current GPU benchmarks hierarchy uses data from testing on a Core i9-12900K test PC. The recent reviews of cards (7900-series, 4070 Ti) have used the Core i9-13900K in an updated test PC in order to mitigate the CPU bottleneck. Things also changed with some games (e.g. Total War: Warhammer III got a patch that improved performance, Forza Horizon 5 got updated, etc.) so the 2022 hierarchy data can be a bit sketchy in places. I have benchmarked all of the cards on the 12900K, and am in the process of retesting cards on the 13900K. Because many of the 12900K results are older, things may not be fully apples to apples.

As for your pointing out that the 6950 XT outperforms the 7900 XTX in rasterization, that's due to the above. At 1080p, at launch, the 7900 series cards underperformed relative to the 6950 XT. This is also the case for some of the other latest generation GPUs at lower resolutions. My suspicion is that the launch drivers for the new architectures weren't as tuned as the existing drivers for older architectures, which means that at 1080p in particular, CPU and system bottlenecks become more of a factor. I could retest cards to verify the numbers, but it's literally a full day (6~8 hours) to run the full test suite on a single GPU, which means it can take a lot of work to suss out any potential anomalies and correct them.

Anyway, all of this is sort of moot as some time in the next ~week I will have a full update for the GPU hierarchy using the 13900K test PC. I've retested all the 30/40-series Nvidia cards, Intel Arc cards, and I'm about 66% done with the AMD 6000-series cards (7900 series are done). I will probably at some point also retest the RTX 20-series, but for the 2023 hierarchy and moving forward, I'm going to skip having separate rasterization and ray tracing summaries. Everything will go into a single grouped chart, meaning if a card sucks at ray tracing it will get penalized. (Because it's been over four years, ray tracing isn't going away, and including that as a half-supported or slow-performing add-on isn't acceptable anymore.)
 


It wasn't just the 7900 series though - the 6950XT was also notably higher than the 4070Ti despite coming in lower in direct comparisons, and even higher than the 4080!

I totally appreciate that this testing is a lot of work. Patches and drivers aside, I think there was still a problem with either the hardware and/or the games being compared, because it's quite far off from where it should lie. Glad to hear it's all getting sorted soon, thanks for your hard work!

I'm going to skip having separate rasterization and ray tracing summaries. Everything will go into a single grouped chart, meaning if a card sucks at ray tracing it will get penalized. (Because it's been over four years, ray tracing isn't going away, and including that as a half-supported or slow-performing add-on isn't acceptable anymore.)

That's definitely going to get you a lot of pitchforks from angry AMD users (and even some Nvidia players). Ray tracing is still not the default standard by any means, so even as a current 4070 Ti owner myself who enjoys using RT, I'm not convinced that's the right approach. I doubt other major reviewers will do the same yet, so it will definitely split Tom's Hardware from other major reviewers. Maybe I'm wrong about that. I know it means double the testing, but it's no different than comparing Ultra settings to Medium. It's just part of the playing field at the moment.
 
It wasn't just the 7900 series though - the 6950XT was also notably higher than the 4070Ti despite coming in lower in direct comparisons, and even higher than the 4080!
Can you clarify precisely what you're referring to? If you're talking about the rasterization charts, this is expected behavior at 1080p. AMD's RX 6950 XT was generally faster than the RTX 3090 Ti and even beat the RTX 4090 at 1080p medium. This is in part because of drivers and game changes, I suspect, but also note that the RX 6950 XT has 128MB of L3 cache and seems to scale better at lower settings and resolutions. You'll note that at 1440p and 4K, the 6950 XT clearly falls behind all of the latest generation cards. I'm going to recheck my RTX 4080 numbers today, just for sanity's sake, but I don't expect any massive changes. On a 12900K, CPU bottlenecks mean some of the newer GPUs can't strut their stuff until higher resolutions.

Here's the raw numbers and percentages for a couple of comparisons for the rasterization charts, though. You can definitely see that either the 6950 XT performed exceptionally well at 1080p, or the 7900 XTX and 4080 underperform at those resolutions.

[Two attached tables with the raw numbers and percentages]
That's definitely going to get you a lot of pitchforks from angry AMD users (and even some Nvidia players). Ray tracing is still not the default standard by any means, so even as a current 4070 Ti owner myself who enjoys using RT, I'm not convinced that's the right approach. I doubt other major reviewers will do the same yet, so it will definitely split Tom's Hardware from other major reviewers. Maybe I'm wrong about that. I know it means double the testing, but it's no different than comparing Ultra settings to Medium. It's just part of the playing field at the moment.
This is just for the "overall geometric mean," at least in part because it's a pain to maintain two fully separate spreadsheets and tables of data and to continually make explanations for cards that underperform with ray tracing. I figure we have all Nvidia RTX cards going back to 2018, and all AMD cards going back to 2020, and that's sufficient for a look at how things currently stand at the top of the GPU charts. People can look at the previous generation hierarchy if they want to skip the ray tracing tests being lumped in, or else just look at specific games. At some point, ray tracing support will become pretty prevalent on both the hardware and software side. I'd argue that it's pretty close to the tipping point now for major games — Hogwarts Legacy has it, for example, and it does improve the visuals.
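To illustrate what folding ray tracing into a single geometric mean does (a toy Python example with made-up fps values, not real test data): two cards with identical rasterization results but different ray tracing performance end up clearly separated in the combined score.

```python
from math import prod

def geomean(values):
    return prod(values) ** (1 / len(values))

# Hypothetical average fps for the rasterization and ray tracing test games.
rast = [120, 110, 100, 95, 130, 105, 85, 115]   # same raster results for both cards
dxr_decent = [60, 55, 50, 45, 40]               # card with competent RT hardware
dxr_weak = [30, 27, 25, 22, 20]                 # card that struggles with RT

print(round(geomean(rast + dxr_decent), 1))  # combined score stays fairly high
print(round(geomean(rast + dxr_weak), 1))    # same raster performance, noticeably lower overall
```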

Frankly, if you really want to show how big the Nvidia lead can be, testing with DLSS2 and FSR2 where supported, at quality mode, would be entirely justifiable in my book. The potential for visual artifacts is easily outweighed by the performance increase, and DLSS2 is in more games and more "popular" games than FSR2 right now.

Looking at my current 15 game test suite:
Borderlands 3: no upscaling
Bright Memory Infinite Benchmark: DLSS2 only
Control Ultimate Edition: DLSS2 only
Cyberpunk 2077: DLSS2/3 and FSR2.1
Far Cry 6: FSR1
Flight Simulator: DLSS2/3 and FSR2.0
Forza Horizon 5: DLSS2 and FSR2.2
Horizon Zero Dawn: DLSS2 only
Metro Exodus Enhanced: DLSS2 only
Minecraft: DLSS2 only
Red Dead Redemption 2: DLSS2 and FSR2.1
Spider-Man: Miles Morales: DLSS2/3 and FSR2.1 (and XeSS!)
Total War: Warhammer 3: no upscaling
Watch Dogs Legion: DLSS2 only

You could argue about the selection of games, but I know I didn't intentionally try to skew in favor of Nvidia. Several games only added DLSS and FSR later in their lives, for example. Anyway, that's 11 of 15 with DLSS2, 5 of 15 with FSR2, and 1 of 15 with XeSS. For games that support both DLSS and FSR2, I think generally speaking the gains from DLSS2 on Nvidia are larger than the gains from FSR2 on AMD, and DLSS2 still wins the quality comparisons by a small amount.

I don't know... maybe I'll end up recanting and sticking to separate overall performance charts for DXR and rasterization games, but it's certainly a thorn in my side.
 
Can you clarify precisely what you're referring to? If you're talking about the rasterization charts, this is expected behavior at 1080p. AMD's RX 6950 XT was generally faster than the RTX 3090 Ti and even beat the RTX 4090 at 1080p medium. This is in part because of drivers and game changes, I suspect, but also note that the RX 6950 XT has 128MB of L3 cache and seems to scale better at lower settings and resolutions. You'll note that at 1440p and 4K, the 6950 XT clearly falls behind all of the latest generation cards. I'm going to recheck my RTX 4080 numbers today, just for sanity's sake, but I don't expect any massive changes. On a 12900K, CPU bottlenecks mean some of the newer GPUs can't strut their stuff until higher resolutions.

Here's the raw numbers and percentages for a couple of comparisons for the rasterization charts, though. You can definitely see that either the 6950 XT performed exceptionally well at 1080p, or the 7900 XTX and 4080 underperform at those resolutions.

I see your point. However, if you look at your tests here from your 4070 Ti review, using the same games plus Plague Tale:
https://www.tomshardware.com/reviews/nvidia-geforce-rtx-4070-ti-review-a-costly-70-class-gpu/6

The 6950 XT is still a bit behind the 4080, 7900 XT, and 7900 XTX at 1080p medium:
https://cdn.mos.cms.futurecdn.net/U4BLkLyWscosDdh4T5TQWk.png

That small lead widens a bit at 1080p ultra settings, again in stark contrast to the hierarchy and the values you posted here.
https://cdn.mos.cms.futurecdn.net/PKdvtx559oeYSUZwPCvihT.png

And this also holds for the 1440p and 4K values. Although the relative imbalance shrinks, the 6950 XT is still shown as performing above where it should be in the hierarchy chart relative to these test results:
https://cdn.mos.cms.futurecdn.net/PKdvtx559oeYSUZwPCvihT.png

This testing was completed with a 13900K and MSI Z790 DDR5 motherboard and adds a title that might favour newer cards, so that might also be a factor.

This is just for the "overall geometric mean," at least in part because it's a pain to maintain two fully separate spreadsheets and tables of data and to continually make explanations for cards that underperform with ray tracing. I figure we have all Nvidia RTX cards going back to 2018, and all AMD cards going back to 2020, and that's sufficient for a look at how things currently stand at the top of the GPU charts. People can look at the previous generation hierarchy if they want to skip the ray tracing tests being lumped in, or else just look at specific games. At some point, ray tracing support will become pretty prevalent on both the hardware and software side. I'd argue that it's pretty close to the tipping point now for major games — Hogwarts Legacy has it, for example, and it does improve the visuals.

Frankly, if you really want to show how big the Nvidia lead can be, testing with DLSS2 and FSR2 where supported, at quality mode, would be entirely justifiable in my book. The potential for visual artifacts is easily outweighed by the performance increase, and DLSS2 is in more games and more "popular" games than FSR2 right now.

You could argue about the selection of games, but I know I didn't intentionally try to skew in favor of Nvidia. Several games only added DLSS and FSR later in their lives, for example. Anyway, that's 11 of 15 with DLSS2, 5 of 15 with FSR2, and 1 of 15 with XeSS. For games that support both DLSS and FSR2, I think generally speaking the gains from DLSS2 on Nvidia are larger than the gains from FSR2 on AMD, and DLSS2 still wins the quality comparisons by a small amount.

I don't know... maybe I'll end up recanting and sticking to separate overall performance charts for DXR and rasterization games, but it's certainly a thorn in my side.

I see your point here too, I'm just not sure we're quite there yet. Good luck either way! I really appreciate what you do here. I've done plenty of benchmarking on my own in a far, far more limited capacity than this, and even that was a huge amount of time and work, so I have a sense of what it takes to do a complete and thorough job. I'm also a scientist by trade, so tedium in testing and organising data is something I understand well. ;)
 
Yeah, so the RTX 4070 Ti would have been tested with the launch drivers for the card, which are of course almost three months newer than the 4090 launch drivers. That's one factor. All the game patches were also applied with the newer test PC, which is probably the bigger factor. Like, I know I tested the RX 6950 XT back in... I guess the data would have to be from the launch review, unless I retested some games. Actually, Sapphire asked for the card back and I think I switched to the MSI card's numbers on performance. https://www.tomshardware.com/reviews/msi-radeon-rx-6950-xt-gaming-x-trio-review-power-hungry

Looking at the 6950 XT 1080p ultra numbers between the original 6950 XT review (on 12900K) and the 4070 Ti review (on 13900K), there are a few improvements but nothing too crazy. Changes in settings did occur in two games (FH5 and RDR2), but I don't think those are enough to skew the results. I guess the real question is if somehow the RX 7900 and RTX 4070 Ti/4080/4090 testing on the 12900K is the problem. It might be. I'll look into that, as I said, with a 4080 retest. Depending on how that goes, I might need to retest the 4090, 4070 Ti, 7900 XTX, and 7900 XT on the old testbed just to get the hierarchy numbers corrected. Sigh...
 

Sorry to be that stone in your shoe here. Hopefully the updated evaluations go smoothly. Best of luck, and thanks for the correspondence and info, Jarred.
 

blagomils

Just wondering when separate entries will finally be implemented in any and all charts to show the difference between the RTX 3060 12GB and 8GB versions, as it is significant. Somehow separate versions are shown for basically all other cards, but not for one of the most popular cards.
 
Not surprisingly, no one sampled me the RTX 3060 8GB... because no one should buy it. It's a junk rehash at the end of a product lifecycle to try and get slightly more money for the chip than Nvidia could get if it were sold as a 3050. The 3060 8GB is a bit faster than the 3050 but slower than the 3060 12GB in all cases, and it only makes sense for a very specific niche use case. It's akin to the previous generation RTX 2060 12GB, or the GTX 1060 5GB, or various other "they exist, but only in limited quantities" past GPUs.
 
Jarred, there are rumours that the Radeon 6700-series cards are performing oddly, extremely well in The Last of Us Part I, even at 6800 XT levels when SAM/ReBAR is enabled. The "explanation" is that the game is well optimized for these cards because it's a PS5 port and the console has similar graphics hardware. Could you please test and confirm this, and if it's true, give a better explanation of why it's happening? Thanks!
 
Who exactly is saying this, and do they have real benchmarks or just anecdotal evidence? Because it sounds pretty bunk. Even though the game was made for the PS5, the console doesn't actually have an RX 6700-series GPU. It has a special chip with the CPU, GPU, and lots of memory bandwidth all in one. Optimizations for the AMD Oberon chip in the PS5 wouldn't translate to Windows either, since the PS5 doesn't use DirectX at all.

Best-case, I'd expect the game to target 10~12 GB of VRAM, which would make the 6650 XT and below potentially tank. But it would be very odd/surprising for the console port to run as fast on RX 6700 XT as on RX 6800 XT. As I noted in the "Why does 4K require so much VRAM?" article, The Last of Us Part I is one of several pretty poorly done ports, with Resident Evil 4 and Star Wars Jedi: Survivor being two more AMD-promoted games with questionable coding quality. I suspect any anomalies are more likely tied to it being a lousy port job rather than optimizations for Navi 22 GPUs.

I can try to look at it, but it's not going to be at the top of my todo list right now. :)
 
I kept looking and found this video: https://www.youtube.com/watch?v=Cfh5r0mAq90