Question: What GPU Benchmarks Should I Add/Drop for Future Reviews/Hierarchy?


JarredWaltonGPU

Senior GPU Editor
Thanks.
I feel that any performance tests that have to be done manually should come with a huge, bold label stating, "X game does not offer a built-in benchmark. These benchmarks were performed manually." Imagine that, in a manual benchmark, a particular NPC comes into view in 4 out of 10 runs. That could really skew results, especially when comparing multiple GPUs whose performance is all within 5% of one another.

Heck, maybe designers and publishers will take notice and start putting built-in benchmarks in all their AAA titles. Wouldn't that be nice?
You'd be surprised how little difference there is between running the same sequence manually and using an automated built-in test. In Cyberpunk 2077, for example, I walk a 60-second path through Night City, and the number and variety of NPCs and cars seems relatively random. However, I ran the same setup on the same sequence 10 times (to check for variability between runs) and there was only a 2% spread. I've seen 'bad' built-in benchmarks (e.g., Assassin's Creed Odyssey) where the difference between runs can be almost 10%, just because clouds and weather were randomized.

Fundamentally, anyone really getting hung up on a 5% difference has already lost sight of the purpose of benchmarks. If one card scores 100 fps and another scores 105 fps, I would say something like, "Nvidia is a bit faster than the AMD card here, but it's not a huge difference -- one or two settings tweaks would close the gap." It's really only 10% and larger differences that become truly meaningful in terms of the user experience.
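For anyone who wants to sanity-check this sort of comparison, here is a minimal Python sketch, using invented FPS numbers rather than any actual review data, of how run-to-run spread and card-to-card percentage differences can be calculated:

# Sketch with invented numbers -- not actual review data.
runs = [98.2, 99.1, 97.5, 98.8, 99.4, 98.0, 97.9, 99.0, 98.5, 98.3]  # 10 passes of one test sequence

mean_fps = sum(runs) / len(runs)
spread_pct = (max(runs) - min(runs)) / mean_fps * 100
print(f"Run-to-run spread: {spread_pct:.1f}%")  # roughly 2%, like the Cyberpunk 2077 example above

card_a, card_b = 100.0, 105.0  # average FPS for two hypothetical cards
delta_pct = (card_b - card_a) / card_a * 100
print(f"Card B is {delta_pct:.1f}% faster than Card A")  # 5%, below the ~10% level that changes the experience

Any gap between two cards that is smaller than the measured run-to-run spread is effectively a tie.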
 
Mar 9, 2021
Warframe's open-world nodes can become very demanding on both the CPU and GPU. The Cambion Drift has become a place of horror for players with lower-end builds.

And is there a reason you are not using GTA5? It has a built-in benchmark, and with the graphics turned up it can give a five-year-old mid-tier computer serious problems, even at 1080p.
 

JarredWaltonGPU

Senior GPU Editor
GTA5 is now six years old, which I think is long enough ago that it's not worth including. In fact, I dropped it from my testing at PC Gamer probably around 2018. It's also been superseded by Rockstar's Red Dead Redemption 2, which is newer and more demanding. What it really comes down to is time, though. Testing every popular game on every new GPU, and then retesting every few months with new drivers and updates, means anything more than about a dozen games ends up as a massive slog, and fewer than 10 games is better.
 

hotaru.hino

And is there a reason you are not using GTA5? It has a built-in benchmark, and with the graphics turned up it can give a five-year-old mid-tier computer serious problems, even at 1080p.
That sounds like it's more of a problem with the software, which can mask the potential of the hardware. It's like saying the original Crysis should still be tested because it "crushes" high-end PCs (as in, no PC has yet reached sustained triple-digit FPS in that game). Crysis, however, was designed at a time when 10GHz single-core CPUs were thought to be on the horizon, so it doesn't work well with our modern multi-core chips, among other issues.
 
Mar 26, 2021
The two cards were always very close. 1660 Ti has more cores, yes, but 1660 Super has faster GDDR6 memory and slightly higher GPU clocks. Some games favor the 1660 Ti by a few points, others favor the 1660 Super. I’m not actually sure why they swapped places but they’re effectively tied — like a 0.1 FPS difference once all the numbers are added together.

Actually, I think my weighting for the hierarchy used the average of all FPS before, and in the last month I swapped to the geometric mean in the spreadsheet. That was enough for a 0.3 FPS change (because the 1660 Ti used to be slightly ahead). Of course, the GDDR6 memory on the Ti can probably hit similar overclocks, which would push it ahead, but I don't think Nvidia is even making those anymore. It decided the 1660 Super was preferable, possibly for yield and cost reasons.
Great answer, dude. I appreciate your response.
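To illustrate the aggregation swap described in the quoted answer, here is a minimal Python sketch, with invented per-game FPS figures rather than real test results, of the arithmetic mean versus the geometric mean:

from math import prod

# Invented per-game average FPS for two hypothetical, closely matched cards.
card_a = [144, 90, 61, 120, 75]
card_b = [150, 88, 60, 115, 78]

def arithmetic_mean(fps):
    return sum(fps) / len(fps)

def geometric_mean(fps):
    # The geometric mean gives less weight to a single very high-FPS game,
    # which is why switching methods can reorder two cards that are nearly tied.
    return prod(fps) ** (1 / len(fps))

for name, fps in (("Card A", card_a), ("Card B", card_b)):
    print(name, round(arithmetic_mean(fps), 1), round(geometric_mean(fps), 1))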
 

alceryes

@JarredWaltonGPU
How about picking a couple of benchmarks (3DMark, Unigine Superposition, etc.) or a couple of super-popular games and adding a column or two to the GPU hierarchy showing each card's performance at a set quality level (Ultra, for instance) in those benchmarks/games?
I understand the reasoning behind just using a relative score to rank the cards, but it's a score in a vacuum. It doesn't give the reader any way of knowing how their GPU will perform in the games they are looking to use it for.

I also think the relative score unnecessarily pushes people into the mindset that their card isn't a good performer. We were raised to think that getting a 60% or lower, in anything, is a failure, but the RTX 2070 Super is actually a pretty good card that will work fine for 99.99% of the games out there at reasonable settings. Of course, IT people and heavy gamers/tweakers know the truth, but adding a couple of columns showing the actual performance of a given card would go a long way toward making the hierarchy chart more understandable to the masses.
 

JarredWaltonGPU

Senior GPU Editor
The hierarchy includes charts for all six tested resolutions at the bottom, and the Best GPUs article has the full set of 60 charts, for anyone who wants to look at the micro-view rather than the macro-view.
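As a purely hypothetical illustration of the relative-score discussion above (the card labels and geomean FPS values below are made up, not Tom's Hardware test data), a relative score is simply each card's aggregate FPS divided by the fastest card's:

# Hypothetical relative-performance column; all FPS values are invented.
geomean_fps = {
    "Flagship card": 160.0,
    "Upper-midrange card": 96.0,
    "Budget card": 48.0,
}

fastest = max(geomean_fps.values())
for card, fps in geomean_fps.items():
    relative = fps / fastest * 100
    print(f"{card}: {relative:.0f}% relative score ({fps:.0f} fps geomean)")
# A 60% relative score here still corresponds to ~96 fps, which is hardly a failing grade.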
 
