Best GPU Benchmarks: How We Test Graphics Cards

If anyone is wondering, the averages (geometric mean) across all 32 games tested are:
Average: 113.7 fps
99th Percentile: 81.2 fps
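
For anyone curious how that geometric mean is computed, here's a minimal sketch in Python (the FPS values below are made up for illustration, not the actual review data):

```python
# Minimal sketch: geometric mean of per-game average FPS.
# These numbers are placeholders, not results from the test suite.
import math

fps_results = [142.0, 97.5, 118.3, 76.9, 155.2]  # hypothetical per-game averages

geo_mean = math.exp(sum(math.log(f) for f in fps_results) / len(fps_results))
print(f"Geometric mean: {geo_mean:.1f} fps")
```

The geometric mean is used instead of a simple average so that one or two very high-FPS games can't skew the overall result.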

Also, testing 32 games, even at just one setting (well, sometimes two to check APIs), took more than a full day of work. You can imagine how much effort it would take to do this for every GPU review! That's why I try to stick to around 10 games.
 

vinay2070

Distinguished
Nov 27, 2011
255
58
18,870
Just a thought: could you please include CPU scaling for UWQHD resolutions in future benchmarks? From what I understand, the CPU won't get taxed much more when you shift from 1080p to 4K, but when you shift from 1080p to UWQHD, the CPU has to do additional computation for the extra FOV shown. Most articles on the internet don't cover this. It would be really helpful to see whether CPU frequency or extra threads matter for a given game engine to display the extra real estate.

Thanks for the article!
 
I've actually got a full suite of testing in progress right now with ten of the top AMD and Nvidia GPUs on both the Core i9-9900K and Ryzen 9 3900X. The article should go up this week. In short, most games at 1440p and 4K are still mostly GPU limited -- with a 2080 Ti at 1440p ultra, the 9900K is about 5% faster (vs. 10% faster at 1080p), and at 4K ultra it's only 2.4% faster.

Of course, if the RTX 3090 / 3080 Ti (whatever the top new GPU ends up being called) is 50% faster than a 2080 Ti, and a 3070 ends up 50% faster than a 2070, that will push the bottleneck back toward the CPU.
 

vinay2070

Well, I was speaking about UltraWide QHD rather than QHD. In UWQHD, the game has to display extra area on the left and right of the screen compared to FHD/QHD/UHD. So not only does the GPU have to work harder, but the CPU has to work harder as well to decide what to display in that extra area and to calculate AI, etc., for it.

I was wondering which part of the CPU that extra area utilizes -- the single-threaded or multi-threaded part -- and whether different engines are optimized in different ways. I'm assuming Far Cry will increase the difference between AMD and Intel CPUs.

In other words, given a 2080 Ti, I want to know the FPS at medium or high settings at FHD, QHD, and UWQHD for a list of CPUs across different games -- primarily the 10700K, 10600K, 9700K, 10400, 3700X, 3600, and the 3300. The reason I suggest medium or high settings is to reduce GPU bottlenecking. Of course, this is a lot of work for all the games, but doing it for two or three CPU-demanding games should show the difference.
 
There's no real difference in CPU requirements for widescreen vs. ultrawide. All of the world-update work on the CPU is "virtual" -- positions and matrices are recalculated, AI runs, etc. -- and it doesn't matter whether the results will be visible on the screen or not.

Once all that world update stuff is complete, the GPU takes everything and spits out a frame full of pixels, using the most up-to-date game world data. Running at 1600x1200 (4:3), 1920x1080 (16:9), or 2560x1080 (21:9) isn't any more or less work on the CPU. The GPU does more work in some cases, but that's purely because there are more pixels to render.
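
To illustrate the split (this is just a toy sketch in Python -- the function names and numbers are made up, not taken from any real engine):

```python
# Toy sketch of the update/render split in a typical game loop.
# update_world() and render_frame() are hypothetical stand-ins.

def update_world(dt):
    # CPU-side work: physics, AI, animation, game logic.
    # It operates on the whole game world regardless of what ends up
    # on screen, so the cost is the same at 16:9 and 21:9.
    pass

def render_frame(width, height):
    # GPU-side work: cost grows with pixel count (sub-linearly in practice).
    return width * height  # stand-in for "pixels shaded this frame"

for width, height in [(1920, 1080), (2560, 1080), (3840, 2160)]:
    update_world(dt=1 / 60)               # same CPU cost at every resolution
    pixels = render_frame(width, height)  # only this part scales with resolution
    print(f"{width}x{height}: {pixels:,} pixels")
```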

It's also not linear scaling of performance with regards to the number of pixels rendered. 1080p represents 2,073,600 pixels per frame and 1440p is 3,686,400 pixels per frame, so 1440p requires the GPU to render 77.8% more pixels. That would mean 44% less performance if scaling were linear with pixel counts, but in practice you only lose about 25-35% of performance by going from 1080p to 1440p. Similarly, 2560x1080 is 33% more pixels than 1920x1080, but performance is typically only ~15% slower. That applies to 4K vs. 1440p as well (or 3440x1440 vs 2560x1440). 4K is 125% more pixels, but performance drops 35-45% instead of the 'expected' 56% drop.
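
If you want to check that pixel-count math yourself, here's a quick Python sketch (the "observed" drops are just the ballpark figures quoted above, not new measurements):

```python
# Pixel-count arithmetic behind the scaling discussion above.
res = {
    "1080p": 1920 * 1080,    # 2,073,600 pixels
    "1440p": 2560 * 1440,    # 3,686,400 pixels
    "4K":    3840 * 2160,    # 8,294,400 pixels
}

def extra_pixels(a, b):
    """Percent more pixels resolution b has than resolution a."""
    return (res[b] / res[a] - 1) * 100

def linear_drop(a, b):
    """FPS drop if performance scaled exactly with pixel count."""
    return (1 - res[a] / res[b]) * 100

print(f"1440p vs 1080p: {extra_pixels('1080p', '1440p'):.1f}% more pixels, "
      f"{linear_drop('1080p', '1440p'):.0f}% drop if linear (observed: ~25-35%)")
print(f"4K vs 1440p: {extra_pixels('1440p', '4K'):.0f}% more pixels, "
      f"{linear_drop('1440p', '4K'):.0f}% drop if linear (observed: ~35-45%)")
```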
 
I was wondering why you quit using World of Tanks Encore?
Did we ever use WoT Encore for GPU reviews? I never have -- it's generally regarded as more of a CPU test. Which is why I also don't use stuff like Ashes of the Singularity for GPU benchmarking. Anyway, I shifted to Tom's Hardware from PC Gamer in February, looked at the then-current test suite, and made a few modifications. I'm still looking for some good games (for benchmarking) to replace Far Cry 5 and Final Fantasy XIV -- both are older, and FFXIV in particular doesn't really represent GPU capabilities that well today.