Senior GPU Editor
You'd be surprised how little difference there is between running the same sequence manually versus using automated built-in tests. In Cyberpunk 2077, for example, I walk a 60-second path through Night City, and the number and variety of NPCs and cars seems relatively random. However, I ran the same setup on the same sequence 10 times (to check for variability between runs) and there was only a 2% spread. I've seen 'bad' built-in benchmarks (e.g., Assassin's Creed Odyssey) where the difference between runs can be almost 10%, just because clouds and weather were randomized. Thanks.
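For anyone curious how that spread figure is derived, here's a minimal Python sketch: spread is the gap between the best and worst run as a percentage of the mean. The fps numbers are made up purely to illustrate a roughly 2% spread, not actual Cyberpunk 2077 results.

```python
def spread_percent(fps_runs):
    """Run-to-run spread: (max - min) / mean, as a percentage."""
    mean = sum(fps_runs) / len(fps_runs)
    return (max(fps_runs) - min(fps_runs)) / mean * 100

# Ten hypothetical average-fps results from repeating the same 60-second walk
manual_runs = [98.4, 99.1, 100.2, 99.8, 98.9, 100.5, 99.3, 99.6, 100.0, 99.2]
print(f"Spread: {spread_percent(manual_runs):.1f}%")
```

With a spread that small, the run-to-run noise is well below the 5% gaps you'd be trying to distinguish between cards.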
I feel that any performance tests that have to be done manually should come with a huge, bold label stating: "X game does not offer a built-in benchmark. These benchmarks were performed manually." Imagine that, in a manual benchmark, a particular NPC actor comes into view in 4 out of 10 runs. That could really skew the results, especially when comparing multiple GPUs whose performance capabilities are all within 5% of one another.
Heck, maybe designers and publishers will take notice and start putting built-in benchmarks in all their AAA titles. Wouldn't that be nice?
Fundamentally, anyone really getting hung up on a 5% difference has already lost sight of the purpose of benchmarks. If one card scores 100 fps and another scores 105 fps, I would say something like, "Nvidia is a bit faster than the AMD card here, but it's not a huge difference -- one or two settings tweaks would close the gap." It's really only differences of 10% or larger that become truly meaningful in terms of the user experience.