News AMD Ryzen 9 4950X Rumor: Could it Spell Trouble for Intel Gaming Supremacy?

The more logical way of doing a repeatable benchmark is to replay inputs using fixed/recorded PRNG seeds for the AI and other algorithms. In that case the load on the game engine is effectively the same as actual play, since all of the pathfinding, AI and whatever else still has to be recalculated during playback.
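A rough sketch of that idea (toy code, not any particular engine): seed the PRNG that drives the AI, feed in the recorded inputs, and every replay makes exactly the same decisions, so the CPU load is repeatable.

```python
import random

def ai_decisions(seed, num_frames):
    """Toy stand-in for per-frame AI choices driven by a PRNG.
    With a fixed seed, every replay makes identical decisions."""
    rng = random.Random(seed)
    actions = ["patrol", "attack", "flank", "retreat"]
    return [rng.choice(actions) for _ in range(num_frames)]

# Two benchmark "runs" with the same recorded seed produce identical AI
# behaviour, so the pathfinding/AI work is the same every time.
run1 = ai_decisions(seed=12345, num_frames=1000)
run2 = ai_decisions(seed=12345, num_frames=1000)
assert run1 == run2
```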
That's what they did with CS:GO, Fortnite and some others, but they record every position of every... thing (players, enemies, items), so nothing has to run at playback time except the 3D render engine.
 
Unless you are only rendering a static scene, you will need the xyzrpy of every control point of every object in the scene for every frame too.
Yes you do, you need them for a static scene as well; it's just more data for a moving scene. But instead of the CPU having to calculate them on the spot depending on your input, they are just loaded in from a text file.
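As a bare-bones illustration of that record-and-playback idea (file format and names invented for the example): during capture the transforms get appended to a file, and during playback they're just streamed back to the renderer with no simulation running.

```python
import json

def record_frame(path, frame_objects):
    """Capture run: append one frame of object transforms
    (x, y, z, roll, pitch, yaw per object) to a plain-text log."""
    with open(path, "a") as f:
        f.write(json.dumps(frame_objects) + "\n")

def play_back(path):
    """Playback run: no AI, physics or pathfinding -- just read each
    frame's transforms back so the renderer can consume them."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)

# Capture two frames of a single object, then replay them.
record_frame("capture.txt", {"player": [1.0, 0.0, 2.5, 0.0, 0.0, 90.0]})
record_frame("capture.txt", {"player": [1.1, 0.0, 2.5, 0.0, 0.0, 92.0]})
for frame in play_back("capture.txt"):
    print(frame)   # renderer would consume these directly
```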
 
Just look at the Strange Brigade benchmark: they took that to the limit. It's a completely static scene, where other benchmarks at least pretend that some things are moving.
I've done extensive testing of a lot of games over the years, with and without built-in benchmarks. While there are exceptions, most of the built-in benchmarks still run a lot of AI and other calculations in the background in order to properly simulate live gameplay. Even Strange Brigade, with its "everything is frozen" portrayal of the game during the test sequence, doesn't totally shut off all other calculations. But the live game is more demanding (mostly on the CPU).

There are advantages to not having a completely 'live' environment for benchmarks, though, the biggest being repeatability. Whatever the benchmark sequence, whether built-in or a set path while logging frametimes using OCAT, it's ultimately only looking at one part of the game world. Some areas of a game will be more demanding, others less -- and some built-in benchmarks are more demanding than typical gameplay, while others are less demanding.

Ultimately, they're all just data points showing performance potential, and when you look at them in aggregate (e.g. 9900K vs. 3900X), it's pretty clear that across a suite of games, Intel's CPUs are still faster than AMD's, mostly because of clock speeds and latencies. Zen 3 may change that.
 
You can visually distinguish a change in performance of less than or equal to 4.17%?
Sure you can. While 4% on average frame rates may be meaningless to most people, it certainly can make a substantial difference in perceptible stutter. That's why many PC reviewers keep tabs on 1% and even 0.1% lows, to give their readers/viewers some representation of how much stutter there may be. Few things in a game get on my nerves faster than stutter does.
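For anyone curious, the 1% / 0.1% lows are basically the average frame rate of the slowest 1% / 0.1% of frames in a frametime log; exact methodology varies between reviewers and tools, but roughly:

```python
def percentile_low(frametimes_ms, fraction):
    """Average FPS of the slowest `fraction` of frames, e.g. 0.01 for the
    1% low or 0.001 for the 0.1% low. (Exact methods vary by reviewer.)"""
    worst = sorted(frametimes_ms, reverse=True)       # slowest frames first
    n = max(1, int(len(frametimes_ms) * fraction))
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms                            # ms/frame -> FPS

# 1000 frames at a steady 10 ms, with ten 25 ms stutter spikes mixed in:
frametimes = [10.0] * 990 + [25.0] * 10
print(percentile_low(frametimes, 0.01))    # 1% low   -> 40 fps
print(percentile_low(frametimes, 0.001))   # 0.1% low -> 40 fps
```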
 

But that's not the case that Flemishdragon is making, though my analogy may have been misleading. They're claiming to be able to see the difference that 200MHz gives them in performance. What I'm telling them is that they can't visually tell the difference that a 4.17% clock speed increase (4.8GHz vs 5.0GHz) would give.

I didn't say 100 avg fps vs. 104 avg fps. I'm saying 100 fps constant vs. 104 fps constant.
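At constant frame rates that difference is well under half a millisecond per frame:

```python
for fps in (100, 104):
    print(f"{fps} fps -> {1000 / fps:.2f} ms per frame")
# 100 fps -> 10.00 ms per frame
# 104 fps ->  9.62 ms per frame, i.e. about 0.38 ms less each frame
```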
 
I can in Lightwave, with the feedback and other functions going on, which are often not multithreaded.
I am not here to lie. And if you FEEL and notice it, that means there must be a big difference somewhere. I guess it's not only single-threaded, since if there are tasks being done they can also be multithreaded a little (but not really): for example, the first core fully loaded, the second at 50%, a third at 30%... so that should already add to the 4.17% you are saying. For example, OpenGL doesn't only use the GPU but many other things as well. A year ago I had to lower my CPU overclock in the summer, to 4.6 instead of 4.8, and I felt the difference in Lightwave. In the winter I tried to push an old 2500K to 4900 MHz and it felt snappier, even Windows did; it was like my CPU was on caffeine. But it wasn't stable, so it's back on 4.7. That computer is 11 years old.
 
That's the problem, though: both single- and multi-threaded workloads would be at best 200 MHz faster, and going from 4.8 GHz to 5.0 GHz is only a 4.17% increase. And because most things don't scale perfectly with clock speed, it's probably more like a 3.5% difference. If you're doing a task that takes 30 minutes, you won't notice that it finishes in 28.8 minutes unless you're using a stopwatch.

Realistically, the threshold for something being noticeable outside of benchmarks is usually more like 20% -- maybe 10-15% if you're really tuned in to what's going on. Back when CPUs ran at 1 GHz, 200 MHz meant a lot more than it does at nearly 5 GHz. Basically, a double-digit percentage gain is the minimum to be truly meaningful. Other stuff can factor in too, like running stock vs. OC: overclocked you might get a constant 4.9 GHz instead of 4.7 GHz with fluctuations down into the 1-4 GHz range for power saving.
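The arithmetic behind those numbers, for anyone who wants to check it (the 0.85 scaling factor below is an illustrative guess, not a measurement):

```python
base_ghz, oc_ghz = 4.8, 5.0
clock_gain = (oc_ghz - base_ghz) / base_ghz        # 0.2 / 4.8 ≈ 4.17%

task_min = 30
# Best case, runtime scales 1:1 with clock speed:
print(task_min / (1 + clock_gain))                 # ≈ 28.8 minutes
# More realistic, with imperfect scaling (memory, I/O, etc.):
print(task_min / (1 + clock_gain * 0.85))          # ≈ 29.0 minutes
```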
 
Both were OC, even the 4.6, not running with power saving or a minimum clock. So from 4.6 to 4.8, that's 4.34% + 2.15% + 1.2% (first core at 100%, second at 50%, third at 35.20%; of course rendering uses all the cores fully, I am only talking about realtime feedback, and the GPU is the same speed). That's a sloppy 7.69%. But like you say, the scaling is probably not linear. You do notice that in realtime feedback, though. I am not a gamer, pretty much grown out of it, but if there is a competition and he has that advantage, he will win.
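Spelled out, that back-of-the-envelope sum is basically the clock gain weighted by how loaded each core is (rough load numbers, and whether the per-core gains really add like that is another matter; as said, scaling probably isn't that clean):

```python
clock_gain = (4.8 - 4.6) / 4.6              # ≈ 4.35% per fully loaded core
core_loads = [1.00, 0.50, 0.35]             # rough realtime-feedback loads per core
total = sum(clock_gain * load for load in core_loads)
print(f"{total * 100:.1f}%")                # ≈ 8.0%, in the same ballpark as the ~7.7% above
```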