Different CPU load

gamepassion

I want to better understand how CPUs work in games.

So, let's assume this scenario (based on benchmarks seen on YouTube):

Same GPU, say a GTX 1080.
Two different CPUs: an i7-8700 and an R7 2700X.

Now, they show a fairly consistent difference in most benchmarks: the i7 gets better FPS while the R7 gets a few less. Let's say the difference is about 10% (with both over 100 FPS).

To me this means Ryzen is less powerful in games, since the GPU is the same.

Now, let's look at CPU load: the i7 sits at 30-40% while the R7 sits at 15-20%. This is while the GPU holds a consistent 90%+ load.

Why don't the CPUs use more of their power to deliver more FPS? I get that the GPU can only deliver so many FPS, so the CPU has nothing more to work on and gets less utilized. But then, if the GTX gets 120 FPS with the i7, why doesn't it get 120 FPS with the R7, upping the load on the Ryzen CPU?

I hope my dilemma is clearly explained, and that someone will walk me through it. Thanks!
 
More cores/threads than needed are not always helpful, and can actually hinder frame rates in some games, even more so with Ryzen due to its architecture as it pertains to RAM access.

Simultaneous streaming while gaming can fully load a 4c/8t CPU, and perhaps a 6c/6t design, but I've not seen any comparisons yet where the 2700X will outframe an optimally running 8700/8700K, even with streaming. (Many Ryzen 7 1700 fans were quick to claim some perceived victory based solely on lower core-usage scenarios when Ryzen hit the market; min/max/average frame rates were somehow not important to some.)

If the 10% difference is not important (as in the 100 vs. 110 FPS scenario), the 2700X is indeed priced VERY attractively, but some folks want the highest frame rates for 144 Hz monitors, etc.

Perhaps Ryzen 3000-series CPUs will eliminate the 'gaming frame rate gap' compared to Intel CPUs, but we will not know for sure until this summer.
 
Well, I read it. Very interesting points in there. Still, it does not answer my dilemma.

If a GPU delivers 120 FPS at its best (at 99% usage, paired with the best CPU, which sits at 30%), why does the same GPU deliver only 100 FPS (also at 99% usage) with a weaker CPU that is used only 20%?

I see here that the CPU is the difference. The GPU is the same and is fully used. Shouldn't the CPU, not being fully used, up its usage to deliver more?
 
No, you didn't read the part I'm quoted in. Once the GPU is full, the CPU will stop preparing more frames, as it won't do any good since the GPU is choked up.

There is more to a system than a GPU and CPU as well: different monitors, different settings (a 99%-loaded GPU getting 100 FPS will see big increases in the numbers from minor tweaks to settings).
 
Still, my problem isn't with game settings (let's assume settings are identical). I don't understand why, if the GPU can deliver more, it is held back by a CPU that isn't fully utilized.

Look at a 4th-gen i5 or similar. They get towards 100% usage in today's games (well, in some of them), so I get that an RTX 2080 is bottlenecked by those CPUs.

But in the example I gave, only the GPU is at max usage, so the GPU should bottleneck the CPU. And I get that this is the desired scenario for gaming rigs. So why doesn't a CPU that has plenty of resources available deliver more?

One comparison I found on the web said this: a bottleneck is like a road, where a car that could do 180 km/h can't, because there is another car in front of it that only goes 120 km/h. And this I get. What I don't get is a car that can do 180 km/h, with another car in front of it doing 180 km/h, running at only 150 km/h. The only reason would be that the driver doesn't want to go faster.

So, I can only presume it is how CPU architecture works and/or how games are optimized. The CPU and/or the game limits performance on purpose somehow. So this is my dilemma: how can this be explained?

And I have read the part where you were quoted. And I really do understand it. But digging deeper: that a CPU can deliver a max FPS is obvious. But why can't it deliver more IF the CPU isn't fully used?
 
1) Only the GPU can render frames.

2) Only the CPU can tell the GPU what to render.

When the GPU is maxed out, feeding it more frames won't help, as it can't render any more than it already is.

In your hypothetical example, that's not how it works: a GPU loaded to 100% with the exact same settings on two different systems will perform within 1-2 FPS of each other.

It's only in cases where the CPU can't prepare as many frames as the GPU can render that the CPU plays a major role... In your theoretical systems, in CS:GO the Intel would get more FPS and the GPU would have something like 30% load.


Switch the game to Battlefield 5 and make it 4K; the GPU will sit at 100% and they'll get the same FPS.
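
To put rough numbers on that, here's a minimal sketch. The frame times below are purely made up for illustration (not measurements from a 1080, 8700, or 2700X); the point is only that the slower stage of the CPU-GPU pipeline sets the frame rate.

```python
# Made-up sketch of the point above: every frame needs CPU work (game logic,
# draw calls) and GPU work (rendering), and the slower stage sets the pace.
# All frame times are invented illustrative numbers, not measured values.

def fps(cpu_frame_ms: float, gpu_frame_ms: float) -> float:
    """The pipeline can't finish frames faster than its slowest stage."""
    return 1000.0 / max(cpu_frame_ms, gpu_frame_ms)

fast_cpu_ms = 4.0   # "i7-like": could prepare ~250 frames' worth of work per second
slow_cpu_ms = 5.0   # "R7-like": could prepare ~200 frames' worth of work per second

# GPU-bound case (think Battlefield 5 at 4K): the GPU needs 10 ms per frame.
print(fps(fast_cpu_ms, 10.0), fps(slow_cpu_ms, 10.0))  # 100.0 100.0 -> same FPS

# CPU-bound case (think CS:GO): the GPU only needs 2 ms per frame.
print(fps(fast_cpu_ms, 2.0), fps(slow_cpu_ms, 2.0))    # 250.0 200.0 -> CPU decides
```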
 
Supahos, thank you for spending your time (and most likely your energy) replying to me. I'm not tech savvy; rather, I'm a logical person who probably rationalizes more than they should. Your posts here have really helped me understand things better. Maybe from your perspective things are quite clear. Somehow, from mine, they aren't.

So maybe I should ask other way around:

What does CPU utilization really mean? What does "% utilized" mean, and what would 100% mean? Is it correct to assume that a CPU less utilized in today's games (or apps) has a better chance of holding up over time?

My dilemma started in my search for a CPU upgrade. Somehow I get the idea that if I had opted for an i7 instead of an i5 five years ago, I would still have a well-performing CPU. And so now I compare i7 CPUs to i5s and also to Ryzen CPUs. And while I see better results for the i7, I don't get the lower results of the R7 vs. the i5. And so I started looking at CPU usage, which confused me even more.

 
CPU utilization is how much of its maximum is being used. The reason one with lower utilization may or may not age better is software utilization; that's the problem right now anyway. If software could balance its load to use all the resources available, a 2700X would beat everything except a 9900K in everything. I think in June/July, when Ryzen 3000 comes out, the days of Intel being better at anything may be over. A much lower-powered 8c/16t Ryzen 3000 engineering sample beat a 9900K head to head.
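
To make that concrete: the headline utilization figure is roughly the average busy time across all hardware threads, so the same amount of game work shows up as a lower percentage on a chip with more threads. A tiny sketch with invented per-thread busy numbers (only the thread counts, 12 for the 8700 and 16 for the 2700X, are real):

```python
# Rough sketch of what the overall utilization figure reports: the average
# busy fraction across every hardware thread. The per-thread "busy" values
# are invented for illustration; only the thread counts are real.

def overall_utilization(per_thread_busy: list) -> float:
    """Average busy fraction across all hardware threads, as a percentage."""
    return 100.0 * sum(per_thread_busy) / len(per_thread_busy)

# Suppose the game keeps about 4 threads fully busy and barely touches the rest.
busy, idle = 1.0, 0.05
i7_8700  = [busy] * 4 + [idle] * (12 - 4)   # 6c/12t
r7_2700x = [busy] * 4 + [idle] * (16 - 4)   # 8c/16t

print(f"i7-8700:  {overall_utilization(i7_8700):.0f}%")   # ~37%
print(f"R7 2700X: {overall_utilization(r7_2700x):.0f}%")  # ~29%
# Same game work, lower headline %, just averaged over more threads.
# Neither figure means the CPU is holding back FPS it could otherwise deliver.
```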
 
So, to better understand, is it that: (1) Ryzen doesn't handle the software as well,

or

(2) the software isn't well optimized for Ryzen?

If (1), I understand why the new 3000 line could be better. But if (2), how will software become better optimized for Ryzen? And why isn't this already happening?
 
#2 for sure. Ryzen has a bunch of pretty good cores; Intel has fewer, really good cores. For any software designed to actually use whatever power is available, a 2700X is a really good option. For anything that is coded poorly (lazily), it's just a pretty good option. Going forward, Intel now has mainstream 8-core parts, so software people will optimize for them, which will actually help Ryzen.

The new 3000 series should land somewhere between a tiny bit slower and a little bit faster in single-core performance. Also, I'm pretty sure the chip that beat the 9900K was a Ryzen 5, not a Ryzen 7 or the supposed Ryzen 9.

If its individual cores are equal, and it has more cores and costs less per core, it will basically not make sense to buy anything Intel.
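
As a rough sketch of that tradeoff (the per-core scores are invented, and "threads used" stands in for how well the software is threaded; only the core counts are real):

```python
# Rough sketch of "fewer fast cores vs. more decent cores". The per-core
# scores are invented for illustration; only the core counts are real.

def throughput(per_core_score: float, cores: int, threads_used: int) -> float:
    """Total work per unit time when the software only keeps a limited
    number of cores busy."""
    return per_core_score * min(cores, threads_used)

intel_like = dict(per_core_score=110, cores=6)   # fewer, faster cores (8700-style)
ryzen_like = dict(per_core_score=100, cores=8)   # more, slightly slower cores (2700X-style)

for threads in (4, 6, 8):
    print(threads,
          throughput(**intel_like, threads_used=threads),
          throughput(**ryzen_like, threads_used=threads))
# 4 threads: 440 vs 400 -> lightly threaded code favours the faster cores
# 6 threads: 660 vs 600 -> still favours the faster cores
# 8 threads: 660 vs 800 -> code that scales to 8+ cores flips the result
```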

As for why it's not happening? It is happening: most games that have come out in the past 18 months support 8 cores, and the gaps between Intel and AMD aren't big. It's just that most reviews use older titles so you can compare against older tech to see the gains.

And as to why it wasn't happening before... it's simple: Intel was 87-90% of the consumer CPU market, so no one would spend time helping AMD. I have no idea about worldwide numbers, but the largest CPU retailer in Germany is selling more 2600/2600Xs a month than Intel processors in total.
 
Good. Now it starts to become clear: games are poorly optimized for more cores, so Ryzen falls behind, as it has worse per-core performance; but where an app is optimized for more cores/threads, Ryzen can win even with worse per-core performance. Makes sense.

So, can I state that in, say, 2 years, an R7 2700X will have aged better than an i7-8700 (they're priced roughly equally)?