Question: Potential CPU Bottleneck?

Apr 22, 2022
I just built an SFF PC, and since I was playing on a laptop before, hooked up to a monitor capped at 60Hz, I really wanted to experience something like 400 fps in CSGO. However, even with my RTX 3080 and 12600K, I'm only getting around 180-200 fps in deathmatch with 16+ people, and around 220 fps in competitive matches with only 10 people in the server. I have seen benchmarks of the RTX 3080 ripping apart the game with something like 600 fps, so I am slightly concerned.

I thought it might have been thermal throttling, but my CPU is not pushing past 55°C and my GPU is not pushing past 65°C. Nothing has been overclocked: my 12600K is boosting to 4.9GHz by default, as advertised, and my GPU core clock and memory clock are at stock as well.

My graphics settings are 4:3, 1280x960, shadow quality high, texture detail low, texture streaming disabled, effect detail high, shader detail very high, boost player contrast on, multicore rendering on, 8x MSAA, FXAA disabled, texture filtering anisotropic 16x, and vsync disabled.

Is there a setting I am not using correctly, either in game or in the NVIDIA control panel, or is there actually a bottleneck?

Here are my specs:
CPU: Intel Core i5-12600K
GPU: EVGA RTX 3080 FTW3
Mobo: Gigabyte Z690I Aorus Ultra DDR4
Cooler: EVGA CLC 280
RAM: G.Skill Trident Z 3200MHz CL16
Storage: 1TB WD Black SN850 (the Windows drive; I have another 2TB SSD)
Case: SSUPD Meshlicious
PSU: Corsair RMx 750W Gold
 

alexvdm
I'm not sure, but if neither component is sitting at 100%, it shouldn't be a bottleneck. Are your settings the same as in the benchmarks?
 
Apr 22, 2022
Not for the most part; the benchmarks I look at are generally running full high settings. However, I have seen benchmarks of the RTX 3080 at 1440p and all high settings getting more frames than me, and I am already running at a lower resolution. I am not sure how to check CPU/GPU usage while I'm playing, so I haven't been able to see whether one is fully utilized and the other is not. Any suggestions for software to do that?
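
(For what it's worth, the usual answer here is an on-screen overlay like MSI Afterburner/RivaTuner. As a rough DIY alternative, here is a minimal Python sketch, assuming the psutil package is installed and NVIDIA's real nvidia-smi tool is on the PATH; run it in a second window while the game is up.)

Code:
import subprocess
import psutil

def gpu_utilization_percent():
    """Ask nvidia-smi for current GPU load; returns e.g. '42'."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

while True:
    # Overall CPU load, averaged over the last second.
    cpu = psutil.cpu_percent(interval=1.0)
    print(f"CPU {cpu:5.1f}%  |  GPU {gpu_utilization_percent()}%")

Note that the overall CPU percentage can look low even when one or two threads are maxed out, which is exactly the CSGO case; psutil.cpu_percent(interval=1, percpu=True) gives per-core numbers instead.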
 

Karadjgne
The GPU has nothing to do with the fps. Fps is all CPU. The GPU just lives up to whatever fps the CPU sends it, or fails to keep up.

You have a 12600K. That's decent. Now add in the objects, AI, other players, MSAA, and all the other stuff that gets loaded into maps like Dust II, which is brutal compared to Office. And it's all running on 2-3 threads, so the game is never going to make full use of the CPU's ability beyond those 2-3 threads.

Those other videos you've seen at 600fps were most likely using a 12900K with its considerably larger L3 cache, and had CPU-bound settings like MSAA turned down to low/medium, ambient occlusion and other high-end lighting effects turned down, etc. A 3080 or better is fully capable of hitting every frame the CPU sends, because graphically CSGO is quite simple; the biggest effects are lighting/shadows, bloom, and PhysX particles. It's not a user-tailored appearance, it's all the same eight or so stock appearances; even the graffiti is stock.
 
Apr 22, 2022
How interesting, this is the first time I've learned this. So the GPU is only responsible for displaying whatever the CPU sends it? Does that also mean that the idea of pairing a weaker CPU with a strong GPU for games is a lie? And in the case of CSGO, aside from MSAA, what other settings are CPU dependent, so I can turn them down for more frames? (I'm asking this last part mainly because I got a 240Hz monitor with my build, up from 60Hz before, so I want to make sure I can hit at least 240 consistently.)
 
How interesting, this is the first time I've learned this. So the GPU is only responsible for displaying whatever the CPU sends it?
https://forums.tomshardware.com/threads/why-gpus-cant-just-render-more-frames.3758367/#post-22663019

tl;dr:
  • A game has to finish each frame within a fixed slice of time
  • The CPU must complete its work for a frame before the GPU can process the graphics
  • The faster the CPU can do this, the more frames it can push to the GPU to render
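
As a back-of-the-envelope illustration of that time budget (all numbers made up):

Code:
# Hypothetical per-frame costs in milliseconds (illustrative only).
cpu_ms = 5.0   # time the CPU needs to prepare one frame
gpu_ms = 1.5   # time the GPU needs to render that frame

cpu_cap = 1000 / cpu_ms   # 200 fps ceiling imposed by the CPU
gpu_cap = 1000 / gpu_ms   # ~667 fps the GPU could manage on its own

# On-screen fps can never beat the slower stage: a faster GPU
# cannot raise this number here, only a faster CPU can.
print(min(cpu_cap, gpu_cap))  # -> 200.0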
Does that also mean that the idea of pairing a weaker CPU with a strong GPU for games is a lie?
That depends on what your aim is. If your aim is to get the highest fps possible, then yes, you want a really strong CPU.

And in the case of CSGO, aside from MSAA, what other settings are CPU dependent, so I can turn them down for more frames? (I'm asking this last part mainly because I got a 240Hz monitor with my build, up from 60Hz before, so I want to make sure I can hit at least 240 consistently.)
There are no CPU-dependent options to adjust. Either the CPU can do the work fast enough or it can't.
 

Karadjgne
All the data collected from your net connection, all the saved game data, object data, and game code from storage gets dumped into RAM and shunted to the CPU as needed. The CPU takes all that, assigns every object a dimension and a spot on the xyz axes, takes all the PhysX particle data, vector analysis, lighting effects, motion, AI, everything, and compiles it into a packet for that frame, which gets sent to the GPU. The number of times per second the CPU can compile that frame is your fps limit.

The GPU takes those packets and creates a wireframe according to the given instructions. Then it adds color, lighting, shadows, etc., and final-renders all that according to resolution. The number of times the GPU can do that per second is the fps you get on screen.

The GPU does not create frames on its own. It cannot exceed the number of frames sent by the CPU; it can only deliver equal or less, and what actually reaches the screen is limited by the refresh rate of the monitor.
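
A toy Python version of that loop (every name and number here is invented for illustration; real engines overlap these stages across threads and buffered queues):

Code:
import time

def cpu_prepare_frame():
    """Stand-in for the CPU stage: AI, physics, object positions."""
    time.sleep(0.005)                  # pretend this takes 5 ms
    return {"draw_calls": 1500}        # the 'packet' handed to the GPU

def gpu_render(packet):
    """Stand-in for the GPU stage: wireframe, shading, rasterizing."""
    time.sleep(0.0015)                 # pretend this takes 1.5 ms

frames = 0
start = time.perf_counter()
while time.perf_counter() - start < 1.0:
    packet = cpu_prepare_frame()       # GPU has nothing to do until this ends
    gpu_render(packet)                 # it can only render what it was sent
    frames += 1

# In this serial toy, fps is about 1000 / (5 + 1.5), roughly 150.
# Real pipelines overlap the two stages, so the cap approaches the
# slower stage alone, but the GPU still cannot exceed what the CPU sends.
print(frames, "fps")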


CPUs and GPUs differ, and that difference changes with each game. I have a 3700X and a 2070 Super. In Skyrim, I run 180+ mods, with 4K/8K textures, high-poly and HDT-SMP scripted mods, and 4K DSR, and I'm down to 60-90fps depending on map location. It's a very CPU-bound game with imposed GPU limitations; I would get (and actually did get) the same on-screen fps with a GTX 970 at 1080p without the DSR.

In CSGO Office I'm hitting just over 300fps. I got almost the same with my 3770K @ 4.9GHz; the weaker IPC is offset by the higher frequency. GTX 970 or 2070 Super makes no difference, the game is that graphically simple.

In Star Wars: The Old Republic, the game caps me at 200fps in single player, but it can drop as low as 60fps with bloom and nameplates in a 16-man operation. Multiplayer is brutal on both CPU and GPU. My 3770K/GTX 970 would go as low as 20fps at ultra.

It's not a matter of weak CPU vs strong GPU; it's a matter of which game needs what. Some games will run just fine on a potato; in others the CPU can far exceed what the GPU can render. And resolution plays a major role in GPU frames.

Imagine a Ryzen 1600 and an RTX 3080. In CSGO you'd be getting 250+ fps from the CPU, and at 1080p that 3080 would be taking a nap; the game is so pathetically easy for a 3080 that it'd barely be running above idle temps. And you'd be grumbling about a measly 250fps at 1080p. Change that to a 4K monitor and the 3080 has to populate 4x as many pixels, which is brutal on any GPU: 250fps is what you'd potentially get from the CPU, but 120fps is what you'd get from the GPU. That's still amazingly good for 4K, but GPU temps would be much closer to 80°C+, while the CPU would run cooler, not having to work as hard.
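
The 4x figure falls straight out of the pixel counts:

Code:
# Why 4K is roughly 4x the GPU work of 1080p: raw pixels per frame.
pixels_1080p = 1920 * 1080       # 2,073,600 pixels
pixels_4k    = 3840 * 2160       # 8,294,400 pixels
print(pixels_4k / pixels_1080p)  # -> 4.0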
 