Question: Why am I getting low fps when nothing's at 100%?

gpole87

Reputable
Jul 30, 2018
I've recently upgraded from a GTX 1060 to an RTX 3060 and was expecting bigger gains than I'm seeing. Using Afterburner etc. it seems that none of my CPU cores or my GPU are anywhere near 100% (more like 40-60%), which seems weird to me. I looked online and found RAM could be the issue, but I have 24GB, so that shouldn't be the case (I know this isn't an ideal configuration, but I tried dropping back to 16GB and saw almost identical fps). All the temps are below 70C, so nothing should be throttling, and all the drivers are up to date.

I feel like it might still be the CPU, as it's quite old compared to the rest of the system, but I'm not sure how.
Any help/ideas would be great.

System Specs:
i7-6700
24GB Corsair Vengeance 2133MHz DDR4 (1x8GB + 1x16GB)
RTX 3060
Asus Prime B250M-A
Corsair VS550 (550W) power supply
Windows 11, but I had the same problem on Windows 10 as well.
 
CPU usage % is generally reported as (# threads utilized) / (# threads available). Most games still don't use more than about 6 threads consistently.

It's easiest to just look at GPU usage %. If it's below, say, 80%, you're CPU-limited in that particular game at those particular quality settings. Once you become CPU-limited, lowering game quality settings doesn't noticeably improve FPS.
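To make that reporting quirk concrete, here's a minimal sketch of the math, assuming a 4-core/8-thread CPU like the i7-6700 and a game that keeps about 4 threads busy (both numbers are illustrative, not measurements from this system):

```python
# Illustrative only: overall CPU usage % can look modest even when a game
# is already using every thread it is able to use.
threads_available = 8      # assumed: 4 cores / 8 threads (e.g. an i7-6700)
threads_saturated = 4      # assumed: a game that keeps ~4 threads busy

reported_usage = threads_saturated / threads_available * 100
print(f"Reported overall CPU usage: ~{reported_usage:.0f}%")
# 50% -- right in the 40-60% range, even though the game can't use more threads.
```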


Simplified way a PC plays a game:
  1. CPU figures out what needs to be in a given frame (imagine a rough sketch) based on user and game world input. Issues draw call to GPU to tell it what to render.
    • Think of this as positional tracking. How many things are moving (or have the potential to) from one frame to the next.
  2. GPU receives draw call and makes a pretty picture. Sends to monitor when complete.
    • This is detail. Object is now in new position, how has lighting/shading/etc changed. Re-draw object per game/quality rules.
  3. The GPU can't do any work until the CPU tells it what to draw. Raising graphics settings and/or resolution increases the complexity of the GPU's job, making it take longer to render each frame. Lowering settings decreases the complexity of the GPU's job, making it take less time to render each frame.
  4. If the GPU finishes rendering a frame before the CPU has finished figuring out what the next frame should contain, the GPU has to wait (<100% GPU usage).
  5. Based on #3 & #4, you should be able to optimize for 90% or greater GPU usage (depending on a game's CPU stress and the CPU/GPU balance of a system). A rough sketch of this loop follows below.
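A minimal loop-level sketch of steps 1-4 (the per-frame timings below are assumptions for illustration, not measured values):

```python
# Toy model of the CPU->GPU handoff: whichever side takes longer per frame
# sets the frame time, and the other side waits (shows up as <100% usage).
cpu_ms_per_frame = 8.0   # assumed: time the CPU needs to build the frame and issue draw calls
gpu_ms_per_frame = 4.0   # assumed: time the GPU needs to render that frame

frame_ms = max(cpu_ms_per_frame, gpu_ms_per_frame)   # the slower side paces the loop
fps = 1000 / frame_ms
gpu_busy_pct = gpu_ms_per_frame / frame_ms * 100     # the GPU idles for the remainder

print(f"~{fps:.0f} fps, GPU busy ~{gpu_busy_pct:.0f}% of each frame")
# Here the GPU finishes early and sits at ~50% usage waiting on the CPU --
# the CPU-limited case. Raising graphics settings grows gpu_ms_per_frame
# without changing fps until the GPU becomes the slower side.
```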
 

gpole87

Reputable
Jul 30, 2018
CPU usage % is generally reported as (# threads utilized) / (# threads available). Most games still don't use more than about 6 threads consistently.

It's easiest to just look at GPU usage %. If it's below, say, 80%, you're CPU-limited in that particular game at those particular quality settings. Once you become CPU-limited, lowering game quality settings doesn't noticeably improve FPS.


Simplified way a PC plays a game:
  1. CPU figures out what needs to be in a given frame (imagine a rough sketch) based on user and game world input. Issues draw call to GPU to tell it what to render.
    • Think of this as positional tracking. How many things are moving (or have the potential to) from one frame to the next.
  2. GPU receives draw call and makes a pretty picture. Sends to monitor when complete.
    • This is detail. Object is now in new position, how has lighting/shading/etc changed. Re-draw object per game/quality rules.
  3. The GPU can't do any work until the CPU tells it what to draw. Raising graphics settings and/or resolution increases the complexity of the GPU's job, making it take longer to render each frame. Lowering settings decreases the complexity of the GPU's job, making it take less time to render each frame.
  4. If the GPU finishes rendering a frame before the CPU has finished figuring out what the next frame should contain, the GPU has to wait (<100% GPU usage).
  5. Based on #3 & #4, you should be able to optimize for 90% or greater GPU usage (depending on a game's CPU stress and the CPU/GPU balance of a system)


So even if the CPU isn't anywhere near 100% on any core, it can still be limiting the GPU?
 

Karadjgne

Titan
Ambassador
Bah. CPU usage is a measurement of time: how much of the time the cores are busy versus sitting idle between instructions. So if the CPU is at 60% usage, 40% of the time the CPU is doing nothing but waiting for instructions.

That's important only for headroom. It means that in the next frame there's room for the CPU to add other instructions, like a PhysX explosion or some AI computations, without affecting fps output. If the cores are reaching 100%, there's no time for anything else, like adding that explosion, so the frame takes far longer to complete. Fps goes in the toilet.
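Rough numbers for that headroom idea (the frame budget and timings here are assumptions for illustration, not measurements from this thread):

```python
# Frame-time budget: whatever the CPU doesn't use each frame is headroom
# for extra work (a physics explosion, AI, etc.) before fps suffers.
target_fps = 100
frame_budget_ms = 1000 / target_fps          # 10 ms per frame at 100 fps

cpu_work_ms = 6.0      # assumed: CPU busy ~60% of each frame
headroom_ms = frame_budget_ms - cpu_work_ms  # ~4 ms spare

extra_work_ms = 6.0    # assumed: a big one-off event lands in one frame
new_frame_ms = cpu_work_ms + extra_work_ms   # 12 ms, over budget
print(f"Headroom: {headroom_ms:.1f} ms; with the extra work that frame takes "
      f"{new_frame_ms:.1f} ms (~{1000 / new_frame_ms:.0f} fps for that frame)")
```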

Most commonly found in high-end systems and low-end games like CSGO, where CPU usage will be 40% but 2 individual cores will be at 100%, because CSGO does not roll instructions over to new threads. It's also common on low-end systems where people use low presets to increase fps.

One of the biggest misconceptions about GPUs is that they increase fps. They do not. Fps is solely the responsibility of the CPU. It takes all the computations, all the code, all the objects, PhysX, dimensions, everything, packs it into a frame package and ships that to the GPU. The number of times the CPU can do that in one second is your frames per second.

The GPU either lives up to that number or fails. In CPU-bound games the GPU generally has less work, so it can maximize fps onscreen up to the limit of what's sent by the CPU. In GPU-bound games it fails: the CPU sends more than the GPU can handle.

With a lesser card like the 1060 replaced by a 3060, the 3060 has a better ability to put more frames up onscreen in GPU-bound games.

So in CSGO the CPU might send 200 fps. At 1080p that's easy graphics for the 3060, so at medium it puts up 200. At high, 200. At ultra, you get 200.

But the 1060 was also strong enough to do the same, so the end result is that you see no visual gains, since both cards were capable of 200 fps. Limited by the CPU.

Change that from 1080p to 1440p and the graphics are now roughly 1.8x more intense; the 1060 will suffer and at ultra can only put up 100 fps, but the 3060 is still strong enough to put up all 200 fps. Vast visual difference.

That's why you aren't seeing much of a change: lower resolution, lower refresh, lower fps, similar results. If you challenge the GPU, by using 4K DSR, raising the resolution, or running graphically intensive games, the differences between the 1060 and 3060 will become more obvious.
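The "1.8x more intense" figure follows roughly from pixel counts; a quick check (standard resolutions, used purely for illustration):

```python
# Pixel count per frame: a first-order proxy for how much harder the GPU
# works as resolution rises, while the CPU's per-frame work stays about the same.
resolutions = {"1080p": 1920 * 1080, "1440p": 2560 * 1440, "4K / 4K DSR": 3840 * 2160}

base = resolutions["1080p"]
for name, pixels in resolutions.items():
    print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x vs 1080p)")
# 1440p is ~1.78x the pixels of 1080p and 4K is 4x, which is why pushing
# resolution (or 4K DSR) is what finally separates a 3060 from a 1060.
```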
 

gpole87

Reputable
Jul 30, 2018
Bah. CPU usage is a measurement of time: how much of the time the cores are busy versus sitting idle between instructions. So if the CPU is at 60% usage, 40% of the time the CPU is doing nothing but waiting for instructions.

That's important only for headroom. It means that in the next frame there's room for the CPU to add other instructions, like a PhysX explosion or some AI computations, without affecting fps output. If the cores are reaching 100%, there's no time for anything else, like adding that explosion, so the frame takes far longer to complete. Fps goes in the toilet.

Most commonly found in high-end systems and low-end games like CSGO, where CPU usage will be 40% but 2 individual cores will be at 100%, because CSGO does not roll instructions over to new threads. It's also common on low-end systems where people use low presets to increase fps.

One of the biggest misconceptions about GPUs is that they increase fps. They do not. Fps is solely the responsibility of the CPU. It takes all the computations, all the code, all the objects, PhysX, dimensions, everything, packs it into a frame package and ships that to the GPU. The number of times the CPU can do that in one second is your frames per second.

The GPU either lives up to that number or fails. In CPU-bound games the GPU generally has less work, so it can maximize fps onscreen up to the limit of what's sent by the CPU. In GPU-bound games it fails: the CPU sends more than the GPU can handle.

With a lesser card like the 1060 replaced by a 3060, the 3060 has a better ability to put more frames up onscreen in GPU-bound games.

So in CSGO the CPU might send 200 fps. At 1080p that's easy graphics for the 3060, so at medium it puts up 200. At high, 200. At ultra, you get 200.

But the 1060 was also strong enough to do the same, so the end result is that you see no visual gains, since both cards were capable of 200 fps. Limited by the CPU.

Change that from 1080p to 1440p and the graphics are now roughly 1.8x more intense; the 1060 will suffer and at ultra can only put up 100 fps, but the 3060 is still strong enough to put up all 200 fps. Vast visual difference.

That's why you aren't seeing much of a change: lower resolution, lower refresh, lower fps, similar results. If you challenge the GPU, by using 4K DSR, raising the resolution, or running graphically intensive games, the differences between the 1060 and 3060 will become more obvious.
Thanks!
 
CPU usage doesn't tell you very much: your game could be using only 2 cores but maxing out those two cores, creating a bottleneck. Yet CPU usage would suggest your game just isn't very demanding.
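A hedged illustration of that on a 4-core/8-thread chip like the i7-6700 (the per-thread numbers below are invented):

```python
# Two threads pegged at 100% while the rest mostly idle: the overall average
# looks harmless even though those two threads dictate the frame rate.
per_thread_usage = [100, 100, 10, 5, 5, 5, 0, 0]   # assumed per-thread % on a 4c/8t CPU

overall = sum(per_thread_usage) / len(per_thread_usage)
print(f"Overall CPU usage: ~{overall:.0f}%  (busiest thread: {max(per_thread_usage)}%)")
# ~28% overall -- exactly the "nothing is at 100%" reading that hides a bottleneck.
```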

An i7-6700 will hold back a 3060 in some situations. What games are you playing, and what frame rates are you getting?
 
Using Afterburner etc. it seems that none of my CPU cores or my GPU are anywhere near 100% (more like 40-60%), which seems weird to me.
What is your polling/sample rate set to in Afterburner? 1 sample per second? 10 samples per second?
Games are very bursty, unlike "power virus" stress testers like Prime95 that will load a CPU to a consistent 100% usage 100% of the time. Your sampling rate may be too slow to properly catch the CPU usage.
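A small sketch of why a slow polling rate hides those bursts (the load pattern is invented for illustration; this is not how Afterburner samples internally):

```python
# Simulate a bursty core: 100% busy for 200 ms, then idle for 800 ms.
# Averaging over a full second reports ~20% and never shows the spike.
load_per_ms = [100] * 200 + [0] * 800            # per-millisecond load over one second

one_second_sample = sum(load_per_ms) / len(load_per_ms)
ten_hz_samples = [sum(load_per_ms[i:i + 100]) / 100 for i in range(0, 1000, 100)]

print(f"1 sample/second reports: ~{one_second_sample:.0f}%")
print("10 samples/second reports:", [f"{s:.0f}%" for s in ten_hz_samples])
# The faster polling reveals the short 100% bursts that the 1-second average smooths away.
```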