[SOLVED] Why can't GPUs just render more frames?

Calicifer

Prominent
Apr 1, 2022
31
2
535
Greetings,

I'm stuck on a problem that seems fundamental to owning an enthusiast GPU. You are supposed to buy a powerful GPU to get a high framerate, yet some games simply refuse to render more frames than a certain arbitrary number. I checked my v-sync settings in game and in the Nvidia Control Panel, turned them off, and set the GPU to render as many frames as it can. My GPU is still not working at 100% in Fallout: New Vegas: I get 50-70 FPS at the highest possible settings at half load.

I have noticed this problem in a lot of older games: the GPU simply does not render the very high framerates needed for high refresh rate monitors. Why is that? Shouldn't I be able to control how hard my GPU works in any particular game? Is framerate solely dependent on the game engine and how much work it can give the GPU?
 

USAFRet

Titan
Moderator
CPU is similarly underworked.

I do think that it is more due to old game engines being made out of spaghetti instead of programming code.
And that is also an issue.

But to your question: the GPU can only work with what the CPU gives it.
A better GPU lets you turn up the detail level.

Every blade of grass, every water droplet in every cloud, seeing whether someone's shoelaces are untied from 300 meters away.
 

Calicifer

Prominent
Apr 1, 2022
31
2
535
My specs:

CPU - i5-4670
GPU - GTX 760
RAM - Crucial technology DDR3 16 GB (1600*28),
Motherboard - B85 Killer
SSD - CT500MX
PSU - VS 650
Resolution - 1080p

My question is less about this game in particular and more about general theory. I'm playing Dungeon Siege 2 at the moment and it runs at 20-30 FPS. I can understand that, because it is an old game; I needed several fixes just to get it working the way I want. There is probably a lot of dead code trying to do things that modern architectures emulate or no longer support, and old code probably does not know how to use new hardware. However, I could go from game to game and find countless examples where my CPU and GPU are underloaded and frames are not as high as I want them to be. I want to know exactly why I cannot set my GPU to work at 99% to render more frames, or otherwise be bottlenecked by my CPU generating frames.

At the moment it seems to me that game engines simply do not give the hardware more work to do. I can see similar things in other benchmarks too: someone is generating 120 frames per second, but their hardware is still underloaded and no part is maxed out. I want to know more about this, or at least be pointed in the right direction to do my own research into why situations like this happen.
 
I have noticed this problem in a lot of older games: the GPU simply does not render the very high framerates needed for high refresh rate monitors. Why is that? Shouldn't I be able to control how hard my GPU works in any particular game? Is framerate solely dependent on the game engine and how much work it can give the GPU?
Strap yourself in for a long one.

The answer to this question comes in two parts. The first is that the GPU cannot do any work unless the CPU tells it to. You can think of the CPU compiling a "graphics order form" from the application, then submitting it for the GPU to render. This "order form" contains things like where objects are positioned, what operations to run to generate each pixel's color, and so forth.
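To make the "order form" idea concrete, here is a minimal sketch. The types and names (DrawCommand, command_list) are invented for illustration; real APIs such as Direct3D, Vulkan or OpenGL are far more involved, but the shape is the same: the CPU fills out a list of draw commands and hands it to the driver, and until then the GPU has nothing to do.

```cpp
// Minimal sketch of the "graphics order form" idea. DrawCommand and
// command_list are invented names, not a real graphics API.
#include <cstdio>
#include <vector>

struct DrawCommand {
    float x, y, z;    // where the object sits in the world
    int   mesh_id;    // which piece of geometry to draw
    int   shader_id;  // which program computes the pixel colors
};

int main() {
    // CPU side: the game fills in the "order form" for one frame...
    std::vector<DrawCommand> command_list;
    command_list.push_back({ 1.0f, 0.0f, -5.0f, 1, 2 });
    command_list.push_back({-2.0f, 1.0f, -8.0f, 3, 2 });

    // ...then submits the whole list to the GPU driver in one go.
    // Until this point the GPU has had nothing to do for this frame.
    std::printf("Submitting %zu draw commands to the GPU\n", command_list.size());
}
```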

The second part is that this step only comes after the game has done everything else. Graphics are not actually necessary to run a game; they're simply a visual representation of the game world for the benefit of the user. It wouldn't be beneficial if what the user sees at any given moment did not represent the game world at that moment: being late doesn't help, and rendering future frames can be wasteful since that future may never actually happen. The "being late" part is a bit of a stretch, admittedly, because in order to smooth out hiccups when the CPU is too busy, frames are buffered to be rendered slightly later. But with a sufficiently fast overall render time, the delay is unnoticeable to most people.

But this still probably doesn't answer the question: why do old games seem to have a frame rate limit when today's hardware can clearly perform much better than the hardware of the time? Two main reasons:
  • The first is that if the game ran with no frame rate cap, the hardware could end up fully loaded by what is a trivial task. For example, one of the developers at Microsoft working on Windows XP noticed that 3D Pinball consumed 100% of the CPU while running. Keep in mind this was a game that was 5-6 years old at that point and ran fine on the hardware of its time, so it was clearly over-working the hardware. A frame rate limiter was added and the CPU usage plummeted accordingly.
  • The second is that modern games run their logic at fixed intervals; this is effectively a run-time cap on the application. For example, a game may execute its code at most 200 times per second, i.e. every 5 ms. If the CPU finishes that work before the 5 ms is up, the game does nothing more until the next interval. If it misses its 5 ms window, it spills into another 5 ms to "catch up." Once the game is done with this work, it goes off and submits something to the GPU. (A rough sketch of such a loop is shown right after this list.)
    • Why design things this way? The game needs to run consistently no matter which hardware it runs on, as long as the hardware meets the performance requirements. If a game constantly misses its processing intervals, it will appear to slow down compared to real time. Conversely, if the game doesn't wait and moves on to the next interval immediately, it will appear to speed up, running faster than real time. This is the problem you hit if you try to run a PC game made in the early 80s on modern hardware: those games used the CPU's clock speed to keep track of time, and clock speeds in those days were 4-8 MHz. You can imagine how impossible it is to play when the game is running thousands of times faster.
    • Here's a good video that explains this using the SNES hardware as an example:
      View: https://www.youtube.com/watch?v=Q8ph2OVqZeM
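Here is a minimal sketch of the fixed-interval loop described in the second bullet, assuming the 5 ms (200 Hz) tick from the example. update_game_state() and render_frame() are placeholder names, not any particular engine's API:

```cpp
// Sketch of a fixed-timestep game loop with a 5 ms (200 Hz) logic tick.
#include <chrono>
#include <cstdio>
#include <thread>

using Clock = std::chrono::steady_clock;
constexpr std::chrono::milliseconds TICK{5};  // game logic runs every 5 ms

void update_game_state() { /* input, AI, physics ... */ }
void render_frame()      { /* build the command list and submit it to the GPU */ }

int main() {
    auto next_tick = Clock::now();
    for (int frame = 0; frame < 1000; ++frame) {      // ~5 seconds, then stop
        // "Catch up": if one or more 5 ms windows were missed, run extra
        // updates so the game world stays in step with real time.
        while (Clock::now() >= next_tick) {
            update_game_state();
            next_tick += TICK;
        }
        render_frame();
        // Finished early? Wait for the next interval instead of burning CPU.
        std::this_thread::sleep_until(next_tick);
    }
    std::puts("done");
}
```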

EDIT: Bonus round - why do some game engines have a frame rate limiter? For example, some of Bethesda's games have one at, say, 60 FPS. I forget whether their engines specifically had a problem with this, but in some cases, setting the limit higher causes issues for the game. This is down to physics, including things like collision detection.

If you take a look at physics equations, you'll notice a lot of them involve a time delta. That is, they require knowing the values at two points in time and how much time has passed between them. For instance, you can't figure out velocity unless you know where an object is at two points in time and the time between those points. So games need a time delta reference, and an easy one is whatever the frame rate limit is. The problem is that if you design the equations around that frame rate limit, setting a higher limit causes issues because the time delta is no longer what the developers expected. Collision detection is particularly sensitive to this: mess with the time delta enough and objects can appear to "phase through" one another.
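A toy illustration of that last point, with all numbers invented: a ball flying at 100 m/s toward a 0.5 m thick wall gets caught by a naive per-step overlap test when the time delta is small, but skips clean over the wall when the delta is coarse.

```cpp
// Toy demo of why naive collision detection is sensitive to the time delta.
#include <cstdio>

bool hits_wall(double dt) {
    const double speed = 100.0;                       // m/s
    const double wall_start = 51.2, wall_end = 51.7;  // a 0.5 m thick wall
    double x = 0.0;
    for (int step = 0; step < 100000 && x < 100.0; ++step) {
        x += speed * dt;                       // naive integration: one jump per step
        if (x >= wall_start && x <= wall_end)  // naive overlap test at discrete points
            return true;                       // hit registered
    }
    return false;                              // ball "phased through" the wall
}

int main() {
    std::printf("dt = 1/240 s -> wall hit detected: %s\n", hits_wall(1.0 / 240) ? "yes" : "no");
    std::printf("dt = 1/30  s -> wall hit detected: %s\n", hits_wall(1.0 / 30)  ? "yes" : "no");
}
```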
 
Solution
Good info!

There are many, many videos on YT of New Vegas running upwards of 150-200 FPS. Whatever is limiting the OP's performance, it doesn't appear to be the game itself.
 

Karadjgne

Titan
Ambassador
Your logic is flawed from the get-go, due mainly to a misunderstanding of 'usage', which is a common issue.

Usage is not how much of the CPU/GPU is used up, but how much of its strength the work actually needs.

Think of it this way.
You put a nail into a plaster wall. You will use every single muscle in your hand to hold the hammer firmly, and every single muscle in your arm to swing the hammer under control. But you will not use all the strength those muscles have. That's usage.

In CPU terms, that means there will be periods of non-use: gaps in the code, threads not used, bandwidth not fully saturated, and so on. That has nothing to do with whether the CPU/GPU is 'working at 100%'; it just isn't using its full strength because there simply is no need.

CSGO is a 2-thread game. That's all the code allows for; there's no spillover. So with a quad-core CPU you'll never see 100% usage across the entire CPU. You might see 100% usage on 2 cores and some percentage on the other 2 cores, as they'll be dealing with Windows, the network, Discord, TeamSpeak, and the general background stuff a PC deals with.
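A quick way to see that effect for yourself (a throwaway sketch, nothing game-related): run two busy threads for ten seconds and watch Task Manager or htop. On a 4-core CPU the overall usage hovers around 50%, even though both worker threads are running flat out.

```cpp
// Two fully busy threads on a 4-core CPU show up as roughly 50% total usage.
#include <atomic>
#include <chrono>
#include <thread>

int main() {
    std::atomic<bool> run{true};
    auto busy = [&run] {                 // spin as hard as possible
        volatile unsigned long long x = 0;
        while (run) x = x + 1;
    };

    std::thread a(busy), b(busy);        // the "game" only has 2 threads
    std::this_thread::sleep_for(std::chrono::seconds(10));
    run = false;
    a.join();
    b.join();
}
```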
 

Calicifer

Prominent
Apr 1, 2022
31
2
535
New Vegas can definitely go higher than 50-70FPS.

Please list full specs, including PSU and gaming resolution.

Here you go. I kinda went overboard with file length, because damn, I love Fallout New Vegas. There is always something more to discover.

I started recording before launching the game; then, after a while, I launched the recording software and recorded from time to time. It does not seem to affect my FPS beyond making my GPU work a little harder; it adds 10-20% to overall GPU usage. The gameplay consists of boring inventory management in closed environments, fighting inside buildings, and fighting in open areas. If you are familiar with the game, I went to Big Mountain.

Here is a recording of my system.
https://easyupload.io/uxqpe1

Btw: I tested the game on the lowest settings and it shows similar behavior: constant heavy stuttering, marginally higher framerates, but comparable system utilization. Going from the absolute highest settings to the absolute lowest did not change the technical issues with New Vegas, nor does my system seem eager to render more frames, even with v-sync off both in game and in the Nvidia Control Panel.
 
Here you go. I kinda went overboard with file length, because damn, I love Fallout New Vegas. There is always something more to discover.

I started recording before launching the game; then, after a while, I launched the recording software and recorded from time to time. It does not seem to affect my FPS beyond making my GPU work a little harder; it adds 10-20% to overall GPU usage. The gameplay consists of boring inventory management in closed environments, fighting inside buildings, and fighting in open areas. If you are familiar with the game, I went to Big Mountain.

Here is a recording of my system.
https://file.io/pRkjsKE5inL1

Btw: I tested the game on the lowest settings and it shows similar behavior: constant heavy stuttering, marginally higher framerates, but comparable system utilization. Going from the absolute highest settings to the absolute lowest did not change the technical issues with New Vegas, nor does my system seem eager to render more frames, even with v-sync off both in game and in the Nvidia Control Panel.
File.io says the file has been deleted(?)
 

Karadjgne

Titan
Ambassador
I used to have a GTX 760 and I was never amazed by its performance, it was more about being affordable :p
Ahh. That's a matter of perspective. Back when the 760 was new, it was still a seriously decent card, considering the complexity of the games it was used for. A 760 for CSGO was great, and if you were stepping up from Intel HD integrated graphics, it was a night-and-day difference.

My 3dfx Voodoo2 16 MB card was amazing for 3D graphics back in the day compared to everything else; it was so advanced in design that Nvidia bought the company just for the rights. Compared to today's cards, it's an ant trying to stare down an elephant.

So I'm not surprised by the OP's reaction, coming from an iGPU to a GT 1030.
 

Calicifer

Prominent
Apr 1, 2022
31
2
535
Strap yourself in for a long one.

Thank you for your detailed comment. So far this is the best answer in this thread and the one I was looking for; I will mark it as the answer when the thread runs its course.

This is exactly what I wanted to know, and maybe your video will let me take a more detailed look. So far my understanding is that it comes down to badly programmed game engines, and that only a select few games are decently programmed; those tend to be the ones we see used in benchmarks. I came to this realisation from seeing games mess up various settings. For example, one game actually ran worse and could not keep up 60 FPS with v-sync on, and the experience was choppier because of it, while another game performed much better, if not ideally, with the same setting turned on. I tried the mod suggested earlier in this thread and it made a significant improvement: now my GPU actually maxes out when it needs to render the open wasteland. Previously I had fewer FPS and a lower GPU workload in the same areas. FPS is also a lot higher in general, but it is still not ideal; in closed, easier-to-render environments, my GPU is still not working at full tilt to render more frames. From my experience, I can only conclude that the game engine has spaghetti for code and that there may be some compatibility issues with newer hardware. With further work, the game could use more GPU resources and render more frames.

As you wrote, the code is hard-limited by the developers not to generate more frames. However, tying your physics to the speed of the GPU seems like the same incompetent practice as tying game speed to the CPU; remember those good old games? I do want my GPU to run at 99% in Minesweeper if I choose so. I turn off all the frame limiting options in my GPU and game settings, but I can never render more frames than the game engine seems comfortable letting me render. If I paid for a 390 Hz monitor, I want to use every refresh cycle I paid for! Sadly, it seems that hardware has advanced way past the ability of most programmers to support these extremely high refresh rates in their games. How a game runs seems to be very dependent on the individual game and its engine.

Overall, your comment mostly talked about why a game does not run faster, why it can't just do more work. What I meant with my question is: why can't the game do the same work more times? That is what high FPS is: rendering the same tasks more often in order to see more minute changes between each frame. Extremely high framerates are inherently wasteful because, for example, you may be rendering the same identical frame 10 times to achieve more FPS. Like you said, this is why frame limiters were introduced. In addition, as you mentioned, the game engine either ties its in-game calculations to processing intervals or simply does not give more work to the CPU, and the CPU cannot order more frames to be generated on its own without the game engine explicitly telling it to. This is the core of my issue. What is the point of buying a 360 Hz monitor if I cannot render my games at that frequency? I do not want my hardware doing more work; I want it doing the same work harder, more times.


As an off-topic aside, I see a similar issue with the gaming industry in general, with Unreal Engine 5 showing a terrible-looking city demo and people being impressed by it, even though the procedurally generated city from a bird's-eye view is bad: repetitious and uninspired. I can't wait for the next generation of carbon-copy worlds generated by AI, especially since last-gen procedural generation turned out so damn great...
The gaming industry just goes for greater and greater visual fidelity rather than focusing on visual quality. We as gamers and as an industry are solely focused on generating more of everything: more visual effects like ray/path tracing, more pixels for ever higher resolutions. At the same time, visual quality is being left in the dust. Anti-aliasing is often terrible in games. Smoothness of movement is terrible; games are often an incomprehensible mess in motion. God forbid I also complain about flat game audio: you can't tell the direction and distance of a sound without a visual cue unless you buy dedicated hardware for it. There is a lot I could complain about in games, and FPS was just the one thing I could not understand: why my GPU just can't render more frames for better visual quality. Your answer gave me a lot of reasons why it happens like this, and this thread has put me on the right track for where to look for more information on this question.
 

Calicifer

Prominent
Apr 1, 2022
31
2
535
Older games need fixes...they are coded to work well on old hardware.
https://www.nexusmods.com/newvegas/mods/66537


Thank you. I installed this fix. It removed the stuttering, increased my FPS to up to 140 frames per second, and significantly raised my 1% lows to 30-40 FPS. The game actually maxes out my GPU now while I'm in the wasteland. However, I'm still disappointed, because in less demanding areas my frames do not go higher even though my GPU has more headroom. Even with this mod I can't fully use a 144 Hz monitor, because the game does not seem to want to push more frames even when I'm in less demanding areas.

I used a performance fix previously, but such mods generally tend to be underwhelming. My previous performance fixer introduced instability that caused the game to crash from time to time, which is why I disabled it altogether. This one seems to be stable and does what it advertises well.
 

Calicifer

Prominent
Apr 1, 2022
31
2
535
Your logic is flawed from the get-go, due mainly to a misunderstanding of 'usage', which is a common issue.

Usage is not how much of the CPU/GPU is used up, but how much of its strength the work actually needs.

Think of it this way.
You put a nail into a plaster wall. You will use every single muscle in your hand to hold the hammer firmly, and every single muscle in your arm to swing the hammer under control. But you will not use all the strength those muscles have. That's usage.

In CPU terms, that means there will be periods of non-use: gaps in the code, threads not used, bandwidth not fully saturated, and so on. That has nothing to do with whether the CPU/GPU is 'working at 100%'; it just isn't using its full strength because there simply is no need.

CSGO is a 2-thread game. That's all the code allows for; there's no spillover. So with a quad-core CPU you'll never see 100% usage across the entire CPU. You might see 100% usage on 2 cores and some percentage on the other 2 cores, as they'll be dealing with Windows, the network, Discord, TeamSpeak, and the general background stuff a PC deals with.


I tracked individual cores and could not detect any of them maxing out. It seems the game engine itself has issues, because with mods I get a lot more FPS and the GPU is better utilized. Previously I could go to the most taxing areas in the game and my GPU would work at 60%, or turn every setting down to the bare minimum and the game would still perform similarly. Now, with the mod, my GPU is better utilized, and in the most demanding areas it actually hits full load! It is amazing and exactly what I wanted to see.

Also, I do not mean that I want my CPU and GPU to do more work from the game engine; that would only make the game itself run faster. What I want is my GPU doing the same work harder. That is, the CPU sends a command to render a frame; instead of the GPU rendering that frame once, giving me 1 FPS, I want the GPU to render that frame 360 times, giving me 360 FPS.

Screw it! I want to run Minesweeper at over 9000 frames! I want the option to melt my GPU running Minesweeper if that is what I want. I miss the ability to set how hard my GPU works in a game. Maximum FPS seems to be limited solely by the game engine, and I have started to associate high frame rates with how technically well made a game is.
 
Instead of the GPU rendering that frame once, giving me 1 FPS, I want the GPU to render that frame 360 times, giving me 360 FPS.
You don't want that, because you would be looking at the exact same frame, without any change, for 6 seconds straight.
If the game can refresh 360 times a second and give you 360 different frames in that second, then that is what you want, but both the GPU and the CPU have to be able to do that.
That's also why mods are restrained to certain levels of performance: New Vegas operates at a certain interval (the tick rate the mod talks about) and you can only multiply that; you can't just use whatever arbitrary performance level you want.
 

Calicifer

Prominent
Apr 1, 2022
31
2
535
Aren't all high frame rates excessive? I simplified my example to make it clear that I know the difference and where my beef lies.

If there is one command from the CPU, then rendering one frame will result in 1 FPS. However, the CPU sends countless states to the GPU. If the GPU renders one frame per second, then all the states and everything the CPU does in that one-frame window are portrayed by just 1 frame per second; a fast-moving object, for example, might not even be captured in that frame, or might be blurry. If the GPU instead renders many frames in that time period, you could see the fast-moving object portrayed precisely with no ghosting. That means your GPU is fast enough to put every change the CPU makes onto the screen. This is the point of a high framerate: increased smoothness and precision.

There is a limit to where people can perceive a difference, around 240 Hz; 360 Hz, on the other hand, is a point where there is too little difference for even experienced people to tell reliably. For me the issue is that all high-end monitors are simply pointless if most games struggle to even reliably hit over 144 FPS. What is the point of buying anything more than a 1440p, 144/160 Hz monitor if most games are hard-capped not to render more frames than that? I also just want to push my hardware harder. If I paid for a 144 Hz monitor, I want all my games to run at 144 Hz. If I'm going to buy a higher refresh rate monitor, I want all my games to run at, let's say, 390 Hz. I paid for all those refresh rates and I want to use every single refresh per second I paid for!
 
Aren't all high frame rates excessive? I simplified my example to make it clear that I know the difference and where my beef lies.

If there is one command from the CPU, then rendering one frame will result in 1 FPS. However, the CPU sends countless states to the GPU. If the GPU renders one frame per second, then all the states and everything the CPU does in that one-frame window are portrayed by just 1 frame per second; a fast-moving object, for example, might not even be captured in that frame, or might be blurry. If the GPU instead renders many frames in that time period, you could see the fast-moving object portrayed precisely with no ghosting. That means your GPU is fast enough to put every change the CPU makes onto the screen. This is the point of a high framerate: increased smoothness and precision.
That's not how it works, though. The movement of that fast-moving object is calculated by the CPU; the CPU has to determine where the object is for every single frame.
If the GPU just rendered the object wherever it wanted, the game would play itself: it wouldn't care what you as the player do, it would just render things as fast as possible no matter what.
The CPU takes all of the player's input and everything that happens in the game world and computes the resulting frame; without that it would just be chaos.

What you're talking about is a thing for movies: look at Smooth Video Project (SVP),
which generates interpolated frames between the existing ones to make motion look smoother.
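For completeness, the usual engine-side version of this idea is to interpolate between the last two simulation states when presenting a frame, so the GPU can show more frames than the CPU ticks the game world. A minimal sketch of the general technique (not something New Vegas specifically does):

```cpp
// Interpolating between two simulation states to present in-between frames.
#include <cstdio>

struct State { double x; };   // position of one object along one axis

// alpha = how far we are between the previous tick and the current one (0..1)
State interpolate(const State& prev, const State& curr, double alpha) {
    return { prev.x + (curr.x - prev.x) * alpha };
}

int main() {
    State prev{0.0}, curr{1.0};   // the object moved 1 m during one logic tick
    // Present 4 frames inside that single tick, each at a slightly different position.
    for (int i = 0; i < 4; ++i) {
        double alpha = i / 4.0;
        std::printf("frame %d: x = %.2f\n", i, interpolate(prev, curr, alpha).x);
    }
}
```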
 

Karadjgne

Titan
Ambassador
I do want my GPU to run at 99% in Minesweeper if I choose so. I turn off all the frame limiting options in my GPU and game settings, but I can never render more frames than the game engine seems comfortable letting me render.
You miss the point entirely.

The CPU gets all the info for a frame. It places every object on its x/y/z axes, calculates every motion, vector and dimension, computes the AI, physics and other stuff, adds all that and more to the frame, then ships that entire frame packet to the GPU. That takes a certain amount of time. The number of times the CPU can do that in 1 second IS your FPS limit.

Doesn't matter if it's 50 FPS or 500 FPS, it is what it is. The GPU receives that packet, buffers it, renders a vertex wireframe, adds all the coloring, shaders, etc., then finish-renders all of that according to the resolution. It does that, or tries to, with every frame the CPU sends it.

Detail settings affect both the CPU and the GPU, depending on the setting.

As far as usage goes, if the GPU only requires 50% of its resources to accomplish its goal, that's all it takes; pushing for 51-99% does absolutely nothing. The CPU and GPU are still working flat out, the clock speeds aren't slowed, the VRAM still runs at its rated speed; the task just doesn't need more.

Since FPS is generated by the CPU, the GPU cannot make FPS higher. Ever. If the CPU only ships 50 packets in 1 second, the GPU can only render those 50 packets per second. Trying to bump usage up is pointless, as the GPU only has 50 packets to deal with, not 9000.

If a Pentium-class CPU and an RTX 2060 could both magically render unlimited FPS, you'd end up with a CPU/GPU combo that could match the work done by a 12900K/3090 just by working harder. It doesn't work that way. It's all about time: it takes a certain amount of time to deal with everything, and stronger, faster CPUs can do it more quickly, so FPS is higher.
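A back-of-the-envelope way to see that point, with invented per-frame times: when the CPU and GPU work on consecutive frames in parallel, the slower of the two stages sets the frame rate, and the faster one simply waits.

```cpp
// The slower pipeline stage sets the frame rate; the times are invented.
#include <algorithm>
#include <cstdio>

double fps(double cpu_ms_per_frame, double gpu_ms_per_frame) {
    return 1000.0 / std::max(cpu_ms_per_frame, gpu_ms_per_frame);
}

int main() {
    std::printf("CPU 20 ms, GPU  5 ms -> %3.0f FPS (CPU-bound, GPU mostly idle)\n", fps(20, 5));
    std::printf("CPU  5 ms, GPU 20 ms -> %3.0f FPS (GPU-bound)\n", fps(5, 20));
    std::printf("CPU  5 ms, GPU  5 ms -> %3.0f FPS (balanced)\n", fps(5, 5));
}
```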
 

Calicifer

Prominent
Apr 1, 2022
31
2
535
I do understand that the CPU is responsible for giving work to the GPU; that is why I described the GPU as rendering frames given to it by the CPU. However, where we differ, or what I do not understand, is this: can the CPU give the GPU more commands than the GPU can feasibly render?

That is, I imagine it is much easier for the CPU to issue commands. Say the CPU runs 1000 commands per second; that is, the game changes its state that many times per second. Maybe a ray of light is different, maybe a cloud has moved, maybe the wind is blowing. All of these elements are fed to the CPU, which slightly changes the actual state of the game even if it is visually identical. This is where the GPU comes in: the CPU sends that game state to the GPU to render. Since rendering is much harder, the GPU cannot keep up with the maximum speed at which the CPU can change the game state. This is where FPS comes in. The game state can change 1000 times in 1 second in this example, so the theoretical maximum is 1000 FPS. However, our crappy GPU can render only 1 frame per second, so the image on screen is refreshed only once every second.

In my mind, the CPU is a lot faster and is not usually the bottleneck in games. It is the game engine that feeds game states into the CPU, which the CPU then sends on to the GPU for rendering. The fact that a game does not render more frames per second is due to the game engine itself being slow and unresponsive, as it simply does not send more meaningful instructions for the CPU to work with. I imagine the whole process like this:
Game engine <= CPU => GPU
  1. The CPU reads instructions from the game engine to move a ball. The engine describes how the ball is moving and what resources to use, but it only describes the ball's movement 10 times per second.
  2. The CPU carries out these instructions and actually moves the ball ten times per second in its own memory. It only moves the ball ten times per second, but it is much faster and could move it 100 times per second.
  3. To show the ball moving on screen, the CPU sends instructions to the GPU to render the information it has in its own memory. It can only send 10 instructions per second, because the game engine did not order 100 changes per second. So the CPU is only 10% utilized.
  4. The GPU can render 50 frames per second, but the CPU only orders 10 pictures. So it makes only 10 frames and sends them to the monitor, and is only 20% utilized.
Here is where I come into the picture. I say I want to see this ball rolling at 100 frames per second instead of 10. Modern systems have a frame cap so the same frame isn't rendered 10 times over; if it were, I would have 100 FPS, but it would look no different from 10 FPS, because I would essentially be rendering the same image 10 times. Since neither my CPU nor my GPU is bottlenecked in this scenario, it is the game engine's fault for not giving the CPU more work, which leaves it under-utilized. That in turn takes work away from my GPU, which can't render more pictures per second because the work is simply not coming from the CPU. And the CPU is not giving more work because it is reading instructions from the piece of paper that is the game engine, which tells it to calculate the ball's position, physics, and everything else only once every 100 ms.

In my view, the problem of my hardware being unable to render more frames lies with the game engine not giving more work to the CPU. Here are a few examples:
  1. The game engine asks for the ball's rolling to be calculated 1000 times per second.
  2. The CPU can only calculate the ball's rolling 100 times per second. We have a CPU bottleneck at 100 FPS.
  3. It can only send 100 render requests per second to the GPU. However, the GPU could render 1000 frames per second, so the CPU is bottlenecking the system and the GPU is working at only 10%.

  1. The game engine asks for the ball's rolling to be calculated 1000 times per second.
  2. The CPU can calculate the ball's rolling 10,000 times per second.
  3. It can only send 1000 render requests per second to the GPU. If the GPU can only render 50 frames per second, we get 50 FPS and the GPU is the bottleneck. If we upgrade to a GPU that can render 10,000 frames per second, that GPU will still work at only 10% and the game will still only reach 1000 FPS.
If I understand correctly, under-utilized hardware is the fault of the game engine for not issuing enough commands within a small time frame. My desire is to have my system running at 99%, pushing out more frames for high refresh rate monitors. However, I can't get more FPS just by rendering the same image multiple times. If my hardware is under-utilised, it shows that my system is more powerful than the game engine, which simply can't keep pace with my system or my expectations.
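For what it's worth, the two numbered examples above boil down to "the frame rate is the lowest of what the engine, the CPU and the GPU can each manage per second, and anything faster than that minimum sits partly idle." A rough sketch of that reasoning, using the invented rates from the examples:

```cpp
// FPS as the minimum of engine tick rate, CPU capacity and GPU capacity.
#include <algorithm>
#include <cstdio>

void scenario(const char* name, double engine_hz, double cpu_hz, double gpu_hz) {
    double fps = std::min({engine_hz, cpu_hz, gpu_hz});
    std::printf("%s: %.0f FPS, CPU at %.0f%%, GPU at %.0f%%\n",
                name, fps, 100.0 * fps / cpu_hz, 100.0 * fps / gpu_hz);
}

int main() {
    scenario("Example 1 (CPU-bound) ", 1000, 100,   1000);   // 100 FPS, GPU at 10%
    scenario("Example 2 (GPU-bound) ", 1000, 10000, 50);     //  50 FPS
    scenario("Example 2, faster GPU ", 1000, 10000, 10000);  // engine caps it at 1000 FPS
}
```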
 