Question: New GPU but no FPS improvement?

I recently upgraded from a 3060 Ti to a 4080, but I feel zero difference in FPS; it might even be worse. I also added more RAM, which should make my games run better, but while everyone else gets 550 FPS, I average 220 in Fortnite, for example.
CPU: i5-12600K
RAM: 24GB Kingston HyperX Fury
PSU: 2B 850W 80+ Gold

There is a pretty critical piece you are missing here. Those benchmarks were run at "ultra". GPU processing power buys fidelity; the CPU is what sets the frame rate. Having a more powerful GPU won't increase the frame rate at base settings. It only shows up as the graphics settings are increased.

Yes, they were run at Ultra. That's beside the point. In general, a GPU upgrade will give a boost in FPS. It's pretty clear!

If I upgrade my GPU from my 3060 Ti to a 3080, I will get more FPS. How much more will depend on some variables, but to say there won't be an increase is incorrect, IMO.
 
What is your GPU utilisation during gaming? That will tell you whether the GPU is the bottleneck. A GPU running at 95%+ is working flat out, and a GPU upgrade would raise FPS; a GPU running lower than that means the CPU side is the bottleneck. That doesn't necessarily mean the CPU itself: RAM, storage, kernel drivers, system settings, BIOS settings, background apps/services... there are tons of ways the CPU can end up bottlenecked by something around it.

But as of now you haven't told us your GPU usage during gaming.
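If you'd rather log a number than eyeball Task Manager, on an NVIDIA card one option is the nvidia-smi tool that ships with the GeForce driver. A minimal sketch, printing GPU and VRAM utilisation once per second while the game runs (paste it into a PowerShell window):

nvidia-smi --query-gpu=utilization.gpu,utilization.memory --format=csv -l 1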
 

jasonf2
Quote: "In general, a GPU upgrade will give a boost in FPS... to say there won't be an increase is incorrect, IMO."
That is not how it really works. Your CPU handles a significant amount of the heavy lifting in a video game, including preparing frames to be rendered by the GPU. So if you take your game settings to the lowest possible values, you will see your rig's maximum frame rate for that game at a set resolution. Assuming you have a decent GPU (not 10-year-old integrated graphics that could still be limiting frame rate), that lowest-fidelity frame rate is being limited primarily by the CPU. From there, if you set the game to "ultra" with everything enabled, the difference in frame rate is mostly GPU. Because GPUs fill out the fidelity in a frame and CPUs pretty much determine the maximum rate being fed to the GPU, you won't see a major difference in base frame rate between a 3060 Ti and a 3080 at low resolution and low settings (let's say 720p at lowest settings). There will be some, just due to the 3080's reduced memory latency, but it won't be remarkable. Where you will see a huge difference is when you start pushing high settings at 4K: the 3080, because of its increased memory and raw horsepower, will kill the 3060 Ti in pretty much all metrics, because both cards are being bogged down.

Good benchmarks are run on a single machine, with the cards being switched out and everything else kept the same. The huge problem with UserBenchmark is that there is no benchmark machine. So you are seeing everything, weighted towards heavily overclocked beast machines with the best processors possible and OS tweaks. In the real world that doesn't necessarily represent what you are going to get when you are running a mid-range six-core processor that maxes out at 3.7GHz stock. It is pretty obvious from your prior input that frame rate and latency are both very important to you. If you are already running everything at the lowest settings, at a backed-down resolution, to under-tax your current GPU, you will not see a magical increase in frame rate by upgrading your GPU at those settings. Some, yes, but not a crazy percentage, unless you are running 10-year-old Intel integrated graphics (which would still be taxed at those settings). You will, however, be able to move your settings up a few notches with no appreciable loss in frame rate. If you want a higher maximum frame rate, you will need to upgrade your CPU to something with better IPC, a higher clock frequency, or preferably both. This is why an i9-13900K @ 5.8GHz boost currently leads the pack for gaming. It isn't about core count; it is the IPC and clock speed.
 
Quote: "Good benchmarks are run on a single machine, with the cards being switched out and everything else kept the same."

This is precisely what I've been saying. When all things are equal (reviewer test bed, same CPU, etc.) there is an increase in FPS, to the levels in the chart above. To say otherwise is misleading.

I would agree that at bare levels (low in-game settings etc.) the increase is much more nuanced. But there is still an improvement in going from a 3060 Ti to a 3080. The difference is not only down to reduced memory latency.

Again, I'll give my own example. I have a 5600X and a 3060 Ti. If I upgraded, I would expect an increase in FPS at a given resolution, with fixed in-game settings to compare against. I can pretty much guarantee there will be an increase in FPS.

By your logic, your assertions are correct only in certain scenarios (i.e. low settings at 720p). But outside of that, with any eye candy on, or an increase in resolution, the opposite is true: the new GPU should yield more FPS. Yes/no?

Quote: "The huge problem with UserBenchmark is that there is no benchmark machine... In the real world that doesn't necessarily represent what you are going to get when you are running a mid-range six-core processor that maxes out at 3.7GHz stock."

I'm 100% with you on UBM. As I've already mentioned, it's useful at best as a comparison, and it can point out some simple stuff to use as fixes. As one of the other posters said, whether or not the memory is running at XMP/optimum. Yes, you can find this out in other ways, but the point is that UBM is not completely useless.

The OP's CPU is not a six-core CPU running at 3.7GHz. It's more often than not running at much faster boost speeds. And with that, on top of a GPU upgrade, there should be an increase in FPS.

Now, just to be clear, I'm not for a moment recommending the OP upgrade their CPU to get more FPS. I'm just stating things as I see them, and as the charts in multiple reviews of many GPUs make clear: with all things being equal, there should be an increase in FPS going from one GPU to another with more processing power.

Anyway, we really are digressing from the topic, and this should be focused back on a solution to the OP's issue, rather than on a discussion which neither of us is ready to give up on :)
 
Intel banned UserBenchmark from their own forums because the entire thing is fraudulent. It cannot be used to ascertain anything of value at all, because the "algorithm" is intentionally broken. It has been a public joke for going on 5 years now, and no serious person worth their salt would ever recommend anyone use it in any diagnostic sense, for any reason whatsoever. This is not a personal opinion; it is an incredibly well-known fact. There are plenty of other useful tools that don't actively support a group of people I wouldn't trust to flip a light switch.
Please link the single piece of free software you're talking about. It needs to show all the components of the PC, including motherboard, RAM, storage devices, CPU, and BIOS version. It also needs to show CPU frequency under load as well as RAM speed. It should also include the OS version and what percentage of CPU is being used in the background.
Oh, and the results must be shareable, without a download or special software.
 
Quote: "Please link the single piece of free software you're talking about..."
To some extent, 3DMark.
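For what it's worth, a fair chunk of that inventory (board, BIOS version, CPU, RAM speed, drives, OS) is exposed by Windows' built-in WMI classes, with no download needed. A rough PowerShell sketch covering the inventory piece, not the benchmarking:

# Hardware/OS inventory from built-in WMI classes
Get-CimInstance Win32_BaseBoard | Select-Object Manufacturer, Product
Get-CimInstance Win32_BIOS | Select-Object SMBIOSBIOSVersion
Get-CimInstance Win32_Processor | Select-Object Name, MaxClockSpeed, LoadPercentage
Get-CimInstance Win32_PhysicalMemory | Select-Object Manufacturer, Capacity, Speed
Get-CimInstance Win32_DiskDrive | Select-Object Model, Size
Get-CimInstance Win32_OperatingSystem | Select-Object Caption, Version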
 

jasonf2
Quote: "When all things are equal (reviewer test bed, same CPU, etc.) there is an increase in FPS... with all things being equal there should be an increase in FPS, going from one GPU to another with more processing power."
Any time you significantly increase the performance of an individual component, you are going to see some performance gain. If the new component is resource-starved, it won't be performing to its fullest potential. So if your rig isn't close to pushing a 3090 and you throw a 4090 into it, ceteris paribus, you should not expect a doubling of performance, even though the card is roughly twice as powerful. With that being said, I would expect a few percent improvement even with the CPU bottleneck, just because the new card is faster. In loads where the card is pushed, though, like high-resolution rendering with ray tracing enabled, the 4090 will shine. And even on an older CPU, software like Cyberpunk 2077, being stupidly resource-intensive, will show more of a performance gain than something like CS:GO. It is all relative to how hard the software is pushing the individual hardware components. We are both saying the same thing.

Back to the actual thread, though. The big thing I see here is the RAM configuration. Get it back to a matched pair from the board's QVL with good timings. The mismatch has the biggest probability of limiting the RAM clock, and thus CPU performance, which relates directly to frame rate.
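If you want to see exactly which sticks are installed (mismatched part numbers, capacities or speeds show up straight away), one built-in PowerShell one-liner does it:

# List each installed DIMM; mismatched kits are obvious here
Get-CimInstance Win32_PhysicalMemory | Select-Object BankLabel, PartNumber, Capacity, Speed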
 
I'm not quite following you here. Yes, the CPU plays a part in how much FPS can be output by sending as much data (pre-rendered frames) to the GPU as possible, but to suggest the GPU (or a GPU upgrade) has nothing to do with that, or doesn't increase FPS output by itself, is a little misleading.

With all things being equal, in this case the OP's 12600K/RAM/mobo etc., he should of course expect a boost in FPS by switching from the 3060 Ti to the 4080. There isn't a game out there that will max out a 12600K at 100%, so that's not really a factor. What is a factor is the strength of the GPU.

Here's an example of why:

https://imgur.com/VLC87v7


If what you said were true, there would be no increase in FPS output at 1080p where only the GPU is changed out, and every single GPU review out there would be wrong.

You can see clearly from the review (on Tom's) that, with all things being equal, a higher-tier GPU will give more FPS when the CPU remains the same.

I would agree that there can be artificial limits (in-game frame caps, monitor resolution, etc.) that could impact the results of adding a more performant GPU, but there will still be an increase in FPS nonetheless.

There must be something else at play.

To the OP, what Windows profile are you running?

Also, is this the PSU you have? 2B (PW005) Ecstasy Gaming Power Supply 850W 80plus Gold, Full Voltage, Full Modular | 1800 EGP (sigma-computer.com)

Whilst this PSU "theoretically" has enough wattage, I understand these units are cheap and not particularly good. Before continuing to run your daily system with that junker, I'd get a decent unit, like a Corsair RMx 1000W, to let your system breathe, backed by a quality PSU (which is the heart of your system).
OK, I will. I appreciate that.
 
Okie Lucy, I 'splain. 😅

You click on the game's start .exe. The CPU sends a query to storage and asks for the info, which gets moved to RAM. The CPU uses that info, with stipulations from the in-game detail settings, to create a list of instructions that includes all the data on objects, dimensions, vectors, AI computations, etc. that make up the entire frame. The number of frames the CPU can put together in one second is the maximum possible FPS.

Those data packets are streamed to the GPU. The GPU reads those instructions, creates a wireframe, pre-renders it according to the detail settings, adds all the pertinent data such as colors, lighting etc., then final-renders it at the output resolution. The number of times per second the GPU can do that is the FPS you see.

If the CPU can send 500fps but the GPU at ultra can only render 250fps, that's considered a GPU bottleneck; lowering the detail settings to low may allow the GPU to render 400fps. The GPU doesn't "increase" FPS as such, it just allows more at a lowered setting.

If the CPU can only send 100fps, at ultra the same GPU can easily render all 100fps, but lowering the detail levels will have little impact on the results; only those details that are CPU-bound will raise the FPS slightly. So even at low settings you are only getting 125fps, even though the GPU is capable of 400fps. That's considered a CPU bottleneck.

The GPU can only render what the CPU sends. It can't increase FPS at all; that number is set by the CPU. The GPU only allows more or fewer of those frames to be rendered, according to the detail levels and resolution.

The OP has a 12600K. It's going to set a limit on FPS, which the OP is stating as closer to 500-600fps. That's at low details; ultra details put it closer to 220fps with bloom and all the other lighting conditions. That's what the GPU gets. It doesn't matter if it's a 3060 Ti or a 4080, the GPU gets 220fps. You could run 4090 SLI; it doesn't change the fact the GPU only has 220fps to work with. Put that CPU with a 750 Ti and it still gets 220fps, but the 750 Ti can only render 60fps, so you now get 60fps on screen.

Most assume that a stronger GPU upgrade will raise FPS, but that's only true if the CPU is actually sending more frames than the original GPU could render. So if the CPU was sending 500fps and the 3060 Ti could only render 220fps, then a 4080 should be able to render a higher amount, like 400fps or more. But if the CPU is only sending 220fps, the 3060 Ti is capable of 250fps, and the 4080 is capable of 400fps+, you are not going to see any FPS change, because both cards received only 220fps.

The difference between the cards is that at 4K the 4080 would still be rendering closer to 220fps, whereas the considerably weaker 3060 Ti would be struggling at 120fps, regardless of whether the CPU sent 220fps or 500fps.
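If it helps, the whole argument reduces to a min(): the FPS you see is the smaller of what the CPU can prepare and what the GPU can render. A toy PowerShell sketch using the assumed (not measured) numbers from above:

$cpuFps = 220      # frames/sec the CPU can prepare at these settings
$fps3060Ti = 250   # frames/sec a 3060 Ti could render here
$fps4080 = 400     # frames/sec a 4080 could render here
[Math]::Min($cpuFps, $fps3060Ti)   # 220 on screen
[Math]::Min($cpuFps, $fps4080)     # still 220 on screen - no visible gain from the upgrade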

Hope that helps a lil.
Thank you. So, should a 12th-gen i9 be enough?
 
Quote: "What is your GPU utilisation during gaming? That will tell you whether the GPU is the bottleneck..."
Like 10 percent, I think.
 
Quote: "What is your GPU utilisation during gaming?..."
Exactly 40 percent CPU usage and 15 percent GPU.
 
Guys, can anyone summarize what I should do? For example: BIOS, new CPU, more RAM, anything, because I'm willing to do anything.

Hey there,

Little late getting back to you, so sorry about that.

You've been given a lot of potential fixes from everyone.

To summarise though, here ya go:

  1. Update the BIOS, ensuring you clear CMOS as per the manufacturer's guide. This is in your motherboard manual.
  2. Use DDU to uninstall your GPU driver and do a fresh, clean install (this can be done by choosing the custom install and selecting the "clean install" option).
  3. Ensure all system drivers are up to date.
  4. If you are OC'ing any components apart from RAM (XMP), then stop. Run everything at stock.
  5. If you use anything like Intel XTU or Throttlestop, stop using those too. You need your system running at stock (apart from XMP) so we can have a baseline to work with.
  6. Check for throttling using HWiNFO (with sensors).
  7. Run standardised bench tests: Cinebench R23 for the CPU, Superposition/FurMark for the GPU.
  8. Ensure you are running the right power plan. Type the following into PowerShell:

Ultimate Performance: powercfg -duplicatescheme e9a42b02-d5df-448d-aa00-03f14749eb61
High Performance: powercfg -duplicatescheme 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
Balanced: powercfg -duplicatescheme 381b4222-f694-41f0-9685-ff5bb260df2e
Power saver: powercfg -duplicatescheme a1841308-3541-4fab-bc81-f71556f20b4a

Using these commands will overwrite any changes you may have made to these power plans in the past; it will set them back to default.
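Each duplicate command prints the GUID of the plan it creates. To confirm which plan is active and to switch over (the GUID below is a placeholder for whatever gets printed):

powercfg /getactivescheme
powercfg /setactive <GUID-printed-by-the-duplicate-command>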

  9. Run system checks/malware/antivirus scans and make sure the system is clean.
  10. Uninstall/reinstall the games that are affected. Verify files on Steam/Origin etc. On a side note, is it only older games (which lean on the CPU more than the GPU) that are affected, or demanding games too?
  11. Consider a Windows reinstall. Hopefully you have your game files on a separate drive, which would make this a much easier process.
  12. As suggested, do not mix and match RAM. This can cause lots of instability.
  13. If all else fails, maybe bring it to a local repair store and get them to test it; maybe swap out the CPU, swap out the GPU.

Some of the other posters may have extras to add in; I just can't think of them now. :) So, further additions and suggestions are welcome.