Question: New GPU but no FPS improvement?

Dec 10, 2022
I recently upgraded from a 3060 Ti to a 4080 but I feel zero difference in FPS; it might even be worse. I also added more RAM, which should make my games run better, but while everyone gets 550 FPS, I get 220 average in Fortnite, for example.
CPU: i5-12600K
RAM: 24GB Kingston HyperX Fury
PSU: 2B 850W 80+ Gold

I recently upgraded from a 3060 Ti to a 4080 but I feel zero difference in FPS; it might even be worse. I also added more RAM, which should make my games run better, but while everyone gets 550 FPS, I get 220 average in Fortnite, for example.
CPU: i5-12600K
RAM: 24GB Kingston HyperX Fury
PSU: 2B 850W 80+ Gold
24GB means you have a mish-mash of RAM. What exact RAM do you have?

Run UserBenchmark and share the public link to the results.

A game like Fortnite is not heavy on the GPU; the FPS is more likely dictated by the CPU/RAM. That is probably why a GPU upgrade did not improve FPS.
 
Dec 10, 2022
24GB means you have a mish-mash of RAM. What exact RAM do you have?

Run UserBenchmark and share the public link to the results.

A game like Fortnite is not heavy on the GPU; the FPS is more likely dictated by the CPU/RAM. That is probably why a GPU upgrade did not improve FPS.
A new GPU should improve my FPS though, considering I have a good CPU and RAM.
https://www.userbenchmark.com/UserRun/57338825
Here is the benchmark.
 
Dec 10, 2022
A new GPU should improve my FPS though, considering I have a good CPU and RAM.
Not if there is a problem. It probably wasn’t the 3060Ti that was the limiting factor in that game.

The first thing I'd do is take out 8GB of RAM. With 3 sticks of RAM it cannot run fully in dual-channel mode; at best it will run in flex mode, at worst in single channel. Either could limit performance. You want 2 sticks installed in slots A2 and B2. Then enable XMP, ensure the RAM is running at 3200MHz, and run the game again.
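If it helps, here is a quick way to confirm what the installed sticks are actually doing on Windows. This is just a sketch (it assumes Python 3 is installed and shells out to PowerShell's Win32_PhysicalMemory class); CPU-Z or Task Manager's Memory tab will show the same information.

```python
# Sketch: list installed RAM sticks with their slot and configured speed (Windows only).
# Assumes Python 3 is available; it simply shells out to PowerShell / WMI.
import subprocess

query = (
    "Get-CimInstance Win32_PhysicalMemory | "
    "Select-Object DeviceLocator, Manufacturer, Capacity, Speed, ConfiguredClockSpeed | "
    "Format-Table -AutoSize"
)

# Two sticks in A2/B2 and a ConfiguredClockSpeed of 3200 is what you want to see after enabling XMP.
print(subprocess.run(["powershell", "-Command", query],
                     capture_output=True, text=True).stdout)
```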
 
Dec 10, 2022
Not if there is a problem. It probably wasn’t the 3060Ti that was the limiting factor in that game.

The first thing I'd do is take out 8GB of RAM. With 3 sticks of RAM it cannot run fully in dual-channel mode; at best it will run in flex mode, at worst in single channel. Either could limit performance. You want 2 sticks installed in slots A2 and B2. Then enable XMP, ensure the RAM is running at 3200MHz, and run the game again.
OK, let me do that real quick.
 
Dec 10, 2022
Not if there is a problem. It probably wasn’t the 3060Ti that was the limiting factor in that game.

The first thing I'd do is take out 8GB of RAM. With 3 sticks of RAM it cannot run fully in dual-channel mode; at best it will run in flex mode, at worst in single channel. Either could limit performance. You want 2 sticks installed in slots A2 and B2. Then enable XMP, ensure the RAM is running at 3200MHz, and run the game again.
That didn't fix it. I get an unstable 240 FPS in normal games, which is way too low, and when I turn quickly it drops to around 120 and then comes back up.
 
Your UBM run shows 32% CPU background usage.
Your benchmarks and games will never be quite right with a third of your CPU being used for something else.
Check Startup in Task Manager and disable everything that isn't needed: game launchers, updaters, other programs, etc.
Also, most of the performance benchmarks on YouTube are misleading; they are run on a system with nothing but Windows and the game installed, with every performance tweak applied.

Is there a particular reason you are running a non-standard resolution (1720x1080) on the monitor?
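If Task Manager isn't showing an obvious culprit, a rough sketch like the one below (assumes Python with the third-party psutil package, installed via `pip install psutil`) will list the biggest background CPU users so you can see what is eating that 32%.

```python
# Sketch: sample per-process CPU usage for one second and print the top ten consumers.
import time
import psutil

# Prime the per-process CPU counters, then sample over one second.
for p in psutil.process_iter():
    try:
        p.cpu_percent(None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

time.sleep(1.0)

usage = []
for p in psutil.process_iter(['name']):
    try:
        usage.append((p.cpu_percent(None), p.info['name']))
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

# Show the ten biggest background CPU users (percentages are per logical core).
for pct, name in sorted(usage, reverse=True)[:10]:
    print(f"{pct:5.1f}%  {name}")
```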
 
People need to stop using that Userbenchmark rubbish. It's even been banned on the Intel forums because of how utterly awful it is.
As a benchmark tool it is poor, and I would not use it to compare systems or components when making buying decisions. However, by sharing the public link it can be useful at highlighting what is underperforming and may hint at why. For example, it can show a CPU that isn't boosting, RAM that isn't running at the correct speed or in dual channel, a GPU that isn't performing to expected levels, or CPU resources being hogged by other applications. It often provides insight into the system without needing to ask loads of questions or run multiple other tests.
 
Dec 10, 2022
Your UBM run shows 32% CPU background usage.
Your benchmarks and games will never be quite right with a third of your CPU being used for something else.
Check Startup in Task Manager and disable everything that isn't needed: game launchers, updaters, other programs, etc.
Also, most of the performance benchmarks on YouTube are misleading; they are run on a system with nothing but Windows and the game installed, with every performance tweak applied.

Is there a particular reason you are running a non-standard resolution (1720x1080) on the monitor?
Yes, stretched res helps lower input delay in Fortnite.
 

Karadjgne

FPS is a measure of CPU power, not GPU power. A GPU cannot 'increase' FPS as such; it can only block FPS or allow the full extent of whatever FPS the CPU sends it.

So if you get 220 FPS with a 12600K/3060 Ti and 220 FPS with a 12600K/4080, then the obvious answer is that the CPU is maxed out due to whatever BIOS/Windows/in-game settings you have, or anything else that can affect the CPU's ability to produce higher FPS.

220 FPS is about normal for Fortnite at Ultra on 1080p, whereas 550 FPS is closer to Low settings, as presets can drastically change CPU load with stupid stuff like cloud lighting, ambient occlusion and other such effects.
 
Dec 10, 2022
FPS is a measure of CPU power, not GPU power. A GPU cannot 'increase' FPS as such; it can only block FPS or allow the full extent of whatever FPS the CPU sends it.

So if you get 220 FPS with a 12600K/3060 Ti and 220 FPS with a 12600K/4080, then the obvious answer is that the CPU is maxed out due to whatever BIOS/Windows/in-game settings you have, or anything else that can affect the CPU's ability to produce higher FPS.

220 FPS is about normal for Fortnite at Ultra on 1080p, whereas 550 FPS is closer to Low settings, as presets can drastically change CPU load with stupid stuff like cloud lighting, ambient occlusion and other such effects.
I'm using the lowest settings possible and yet I'm getting an unstable 220.
 
FPS is a measure of CPU power, not GPU power. A GPU cannot 'increase' FPS as such; it can only block FPS or allow the full extent of whatever FPS the CPU sends it.

I'm not quite following you here. Yes, the CPU plays a part in how much FPS can be output by sending as much data (pre-rendered frames) to the GPU as possible, but to suggest the GPU (or a GPU upgrade) has nothing to do with that or doesn't increase FPS output by itself is a little misleading.

With all things being equal, in this case the OP's 12600K/RAM/mobo etc., he should of course expect a boost in FPS by switching from the 3060 Ti to the 4080. There isn't a game out there that will max out a 12600K at 100%, so that's not really a factor. What is a factor is the strength of the GPU.

Here's an example of why:

https://imgur.com/VLC87v7


If what you said was true, there would be no increase in FPS output at 1080p where only the GPU is changed out, and every single review of a GPU out there would be wrong.

You can see clearly from the review (on Tom's) that, with all things being equal, a higher-tier GPU will give more FPS when the CPU remains the same.

I would agree that there can be artificial limits (in-game frame caps, monitor resolution, etc.) that could impact the results of adding a more performant GPU, but there will still be an increase in FPS nonetheless.

There must be something else at play.

To the OP, what Windows profile are you running?

Also, is this the PSU you have? 2B (PW005) Ecstasy Gaming Power Supply 850W 80plus Gold, Full Voltage, Full Modular | 1800 EGP (sigma-computer.com)

Whilst this PSU 'theoretically' has enough wattage, I understand these units are cheap and not particularly good. Before continuing to run your daily system with that junker, I'd get a decent, quality PSU (the heart of your system), like a Corsair RMx 1000W, to let your system breathe.
 
As a benchmark tool it is poor, and I would not use it to compare systems or components when making buying decisions. However, by sharing the public link it can be useful at highlighting what is underperforming and may hint at why. For example, it can show a CPU that isn't boosting, RAM that isn't running at the correct speed or in dual channel, a GPU that isn't performing to expected levels, or CPU resources being hogged by other applications. It often provides insight into the system without needing to ask loads of questions or run multiple other tests.

Intel banned its usage from their own forums because the entire thing is fraudulent. It cannot be used to ascertain anything of value at all, because the "Algorithm" is intentionally broken. It has been a public joke for going on 5 years now, and no serious person worth their salt would ever recommend anyone use it in any diagnostic sense, for any reason whatsoever. This is not a personal opinion; it is an incredibly well-known fact. There are plenty of other useful tools that don't actively support a group of people I wouldn't trust to flip a light switch.
 
Intel banned its usage from their own forums because the entire thing is fraudulent. It cannot be used to ascertain anything of value at all, because the "Algorithm" is intentionally broken. It has been a public joke for going on 5 years now, and no serious person worth their salt would ever recommend anyone use it in any diagnostic sense, for any reason whatsoever. This is not a personal opinion; it is an incredibly well-known fact. There are plenty of other useful tools that don't actively support a group of people I wouldn't trust to flip a light switch.
UBM works fine.....within limits.
It provides some basic info about the machine.
It's quick and it's easy to run.
 
Intel banned its usage from their own forums because the entire thing is fraudulent. It cannot be used to ascertain anything of value at all, because the "Algorithm" is intentionally broken. It has been a public joke for going on 5 years now, and no serious person worth their salt would ever recommend anyone use it in any diagnostic sense, for any reason whatsoever. This is not a personal opinion; it is an incredibly well-known fact. There are plenty of other useful tools that don't actively support a group of people I wouldn't trust to flip a light switch.

It's not 'intentionally' broken. It is, however, skewed towards Intel systems in terms of overall performance.

I also don't agree with UBM being used as some kind of measure of the performance of a particular PC and its specs. It does, as others have pointed out, lend an idea of what might be a quick fix, like RAM running at stock and not XMP, the CPU having active background tasks (which could detract from final scores), or an overlay running, which skews GPU results or even prevents the GPU tests from finishing. So whilst not completely relevant, it can point to some simpler solutions. There is some positive there.
 

Karadjgne

I'm not quite following you here. Yes, the CPU plays a part in how much FPS can be output by sending as much data (pre-rendered frames) to the GPU as possible, but to suggest the GPU (or a GPU upgrade) has nothing to do with that or doesn't increase FPS output by itself is a little misleading.
Okie Lucy, I 'splain. 😅

You click on the game's exe. The CPU sends a query to storage and asks for the info, which gets moved to RAM. The CPU uses the info, with stipulations from the in-game detail settings, to create a list of instructions that includes all the data on objects, dimensions, vectors, AI computations, etc. that make up the entire frame. The number of frames the CPU can put together in 1 second is the maximum possible FPS.

Those data packets are streamed to the GPU. The GPU reads those instructions, creates a wireframe, pre-renders it according to the detail settings, adds all the pertinent data such as colors, lighting etc., then final-renders it according to the output resolution. The number of times per second the GPU can do that is the FPS you see.

If the CPU can send 500 FPS, but the GPU at Ultra can only render 250 FPS, that's considered a GPU bottleneck, where lowering the detail settings to Low may allow the GPU to render 400 FPS. The GPU doesn't 'increase' FPS as such, it just allows more at a lowered setting.

If the CPU can only send 100 FPS, at Ultra the same GPU can easily render all 100 FPS, but lowering the detail levels will have little impact on the results; only those details that are CPU-bound will raise the FPS slightly. So even at Low settings you are only getting 125 FPS, even though the GPU is capable of 400. That's considered a CPU bottleneck.

The GPU can only render what the CPU sends. It can't increase FPS at all; that number is set by the CPU. The GPU only allows more or fewer of those frames to be rendered, according to the detail levels and resolution.

The OP has a 12600K. It's going to set a limit on FPS, which the OP is stating as closer to 500-600 FPS. That's at Low details; Ultra details put it closer to 220 FPS with bloom and all the other lighting conditions. That's what the GPU gets. It doesn't matter if it's a 3060 Ti or a 4080, the GPU gets 220 FPS. You could run two 4090s and it doesn't change the fact the GPU only has 220 FPS to work with. Put that CPU with a 750 Ti and it still gets sent 220 FPS, but the 750 Ti can only render 60 FPS, so you now get 60 FPS on screen.

Most assume that a stronger GPU upgrade will raise FPS, but that's only true if the CPU is actually sending more frames than the original GPU could render. So if the CPU was sending 500 FPS, and the 3060 Ti could only render 220 FPS, then a 4080 should be able to render a higher amount, like 400 FPS or more. But if the CPU is only sending 220 FPS and the 3060 Ti is capable of 250 FPS, and the 4080 is capable of 400+ FPS, it's not going to show any FPS change, because both cards received only 220 FPS.

The difference between the cards is that at 4K the 4080 would still be rendering closer to 220 FPS, whereas the considerably weaker 3060 Ti would be struggling at 120 FPS, regardless of whether the CPU sent 220 FPS or 500 FPS.
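To put that reasoning in one line: in this model the frame rate you see is simply the smaller of the two limits. A toy Python sketch (the numbers are illustrative, taken from the figures discussed above):

```python
def on_screen_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Frames shown per second are limited by the slower of the two stages."""
    return min(cpu_fps, gpu_fps)

# CPU-bound case: the GPU upgrade changes nothing because the CPU only sends 220 FPS.
print(on_screen_fps(cpu_fps=220, gpu_fps=250))  # 3060 Ti -> 220
print(on_screen_fps(cpu_fps=220, gpu_fps=400))  # 4080    -> still 220

# GPU-bound case: the same upgrade clearly helps.
print(on_screen_fps(cpu_fps=500, gpu_fps=250))  # 3060 Ti -> 250
print(on_screen_fps(cpu_fps=500, gpu_fps=400))  # 4080    -> 400
```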

Hope that helps a lil.
 

jasonf2

I'm not quite following you here. Yes, the CPU plays a part in how much FPS can be output by sending as much data (pre-rendered frames) to the GPU as possible, but to suggest the GPU (or a GPU upgrade) has nothing to do with that or doesn't increase FPS output by itself is a little misleading.

With all things being equal, in this case the OP's 12600K/RAM/mobo etc., he should of course expect a boost in FPS by switching from the 3060 Ti to the 4080. There isn't a game out there that will max out a 12600K at 100%, so that's not really a factor. What is a factor is the strength of the GPU.

Here's an example of why:

https://imgur.com/VLC87v7

If what you said was true, there would be no increase in FPS output at 1080p where only the GPU is changed out, and every single review of a GPU out there would be wrong.

You can see clearly from the review (on Tom's) that, with all things being equal, a higher-tier GPU will give more FPS when the CPU remains the same.

I would agree that there can be artificial limits (in-game frame caps, monitor resolution, etc.) that could impact the results of adding a more performant GPU, but there will still be an increase in FPS nonetheless.

There must be something else at play.

To the OP, what Windows profile are you running?

Also, is this the PSU you have? 2B (PW005) Ecstasy Gaming Power Supply 850W 80plus Gold, Full Voltage, Full Modular | 1800 EGP (sigma-computer.com)

Whilst this PSU 'theoretically' has enough wattage, I understand these units are cheap and not particularly good. Before continuing to run your daily system with that junker, I'd get a decent, quality PSU (the heart of your system), like a Corsair RMx 1000W, to let your system breathe.
There is a pretty critical piece you are missing here: those benchmarks were run at Ultra. GPU processing power allows for fidelity; the CPU is what makes the frame rate. Having a more powerful GPU won't increase the frame rate at base settings; the benefit only shows up as the graphics settings are increased.
 

SyCoREAPER

First, stop bickering about UB; it doesn't help the OP. It's crap. End of story.

OP, this is the second similar complaint I've seen in two days, and the thing you both have in common is a 12th Gen Intel CPU.

Try updating the BIOS for the card and the motherboard, on separate boots rather than in the same run, and keep that 3rd stick of RAM out.

Make sure XMP is enabled. Taking a stick out or adding one will kick XMP off on Gigabyte boards.
 
Intel banned its usage from their own forums because the entire thing is fraudulent. It cannot be used to ascertain anything of value at all, because the "Algorithm" is intentionally broken. It has been a public joke for going on 5 years now, and no serious person worth their salt would ever recommend anyone use it in any diagnostic sense, for any reason whatsoever. This is not a personal opinion; it is an incredibly well-known fact. There are plenty of other useful tools that don't actively support a group of people I wouldn't trust to flip a light switch.
I have used it time and time again to quickly spot issues, and I've seen lots of others do the same. I acknowledge its limitations, but it can be used as a basic tool for these types of threads where someone has FPS issues. Sometimes it doesn't help; sometimes it shows a problem without needing to ask lots of questions. If you know how to use it and understand its limitations, it can be a helpful basic tool.
 
Okie Lucy, I 'splain. 😅

You click on the game's exe. The CPU sends a query to storage and asks for the info, which gets moved to RAM. The CPU uses the info, with stipulations from the in-game detail settings, to create a list of instructions that includes all the data on objects, dimensions, vectors, AI computations, etc. that make up the entire frame. The number of frames the CPU can put together in 1 second is the maximum possible FPS.

Those data packets are streamed to the GPU. The GPU reads those instructions, creates a wireframe, pre-renders it according to the detail settings, adds all the pertinent data such as colors, lighting etc., then final-renders it according to the output resolution. The number of times per second the GPU can do that is the FPS you see.

If the CPU can send 500 FPS, but the GPU at Ultra can only render 250 FPS, that's considered a GPU bottleneck, where lowering the detail settings to Low may allow the GPU to render 400 FPS. The GPU doesn't 'increase' FPS as such, it just allows more at a lowered setting.

If the CPU can only send 100 FPS, at Ultra the same GPU can easily render all 100 FPS, but lowering the detail levels will have little impact on the results; only those details that are CPU-bound will raise the FPS slightly. So even at Low settings you are only getting 125 FPS, even though the GPU is capable of 400. That's considered a CPU bottleneck.

The GPU can only render what the CPU sends. It can't increase FPS at all; that number is set by the CPU. The GPU only allows more or fewer of those frames to be rendered, according to the detail levels and resolution.

The OP has a 12600K. It's going to set a limit on FPS, which the OP is stating as closer to 500-600 FPS. That's at Low details; Ultra details put it closer to 220 FPS with bloom and all the other lighting conditions. That's what the GPU gets. It doesn't matter if it's a 3060 Ti or a 4080, the GPU gets 220 FPS. You could run two 4090s and it doesn't change the fact the GPU only has 220 FPS to work with. Put that CPU with a 750 Ti and it still gets sent 220 FPS, but the 750 Ti can only render 60 FPS, so you now get 60 FPS on screen.

Most assume that a stronger GPU upgrade will raise FPS, but that's only true if the CPU is actually sending more frames than the original GPU could render. So if the CPU was sending 500 FPS, and the 3060 Ti could only render 220 FPS, then a 4080 should be able to render a higher amount, like 400 FPS or more. But if the CPU is only sending 220 FPS and the 3060 Ti is capable of 250 FPS, and the 4080 is capable of 400+ FPS, it's not going to show any FPS change, because both cards received only 220 FPS.

The difference between the cards is that at 4K the 4080 would still be rendering closer to 220 FPS, whereas the considerably weaker 3060 Ti would be struggling at 120 FPS, regardless of whether the CPU sent 220 FPS or 500 FPS.

Hope that helps a lil.
Less of the patronizing, thank you! :)

Analogy aside, that doesn't change anything.

It's clear from the slide above that, with all things being the same (as in the reviewer's test bed with the same CPU etc.) to test the GPUs, you can see an increase in FPS from one tier of card to the next.

Going from the 3060 Ti at 129.4 FPS to, let's say, a 3070 Ti at 148.6 FPS: is this an increase or not? Yes, it is. Is the CPU the same for testing each GPU? Yes, it is.
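For reference, a quick back-of-the-envelope calculation using the two figures quoted above (the FPS values come from the chart being discussed):

```python
# Rough uplift between the two cards quoted above (1080p, same CPU).
fps_3060ti = 129.4
fps_3070ti = 148.6
print(f"{(fps_3070ti - fps_3060ti) / fps_3060ti * 100:.1f}% faster")  # ~14.8% faster
```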

The chart doesn't lie.

Anyway, we've gotten slightly off topic here.