Question: 3090 underperforming, ~50% usage in games

Alex_Greene

Reputable
Apr 30, 2020
39
1
4,535
Computer specs:
Ryzen 7 3800X
EVGA RTX 3090 FTW3
2560x1440 144Hz monitor
ASUS TUF Gaming X570-Plus
32GB (2x16GB) 3200MHz RAM
Corsair RM1000x PSU
Windows 10

I recently upgraded from a 2080 Super to a 3090. For a little while the card ran at around 90% usage and performed very well, but it now fails to exceed 50-60% usage in any game, and likewise in NVIDIA's automatic performance tuner scan, which normally pushes cards to 99% usage.

I've been told that the Ryzen 7 3800X slightly bottlenecks the 3090, but I hadn't seen much of an issue during my time using it. My CPU usage doesn't get very high when gaming, and temperatures stay relatively cool (I use an air cooler, a be quiet! Dark Rock Pro 4).

I installed EVGA Precision X1 after installing the new card to attempt overclocking, which also installed firmware for the LEDs, but I later decided to uninstall the program in favor of NVIDIA's built-in overclocking tool.

I also recently flashed my motherboard's BIOS because of AMD motherboard incompatibilities with 30-series GPUs that affected USB devices, which fixed that issue.

I noticed shortly afterward that GPU usage stopped passing 50-60% in most games, and titles such as Destiny 2 (which previously ran at 100+ fps) were running at lower frame rates at times.

I completely reinstalled my GPU drivers, using the Display Driver Uninstaller (DDU) tool in Safe Mode to remove them, and the issue is still occurring.

I'd appreciate any help, and I'd like to refrain from having to reinstall Windows. It would be a major hassle in my current situation.
 

Phaaze88

Titan
Ambassador
GPU-Z, sensors tab. Stretch the window so you can see all parameters without having to scroll down.
If you have a secondary monitor, run GPU-Z on it. If not, run the games in windowed/borderless-windowed mode, adjust that window's size, and position the GPU-Z window on the left or right of your screen so you can easily examine the sensors while playing.
Like this, GPU-Z should allow you to check:
1) Core clock, memory clock, board power draw, GPU core power draw, GPU voltage, +12V rail voltage, VRAM use...

2) What might be holding back GPU core boost clocks - if that's even occurring in any severe manner.
The PerfCap Reason graph is quite useful for that.

3) In case of a thermal issue:
GPU core maxes out at 91C, though GPU Boost targets 83C.
GPU hot spot maxes out at 110+C.
Memory junction maxes out at 105C.
Brief spikes to high temperatures are less concerning than actually sitting at them.


If nothing out of the ordinary is observed, it might not be the GPU; it often gets mistaken for the culprit when it's just a victim in waiting. It is the last step before images reach your screen, after all.
If there is a riser device involved, try without it.
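GPU-Z can also write the sensors tab to a CSV file ("Log to file"), which makes it easy to check peaks after a gaming session instead of watching live. A minimal sketch of that check in Python, using the thermal thresholds above - note the column names here are hypothetical stand-ins, and a real GPU-Z log names its columns differently:

```python
import csv
import io

# Hypothetical excerpt of a GPU-Z sensor log; a real log comes from
# "Log to file" in the sensors tab and uses different column headers.
sample_log = """GPU Temperature [C],Hot Spot [C],Memory Temperature [C]
72,86,84
79,95,96
81,99,98
"""

# Thresholds from the post above: GPU Boost target, hot spot max, memory junction max.
LIMITS = {
    "GPU Temperature [C]": 83,
    "Hot Spot [C]": 110,
    "Memory Temperature [C]": 105,
}

rows = list(csv.DictReader(io.StringIO(sample_log)))
for col, limit in LIMITS.items():
    peak = max(float(r[col]) for r in rows)
    status = "OVER" if peak >= limit else "ok"
    print(f"{col}: peak {peak:.0f}C, limit {limit}C -> {status}")
```

On this sample data every peak sits under its limit, matching the "nothing out of the ordinary" case.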
 
Usually, when a video card sits at 60% usage, it means the processor is holding the GPU back. If you want to test this, enable DSR in the NVIDIA Control Panel and choose 2x or 4x; GPU usage should climb to 100% for sure in 4x mode.
EDIT: You already had one sign that the CPU was holding you back: the old card maxing out at 90% usage. If you are limited by the GPU, usage should always sit at 99% or around that value until you upgrade the GPU to something faster. I think your old build was very balanced; with the GPU upgrade you would have needed a 5800X or 5800X3D to be balanced again.
The second option is to overclock the 3800X as far as it will go.
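For reference, DSR factors multiply the total pixel count, so each axis scales by the square root of the factor. A quick sketch of what a 2560x1440 panel would be asked to render (NVIDIA may round the final values slightly):

```python
import math

def dsr_resolution(width, height, factor):
    """DSR factors scale total pixels, so each axis scales by sqrt(factor)."""
    scale = math.sqrt(factor)
    return round(width * scale), round(height * scale)

native = (2560, 1440)
for factor in (2.00, 2.25, 4.00):
    print(f"{factor}x ->", dsr_resolution(*native, factor))
# 4.00x renders at 5120x2880, i.e. four times the pixels of native 1440p.
```

At 4x the GPU is pushing 4x the pixels, which is why a CPU-limited card suddenly hits full utilization.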
 

Alex_Greene

Thermals seem good across the board; it doesn't really climb above 85 Celsius on the hotter sensors (memory and hot spot). CPU usage never goes very high while the GPU is being used (no higher than games normally demand of it), and running FurMark with uncapped FPS at 2560x1440 does hit around 97% GPU usage with an average of 250 FPS or so. The CPU also seems generally unaffected. I can't tell if running a 2560x1440 monitor at 144 fps just demands less from my hardware because it doesn't need to draw more, or if there's something else I'm missing.
 

Phaaze88

CPU utilization should not be read the same way as GPU, RAM, or storage utilization. The cores should be looked at individually; taken as a whole, the figure can and will be entirely misleading.
Also, leave FurMark alone. GPU cooler testing is all it's good for, and it sounds like the FTW3 cooler is doing great.
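To illustrate why the averaged figure misleads, here is a toy snapshot of per-thread utilization on an 8-core/16-thread chip with one pegged game thread - the numbers are made up purely for illustration:

```python
# Hypothetical per-logical-processor utilization snapshot (e.g. from Task
# Manager's per-core view) on an 8c/16t CPU: one main game thread is pegged.
cores = [98, 9, 7, 12, 6, 8, 5, 10, 7, 6, 9, 8, 11, 7, 6, 9]

overall = sum(cores) / len(cores)  # the single number most people look at
busiest = max(cores)               # the thread that actually gates the GPU

print(f"overall: {overall:.1f}%  busiest thread: {busiest}%")
# The average (~14%) looks idle even though the main thread is saturated -
# exactly the single-thread bottleneck that caps GPU usage.
```

This is why "my CPU usage doesn't get very high" can still hide a CPU limit.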

What about PerfCap Reason? There's usually at least one reason keeping the GPU from boosting higher; that's normal.
How's the core clock - is it holding over 1800MHz easily?
Is the memory clock pretty steady?
Is there some spare VRAM available?
 

Alex_Greene

PerfCap mentioned Voltage Reliability and SysPower at some points, which is odd, because I upgraded from an RM850x to an RM1000x specifically for this GPU. I have the power maximum set to 115% and the voltage maximum set to 20% (the baseline was 0%). I'm using three separate PCIe cables for the 3090; should I use two cables and use the daisy-chained connector for the third plug instead?

The GPU clock consistently sits at around 1900MHz (give or take 75MHz), and the memory clock sits close to 10,000MHz by default.
 

Phaaze88

Reason: Power refers to the GPU's board power limit, not your PSU.
https://www.techpowerup.com/vgabios/226290/evga-rtx3090-24576-200908-1
That card has a default limit of 420W, or 450W with the power limit slider maxed. There's also a 500W BIOS. The GPU is bumping into the 420W and 450W limits, and so won't pursue another 15MHz or so.

Reason: Voltage Reliability means the card can't boost further due to the limits of the GPU core; the silicon is not stable at the next voltage bin.

The voltage slider doesn't do anything. It's a 'suggestion' to the GPU that it's allowed to use more voltage for higher boost, but the boost algorithm pretty much ignores it.

No, stick to the 3 separate cables.

Core and memory clocks are great... I'm not seeing anything suggesting the GPU is at fault, sorry.
 

Alex_Greene

I appreciate the help regardless - thank you for taking time out of your day. If anything, it's helped me understand and learn a few things I should be aware of in the future.
 

Alex_Greene

Just installed NVIDIA's new drivers (the ones released yesterday); Red Dead Redemption 2 now hits 94% GPU usage in-game and sits at 100+ FPS consistently. Seems the recent NVIDIA drivers have just been utter garbage.