Question: GPU, CPU, and NVMe drive all underperforming, could anyone help me troubleshoot? EVGA 3080 Ti, Ryzen 9 5900X, WDS500G1X0E-00AFY0 500GB

Jan 29, 2022
Please keep it to 1 thread.
UserBenchmark link: https://www.userbenchmark.com/UserRun/50058724
System specs:
Operating System
Windows 10 Home 64-bit
CPU
Ryzen 9 5900x
RAM
32.0GB Dual-Channel Unknown @ 1499MHz (22-21-21-50)
Motherboard
ASUSTeK COMPUTER INC. TUF GAMING X570-PRO (WI-FI) (AM4) 38 °C
Graphics
LC27G5xT (2560x1440@144Hz)
4095MB NVIDIA GeForce RTX 3080 Ti (EVGA) 58 °C
Storage
931GB Seagate ST1000DM003-1SB102 (SATA) 34 °C
465GB Western Digital WDS500G1X0E-00AFY0 (Unknown (SSD))

I got an EVGA 3080 Ti and some new RAM yesterday. I've had my NVMe drive and CPU for a while and didn't have any problems playing games on my 2060 Super, but now my FPS is dropping way lower than it was on the 2060 Super. I don't know what's causing everything to underperform, but I need help badly!
Also, I'm confused as to why Speccy reads my RAM speed as 1499MHz while Task Manager says 3000MHz. Could that be a problem as well? Either way, I just need someone to help me make sense of all this, please.
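For what it's worth, those two RAM readings are both correct: DDR4 is Double Data Rate, transferring on both edges of the clock, so Speccy reports the real memory clock while Task Manager reports the effective transfer rate, which is twice that. A one-line sketch of the relationship:

```python
# DDR = Double Data Rate: data moves on both the rising and falling
# edge of the clock, so the effective rate is twice the real clock
# that tools like Speccy report.
def effective_rate_mhz(real_clock_mhz):
    return real_clock_mhz * 2

print(effective_rate_mhz(1499))  # 2998, i.e. the "3000MHz" Task Manager rounds to
```

So the two tools are describing the same sticks, just in different units; that part is not a problem.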
 
They weren't tested nor certified by the manufacturer to be compatible. A packaged 4x 8GB kit was.
Brand doesn't matter. You could have 2 different kits of G.Skill Trident Z with the same speed, timings, and capacity, then look up the OEM with CPU-Z and find that the first kit's chips were made by SK Hynix and the other's by Samsung.
Matching speed and timings only increases the compatibility rate; it does not guarantee stability.
You sign yourself up for the testing and certification when mixing RAM, and these kits crash right away when trying to run 3600MHz. Some mixed kits take a little more effort to dig that out.
The odds of getting this RAM to run at 3600MHz are low.
Wish I would've known that sooner. Everyone told me it should run fine with the motherboard and CPU I have. So 3200MHz it is, then.
 
Ah well, I'm fine with 3200MHz. I'm more concerned with the GPU, so hopefully tomorrow I can buy another 8-pin cable and see if it helps.
 
That's mixing kits even if all 4 RAM sticks are Corsair Vengeance 8GB 3600MHz? They're the same brand, speed, and size.

That doesn't necessarily mean anything. RAM is a binned product, meaning there's no separate factory spitting out 3000MHz, 2666MHz, and 2400MHz RAM; it's basically the same RAM, sold at different tiers depending on how well individual sticks test. They may not even have the same chip manufacturer.

While RAM sticks at the same timings and speed tend to work together, only RAM that's sold in a kit, or packaged together, is guaranteed to (those sticks have been specifically tested as working).

Given that you're trying to pin down an issue, remove two of the sticks.
 
When I tried to use the profile to set all my RAM to 3600MHz, it kept crashing my PC.
What's the part number and version number of the Corsair RAM?
[Attached image: DRAM_label.jpg]
 
The use of a single cable from the PSU, when drawing a high electrical current, can cause a voltage drop at the graphics card's power connectors. It's better to split the electrical current over multiple cables from the PSU, as Seasonic recommends, to lessen the probability of any voltage drop at the graphics card's power connectors.
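To see why splitting the load matters, here's a back-of-the-envelope sketch of the drop. The cable resistance and board power below are assumed round numbers for illustration, not measurements of this build:

```python
# Illustrative only: cable resistance and card power draw are assumed
# round numbers, not measured values for this system.
CABLE_RESISTANCE_OHMS = 0.01   # assumed round-trip resistance of one PCIe power cable
CARD_POWER_W = 350.0           # ballpark 3080 Ti board power
RAIL_VOLTAGE = 12.0

def voltage_drop(num_cables):
    """Ohm's law per cable: V = I * R. Splitting the load divides the
    current per cable, and therefore the drop, by the cable count."""
    current_per_cable = (CARD_POWER_W / RAIL_VOLTAGE) / num_cables
    return current_per_cable * CABLE_RESISTANCE_OHMS

print(round(voltage_drop(1), 3))  # 0.292 -- one daisy-chained cable carries it all
print(round(voltage_drop(3), 3))  # 0.097 -- three separate cables share the current
```

Same total power either way; the per-cable current (and heat, and drop) is what changes.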
 
OK, so update: I bought a 1000W EVGA G+ PSU and plugged everything in correctly, and this time I have 3 separate 8-pin cables connected to my GPU, but it's still underperforming in Destiny 2. I'm not sure what's causing it.
 
I still have no idea what's causing everything to underperform in Destiny 2. I got a 1000W PSU, everything is connected properly, and I updated my chipset drivers, BIOS, and GPU drivers. I just have no clue at all; I don't even know what to check for.

full specs of my build:
Operating System
Windows 10 Home 64-bit
CPU
AMD Ryzen 9 5900x
RAM
32.0GB Dual-Channel Unknown @ 1666MHz (18-22-22-42)
Motherboard
ASUSTeK COMPUTER INC. TUF GAMING X570-PRO (WI-FI) (AM4) 40 °C
Graphics
LC27G5xT (2560x1440@144Hz)
4095MB NVIDIA GeForce RTX 3080 Ti (EVGA) 59 °C
Storage
465GB Western Digital WDS500G1X0E-00AFY0 (Unknown (SSD))

User benchmark link: https://www.userbenchmark.com/UserRun/50085198
 
Always-online game = internet connection becomes a factor.
Is the CPU boosting above 4.2GHz now?
GPU core, GPU hotspot, VRAM... were you able to confirm that it isn't thermal throttling?
It's still stuck at 4.2GHz. And do I check that in Precision X? The CPU and GPU temps looked fine; I'll check again really quick.
 
Even when I went back into the BIOS and changed it from ASUS Optimal back to normal, the CPU is still at 4.2GHz, but only 44-46°C. The GPU is at 62°C, 925mV, 1800MHz, and using 32-36% power. The FPS in Destiny 2 goes as low as 70-80, jumps right back up to 120-145, then right back down, constantly. I don't know how to check VRAM usage.
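One way to check VRAM usage is the driver's own `nvidia-smi` tool (installed with the NVIDIA driver; monitoring overlays like Precision X1 or MSI Afterburner can also graph it in-game). A hedged sketch, assuming `nvidia-smi` is on PATH; the CSV parsing is split out so it can be shown on a sample line:

```python
# Sketch: read used/total VRAM via nvidia-smi's CSV query mode.
# Assumes an NVIDIA driver is installed, which puts nvidia-smi on PATH.
import subprocess

def parse_vram_line(line):
    """Parse one CSV line like '4213, 12288' into (used_mib, total_mib)."""
    used, total = (int(field.strip()) for field in line.split(","))
    return used, total

def query_vram():
    """Ask the driver for memory.used and memory.total (MiB, no units)."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_vram_line(out.splitlines()[0])

# Example of the parsing on a sample output line:
print(parse_vram_line("4213, 12288"))  # (4213, 12288)
```

Running `query_vram()` while the game is loaded tells you whether the 12GB card is anywhere near full.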
 
Ryzen boosts according to voltages and temps. A single core, or maybe 2-3 cores, might see the full 4.80GHz boost if temps and voltages are good, but for multi/all-core loads like those in Cinebench, Prime95, etc., expect closer to 4.20GHz, with the pivotal optimal temp being 60°C. Lowering VID can allow higher boosts, or more cores boosting higher, depending on the application, as it curtails the voltage going to the CPU. Higher active core count = lower boosts overall, in an attempt to keep voltages, and therefore temps, in check.

With Ryzen being a dynamic boost, nothing is really set in stone, so small adjustments can have a moderate or no impact, depending.

FPS isn't a static number by any means. It can and will change according to what the CPU has to deal with. A village scene with no NPCs is vastly different from a field of tall grass blowing in the wind: the field has to take every blade and assign it dimensions, shadows, shading, movement, etc., while a village is mostly 2D background. Add NPCs and that adds all the collisions, AI, objects, etc., and FPS tanks.

It's why for most games and apps, the 5600X/5800X is right behind the 5900X, if not almost equal, as the CPUs respond similarly at lower core counts. It's only really in heavily threaded apps like production workloads, where core count trumps IPC, that the 5900X is brutally more effective than a 5600X.
 
So Destiny dropping that many frames is normal? I'm testing by going into the same maps and running around a bit. Even staying still, the frames just drop way lower than normal.
 
Any game will drop frames depending on what the CPU has to accomplish, the game engine, optimization, etc. If you look at a GPU review (Gamers Nexus, Hardware Unboxed, etc.), you'll see that they give not only the max FPS for the scene tested but also the 1% lows, which can be anywhere up to half of the max FPS. There's a simple stairwell in Far Cry 4 where people saw drops from 150+ FPS at the door to 20 FPS halfway up the stairs for no discernible reason.

Drops happen constantly. It's only an issue if the drops make the game unplayable, and Destiny isn't exactly a well-optimized game.
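As a rough illustration of what those "1% lows" mean (reviewers' exact methodologies vary; this sketch takes the average FPS over the slowest 1% of frames, from per-frame times in milliseconds):

```python
# Sketch: average FPS and "1% low" FPS from a list of frame times (ms).
# One common definition: the 1% low is the average over the slowest
# 1% of frames, which is what rare stutters drag down.
def fps_stats(frame_times_ms):
    times = sorted(frame_times_ms)                 # fastest first
    avg_fps = 1000.0 * len(times) / sum(times)
    worst = times[-max(1, len(times) // 100):]     # slowest 1% of frames
    low_1pct_fps = 1000.0 * len(worst) / sum(worst)
    return avg_fps, low_1pct_fps

# 99 smooth ~143fps frames (7ms each) plus one 50ms stutter frame:
avg, low = fps_stats([7.0] * 99 + [50.0])
print(round(avg, 1), round(low, 1))  # 134.6 20.0
```

Note how a single stutter frame barely moves the average but defines the 1% low, which is why the average FPS counter can look fine while the game still feels choppy.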
 
It was just concerning because with my 2060 Super paired with my 5900X it was fine: it stayed at around 100 frames and never dropped 10-20 frames while just standing still looking at the ground, like it does with my 3080 Ti, nor did the game freeze and stutter like it's been doing recently. I saw someone on YouTube playing D2 with the same GPU and CPU as me, and I'm not getting anywhere close to that performance.