[SOLVED] 3080Ti and 5950x Temperatures

Oct 8, 2021
Please keep it to 1 thread
Hey y’all,

So I've been using my newly built PC for about two weeks now. I already had a couple of concerns regarding load and idle temps, but now I'm really worried something might be wrong. I'm going to attach a couple of pictures showing my temps under load with MSI Afterburner and at idle with HWMonitor.

This is a picture of my computer running The Outer Worlds:
[Screenshot: ZBndWQE.jpg]


Here's an example (also in The Outer Worlds) of a spike in my CPU temps:
[Screenshot: A4oxgwY.jpg]


Finally, here are two screenshots of my CPU and GPU idle temps on HWInfo with the max and min values.

CPU:
[Screenshot: pQp5pPD.jpg]

GPU:
[Screenshot: Sbvq2Tv.jpg]


All in all, these temps seem extremely high to me considering the load, especially for the CPU.

As for the GPU, I'm not all that concerned. My temperature target is 83C in Afterburner and I have it on Prefer Maximum Performance in the Nvidia Control Panel, so that's probably introducing a little more heat to my GPU.

The main concern is my CPU so if anyone has any answers that would be great.

My Specs:
GPU: EVGA 3080 Ti FTW3 Ultra Gaming
CPU: AMD Ryzen 9 5950x
CPU Cooler: Noctua NH-D15 (Dual Fan)
RAM: 2x16GB Crucial Ballistix @ 3600MHz
MOBO: ASUS TUF Gaming X570-Pro
PSU: Corsair RM850
Case: Corsair 5000D Airflow
Fan Setup: 3 front intake, 3 top exhaust, 1 back exhaust

Any advice would be great.
Thanks!
 
Solution
The Ryzen 5000 behavior is normal, believe it or not.
They have a very aggressive turbo boost by design. It stays active even up to 90C; the CPU will keep seeking out higher boost bins before it dials back for temperature.
The other thing is the greater thermal density of the 7nm dies under the hood; it's harder to dissipate that heat from the smaller package... yet we continue to seek out smaller and smaller chips.
At some point, there's going to be a need for a new kind of thermal solution under the hood - like this, for example: https://www.tomshardware.com/news/tsmc-exploring-on-chip-semiconductor-integrated-watercooling

On the plus side, the 5950X doesn't pull the same kind of power as the Intel equivalent would... what even IS that...
One thing at a time.

Have you enabled a custom fan profile in MSI Afterburner for your GPU?

*Edit: What kind of thermal paste application did you use? Did you use a small dot and let it spread out on its own? Or did you do something else?
No custom fan curves. Everything is default. As for the thermal paste, I just used a small dot and let the heat sink spread it out.
 
The Ryzen 5000 behavior is normal, believe it or not.
They have a very aggressive turbo boost by design. It stays active even up to 90C; the CPU will keep seeking out higher boost bins before it dials back for temperature.
The other thing is the greater thermal density of the 7nm dies under the hood; it's harder to dissipate that heat from the smaller package... yet we continue to seek out smaller and smaller chips.
At some point, there's going to be a need for a new kind of thermal solution under the hood - like this, for example: https://www.tomshardware.com/news/tsmc-exploring-on-chip-semiconductor-integrated-watercooling

On the plus side, the 5950X doesn't pull the same kind of power as the Intel equivalent would... what even IS that?
Oh, looks like the 10940X and 10980XE... I actually forgot about those Cascade Lake CPUs...

As for the GPU, I'm not all that concerned. My temperature target is 83C in Afterburner and I have it on Prefer Maximum Performance in the Nvidia Control Panel, so that's probably introducing a little more heat to my GPU.
Ehh...
Due to GPU Boost 3.0, the GPU holds higher clocks, and holds them for longer:
1) the cooler it runs. It dials back the clocks harder once it hits 83C, because that's treated as the thermal limit - but the hard maximum is 91C.
2) the less often it runs into the board power limit. It also dials back harder the more frequently it hits that.
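If you want to see that behavior for yourself, here's a rough sketch of a logger using the nvidia-ml-py (pynvml) package - the package choice and the one-second polling interval are just my assumptions, not anything Afterburner uses - that prints the core clock next to the temperature while a game or benchmark is running:

Code:
# Rough sketch: poll GPU temperature and core clock once a second so you can
# watch the boost clock step down as the card approaches its temperature target.
# Assumes the nvidia-ml-py package is installed (pip install nvidia-ml-py).
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)  # degrees C
        clock = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)    # MHz
        print(f"{temp:3d} C  |  {clock:4d} MHz core")
        time.sleep(1)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()

Run it alongside Heaven or The Outer Worlds and you should see the clock bins drop off as the temperature climbs toward that 83C target, well before anything you'd call a hard throttle.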
 
Solution
The Ryzen 5000 behavior is normal, believe it or not.
They have a very aggressive turbo boost by design. It stays active even up to 90C; the CPU will keep seeking out higher boost bins before it dials back for temperature.
The other thing is the greater thermal density of the 7nm dies under the hood; it's harder to dissipate that heat from the smaller package... yet we continue to seek out smaller and smaller chips.
At some point, there's going to be a need for a new kind of thermal solution under the hood - like this, for example: https://www.tomshardware.com/news/tsmc-exploring-on-chip-semiconductor-integrated-watercooling

On the plus side, the 5950X doesn't pull the same kind of power as the Intel equivalent would... what even IS that?
Oh, looks like the 10940X and 10980XE... I actually forgot about those Cascade Lake CPUs...
Well that’s somewhat relieving to hear. Good to know that my CPU is working as intended.

Ehh...
Due to GPU Boost 3.0, the GPU holds higher clocks, and holds them for longer:
1) the cooler it runs. It dials back the clocks harder once it hits 83C, because that's treated as the thermal limit - but the hard maximum is 91C.
2) the less often it runs into the board power limit. It also dials back harder the more frequently it hits that.
I mean, it's not clocking down the card when it reaches that temperature, so should I be concerned? Its clock is boosted to 1800MHz out of the box, and even at those temps it's boosting past that, which means it has the thermal headroom to do so.
 
OK, you can test the following: set it manually to Off and check temps, then set it manually to On and check temps.
On ASUS boards, PBO on Auto and Off behave the same. The only difference is that with Auto, PBO can still be changed by apps like Ryzen Master; if it's set to Off, that's not possible.

That means: if govvern sets it to Off, he will not see any temperature differences compared to Auto.

But if he sets it to On, it could be that the CPU gets too hot and throttles down.
 
Just a thought, out of curiosity, that some others may have overlooked:
I googled whether The Outer Worlds is a CPU-intensive game; apparently it is.

View: https://www.reddit.com/r/buildapc/comments/dmz578/severe_cpu_bottlenecking_in_the_outer_worlds/


I was suffering from stuttering and constant 100% CPU usage.

As I see it, the problem is related to UE4's world streaming. The game may carry the legacy of console optimisation with its restricted amount of RAM. Every time you move around and turn the camera, the engine loads and unloads the surroundings and LODs to/from RAM so that RAM usage never exceeds about 4 GB. This loads the CPU quite heavily. Current-gen PCs with 8+ GB of RAM can hold the surroundings in RAM without constant loading and unloading. This helped me get rid of the stuttering and constant 100% CPU usage. Now the CPU only hits 100% while loading new locations, and there are micro-stutters; the rest of the time CPU load is around 60-90%. Still on Ultra.



How about your CPU temps in other games? Are they as bad?
 
Just a thought, out of curiosity, that some others may have overlooked:
I googled whether The Outer Worlds is a CPU-intensive game; apparently it is.

View: https://www.reddit.com/r/buildapc/comments/dmz578/severe_cpu_bottlenecking_in_the_outer_worlds/

How about your CPU temps in other games? Are they as bad?
The thing is, in The Outer Worlds I'm not even getting close to 100% usage. I'm staying under 20%, usually around 14-17%. Yet I'm still seeing temperatures get close to 80C and sometimes spike past that, albeit staying in the mid-70C range after about 30 minutes of playing.

My temps in other games are completely normal, hovering in the mid-60C range and occasionally spiking to over 70C.
 
Hmm, if that's the case...

Finally, here are two screenshots of my CPU and GPU idle temps on HWInfo with the max and min values.
Your CPU idle temp is 40-50C while your GPU idle temp is at least 60C...
I would suspect that your current system has inadequate cooling, then, looking at the idle temps in the screenshots you provided for both CPU and GPU.

My 3080 Ti sits at 42C at idle, and my 5900X at 30-40C at idle.

Do you happen to be using an NVMe M.2 SSD? If so, they generate a ton of heat and create a hot pocket of air if placed in the M.2 slot between the CPU and the GPU. In some cases, even with 3x intake fans on the front, by the time the air reaches that area it isn't strong enough to push that hot pocket of air out.
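If you want to confirm whether the NVMe drives themselves are running hot, HWInfo lists the drive temperatures, or you can read them straight from SMART - a quick sketch assuming smartmontools is installed (the device name below is only an example; running "smartctl --scan" shows what your drives are actually called on your system):

Code:
# Quick sketch: read NVMe SMART data via smartctl and print the temperature lines.
# Assumes smartmontools is installed and on PATH; the device name is an example.
import subprocess

DEVICE = "/dev/nvme0"  # example only - run "smartctl --scan" to see your device names

result = subprocess.run(
    ["smartctl", "-a", DEVICE],
    capture_output=True, text=True, check=False,
)
for line in result.stdout.splitlines():
    # NVMe SMART output includes one or more temperature fields
    if "Temperature" in line:
        print(line.strip())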

I would try turning your top 3x fans from exhaust to intake to see if that helps, provided the top of your case has a dust filter so it doesn't accumulate dust over time.
 
Hi everyone,

Was really curious to see some temperature readouts from my 5950x.

While running the Heaven benchmark, I'm seeing my CPU hover around 72C once it's been running for about 15 minutes. It never goes past 8% usage, and despite that I'm seeing temperatures in that range.

As for Cinebench, I'm not seeing anywhere close to the same results. On a multi-core benchmark it doesn't go past 63C even at full load, and on a single-core load not past 66C.

So it’s a bit confusing how a test not designed to stress my CPU is resulting in higher temperatures. I’m not sure if I have bad airflow in my case but I couldn’t imagine that being true considering my setup.
  • No PBO enabled
  • No custom fan curves
  • GPU boosted to 1800MHz out of the box
  • Nvidia Control Panel “Prefer Maximum Performance” is enabled.
Any advice would be great. Thank you.

Set Up:
MOBO - ASUS TUF Gaming X570-Pro
CPU - AMD Ryzen 9 5950x
GPU - EVGA 3080Ti FTW3 Ultra Gaming
RAM - 2x 16GB CL16 Crucial Ballistix
Storage - 1x 500GB 980 Pro M.2 NVME
- 1x 1TB 980 Pro M.2 NVME
CPU Cooler - Noctua NH-D15 (Dual Fan)
Case - Corsair 5000D Airflow
Fan Config - 3 Front Intake, 3 Top Exhaust, 1 Back Exhaust
 
Last edited:
The Heaven benchmark is GPU-intensive, making the GPU throw out a great deal of heat, especially a 3080 Ti. My guess would be that since your CPU is cooled with an air cooler, it is being affected by the GPU exhaust. Cinebench is CPU-intensive with no GPU heat being expelled into the case, thus the cooler temperatures during that benchmark.
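One way to put a number on it: nearly all of the card's board power ends up as heat inside the case, so comparing the reported power draw during Heaven against Cinebench shows how much extra heat the CPU cooler has to deal with. A rough sketch, assuming the nvidia-ml-py package is installed (that's my assumption, not something either benchmark provides):

Code:
# Rough sketch: report the GPU's current board power draw, which is roughly the
# heat being dumped into the case. Run it once during Heaven and once during
# Cinebench to see the difference. Assumes nvidia-ml-py is installed.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
watts = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0  # NVML reports milliwatts
print(f"GPU board power: {watts:.0f} W")
pynvml.nvmlShutdown()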
 
The Heaven benchmark is GPU-intensive, making the GPU throw out a great deal of heat, especially a 3080 Ti. My guess would be that since your CPU is cooled with an air cooler, it is being affected by the GPU exhaust. Cinebench is CPU-intensive with no GPU heat being expelled into the case, thus the cooler temperatures during that benchmark.
So do I consider everything to be working normally, or should I be concerned about the heat being emitted from the GPU?

I was watching HWInfo closely, and even with CPU temps spiking to 85C it never said that it was thermal throttling.

On the other hand, my GPU did get an instance of power limiting due to the thermal max being reached. Does that mean it's throttling due to overheating, or because it's set to a specific temperature in MSI Afterburner?
 
Hey all,

I was recently trying to undervolt my 3080Ti because of unwanted high temperatures. There are a few things I realized and need answers for.

1. When undervolting, I had the Heaven benchmark running in the background. When I noticed crashing or artifacts, I would reset the voltage curve to default to try a less extreme undervolt. While I was resetting, I realized that the curve that's applied by "Default" is completely dependent on temperature. When applying default under load, I see my clocks can go up to 1850MHz max. Then when applying it at idle, the curve jumps up to 1950MHz max! So if I'm going to apply that setting, should I do it under load or at idle?

2. Another reason for undervolting was that I believed my GPU was getting so hot that it was affecting my CPU temps. But despite achieving some lower temps while undervolting, I was still seeing my CPU temps skyrocket at light load. Does anyone have an explanation for this?

Any advice would be great!
 
include your complete system specs.
recently trying to undervolt my 3080Ti because of unwanted high temperatures.
i've seen the same thing mentioned a couple times lately, always related to a 3080 Ti.
i have had a similar experience, but it just came on all of a sudden.

first couple months of using this card it idled ~30°C.
light loads would only push it to ~35°.
now over the last 10 days or so it idles ~40°.
light loads push it up to ~45°.
fans are operating as they should, raised the fan curve significantly to no avail.

still in games & benching it only ever hits ~60° at the max.

under-clocking/volting does no good whatsoever in this situation.
setting core & memory to -500,
power limit to 28,
and fans to static 100%
does nothing to relieve the high idle temperatures.

i am beginning to believe the onboard sensors are not reading correctly at the lower temperatures.
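one thing i'd rule out before blaming the sensors: whether the card is actually dropping into its idle power state at the desktop, since a card stuck at higher clocks will sit warmer while doing nothing. a rough sketch with the nvidia-ml-py package (my assumption - the same clocks and temperature are visible in HWInfo too):

Code:
# rough sketch: check whether the card has dropped to its idle state at the desktop.
# a desktop card sitting around P8 with low clocks is normal idle; being stuck at
# P0/P2 with high clocks would explain warmer idle temps. assumes nvidia-ml-py is installed.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
pstate = pynvml.nvmlDeviceGetPerformanceState(gpu)  # 0 = max performance ... 15 = minimum
core = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)   # MHz
mem = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_MEM)         # MHz
temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
print(f"P{pstate}  core {core} MHz  mem {mem} MHz  {temp} C")
pynvml.nvmlShutdown()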
 
Last edited:
If you are getting into the 80s with your CPU and the GPU is thermal throttling, you either have an overheating case or your fan settings are too low. I would try increasing your case fan speeds, increasing your CPU cooler speed if you have that ability, and raising the fan curve in MSI Afterburner to increase cooling. See if this helps. What power setting do you have for the GPU in the 3D settings of the Nvidia Control Panel? Is it Normal or Prefer Maximum Performance? Make sure it is set to Normal to help cool it.
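If you want to test the GPU-heat theory directly, another option is to temporarily pull the GPU power limit down and watch whether the CPU temps come down with it - the power limit slider in Afterburner does the same job. Below is only a sketch of the equivalent through the nvidia-ml-py package (my assumption; it needs an elevated/admin session, and the 250 W figure is just an example kept inside the limits the card reports):

Code:
# Sketch: temporarily lower the GPU power limit to see if CPU temps drop with it.
# Equivalent to pulling the power limit slider down in Afterburner.
# Needs admin rights; assumes nvidia-ml-py is installed.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

# what the card allows, in milliwatts
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)
default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(gpu)
print(f"allowed: {min_mw/1000:.0f}-{max_mw/1000:.0f} W, default: {default_mw/1000:.0f} W")

# example target of 250 W, clamped to the card's minimum allowed limit
target_mw = max(min_mw, 250_000)
pynvml.nvmlDeviceSetPowerManagementLimit(gpu, target_mw)
print(f"power limit set to {target_mw/1000:.0f} W - now run the game and watch CPU temps")

# when you're done testing, restore the default:
# pynvml.nvmlDeviceSetPowerManagementLimit(gpu, default_mw)
pynvml.nvmlShutdown()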
 
trying to undervolt my 3080Ti because of unwanted high temperatures
unexpected update today.

not exactly related... upon booting i lowered my CPU voltage a bit more, still just testing for the lowest stable point.

after running for a few hours i checked my GPU stats again and it's back down to 30°C at idle, ~35° with low usage.
this after sitting at those higher temperatures for over 10 days.
????
 
Hey everyone,

I had a few concerns regarding my new 5950x. I recently finished a new PC build with the following specs:

  • MOBO: ASUS TUF Gaming X570-Pro
  • CPU: AMD Ryzen 9 5950X
  • GPU: EVGA RTX 3080Ti FTW3 Ultra Gaming
  • RAM: Crucial Ballistix 2x 16GB CL16 @ 3600MHz
  • Storage: 2x Samsung 980 Pro NVMe M.2
  • CPU Cooler: Noctua NH-D15 (Noctua thermal paste)
  • Case: Corsair 5000D Airflow
  • Fan Setup: 3 Front Intake, 3 Top Exhaust, 1 Back Exhaust

PBO is off, Windows power settings are set to balanced, and Nvidia Control Panel settings are all default.

I’ve been running benchmarks from Cinebench R23, Heaven Benchmark, and Superposition using HWInfo to monitor.

Using Cinebench, I'm seeing CPU temperatures peak at 65C and hover around 61C under 100% load across all cores. Also, all-core voltages are right around 1.025V @ 3.5-3.8GHz. This is what I would expect to be normal behavior.

Now, when running the Heaven benchmark, everything is completely different. Granted, my GPU temps are around 78-82C, but my CPU temps are jumping up to 75C, sometimes spiking close to 85C! Also, at 5-10% usage I'm seeing a constant 1.4V across all cores, with clocks jumping as high as 4.5GHz and some individual cores even reaching 5GHz. As far as I'm concerned, this is not a CPU benchmark.

Another thing: in the Heaven benchmark I'm seeing my Power Reporting Deviation dip as low as 55%, which is really concerning since it's meant to stay between 90-110%.

Again, everything is stock and I don’t have PBO or OC settings enabled.

This happens in games as well, not just benchmarks. The most recent was The Outer Worlds, where I saw my CPU temps spike to 86C just in a loading screen.

I’m really not sure what to do here. This is the closest I’ve gotten to figuring out why my PC is running so hot.

I would really appreciate any feedback.
 
Last edited: