Question 13900k Thermal Throttling at stock speeds

MHMabrito

Distinguished
Nov 2, 2013
I bought an EVGA CLCx 360mm AIO to cool my 13900K. Even at stock settings for the CPU, I'm getting thermal throttling under stress tests. I find it weird because I'm not seeing these issues with other CPU coolers. Is there a chance I just got a defective one?

Radiator and fans are top mounted BTW.
 

letmepicyou

Reputable
Mar 5, 2019
I bought an EVGA CLCx 360mm AIO to cool my 13900K. Even at stock settings for the CPU, I'm getting thermal throttling under stress tests. I find it weird because I'm not seeing these issues with other CPU coolers. Is there a chance I just got a defective one?

Radiator and fans are top mounted BTW.

If you have some quality thermal paste lying around, try removing the water pump, cleaning the CPU and block with rubbing alcohol, and re-applying your thermal paste. Some folks report pretty decent temperature drops, depending on the AIO cooler in question, after swapping the stock thermal pad for quality paste. Make sure you tighten the 4 screws gradually, a few turns each in a crisscross pattern, make sure they end up pretty tight, and confirm you're using the correct backing plate and pump spacer screws for your application.

If you can't get that 13900K under control with the cooler you have now, give the Corsair H170i Elite a try. I have the LCD version, but they make a non-LCD version for a bit less. It's a 420mm monster, so your case has to support it, but with my 12700KF I don't think I've ever seen the CPU go over 34C.
 

Encleaver

Distinguished
Jun 24, 2015
I'm using a 13900K with an H150i Elite 360mm cooler and starting to see it hit 90-100C in some of the more demanding games I play, also at stock speeds. You are not alone. These CPUs just like to run hot.
 
That is to be expected.

"Intel’s i9-13900K supports Adaptive Boost Technology (ABT), which allows Core i9 processors to dynamically boost to higher all-core frequencies based upon available thermal headroom and electrical conditions. This allows multi-core loads to operate at up to 5.5 GHz if the necessary amount of thermal dissipation is there. This feature works in a way that actively seeks high temperatures: If the chip sees that it is running below the 100-degree C threshold, it will increase its performance and power consumption until it reaches the safe 100C limit, thus sustaining higher clocks (and providing better performance) for longer periods."

This comes from a review of the 13900K tested with various classes of coolers.

https://www.tomshardware.com/features/intel-core-13900k-cooling-tested
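To make the quoted behavior concrete, here is a toy control loop in the spirit of what the review describes: boost while there is thermal headroom, back off at the limit. This is a made-up sketch, not Intel's actual ABT algorithm; the base clock, step size, and tick logic are all illustrative assumptions.

```python
# Toy illustration of temperature-guided boosting (NOT Intel's actual
# ABT algorithm): raise the all-core clock while the die stays under
# the 100C limit, shed clocks once the limit is reached.

TJ_MAX = 100.0      # throttle threshold, degrees C
BASE_CLOCK = 3.0    # GHz, assumed base all-core frequency
MAX_BOOST = 5.5     # GHz, the all-core ceiling mentioned in the review
STEP = 0.1          # GHz adjusted per control tick

def next_clock(clock_ghz: float, die_temp_c: float) -> float:
    """Return the clock for the next tick given the current die temperature."""
    if die_temp_c < TJ_MAX and clock_ghz < MAX_BOOST:
        return min(clock_ghz + STEP, MAX_BOOST)   # headroom left: boost
    if die_temp_c >= TJ_MAX:
        return max(clock_ghz - STEP, BASE_CLOCK)  # at the limit: throttle
    return clock_ghz                              # at max boost, under limit: hold

# A cooler that holds the die at 95C lets the chip sit at max boost;
# one that lets it reach 100C forces it to shed clocks.
print(next_clock(5.4, 95.0))   # boosts toward the 5.5 ceiling
print(next_clock(5.5, 101.0))  # throttles back down
```

The point is that a weaker cooler doesn't crash the system; it just moves where this loop settles, which is why the same CPU benches differently under different coolers.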
 
What Geofelt said.

Everyone needs to understand that we live in a new age of CPU frequency behavior, where clock speeds are influenced by a multitude of factors instead of the chip favoring a single fixed frequency.

This is more efficient for CPU manufacturers to implement, and it allows CPUs, GPUs, etc. to maximize the performance available to the entire system rather than the chip itself being the bottleneck.

There can be some unfavorable side effects from running chips at high temperatures, like thermal paste drying up after just a few years. But it's not dangerous to run these chips at such high temps, unless you want the chip to last an abnormally long time.

I believe people don't realize that CPUs and GPUs have a silicon health curve, and the firmware knows what voltages are safe to apply at any given temperature. For AMD CPUs and GPUs, and Nvidia GPUs, for instance, this is why they start dropping clocks and voltage well before hitting TjMax.

This isn't anything new, either: laptop CPUs have been running at 90-100C for decades at this point, and I can't think of a single mobile processor that has died from "overheating".
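The "silicon health curve" idea can be sketched as a simple lookup: the hotter the die, the less voltage the firmware will allow, well before TjMax. Every breakpoint below is a made-up placeholder for illustration, not data from any real CPU's firmware.

```python
# Hypothetical voltage-vs-temperature derating curve. As the die warms,
# the allowed core voltage drops, which in turn pulls clocks down
# before the chip ever reaches TjMax. All values are illustrative.

DERATE_CURVE = [   # (die temp upper bound in C, max allowed core voltage)
    (60, 1.35),
    (80, 1.30),
    (95, 1.20),
    (105, 1.05),   # TjMax region
]

def max_safe_voltage(die_temp_c: float) -> float:
    """Return the highest voltage this hypothetical curve allows."""
    for limit_c, volts in DERATE_CURVE:
        if die_temp_c <= limit_c:
            return volts
    return DERATE_CURVE[-1][1]  # clamp at the hottest entry

print(max_safe_voltage(70.0))   # mid-curve: full-ish voltage allowed
print(max_safe_voltage(100.0))  # near TjMax: voltage already reduced
```

Real implementations interpolate and factor in current draw and workload, but the shape is the same: voltage headroom shrinks as temperature rises.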
 

letmepicyou

Reputable
Mar 5, 2019
What Geofelt said.

Everyone needs to understand that we live in a new age of CPU frequency behavior, where clock speeds are influenced by a multitude of factors instead of the chip favoring a single fixed frequency.

This is more efficient for CPU manufacturers to implement, and it allows CPUs, GPUs, etc. to maximize the performance available to the entire system rather than the chip itself being the bottleneck.

There can be some unfavorable side effects from running chips at high temperatures, like thermal paste drying up after just a few years. But it's not dangerous to run these chips at such high temps, unless you want the chip to last an abnormally long time.

I believe people don't realize that CPUs and GPUs have a silicon health curve, and the firmware knows what voltages are safe to apply at any given temperature. For AMD CPUs and GPUs, and Nvidia GPUs, for instance, this is why they start dropping clocks and voltage well before hitting TjMax.

This isn't anything new, either: laptop CPUs have been running at 90-100C for decades at this point, and I can't think of a single mobile processor that has died from "overheating".
Higher temperatures equal shorter component lifespans. This is an ABSOLUTE.
 
Higher temperatures equal shorter component lifespans. This is an ABSOLUTE.

You aren't factoring in voltage and amperage. Those are as much of a factor as temperature. A modern 7nm CPU at 1.5V running at a cool 65C will die fast compared to one at 100C at a standard 1.0 or 1.1V.

Will a CPU designed to run at up to 100C last longer at lower temperatures? Yeah, sure, probably, but we don't know how much longer, or whether it even matters. If a chip is guaranteed to run at max temperature 24/7 for, say, 5 years - which is what Nvidia GPUs are rated for - that's usually plenty of lifespan for a normal user. Not to mention that most people don't slam their chips to the max 24/7 anyway.

Update: honestly, now that I'm thinking about my post, we have no idea whether CPUs last longer at any given temperature anyway. Modern boost algorithms automatically increase voltage at lower temperatures, so that should keep chip degradation fairly linear no matter the thermals.
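For what it's worth, there is a standard model where both knobs appear explicitly: Black's equation for electromigration lifetime, MTTF ∝ J^(-n) · exp(Ea / kT), with current density J and absolute temperature T both in the formula. The sketch below uses illustrative placeholder values for the exponent n and activation energy Ea, not data for any real CPU.

```python
import math

# Black's equation for electromigration lifetime:
#   MTTF = A * J**(-n) * exp(Ea / (k * T))
# Both current density (driven by voltage/amperage) and temperature
# change the result. n and Ea below are illustrative placeholders.

K_BOLTZMANN = 8.617e-5  # Boltzmann constant, eV/K

def em_lifetime_factor(j_rel: float, temp_c: float,
                       n: float = 2.0, ea_ev: float = 0.7) -> float:
    """Relative lifetime for a given current-density multiple and die temp."""
    return j_rel ** (-n) * math.exp(ea_ev / (K_BOLTZMANN * (temp_c + 273.15)))

# Same current, hotter die: lifetime shrinks.
hot_vs_cool = em_lifetime_factor(1.0, 100.0) / em_lifetime_factor(1.0, 65.0)
# Same temperature, 1.5x current density: lifetime also shrinks.
overcurrent_vs_stock = em_lifetime_factor(1.5, 65.0) / em_lifetime_factor(1.0, 65.0)

print(f"100C vs 65C at stock current: {hot_vs_cool:.2f}x lifetime")
print(f"1.5x current density at 65C: {overcurrent_vs_stock:.2f}x lifetime")
```

Which effect dominates depends entirely on the constants, which is exactly why "we don't know how much longer" is the honest answer without the vendor's reliability data.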
 

letmepicyou

Reputable
Mar 5, 2019
You aren't factoring in voltage and amperage. Those are as much of a factor as temperature. A modern 7nm CPU at 1.5V running at a cool 65C will die fast compared to one at 100C at a standard 1.0 or 1.1V.

Will a CPU designed to run at up to 100C last longer at lower temperatures? Yeah, sure, probably, but we don't know how much longer, or whether it even matters. If a chip is guaranteed to run at max temperature 24/7 for, say, 5 years - which is what Nvidia GPUs are rated for - that's usually plenty of lifespan for a normal user. Not to mention that most people don't slam their chips to the max 24/7 anyway.

Update: honestly, now that I'm thinking about my post, we have no idea whether CPUs last longer at any given temperature anyway. Modern boost algorithms automatically increase voltage at lower temperatures, so that should keep chip degradation fairly linear no matter the thermals.
I've actually worked in the electronics industry, doing everything from hand insertion to board repair, down to SMT work under binocular microscopes. I'm just a tad knowledgeable about electronic componentry. When you say, "You aren't factoring in voltage and amperage," you are absolutely correct. And nobody who works in electrical engineering is GOING to "factor in voltage and amperage" when doing this type of analysis. When you're discussing MTBF, or "Mean Time Between Failures" - the average length of time an electronic component is expected to last - the factors of "voltage and amperage" aren't discussed or even mentioned. The components are always expected to be run somewhere within their design voltage.

There are only 2 factors to consider when discussing component life: TEMPERATURE and TIME. And THAT'S IT. There is no place on earth you will find MTBF equated to "voltage and amperage". A 20V axial-lead electrolytic capacitor is going to be MTBF-rated for XXXXX hours @ XXX degrees. You don't say, well, what if the 20V capacitor only runs at 18 volts? Doesn't matter. You don't say, well, what if it's at .12 mA vs .13 mA? Doesn't matter. What MATTERS is TEMPERATURE and TIME.

So a transistor rated at 50,000 hours MTBF @ 35C might only give 45,000 hours at 40C, or 40,000 hours at 50C, or 20,000 hours at 70C. There is no need to include "voltage and amperage" because the circuits are going to run at the voltage and amperage they're designed to run at. There is only this one fact in electronic component life: temperature is inversely related to lifespan. PERIOD. The HOTTER it runs, the LESS it lasts. And you won't find anyone educated in electronics ANYWHERE ON EARTH who will tell you anything different.
So a transistor rated at 50,000 hours MTBF @ 35c might only give 45,000 hours at 40c, or 40,000 hours at 50c, or 20,000 hours at 70c. There is no need to include "voltage and amperage" because the circuits are going to run at the voltage and amperage they're designed to run at. There is only this one fact in electronic component life - that temperature is inversely proportional to life span. PERIOD. The HOTTER it runs, the LESS it lasts. And you won't find anyone educated in electronics ANYWHERE ON EARTH that will tell you anything different.