News Intel Working on New Cooling for Chips up to 2000W

Finally, something that might cool a 13900K without resorting to liquid nitrogen.
That would be funny if you couldn't cool a 13900K with unlimited power settings enough to get within 8 W of the full performance with a $20 cooler.

The $20 Assassin 120 R SE sustained 5,055 MHz (an increase of 333 MHz) with the CPU consuming an average of 245 W.

 
If I could buy an Intel AIO that could cool 500 W, I would be all over that.
I wouldn't run it at 500 W often, but it would be nice to be able to when I wanted. Kind of like flooring the gas on a 500 hp car.

Edit: I might even get two so I could strap one to my GPU. Nothing wrong with making cooling better.
 
Since the recommended max for a US household outlet is 1500 W, I guess your next Intel PC will go in the garage next to the tumble dryer. 😛

Indeed. Going forward, are we really going to need 30+ amp outlets to run our freaking PCs?

If there's a practical limit to how much power a chip can draw (and how much heat it produces), Intel appears determined to find it.
 
I'm in the US and my PC is plugged into a 20 A, 2400 W outlet.
You are also supposed to have 20 A circuits in the living room, dining room, and kitchen, but you can also have them in other places so long as everything (breaker, wiring, outlet) is rated for that.
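For reference, a quick sketch of the math behind those outlet numbers (the 80% continuous-load derating is the usual NEC rule of thumb; the snippet itself is just my illustration):

```python
# Rough US branch-circuit capacity math. Assumes 120 V nominal and the
# NEC rule of thumb that continuous loads stay under 80% of the rating.
VOLTS = 120

def circuit_watts(amps: float, continuous: bool = True) -> float:
    """Max wattage for a branch circuit at the given breaker rating."""
    watts = VOLTS * amps
    return watts * 0.8 if continuous else watts

for amps in (15, 20, 30):
    print(f"{amps} A: {circuit_watts(amps, False):.0f} W peak, "
          f"{circuit_watts(amps):.0f} W continuous")
# 15 A: 1800 W peak, 1440 W continuous
# 20 A: 2400 W peak, 1920 W continuous
# 30 A: 3600 W peak, 2880 W continuous
```

That's where the "1500 W recommended max" and "20 A, 2400 W outlet" figures both come from: same math, different breaker and derating.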
 
2000 W? This will literally burn your wallet just by turning it on, burn your house down due to power strip limits, and burn you, because it generates more heat than most home air conditioning can remove...
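For scale, a rough conversion sketch (the window-AC range is a typical rating I'm assuming, not a figure from the article):

```python
# How much heat a 2000 W PC dumps into a room, in air-conditioner terms.
# 1 W of heat = 3.412 BTU/h; 5000-8000 BTU/h is a typical window-AC range.
BTU_PER_WATT = 3.412

pc_heat = 2000 * BTU_PER_WATT
print(f"2000 W PC -> ~{pc_heat:.0f} BTU/h of heat")  # ~6824 BTU/h
# That single PC nearly saturates an 8000 BTU/h window unit by itself.
```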
 
Intel hints that it will be next-gen chips up to 2000 W
About time, esp. with the 3090 already needing 2 kW on tap; home labs that run solar, heat pumps, some geoloops, and unlocked-special Xeons can foster future startups. Not to say solvents esp. suited to those jet coolers aren't 40% better at it; it wouldn't suck to have a water still of some sort PC-adjacent either (especially if there were waste-stream separation for the nanoparticles you do and don't want to add back).

Sometime the 'usual home mods' (cold cellar, water caisson, geoloop, elevator, reinsulation, and PC inverter/flow-battery rooms) have to see integration, right?
 
Why do I get the feeling that Intel's goals are the opposite of what the rest of the industry wants to do?
Everyone wants infinite performance for zero cost and zero power. It is pretty clear that Intel is aiming at the supercomputer market, where speed-of-light delays become an issue.
They are probably looking at 65,536 cores running at 20 GHz in a 2 inch × 2 inch (5 cm × 5 cm) module. The next step is to pack 64 of these modules into a single 6U server chassis and put a few hundred or a few thousand racks of them in a system. You won't be powering this anywhere near home.
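That speed-of-light point is easy to sanity-check. A small sketch: the 20 GHz clock is taken from the speculation above, and the ~0.5c effective signal speed is my own rough assumption:

```python
# Why speed-of-light delay matters at those speeds: distance light travels
# in one clock cycle, and the (slower) effective signal speed on real wires.
C = 299_792_458            # m/s, speed of light in vacuum
FREQ = 20e9                # 20 GHz, the hypothetical clock from the post

light_per_cycle = C / FREQ                 # distance light covers per tick
signal_per_cycle = 0.5 * C / FREQ          # on-chip signals run well below c;
                                           # ~0.5c is a generous assumption
print(f"light per cycle:  {light_per_cycle * 100:.2f} cm")   # 1.50 cm
print(f"signal per cycle: {signal_per_cycle * 100:.2f} cm")  # 0.75 cm
# A 5 cm x 5 cm module is therefore several clock cycles corner to corner,
# which is exactly the constraint that forces this kind of dense packaging.
```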

There is already a company selling a chip aimed at AI that consumes an entire 12-inch wafer.
The part draws 20 kW.
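A back-of-envelope power-density comparison shows why that wafer-scale part is actually the easier cooling job per unit area (the wafer area is plain geometry; the desktop die size is my assumption for illustration):

```python
import math

# Back-of-envelope power density: 20 kW spread over a whole 300 mm wafer
# versus 2000 W concentrated on one desktop-sized die.
wafer_area = math.pi * 15.0 ** 2          # 300 mm wafer -> ~707 cm^2
print(f"wafer-scale: {20_000 / wafer_area:.0f} W/cm^2")    # ~28 W/cm^2

die_area = 2.5                            # cm^2, roughly a big desktop die
print(f"2 kW on one die: {2_000 / die_area:.0f} W/cm^2")   # 800 W/cm^2
# Kilowatts concentrated in one small spot is the hard problem the
# article's cooling tech is chasing, not total watts.
```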
 
That would be funny if you couldn't cool a 13900K with unlimited power settings enough to get within 8 W of the full performance with a $20 cooler.
Compared to the DeepCool LT720 360 mm AIO water cooler, that heatsink loses 7.5% of P-core clock speeds, resulting in a 6% drop in performance.

Still, not bad for $20, but it's a bigger deficit than 8 W makes it sound like.
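As an aside, those two figures imply performance scales sublinearly with clock speed here; a quick check under a simple power-law model (my assumption, not anything from the review):

```python
import math

# The quoted numbers (7.5% clock loss -> 6% performance loss) imply
# sublinear frequency scaling. Assume a simple power law perf ~ clock**alpha
# (a common rough model, not something the article states):
clock_ratio = 1 - 0.075   # 0.925
perf_ratio = 1 - 0.06     # 0.94

alpha = math.log(perf_ratio) / math.log(clock_ratio)
print(f"implied scaling exponent: {alpha:.2f}")  # ~0.79
# i.e. each 1% of clock bought back only ~0.8% of performance here,
# which is typical once memory latency starts to dominate.
```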
 
ITT people forget that datacentres exist, and single devices drawing a mere 2kW are rookie numbers.
Not single CPUs consuming 2 kW (aside from mainframes)! That's what the article is about.

Use your brains, guys. 2 kW isn't for your chips. It's for data centers. JFC...
Intel's current top-end data center CPU models are rated at only 350 W. So that's a 5.7x increase they're talking about! More like... The Rapture!


I find it ironic you're both so quick to decry the commenters in this thread that you didn't even check yourselves!
 
Compared to the DeepCool LT720 360 mm AIO water cooler, that heatsink loses 7.5% of P-core clock speeds, resulting in a 6% drop in performance.

Still, not bad for $20, but it's a bigger deficit than 8 W makes it sound like.
Have you seen the post with the der8auer YouTube video? It shows up to a 50% difference in efficiency between two chips of the same CPU model, because one chip will run much hotter than another; 6% is nothing against that.
And sure, the silicon lottery will decide whether you get a bigger or a smaller difference, but the point still stands that you don't need expensive cooling unless you want to overclock. That is hardly shocking news; it has been this way since forever.
Why do I get the feeling that Intel's goals are the opposite of what the rest of the industry wants to do?
No, all of the industry is trying to get fewer CPUs to deliver the same performance as more CPUs, because licenses are paid by the number of cores or CPUs.
But it's also because you then have to feed less infrastructure with power: if, instead of feeding two x-socket mobos, you could get away with feeding only one x-socket mobo and get the same performance, you save power, and it can be a good chunk.

Nobody can do that yet because nobody can cool that much power in one spot.
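To put rough numbers on that consolidation argument (core counts, prices, and wattages below are all hypothetical):

```python
# Hypothetical illustration of why consolidation pays off even when the
# software is licensed per core. Every number here is made up.
CORES_NEEDED = 128
LICENSE_PER_CORE = 100          # $/core/yr, hypothetical
PLATFORM_OVERHEAD_W = 400       # fans, PSU losses, RAM, NICs per chassis

def server_cost(sockets: int, cores_per_socket: int, watts_per_socket: int):
    servers = CORES_NEEDED // (sockets * cores_per_socket)
    licenses = CORES_NEEDED * LICENSE_PER_CORE   # same bill either way
    power = servers * (sockets * watts_per_socket + PLATFORM_OVERHEAD_W)
    return servers, licenses, power

# Four 2-socket/16-core boxes vs one 2-socket/64-core box:
print(server_cost(2, 16, 250))   # (4 servers, $12800, 3600 W)
print(server_cost(2, 64, 500))   # (1 server,  $12800, 1400 W)
# Same core count and license bill, but one chassis of platform overhead
# instead of four -- the catch is cooling 500+ W per socket.
```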
 
I'm in the US and my PC is plugged into a 20 A, 2400 W outlet.
You are also supposed to have 20 A circuits in the living room, dining room, and kitchen, but you can also have them in other places so long as everything (breaker, wiring, outlet) is rated for that.
If you have a dedicated line, then yes, you can get within 90-95% of that rating.

However, most houses have all the gangs (receptacles) run off one line (possibly the ceiling lights and other rooms as well, depending on how cheap the builder was).

Without knowing 100% for sure that your receptacle is the only one on the line, never assume more than 80% for newer homes, and even less for older ones (pre-1970s). Thus, that 2400 W becomes 1900 W or less very fast for most builds.

What's more, a lot of really cheap builds / older homes go with 15 A instead of 20 A, since it saves a few dollars. Now we're around 1400 W for your new system, which is plugged into the same circuit as your TV, lamps, etc.

This assumes you're in North America, instead of the 240 V world.
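Putting numbers on that derating chain (a sketch using the rules of thumb above; the 300 W of "other devices" is a made-up example):

```python
# What's actually left for the PC after derating a shared branch circuit.
def usable_watts(amps: float, derate: float = 0.8,
                 other_loads_w: float = 0) -> float:
    """Watts left for the PC after derating and other devices on the line."""
    return 120 * amps * derate - other_loads_w

print(usable_watts(20))                      # 1920 W -> "1900 or less"
print(usable_watts(15))                      # 1440 W -> "around 1400"
print(usable_watts(15, other_loads_w=300))   # 1140 W once the TV and
                                             # lamps on the circuit are on
```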
 
Not single CPUs consuming 2 kW (aside from mainframes)! That's what the article is about.
edzieba said:
ITT people forget that datacentres exist, and single devices drawing a mere 2kW are rookie numbers.

Single CPU, correct, with the exception of certain entire-wafer chips.

However, edzieba said devices. So with 4+ CPUs, a RAID of SSD/M.2/U.3 drives, GPU/compute cards, multiple 100 Gb+ networking cards/connections, 1 TB+ of RAM, and the cooling for all of that (plus any other add-in cards for specialty purposes), you can easily get over 2 kW per 2-4U server (aka a single device).
 
No, all of the industry is trying to get fewer CPUs to deliver the same performance as more CPUs, because licenses are paid by the number of cores or CPUs.
But it's also because you then have to feed less infrastructure with power: if, instead of feeding two x-socket mobos, you could get away with feeding only one x-socket mobo and get the same performance, you save power, and it can be a good chunk.

Nobody can do that yet because nobody can cool that much power in one spot.
The existing licenses that depend on the number of cores sound like legacy BS that needs to be outlawed.

It really should be licensed on a per-CPU-socket basis.
 
The existing licenses that depend on the number of cores sound like legacy BS that needs to be outlawed.

It really should be licensed on a per-CPU-socket basis.
Software licensing is based on what the customer is willing to pay for the service. For engineering software, it is often based on the number of users and their location. For example, it costs $20,000/user/yr for a user in the United States or Europe and $2,000/user/yr for a user in China, Taiwan, Malaysia, or the Philippines. This is one reason the company I worked for moved its engineering work to the Philippines and PC motherboard design to Taiwan.
 
I think people who live in cities with 110 V phase-to-neutral (single-phase) service will need to change their outlets to 220 V phase-to-phase (split-phase) to be able to feed future Intel CPUs.

Seriously now... People all over the world want PCs that consume little power. I, for example, am a geek, and I'm against building a PC with a water cooler; I think a water cooler is a very clumsy thing. I only buy CPUs that can be used with air coolers. And most people in the world (who aren't geeks like us) also want PCs that are simple, easy to assemble, and consume little electricity. Intel is going against most people's wishes.
 
edzieba said:
ITT people forget that datacentres exist, and single devices drawing a mere 2kW are rookie numbers.

Single CPU, correct, with the exception of certain entire-wafer chips.

However, edzieba said devices. So with 4+ CPUs, a RAID of SSD/M.2/U.3 drives, GPU/compute cards, multiple 100 Gb+ networking cards/connections, 1 TB+ of RAM, and the cooling for all of that (plus any other add-in cards for specialty purposes), you can easily get over 2 kW per 2-4U server (aka a single device).
Intel also said "devices" rather than chips in their press release (which is also in the "Data centre and HPC" section, not "Client Computing"). Any rack-based fluid cooling system ideally covers the entire device (i.e., all CPUs, GPUs, VRMs, and other major thermal components); otherwise, that rack needs both a liquid chiller AND an air chiller system installed and running.
 
If you have a dedicated line, then yes, you can get within 90-95% of that rating.

However, most houses have all the gangs (receptacles) run off one line (possibly the ceiling lights and other rooms as well, depending on how cheap the builder was).

Without knowing 100% for sure that your receptacle is the only one on the line, never assume more than 80% for newer homes, and even less for older ones (pre-1970s). Thus, that 2400 W becomes 1900 W or less very fast for most builds.

What's more, a lot of really cheap builds / older homes go with 15 A instead of 20 A, since it saves a few dollars. Now we're around 1400 W for your new system, which is plugged into the same circuit as your TV, lamps, etc.

This assumes you're in North America, instead of the 240 V world.
That is true, and it isn't just the breaker-overheating issue that gets listed as the (dubious-looking) reason if you search.
The wires alone can give you a 5% drop in wattage from voltage drop if you fully load the amp rating and the breaker is on the far side of the house. For example, you can plug your own numbers into this calculator: https://www.calculator.net/voltage-drop-calculator.html. That isn't counting the drops across the daisy chain of receptacles, wire nuts, and "off" devices typically found in houses. The 80% rule of thumb also accounts for this.
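For the curious, here is the core of what that calculator computes, assuming 12 AWG copper at roughly 1.588 Ω per 1000 ft (a standard wire-table value; the run lengths are just examples):

```python
# Approximate round-trip voltage drop on a branch circuit, the same basic
# math that calculator performs. 12 AWG copper is ~1.588 ohms per 1000 ft.
OHMS_PER_FT_12AWG = 1.588 / 1000

def voltage_drop(one_way_ft: float, amps: float, volts: float = 120):
    drop = 2 * one_way_ft * OHMS_PER_FT_12AWG * amps   # out and back
    return drop, 100 * drop / volts

for run in (25, 50, 95):
    v, pct = voltage_drop(run, 20)
    print(f"{run} ft run at 20 A: {v:.1f} V drop ({pct:.1f}%)")
# 25 ft: 1.6 V (1.3%)   50 ft: 3.2 V (2.6%)   95 ft: 6.0 V (5.0%)
# A breaker on the far side of the house really can cost you ~5%.
```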

But it's also true that I had a dedicated single-outlet 20 A line put in for my PC during a remodel, when everything was already open. Back in the days when I used incandescent lights, they tipped me off to the voltage drop under benchmark loading. My PC line has a voltage drop of <2% at 20 A per that calculator, so maybe a little over 2300 W gets to the PSU.

It is possible to get 2 kW to a CPU in a residential setting. By comparison, my CPU cooler can handle a little over 260 W from my 13900KF.

I would like for coolers to get better. It is nice that Intel is working on this.