News Intel's Xeon W9-3495X Can Draw 1,900W of Power

Status
Not open for further replies.
33 watts per core. Is that bad? I'm not an engineer, so I'm not certain whether that's good or bad. Server chips are meant to run at much lower clocks, I know.
 
33 watts per core. Is that bad? I'm not an engineer, so I'm not certain whether that's good or bad. Server chips are meant to run at much lower clocks, I know.
Well, a Ryzen 9 7950X3D will draw about 10-15 W per core at 5.5 GHz. It's harder to tell with Intel's current offerings, due to the two sizes/speeds of cores, which sort of messes with the calculations. But judging by the power draw of the 13000 series, 33 W per big core is probably correct. So I'd say that's normal for Intel?
 
33 watts per core. Is that bad?
Yeah, that's the first thing I did. Actually, it's almost 34 W/core. And I was going to say I think they could go further.

The main thing to remember is that these are Golden Cove cores (except with more L2 cache) and should be using the same process node as Alder Lake. So, we should be able to look to it for some idea of what's possible.
[Chart: Anandtech's i9-12900K POV-Ray package power ramp vs. active P-cores]

That's right: 78 W package power on one core! That Alder Lake CPU can use 35 W/core on 5 P-cores or about 33 W/core on 6 P-cores. With 7 P-cores, it's down below 30. That's one way to look at it, at least. I don't really look at 8, since it's whacking into the power limit by then.

Perhaps a more constructive way, that could account for a lot of the overheads, would be to look at the delta between 2 P-cores and the 1 P-core case. There, we see an increase of just 33 W. So, that suggests that a marginal power of 33 W/core is indeed what at least the stock K would boost up to. So, there actually might not be much more gas in the tank.
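To make that arithmetic concrete, here's a quick Python sketch. The package-power values are approximate readings reconstructed from the figures above (78 W at one P-core, ~35 W/core at five, ~33 W/core at six), not exact measurements:

```python
# Approximate i9-12900K package power (W) by active P-core count,
# reconstructed from the figures discussed above (values approximate).
package_power = {1: 78, 2: 111, 5: 175, 6: 198}

# Naive view: total package power divided by active cores.
for cores, watts in sorted(package_power.items()):
    print(f"{cores} P-core(s): {watts / cores:.1f} W/core")

# Marginal view: the 1 -> 2 core delta excludes the shared overheads
# (ring, L3, DRAM controller) baked into the single-core figure.
marginal = package_power[2] - package_power[1]
print(f"marginal power of the 2nd core: {marginal} W")
```

The naive division overstates per-core power at low core counts because the uncore is charged entirely to those few cores; the marginal figure is closer to what an additional core actually costs.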

I'm less interested in the question of "badness", however. We all know this is well above the peak-efficiency point, as it's about 3x or more than what Sapphire Rapids or Genoa server CPUs are spec'd to run at. So, it's obviously bad, if you're efficiency-minded, but that wasn't the point of the experiment.
 
Well, a Ryzen 9 7950X3D will draw about 10-15 W per core at 5.5 GHz. It's harder to tell with Intel's current offerings, due to the two sizes/speeds of cores, which sort of messes with the calculations. But judging by the power draw of the 13000 series, 33 W per big core is probably correct. So I'd say that's normal for Intel?
It does use 10-15 W per core, but it doesn't run at 5.5 GHz; it barely reaches 5 GHz with all cores loaded.
https://www.techpowerup.com/review/amd-ryzen-9-7950x3d/27.html
[Chart: TechPowerUp's Ryzen 9 7950X3D boost clock analysis]
 
The fact that all of the cores are able to hit 5.5 GHz is a feat in itself.
The 13900K uses 32 W for a single-threaded load, although it runs at higher clocks; the 7950X uses 43 W.
https://www.techpowerup.com/review/intel-core-i9-13900k/22.html

I think you're comparing apples and oranges here. A 7950X will not use 16 x 43 watts - that's 688 watts. But for a single core - yes, 43 W is maybe right. With this server chip, OTOH, we're talking 33 watts per single core. The number 33 watts popped up with one poster doing 1900/56 and getting 33 (actually, it's almost 34). I assume it's only for short bursts, or I'll be really interested in the cooling used :)
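For what it's worth, the arithmetic in question, sketched in Python (the 1,900 W and 56-core figures are from the article; the 43 W single-core figure is from the TechPowerUp review):

```python
# The "almost 34 W" figure: reported peak draw over the core count.
total_watts = 1900   # peak draw reported for the overclocked Xeon
cores = 56           # Xeon W9-3495X core count
per_core = total_watts / cores
print(f"{per_core:.2f} W/core")   # 33.93

# Why 16 x 43 W is the wrong model for a 7950X under all-core load:
# 43 W is a single-core boost figure, not a sustained per-core budget.
print(16 * 43)                    # 688
```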
 
The 13900K uses 32 W for a single-threaded load
That must be subtracting off idle power consumption. Otherwise, it's quite at odds with Anandtech's measurement for the i9-12900K, although both claim to be measuring CPU package power. In that TechPowerUp chart, they claim the i9-12900K's single-threaded power is only 26 W, yet Anandtech got 78 W (or 71 W, if you subtract off idle).

The other conclusion we might draw is that the MP3 encoder test is rubbish at showing the max sustained single-threaded power draw. I think this is definitely a factor, and probably the dominant one.

Here's the single-threaded data from Tom's review of the i9-13900K:
[Chart: Tom's Hardware single-threaded power data for the i9-13900K]
It's at least closer to the Anandtech POV-Ray data, but a wide gulf still exists. Sadly, I haven't found anyone else testing these CPUs with POV-Ray @ 1 thread.
 
My old Xeon idles at about 75 W, which works out to about 4.68 W per core, with 8 DIMMs and a GTX 1650 in the system.
The CPU's power is limited to 140 W, which is 8.5 W for each core. At that rate, a 1,200 W power draw could run 141 cores :) it's insane
 
That must be subtracting off idle power consumption. Otherwise, it's quite at odds with Anandtech's measurement for the i9-12900K, although both claim to be measuring CPU package power. In that TechPowerUp chart, they claim the i9-12900K's single-threaded power is only 26 W, yet Anandtech got 78 W (or 71 W, if you subtract off idle).
Quote from that Anandtech review:
Because this is package power (the output for core power had some issues), this does include firing up the ring, the L3 cache, and the DRAM controller, but even if that makes 20% of the difference, we’re still looking at ~55-60 W enabled for a single core.
I didn't notice techpowerup saying anything about it so I don't know if they are using core only or package power.
Here's the single-threaded data from Tom's review of the i9-13900K:
So if it uses 46 W for a power virus, why is 33 W for a normal workload so unbelievable?!
Heck, this is package power as well, so the core alone is lower than 46 W and could even be around 33 W.
 
My old Xeon idles at about 75 W, which works out to about 4.68 W per core, with 8 DIMMs and a GTX 1650 in the system.
The CPU's power is limited to 140 W, which is 8.5 W for each core. At that rate, a 1,200 W power draw could run 141 cores :) it's insane
That's a little sloppy. Your per-core figure is actually 8.75 W (140 W / 16 cores), and that would extrapolate to 137 cores @ 1.2 kW package power.
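A quick sketch of the corrected arithmetic in Python (assuming the 140 W limit and the 16-core part described above):

```python
power_limit = 140    # W, the old Xeon's stated CPU power limit
cores = 16           # core count of that Xeon
per_core = power_limit / cores
print(f"{per_core} W/core")               # 8.75, not 8.5

budget = 1200        # W, the hypothetical power budget
print(f"{int(budget / per_core)} cores")  # 137, not 141
```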

Your cache & memory subsystem would run out of gas well before that point, however.

I actually like old servers, but I'd take a 5950X or even a 3950X, before going with an old 16-core server, if I could live with the DRAM capacity limits and didn't need a big RAID or anything like that.
 
I didn't notice techpowerup saying anything about it so I don't know if they are using core only or package power.
Right at the top of the page you linked, they state they're measuring:
"voltage, current and power flowing through the 12-pin CPU power connector(s)"

So if it uses 46 W for a power virus, why is 33 W for a normal workload so unbelievable?!
Context is key. The article quoted the 1.9 kW figure for a Cinebench workload and the post you replied to was asking about the per-core utilization of that benchmark. It's not a good answer to the question if you cite data from a markedly less-stressful workload as though it's comparable.

Heck, this is package power as well, so the core alone is lower than 46 W and could even be around 33 W.
Could be. If you're aware of any more relevant benchmarks, please share them.
 
No offense, but this feels like motivated reasoning. If you have more to say about your rationale, I'd be interested in hearing it.

I'm just saying that for a sustained all-core overclock ~900 MHz past the max single-core turbo speed, which no doubt requires a bit of a voltage bump, that's not terrible. I've personally drawn over 800 W to a Pentium 4 trying to set a world record, so 56x the cores at only about double the wattage is great in my mind.
 
Any CPU that has the ability to consume 1,900 watts of power is not impressive.

Compare it to the EPYC Genoa: 96 cores, 4.4 GHz, and peak power consumption of 400 watts, for HALF the price of what Intel is demanding for their "new" chip that uses two-year-old cores. In fact, the 96-core EPYC chip is rated for 320-400 W while the Xeon is rated for 360-420 W. And remember, when you buy this Xeon, Intel still holds some features ransom until you pay them even more money!

Performance isn't the only metric that matters anymore. Efficiency should be the biggest consideration, and Intel has never really been great with that, and now they're simply incapable of competing unless they throw it out the window and set it on fire.
 
Any CPU that has the ability to consume 1,900 watts of power is not impressive.
I'm impressed, in a way. I've heard references to mainframe CPUs using multiple kW of power, but I don't know that I've ever heard of an x86 CPU using kilowatts of power before, and this one is using nearly 2!

I'm not saying it's a good thing, but I'm kind of amazed that everything from the board, VRM and the CPU itself could handle so much current!

Compare it to the EPYC Genoa:
No, this feat isn't about energy efficiency. By definition anything being overclocked is running well beyond its peak-efficiency point - not to mention LN2 overclocking!

Efficiency should be the biggest consideration, and Intel has never really been great with that,
For a lot of people & industry, efficiency is a major consideration. Just not 'leet gamerz or overclockers.

Intel was doing pretty well with efficiency improvements, from about the time of Core 2 until Kaby Lake. That's when they started to go off the rails.

The whole NUC/SFF movement came about because Intel's laptop CPUs were powerful enough to handle even mid-level desktop workloads.

remember, when you buy this Xeon, Intel still holds some features ransom until you pay them even more money!
Not this Xeon. It's from their Xeon W lineup, which I believe doesn't have any "Intel On Demand"-controlled features.



Even among the Xeon Scalable product line, you will find that not all models support "Intel On Demand". The charts in this article detail which ones do, in case you wish to be better informed on the matter. However, it's not at all relevant to this article/overclocking achievement.

 
Intel still holds some features ransom until you pay them even more money!
And do you know why this is possible?!
Because everybody, including servers, has moved on from using just CPU cores to do their work. You want to talk efficiency?! Everybody in the industry is using accelerators now, which is why CPU-core efficiency is much less important.
It's only still important for the few fields that don't have any accelerators.
 
everybody, including servers, has moved on from using just CPU cores to do their work. You want to talk efficiency?! Everybody in the industry is using accelerators now, which is why CPU-core efficiency is much less important.
It's only still important for the few fields that don't have any accelerators.
I think you're overstating your case, but I agree that purpose-built accelerators are part of the efficiency formula for many workloads. E-cores are another important part, as is in-package memory - both of which feature in current Intel products.

Looking ahead, processing-in-memory and silicon photonics will be key. I know Intel has been talking about the latter for a while. Both are specifically highlighted in Lisa Su's keynote at this year's ISSCC.



Missing from that article are key details about what an absolutely pressing problem energy efficiency poses for further compute scaling. I hate to link to WCCFTech, but their writeup on that talk was truly better and published at least a day earlier. Normally, I don't have such positive things to say about them.



I tried to avoid getting onto a tangent about efficiency, but I guess I failed. Anyway, maybe @iocedmyself will find this interesting.
 