Exclusive: Testing Intel's Unreleased Core i9-9900KS

You can't just mix whatever numbers you want to get the outcome you like.
At 65W the 3700X will run an all-core clock of about 3.6, maybe 3.7 GHz, which means anything that isn't GPU-limited will run at least 35% faster on a 5GHz all-core 9900KS, and higher still if you push it further.
You only get a 5-10% slower Ryzen system if you push your 3700X quite hard to reach a 4.4GHz all-core clock, which isn't even possible on most chips; you certainly won't manage that at a 65W TDP, and probably not on a cheap motherboard either.
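To show where that 35% comes from, here's a minimal sketch of the clock arithmetic (it assumes perfect frequency scaling and identical IPC, neither of which holds exactly in practice, and the 3.7 GHz figure is just the estimate above):

```python
# Rough clock-ratio comparison; the names and the 3.7 GHz estimate are
# assumptions from this discussion, not measured figures.
ryzen_3700x_allcore_ghz = 3.7   # estimated all-core clock at 65W
i9_9900ks_allcore_ghz = 5.0     # advertised all-core boost

speedup = i9_9900ks_allcore_ghz / ryzen_3700x_allcore_ghz - 1
print(f"Clock-only advantage: {speedup:.0%}")   # ~35%
```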
Who said anything about the KS, which isn't even out yet? Vinay2070 was talking about the 9900K's retail price. When the KS is actually out, we'll talk some more, but keep in mind that 5 GHz all-core is only in boost mode, which according to Intel's specs is limited in time. The 3700X, though, has a base all-core clock of 3.6 GHz, but easily boosts to 3.8 GHz and stays there as long as it's kept cool enough - in which case it can even go higher.
And don't get your pants in a knot; it's almost embarrassing seeing how AMD's lowest high-tier CPU can make Intel's top-tier CPU sweat... Tier for tier, we'd need to compare the 9900K to the 3900X, and then the 9900KS to the 3950X - neither of the latter two is out yet.
 
I'm calling bias here from THG, since even though neither CPU is out, one got a big eight-page front-page test while the other has a puny report comparing it to Threadripper hidden away in the other news... :p I guess THG didn't get a leaked 3950X sample to play with.
 

bit_user

Polypheme
Ambassador
The in-silicon mitigations are not meant to hurt performance when compared against a software-mitigated system. If anything, they are meant to claw lost performance back. So, was this comparison made against a 9900K with software mitigations that take care of the exact same bugs? I don't think so. In any case I am willing to bet that this slight 1-2% IPC performance regression has nothing to do with the mitigation and instead is due to being a pre-release sample and due to using an early bios.
You're probably right. I think those tests don't do a good job of testing cases where the mitigations really affect performance, or else we'd have seen at least a few clear wins.

However, we don't know what Intel might've done to the "uncore", which could be largely responsible for the substantial power savings. So, we'll have to wait and see if/how this situation changes with the released version.

On a different note, this is the third time (in the mainstream platform) in recent years that Intel has re-released the exact same processor (4770K -> 4790K, 8700K -> 8086K, 9900K -> 9900KS) with improved frequencies achieved by process maturity alone (i.e. as the process matures, what was once considered to be a golden sample is now the norm so they can increase stock frequencies).
No, it's not the exact same. It's a different stepping, which means they tweaked aspects of the actual design.

Also, Intel has been on 14 nm++ for so long that I'd have thought they had already tuned the heck out of that process. I mean, what is 14 nm++, if not two major iterations of tuning of their original 14 nm process? I'll accept that it might've taken them a little time to tweak even 14 nm++ for maximum clock speed. However, at this point, I wouldn't think there have been any process changes unless they specifically say so.


And the fifth time they have done it if you also count ... or a move to an entirely new process node (2600K->3770K), but as these moves also included added features (PCIE3, boost algorithm improvements, 4K video encoding) I don’t personally count them.
Those are different CPUs! Just because they have the same overall microarchitecture does not make them the same processor! When they switch to a different process node, they would redesign the entire chip with a different library. So, even if it's designed according to the same high-level plan, it's fundamentally a different chip.

And that is exactly what will happen again. Comet Lake is expected to be available for purchase by February 2020, i.e. in four months, on a new platform, and is poised to have the same or even higher stock speeds - this time with 10 cores.
I don't want 10 cores. The further they push it, the higher latencies will get, and the more power will be burned by the "uncore". And dual-channel memory will increasingly become a bottleneck.

On that note, I'll be looking carefully at how well the Ryzen 9 3950X scales. I think it'll probably make a good case for the ThreadRipper platform.
 

PCWarrior

Distinguished
No, it's not the exact same. It's a different stepping, which means they tweaked aspects of the actual design.

Also, Intel has been on 14 nm++ for so long that I'd have thought they had already tuned the heck out of that process. I mean, what is 14 nm++, if not two major iterations of tuning of their original 14 nm process? I'll accept that it might've taken them a little time to tweak even 14 nm++ for maximum clock speed. However, at this point, I wouldn't think there have been any process changes unless they specifically say so.

Those are different CPUs! Just because they have the same overall microarchitecture does not make them the same processor! When they switch to a different process node, they would redesign the entire chip with a different library. So, even if it's designed according to the same high-level plan, it's fundamentally a different chip.
Of course, but obviously I didn't mean "exact same" in the absolute literal sense of the word! The processors are still 99.999% identical in terms of design and core functionality, and that counts as the same in my book. That is not to diminish the work of the Intel engineers who had to develop a new mask for the new stepping in order to clean the design of bugs (which up to that point were handled by microcode), introduce security mitigations in the design, improve yields and improve the bin splits. This is what I tried to simplistically encapsulate under the general umbrella of "process maturity". As for genuine process shifts or iterations, yes, they do involve a redesign of the CPU with new libraries, which is a serious undertaking, but a redesign of largely the same thing nonetheless and, sans the frequency or efficiency bump, practically the same product to the end user.

I don't want 10 cores. The further they push it, the higher latencies will get, and the more power will be burned by the "uncore". And dual-channel memory will increasingly become a bottleneck.

On that note, I'll be looking carefully at how well the Ryzen 9 3950X scales. I think it'll probably make a good case for the ThreadRipper platform.
I recall a few years back several tests being done on Intel's ring bus latency, in particular in relation to the dual ring bus design in high-core-count Xeons. The maximum number of cores on a single ring bus was 12, and pretty much the conclusion was that up to six cores you saw virtually no latency impact, but practically you could push it to 10 cores without a noticeable negative impact. In any case, I hope that with Comet Lake, Intel tweaks and improves things a little and reduces the maximum core-to-core latency. As for RAM becoming a bottleneck, I think it will be alright as long as the frequency is high enough - a 3466MHz C16 kit and up should be fine. We will see.
 
The CPU is a little faster than the 9900K, and, ergo, many folks will buy it.

AMD fans that love Cinebench will point to the 3900X...gamers will point to the 9700K and 9900K and now 9900KS in the lead in gaming....

will this change when 3950X arrives?

Stay tuned!
 

bit_user

Polypheme
Ambassador
Of course, but obviously I didn’t mean to mean “exact same” in the absolute literal sense of the word!
Well, then maybe say what you actually mean. If you meant "essentially the same", then don't say "the exact same".

The processors are still identical by 99.999% in terms of design and core functionality and that counts as being same in my book.
No, I wouldn't say five 9's. The mitigations can't have been metal-level fixes, so probably not more than three 9's. You might try to downplay this, as well, but 2 orders of magnitude is kind of a big deal. The point is, if they changed it at the RTL level, then there could well be other changes in there that might address various power efficiency or scaling issues.

As for the genuine process shifts or iterations, yes it does involve a redesign of the cpu with new libraries, which is a serious undertaking, but a redesign of largely the same thing nonetheless and, sans the frequency or efficiency bump, practically the same product to the end user.
Eh, if you're willing to go that far, then it's not really a stretch to say that Broadwell vs. Skylake are "practically the same product to the end user".

I hope that with Comet lake, Intel tweaks and improves things a little and reduces the maximum core to core latency.
Maybe they'll even drop the ring bus, in favor of something else. Anyway, I don't need 10 cores, and I don't really see why Intel is intent on having that battle with AMD.

As for RAM becoming a bottleneck, I think it will be alright as long as the frequency is high enough - a 3466MHz C16 kit and up should be fine.
When Intel first introduced quad-channel memory, their Sandybridge E CPUs maxed out at 8 cores. Sure, memory has gotten faster, but so have the cores. And there's also an iGPU that wants its share of the bandwidth - which Sandybridge E didn't have. Or, if not that, then (pretty soon) a PCIe 4.0-connected dGPU.

Consider this: https://ark.intel.com/content/www/us/en/ark/compare.html?productIds=186605,64582

To feed the 8-core Xeon E5-2687W (3.1 GHz base; 3.8 GHz turbo), Intel provided 51.2 GB/s of memory bandwidth.
To feed the 8-core i9-9900K (iGPU + 3.6 GHz base, 5.0 GHz turbo), Intel provided 41.6 GB/s of memory bandwidth.

Granted, the E5-2687W has more PCIe lanes to feed that collectively add up to even more than the i9's GPU could probably consume. However, if you compare base GB/s per core GHz, the Sandybridge Xeon gets 2.06 bytes per core clock vs. the Coffee Lake-R i9's 1.44 bytes per core clock. So, just looking at the cores, the i9 is already at a significant deficit.
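If you want to check that arithmetic yourself, here's a quick sketch using only the spec-sheet numbers (the helper name is mine, and it ignores the iGPU and PCIe traffic entirely):

```python
# Spec-sheet bytes of memory bandwidth available per core clock cycle.
def bytes_per_core_clock(bandwidth_gb_s, cores, base_ghz):
    return bandwidth_gb_s / (cores * base_ghz)

print(bytes_per_core_clock(51.2, cores=8, base_ghz=3.1))  # Xeon E5-2687W: ~2.06
print(bytes_per_core_clock(41.6, cores=8, base_ghz=3.6))  # i9-9900K:      ~1.44
```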

The extent to which this matters will depend entirely on workload. For computing digits of Pi, it won't matter a bit. Yet, hosting a large, in-memory database was probably already memory-bound with even fewer cores.
 

PCWarrior

Distinguished
Well, then maybe say what you actually mean. If you meant "essentially the same", then don't say "the exact same".
Well, I apologise for my lack of precision but given the fact that in the first paragraph of my original post I was talking about the in-silicon changes that were made in this version of the processor I thought it would be obvious that the phrase “exact same” wasn’t meant to be taken 100% literally. But if we are going to go down the rabbit hole of such level of precision and pedantry then we should also say that we cannot produce exact same cpus due to manufacturing tolerances. And that, even if we could somehow manufacture cpus that have the exact same atomic or even subatomic structure, they can still never be exactly the same on quantum level because of the Pauli exclusion principle. And even if we could miraculously make them completely identical then there is the philosophical objection that they are not the exact same thing but merely identical clones and that the only way to produce the exact same thing is to go back in time and bring it to the present creating multiple instances of that exact same thing. And even then, it would be debatable…
No, I wouldn't say five 9's. The mitigations can't have been metal-level fixes, so probably not more than three 9's. You might try to downplay this, as well, but 2 orders of magnitude is kind of a big deal. The point is, if they changed it at the RTL level, then there could well be other changes in there that might address various power efficiency or scaling issues.
Hmm. Given that the 9900K has over 3 billion transistors, I am not so sure that 0.001% is such a small percentage, given that in absolute terms it translates to over 30 thousand transistors' worth of changes. A 0.1% worth of changes on the 9900K, like you suggest, if all in the design, would be 3 million transistors - equal to the transistor count of the original Pentium, i.e. an entire older CPU's worth of changes.
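Just to show the arithmetic behind those figures (the ~3 billion transistor count is a rough public estimate, not an official number):

```python
# Transistor-budget arithmetic for the percentages discussed above.
transistors_9900k = 3_000_000_000       # rough estimate for the 8-core die

print(transistors_9900k * 0.001 / 100)  # 0.001% -> ~30,000 transistors
print(transistors_9900k * 0.1 / 100)    # 0.1%   -> ~3,000,000 (original Pentium territory)
```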

Maybe they'll even drop the ring bus, in favor of something else. Anyway, I don't need 10 cores, and I don't really see why Intel is intent on having that battle with AMD.

I doubt they will drop the ring bus with this generation. You may not need 10 cores, but the introduction of a new 10c/20t flagship in the same price tier as the current 8c/16t mainstream flagship will mean the release of an 8c/16t part (or an equivalent 10c/10t part) in the lower price tier now occupied by the 8c/8t i7. Personally, I am expecting the following lineup of unlocked Comet Lake CPUs:
i9 10900K 10C/20T - $499
i7 10700K 8C/16T or 10C/10T - $389
i5 10600K 6C/12T - $279
i3 10350K 4C/8T - $179

When Intel first introduced quad-channel memory, their Sandybridge E CPUs maxed out at 8 cores. Sure, memory has gotten faster, but so have the cores. And there's also an iGPU that wants its share of the bandwidth - which Sandybridge E didn't have. Or, if not that, then (pretty soon) a PCIe 4.0-connected dGPU.

Consider this: https://ark.intel.com/content/www/us/en/ark/compare.html?productIds=186605,64582

To feed the 8-core Xeon E5-2687W (3.1 GHz base; 3.8 GHz turbo), Intel provided 51.2 GB/s of memory bandwidth.
To feed the 8-core i9-9900K (iGPU + 3.6 GHz base, 5.0 GHz turbo), Intel provided 41.6 GB/s of memory bandwidth.

Granted, the E5-2687W has more PCIe lanes to feed that collectively add up to even more than the i9's GPU could probably consume. However, if you compare base GB/s per core GHz, the Sandybridge Xeon gets 2.06 bytes per core clock vs. the Coffee Lake-R i9's 1.44 bytes per core clock. So, just looking at the cores, the i9 is already at a significant deficit.

The extent to which this matters will depend entirely on workload. For computing digits of Pi, it won't matter a bit. Yet, hosting a large, in-memory database was probably already memory-bound with even fewer cores.
As you have very correctly pointed out, the extent to which memory bandwidth matters depends on whether the workload makes enough use of it for it to become the bottleneck. It should also be mentioned that the bandwidth Intel quotes is calculated at stock memory speeds, which in the case of the 9900K is 2666MHz (2.666 GT/s x 2 channels x 64 bits / 8 = 42.6GB/s - by the way, Intel made a typo there, saying 41.6GB/s). Using dual-channel memory at 3200MHz increases bandwidth to 51.2GB/s, equal to that of the 1600MHz quad-channel setup of the Sandybridge Xeon in your example.
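A quick sketch of that bandwidth formula (theoretical peak only; sustained bandwidth in practice is lower, and the helper name is just mine):

```python
# Peak DDR4 bandwidth = transfer rate (MT/s) x channels x 8 bytes per transfer.
def ddr4_peak_gb_s(mts, channels=2, bytes_per_transfer=8):
    return mts * channels * bytes_per_transfer / 1000

print(ddr4_peak_gb_s(2666))  # ~42.7 GB/s - the 9900K's stock spec
print(ddr4_peak_gb_s(3200))  # 51.2 GB/s  - matches the quad-channel 1600MHz Xeon
```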
 

bit_user

Polypheme
Ambassador
we should also say that we cannot produce exact same cpus due to manufacturing tolerances.
Manufacturing tolerances exist to give designers parameters within which to ensure their chips function consistently and reliably. It was clear that you were talking about the chip design and the manufacturing process - not some kind of science fiction world where atomically-accurate cloning is suddenly possible.

Given that the 9900K has over 3 billion transistors, I am not so sure that 0.001% is such a small percentage, given that in absolute terms it translates to over 30 thousand transistors' worth of changes. A 0.1% worth of changes on the 9900K, like you suggest, if all in the design, would be 3 million transistors - equal to the transistor count of the original Pentium, i.e. an entire older CPU's worth of changes.
These CPUs are octo-core, meaning mitigations will likely be replicated (at least) 8 times. Even LLC is divided into slices that are coupled with the cores.

https://en.wikichip.org/wiki/intel/microarchitectures/coffee_lake

Because modern CPUs spend a lot of their transistor budget to optimize performance, you get pipelining and replication of functions within each core, cache block, etc. Therefore, it's not a given that a particular mitigation will even exist in a single, localized place, per-core. And, as the mitigations need to be replicated and spread out over multiple locations in each core, the transistor footprint obviously grows.

You may not need 10 cores but the introduction of a new 10c/20t flagship in the same price tier as the current 8c/16t mainstream flagship will mean the release of an 8c/16t part (or an equivalent 10c/10t part) in the lower price tier now occupied by the 8c/8t i7.
My concern is that ring bus latency and power scales with the number of stops, and that number is not tied to how many cores are actually enabled.

As you have very correctly pointed out, the extent that memory bandwidth matters depends on whether the workload makes such use of it that it becomes the bottleneck.
Yeah, I haven't seen recent memory-scaling benchmarks, but it would be interesting to know how much the Ryzen 9 3900X and i9-9900K are already hampered by their dual-channel memory, on desktop workloads.
 
Um, it's not even pretending to be a new generation. How much improvement are you expecting, for simply adding a 'S' on the model number?

For me, the power graphs (page 2) say it all:

[Power consumption chart from page 2 of the review]

152 vs. 200 Watts? That's huge!

Or, to look at it another way, 4% more overclocking for 10% less power. Definitely a worthwhile update!
Are you sure????
Intel loves its launch-day PowerPoint claims.
The 7700K was demoed at 5GHz but would only overclock to 4.8GHz; the 9900K was demoed at 5GHz but needs 280mm-plus watercooling.

A 9900K at 1.26V and 5GHz already draws 200W+, and 1.26V is a good chip - at 1.3V it needs 230-250W.
A 9900KS at 150W? Hahahaha, what is that supposed to be, 1.2V??????
 

joeblowsmynose

Distinguished
Not very exciting ... but if you were going to buy a 9900K anyway and have a massive cooling system, then for the same price for the CPU, why not? It breathes just a little more life into 14nm++++++, which I am slightly impressed with.

Dual 360 rads on a custom cooling loop tho ... lol.

Due to the massive cooling system they set this up with, I wonder if the power consumption tests aren't a little skewed by that? I know even my Ryzen consumes a noticeably larger amount of power when it's hot than when it's relatively cool, under the same load.

I never used to think temp influences power consumption to a noticeable degree, but it definitely does make a big difference.
 
Not very exciting ... but if you were going to buy a 9900K anyway and have a massive cooling system, then for the same price for the CPU, why not? It breathes just a little more life into 14nm++++++, which I am slightly impressed with.

Dual 360 rads on a custom cooling loop tho ... lol.

Due to the massive cooling system they set this up with, I wonder if the power consumption tests aren't a little skewed by that? I know even my Ryzen consumes a noticeably larger amount of power when it's hot than when it's relatively cool, under the same load.

I never used to think temp influences power consumption to a noticeable degree, but it definitely does make a big difference.

It shouldn't make enough of a difference though. The only purpose of the cooling is to move the heat from the CPU to the cooling block.

I wouldn't be surprised if Intel was able to refine their 14nm to use less power. They did the same thing with 65nm and Core 2 with the G0 stepping of the Q6600. The CPU used less power, ran slightly cooler and needed a lower voltage to remain stable.

https://www.anandtech.com/show/2303/4

It's not uncommon for CPU revisions and process refinements to do this.
 

joeblowsmynose

Distinguished
It shouldn't make enough of a difference though. ...

That's what I used to think ... then I heard someone on this forum say that it does make a difference, after Paul Alcorn mentioned it and someone tried to say he wasn't right (might have even been you), so I did some testing on my own PC recently.

According to HWMonitor, my R7 1700 (non-X, overclocked and overvolted) consumes ~110W while 3D rendering (more than a CPU-Z stress test) if I start a large render using Corona or the built-in ART renderer (fairly similar in terms of demands on the CPU), in the first 30 seconds or so while the CPU is at ~50C. After, say, 20 minutes of rendering, my AIO liquid is thoroughly saturated, I'm at about 75C, and HWMonitor claims my CPU is now using ~130W. If I slow the fans and let the temp get to ~85C, I pull over 140W. That's a 30W difference right there on a 35C delta.
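Turning those readings into a rough rule of thumb (one chip, one board, software power readings - so treat the numbers as illustrative only):

```python
# Rough linear fit of package power vs. die temperature from the readings above.
readings = {50: 110, 75: 130, 85: 140}   # CPU temp (C) -> reported power (W)

temps, watts = list(readings), list(readings.values())
slope = (watts[-1] - watts[0]) / (temps[-1] - temps[0])
print(f"~{slope:.2f} W of extra draw per degree C at this load")  # ~0.86 W/C
```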


The only purpose of the cooling is to move the heat from the CPU to the cooling block.

The hotter a CPU is the more electrons leak.


I wouldn't be surprised if Intel was able to refine their 14nm to use less power.

I would be surprised if they didn't improve efficiency at least a little, but that has nothing to do with the fact that the power numbers in general could be at least somewhat skewed by this in many reviews, where entirely different levels of cooling are applied to the different processors in the test.
 

bit_user

Polypheme
Ambassador
The hotter a CPU is the more electrons leak.
Yes, that seems to be the issue, if I understand correctly. There's a lot of info to be found, if @jimmysmitty would like to have an informed opinion on the matter.

One of the first hits I got looked into power vs. temperature scaling on the venerable i7-2600K, shortly after it was released.


merely allowing the operating temperature of the CPU itself to rise from 47°C to 96°C results in the CPU's power consumption increasing by 23W!

So, this is nothing terribly new, or even specific to AMD or recent TSMC process nodes.

In the 3rd post, the author gets to the underlying phenomenon:

To first-order, temperature only effects power-consumption in terms of static leakage. Of course temperature also effects dynamic power-consumption in terms of increasing resistance of the copper wires and so on, but the contributions from those terms are not meaningful in comparison to the magnitude of the impact that temperature has on power-consumption attributable to static leakage.

And I rather like this bit:

At 2.0GHz clocks, with the Vcc reduced to 0.822V (IBT stable), the 2600K consumes 24W at 38°C under load with IBT.

At 3.4GHz stock clocks, with the Vcc reduced to 1.038V (IBT stable), my 2600K consumes 65W at 48°C under load with IBT.

Pushing it up to 5.0GHz, with the Vcc set to 1.488V (IBT stable), this 2600K consumes 227W at 93°C under load with IBT.

According to the equations used here to describe power-consumption for this chip, theoretically if I could clock my chip to 6GHz while keeping the temps at 93°C the chip would need 2.05V and it would consume 541W during IBT.

Going to a more ludicrous extreme, 7GHz would require 2.94V and 1.4KW of juice
This sums it up, nicely:

It is exponential with respect to temperature, but definitely you could form an easy rule of thumb to treat it as linear because the worse that will come of such an approximation is that you'd be off by a watt or two (no big deal).

Before I came to the realization that I needed to include the Poole-Frenkel effect I was treating the temperature effect as a linear one and it wasn't all that bad.

I was just kinda blown away, I was not expecting the magnitude of the effect, when I saw the data.

Take for example the 2GHz chip at 1.290V and a cool 47°C, it consumes 68W. But let those temps rise from 47°C to 96°C and the power-consumption rises from 68W to 91W.

Consider that this 23W represents a 33% increase in power-consumption D: I just had no appreciation for how deleterious higher operating temps were for the power-consumption of the CPU.
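If anyone wants to play with those 2600K numbers, here's a minimal sketch of the usual dynamic-power relation, P ∝ f·V², applied to the figures quoted above (it deliberately ignores static leakage, which is exactly the temperature-dependent term being discussed, so the 5GHz prediction comes out low):

```python
# Scale dynamic power by frequency and voltage-squared from a measured baseline.
def scale_dynamic_power(p0_w, f0_ghz, v0, f1_ghz, v1):
    return p0_w * (f1_ghz / f0_ghz) * (v1 / v0) ** 2

baseline = (24, 2.0, 0.822)                         # 24 W at 2.0 GHz / 0.822 V, 38 C
print(scale_dynamic_power(*baseline, 3.4, 1.038))   # ~65 W  (matches the measured 65 W)
print(scale_dynamic_power(*baseline, 5.0, 1.488))   # ~197 W (measured 227 W at 93 C; the gap is leakage)
```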
 
Yes, that seems to be the issue, if I understand correctly. There's a lot of info to be found, if @jimmysmitty would like to have an informed opinion on the matter. One of the first hits I got looked into power vs. temperature scaling on the venerable i7-2600K, shortly after it was released. [...]
Where are you trying to go with this?
All CPUs are designed to stay within a max temperature at default settings (and hence a max power draw as well): they will ramp up the cooler to stay within it, start to throttle if that isn't enough, and just shut down if all else fails.
Power consumption (the overclock, basically) determines how fast all of this happens, not the other way around.
 
Yes, that seems to be the issue, if I understand correctly. There's a lot of info to be found, if @jimmysmitty would like to have an informed opinion on the matter. One of the first hits I got looked into power vs. temperature scaling on the venerable i7-2600K, shortly after it was released. [...]

The only thing I can say is that we would need an in-depth, wide array of testing, mainly because there are so many variables that can change things easily. This guy did his own testing on a single CPU and board setup. It is one data point, but we would need to test hundreds of chips on hundreds of board designs to see if it is consistent or if it depends on other factors, such as the board's power delivery system.

Until then it is possible; I didn't say it wasn't. Just that it shouldn't make a massive enough difference to matter.

Even so, the 9900KS at stock on the same cooler as the 9900K at 5GHz (the Corsair Hydro H115i) used less power (152W vs 200W).
 

bit_user

Polypheme
Ambassador
Where are you trying to go with this?
I'm just supporting @joeblowsmynose's post:


Specifically, his last point:
Due to the massive cooling system they set this up with, I wonder if the power consumption tests aren't a little skewed by that? I know even my Ryzen consumes a noticeably larger amount of power when it's hot than when it's relatively cool, under the same load.
I never used to think temp influences power consumption to a noticeable degree, but it definitely does make a big difference.


All CPUs are designed to stay within a max temperature at default settings (and hence a max power draw as well): they will ramp up the cooler to stay within it, start to throttle if that isn't enough, and just shut down if all else fails.
Power consumption (the overclock, basically) determines how fast all of this happens, not the other way around.
If you care to follow the thread, it's all there for you. I'm not going to spoon-feed you.
 

bit_user

Polypheme
Ambassador
The only thing I can say is that we would need an in-depth, wide array of testing, mainly because there are so many variables that can change things easily. This guy did his own testing on a single CPU and board setup. It is one data point, but we would need to test hundreds of chips on hundreds of board designs to see if it is consistent or if it depends on other factors, such as the board's power delivery system.
He did analyze the data against theoretical models and found that it fit nicely. That speaks volumes. Also, speaking of data, the amount and consistency of it suggests it's not simply due to weird experimental artifacts, though weirder things have probably happened.

If he were completely out on a limb, with nobody else seeing similar behavior and no good physical explanation of the phenomenon, you might have a point. Anyway, like I said, the information is out there, should you value the power of enlightenment over the complacency of ignorance.

https://arxiv.org/pdf/1404.3381.pdf

More at: https://www.google.com/search?q=cpu+power+consumption+vs+temperature
 
He did analyze the data against theoretical models and found that it fit nicely. That speaks volumes. Also, speaking of data, the amount and consistency of it suggests it's not simply due to weird experimental artifacts, though weirder things have probably happened.

If he were completely out on a limb, with nobody else seeing similar behavior and no good physical explanation of the phenomenon, you might have a point. Anyway, like I said, the information is out there, should you value the power of enlightenment over the complacency of ignorance.

https://arxiv.org/pdf/1404.3381.pdf

More at: https://www.google.com/search?q=cpu+power+consumption+vs+temperature

I wouldn't call it ignorance. Just wanting more than a single-sample run-through. There are CPUs that have higher leakage than others as well. AMD even sold one based on Phenom, the TWKR Edition, which was marketed on the basis that its higher leakage allowed for higher voltage and better chances at extreme LN2 overclocking.

However, a motherboard's power delivery system can affect so many factors in a CPU's power draw. Left at stock, a lot of power delivery systems use higher voltages than required for stable operation, which lends itself to higher power draw. Voltage has a much larger, more direct effect on power draw than excess heat does.

As I said, I would prefer to see an in-depth study of different models and different boards to see if it is consistent with the CPU or if other factors, such as the board's power delivery system or even the BIOS, can cause this.

And again, if you look at the numbers, even when using the same Corsair Hydro H115i cooling, the stock 9900KS used less power than a 9900K overclocked to 5GHz. That goes against the idea that the lower power draw is due to a beefier cooling system, since the same cooler produced a lower power draw.

These are top-binned parts and later steppings. I also provided a link showing a later stepping using less power. So combine a newer stepping, process refinements and top-binned chips, and it's not hard to see that these CPUs can use less power than previous versions even on the same cooling.
 

bit_user

Polypheme
Ambassador
So combine a newer stepping, process refinements and top binned chips its not hard to see that these CPUs can use less power than previous versions even on the same cooling.
I don't think anyone is saying the KS definitely doesn't use less power than the K.

All @joeblowsmynose was asking is how much the additional cooling capacity might've affected power consumption:
Due to the massive cooling system they set this up with, I wonder if the power consumption tests aren't a little skewed by that?
 
I don't think anyone is saying the KS definitely doesn't use less power than the K.

All @joeblowsmynose was asking is how much the additional cooling capacity might've affected power consumption:

I understand that. But even so, the H115i-cooled stock KS used less power than the H115i-cooled overclocked 9900K. With a 5.2GHz OC, the power jump should come mostly from the voltage increase - which of course also pushes the temperature up - but I do not think temperature is the only factor in the power draw increase.