Rest in peace, Intel... When you have to use that much power to prove your 'SPEED', it just goes to show how far behind the times you really are. Improve your chip design, like Apple and AMD, to improve power efficiency and speed. People really need to stop buying into their ecosystem.
You know they did it before, yet you act as if you have no idea about them...
Look at the 13900K and the 13900KS: they have the same max turbo power. It's the base power that gets increased, to accommodate the higher clocks under normal workloads.
Running with power limits removed is a different thing and should be treated the same way as overclocking.
To meet demand from such customers, Intel began offering special-edition KS processors in recent years, beginning with the Core i9-9900KS and continuing with the Core i9-12900KS and Core i9-13900KS. To that end, the chances that Intel is prepping a Core i9-14900KS are fairly high.
Rest in peace, Intel... When you have to use that much power to prove your 'SPEED', it just goes to show how far behind the times you really are. Improve your chip design, like Apple and AMD, to improve power efficiency and speed. People really need to stop buying into their ecosystem.
I don't know how Apple is doing, but between Intel and AMD, Intel has the much more efficient cores; even after a second "overclock", the 14900K at 6 GHz is still a good amount more efficient than the 7950X is at 5.7 GHz.
And if someone really cares more about efficiency than performance, the 13400 (4.6 GHz) is almost 50% more efficient than AMD's best.
It's just the E-cores that are terrible at higher power, plus the fact that everybody runs them overclocked out of their minds. https://www.techpowerup.com/review/intel-core-i9-14900k/22.html
Rest in peace, Intel... When you have to use that much power to prove your 'SPEED', it just goes to show how far behind the times you really are. Improve your chip design, like Apple and AMD, to improve power efficiency and speed. People really need to stop buying into their ecosystem.
It'd be easier if Intel didn't have such a huge advantage in bundle pricing and chipset reliability.
Intel is the best at WiFi/BT, LAN, USB/Thunderbolt, and their drivers. There really is no competition, especially when Intel offers discount pricing with their CPUs.
The only way you see Realtek, MediaTek, etc. parts on an Intel mobo/laptop is if it's priced so low that you can't figure out how the OEM is making any money on it. And even those tend to use older Intel WiFi/BT cards that were probably reclaimed from scrap.
Whereas AMD has the really awful RZ600-series WiFi/BT card. Its USB and chipset drivers work fine... until they don't, and you'll never know whether it was a bad driver or a hardware design flaw.
If you want Intel WiFi/BT, LAN, and Thunderbolt on an AMD mobo/laptop, it's often more expensive than an Intel mobo/laptop with equivalent specs and features.
And then we have Apple...
who has you spending $1600 for 8GB of RAM...
in 2024....
and the SSD and RAM cannot be replaced...
When I see something like this, all I really want to know is what they're cooling it with, assuming this is an accurate entry. It's pretty hard to dissipate that much heat on a desktop platform without exotic cooling, but exotic cooling would generally mean not hitting a thermal limit, which the test shows it doing.
As we've all known since ADL, if you remove the power limits the CPU will clock as high as it can until it hits 100°C, so the "high" power consumption is an eyeroll.
As we've all known since ADL, if you remove the power limits the CPU will clock as high as it can until it hits 100°C, so the "high" power consumption is an eyeroll.
I don't know how Apple is doing, but between Intel and AMD, Intel has the much more efficient cores; even after a second "overclock", the 14900K at 6 GHz is still a good amount more efficient than the 7950X is at 5.7 GHz.
And if someone really cares more about efficiency than performance, the 13400 (4.6 GHz) is almost 50% more efficient than AMD's best.
It's just the E-cores that are terrible at higher power, plus the fact that everybody runs them overclocked out of their minds. https://www.techpowerup.com/review/intel-core-i9-14900k/22.html
AMD has the issue of the I/O die: it draws power even if you are only running a single-core load, so it skews things a bit. Intel has a similar problem with Meteor Lake now. We'll have to wait for desktop chips to see how that plays out.
The i3-12100/13100 is amazingly efficient. You also get great efficiency out of the Ryzen 5600. I haven't seen anything with the 5600X3D in it yet, but you can see how well the 7800X3D is doing. Keeping the clock speed low is key.
Yeah, I know. I'm just tired of seeing everyone who cries over the power consumption when the limits are removed, acting like this is somehow a 400 W+ CPU or like the 13900K/14900K are 300 W+ CPUs. When the power limits are in place, they stay within ~10 W or so of said limit on even the heaviest workloads.
Intel and AMD are both guilty of blowing past their efficiency curves for maximum performance. That's why the 13400 and 7800X3D top that chart, and why the 13400 does well in multithreaded tests, trailing only the regular 7900 and the 7800X3D/7950X3D. The common thread among all of these CPUs is that the clock speed is kept under control.
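For what it's worth, the efficiency metric behind charts like the one linked above is just score per watt, which is why clock restraint dominates the ranking. A minimal sketch with made-up placeholder numbers (not measured results):

```python
def points_per_watt(score, watts):
    """Score-per-watt, the efficiency metric used by review charts."""
    return score / watts

# Illustrative placeholder numbers, not measured results:
flagship = points_per_watt(40000, 300)  # unrestrained high-clock flagship
midrange = points_per_watt(17000, 80)   # clock-limited mid-range part

# The lower-clocked chip wins on efficiency despite a far lower raw score.
assert midrange > flagship
print(f"flagship: {flagship:.1f} pts/W, mid-range: {midrange:.1f} pts/W")
```

The absolute numbers are irrelevant; the point is that halving clocks costs far less than half the score, so the ratio swings hard toward the restrained chip.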
Wow, 400 W, and the fanboys are creaming themselves. Intel has become the old AMD, where heat and power usage are thrown out the door in favour of clocks, and some on here claim Intel is more efficient!
To be fair, I think AMD made an error with the 7000-series CPUs: the X3D versions are really the sweet spot, because the standard chips are designed to simply run up to their thermal targets, and that does not work so well. I believe the next generation has this solved, from what I'm told.
The 3D chips are superb: the 7800X3D runs 5.0 GHz at 120 W and is currently the fastest gaming CPU bar none, while Intel's closest part pulls well over 260 W just to get close. The next series from AMD will carry a lot of this tech on board, as well as a more efficient I/O die and their 4c compact cores but with more speed. It's great for us, of course, but in no terms can Intel be said to be more efficient. Nope, nada, not this year.
That's what the claim was about: lots of power for speed. The highest clocks a CPU reaches are in single-threaded loads, and every core is much more efficient if you run it at low clocks.
"When you have to use that much power to prove your 'SPEED' "
Also, if you or anybody else has a formula to extract the performance and efficiency of the P-cores from a multithreaded result, one run on 8 P-cores with HT at 5.6 GHz plus 16 E-cores without HT at 4.4 GHz, please let me know.
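Absent a closed-form formula, one rough way to back the P-core contribution out of a mixed result is a second run with the P-cores parked and a linear subtraction. A sketch with hypothetical placeholder scores; the linearity assumption ignores shared power/thermal budgets and ring-clock effects, so treat it as an estimate:

```python
def p_core_score(full_score, e_only_score):
    """Rough P-core contribution, assuming cluster scores add linearly
    (they don't quite, due to shared power/thermal budgets)."""
    return full_score - e_only_score

# Hypothetical numbers for an 8P+HT @ 5.6 GHz / 16E @ 4.4 GHz chip:
full = 41000    # all cores enabled, hypothetical multithreaded score
e_only = 16000  # hypothetical second run with P-cores parked
print(p_core_score(full, e_only))  # estimated P-core share of the score
```

Dividing that share by the P-core cluster's measured power (if your board reports per-cluster power) would give a comparable efficiency figure, with the same caveats.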
And I bet some fool will try to cool it with a 212 Evo...
But really, 410 W sounds right for something pre-production, set high enough to cover as wide a range of chips as possible. AMD is even worse with its turbo voltages.
But as we know, manual tuning can reduce power draw and increase performance measurably. I point to TH's review of the 13900KS, where they (Paul Alcorn, managing editor) obtained a faster and more power-efficient result than a stock chip running without power restrictions, using 5% less power all-core and 15% less single-core.
Intel has been caught with its pants down since the day it lost the fab advantage. Aside from the infamous Prescott during the Pentium 4 era, I have not seen them this desperate to push the performance envelope so far beyond the chip's "comfort zone". 400 W for the CPU alone is a sign of trouble over at Intel.
Having said that, I think the observation is that node shrinks are no longer keeping up with the chase for higher-performance chips. The so-called 4/5 nm, for example, is just a name and not representative of the actual transistor size. Hence we start to see power consumption shooting upward just to deliver performance across both x86 and ARM-based chips, since you can't squeeze more hardware in and end up relying on higher clock speeds all the time.
Woah, nice CPU... Now I have a use for my 7,000 BTU split AC. Maybe a hand-made evaporator for the LGA 1700 socket... maybe it can overclock a little...
410 W CPU + 535 W AC = epic win. Even an AMD Threadripper cannot stand up to this power consumption.
AMD needs to try hard to beat Intel at this game.
I don't know how Apple is doing but between intel and AMD intel has the much more efficient cores, even after a second "overclock" the 14900k at 6Ghz is still a good amount more efficient than the 7950x is at 5.7.
If you want to see Zen 4 demonstrate good efficiency, a multithreaded workload with a 65/88 W power limit will force it into its peak-efficiency window. Just look at the 7900 strut its stuff!
I think the main problem isn't the I/O die, but rather that Zen 4 doesn't scale performance very well at the top of its frequency envelope. So its boost clocks are significantly beyond the point of diminishing returns.
If you just restricted its boost frequency by a couple hundred MHz, its efficiency would shoot up. You can see this in the multithreaded benchmarks I posted above, although they also amortize the I/O die power (to your point).
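As a back-of-the-envelope illustration of why: dynamic power scales roughly as C·V²·f while performance scales at best linearly with f, so the voltage bump needed for the last couple hundred MHz is disproportionately expensive. The voltage/frequency points below are hypothetical, not measured Zen 4 values:

```python
def rel_dynamic_power(freq_ghz, volts):
    """Relative dynamic power, P ~ C * V^2 * f (capacitance term dropped)."""
    return volts ** 2 * freq_ghz

# Hypothetical points near the top of a V/F curve:
stock = rel_dynamic_power(5.7, 1.35)       # full boost
backed_off = rel_dynamic_power(5.5, 1.20)  # ~200 MHz lower, lower voltage

power_saved = 1 - backed_off / stock
perf_lost = 1 - 5.5 / 5.7
print(f"~{power_saved:.0%} less power for ~{perf_lost:.0%} less peak clock")
```

With any plausible numbers, the squared voltage term dominates: a single-digit-percent clock sacrifice buys a double-digit-percent power reduction.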
Not really a valid argument, as the majority of those who would buy this chip won't do any of this. They will buy it and a mobo, put it together, and probably just use it. Maybe go into the BIOS to change a setting or two, but that's it... definitely not do anything to tune it.
I've found that a lot of people don't understand that the likely 410 W power draw of this CPU only occurs under stress-test-type conditions, and that it will usually draw less.
I'm not saying it can't draw that much; Intel 7 can easily draw more with smaller, older chips if you can cool it and give it a stress-test-type load. But it can also run efficiently, and the power draw can vary drastically depending on your clock speeds.
Power draw goes up a lot with clock speed, even with the same task being performed over the same amount of time, unless your load is small.
I made a nice example of this using different Windows power plans on the same power-hogging overclock: 6.0 GHz for 2 cores, 5.8 for all P-cores, 4.5 for all E-cores, 5.0 cache, HT disabled (which let me go from 5.5 all-P-core to 5.8 all-P-core), running at about 1.335 V under load depending on the CPU temp, on a 13900KF.
I ran CP2077 at 60 fps in a fairly CPU-intensive area in the DLC and took a screenshot, minimized the game with the Windows key, switched power plans, maximized the game, screenshotted, and repeated. Same 60 fps in every case, same scene, settings, game, etc., and the same CPU load over time. You can even see the shadows drift across the screen as in-game time passes while I switched power plans.
The power plans were:
1. 3.3 GHz max P-core (E-cores set as high as possible without increasing volts; cache auto-sets in this case), power plan created on a Power Saver base with minimum clocks raised to 1.4 GHz
2. 4.4 GHz max P-core, rest raised the same way, Power Saver base, 1.4 GHz mins
3. 5.5 GHz max P-core, rest raised the same way, Power Saver base, 1.4 GHz mins
4. 5.8 GHz max P-core, the full OC mentioned above, run on the Power Saver plan with 1.4 GHz mins
5. 5.8 GHz, same OC, on the Balanced power plan
6. 5.8 GHz, same OC, on the High Performance power plan
And if anybody wants to make power plans to check whether this really happens, here's how to do it on a CPU with P- and E-cores:
1. In a Command Prompt, PowerShell, or Terminal window, copy and paste whichever lines you want from below, one line at a time (only the part from powercfg through -ATTRIB_HIDE; everything after that is just my description of what the line exposes).
2. Hit enter.
3. Maybe restart when all done? It's been a while since I set those up.
4. Change your newly exposed Windows power plan options to your liking in: Change advanced power settings>Processor power management.
powercfg -attributes SUB_PROCESSOR 75b0ae3f-bce0-45a7-8c89-c9611c25e100 -ATTRIB_HIDE (exposes: maximum processor frequency)
powercfg -attributes SUB_PROCESSOR 75b0ae3f-bce0-45a7-8c89-c9611c25e101 -ATTRIB_HIDE (exposes: maximum processor frequency, efficiency class 1)
powercfg -attributes SUB_PROCESSOR bc5038f7-23e0-4960-96da-33abaf5935ec -ATTRIB_HIDE (exposes: maximum processor state)
powercfg -attributes SUB_PROCESSOR bc5038f7-23e0-4960-96da-33abaf5935ed -ATTRIB_HIDE (exposes: maximum processor state, efficiency class 1)
powercfg -attributes SUB_PROCESSOR 893dee8e-2bef-41e0-89c6-b55d0929964c -ATTRIB_HIDE (exposes: minimum processor state)
powercfg -attributes SUB_PROCESSOR 893dee8e-2bef-41e0-89c6-b55d0929964d -ATTRIB_HIDE (exposes: minimum processor state, efficiency class 1)
The P-cores are controlled by the "processor power efficiency class 1" lines and the E-cores by the normal lines, for Windows reasons.
The values you enter for max frequency may not match what you actually get on your PC; often the actual clocks land lower than the values you enter, but they scale proportionally. And the max frequency set in a Windows power plan cannot exceed the max frequency set in the BIOS; it can only decrease it.
I check what frequencies and corresponding voltages I get with HWiNFO, under a stress test and at idle.
The changes and newly enabled power plans persist across shutdowns and reboots, just like the power plans you are used to, and can be changed on the fly if you can get to the Control Panel.
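On that note, the clock capping itself can be scripted instead of clicked through. A sketch, assuming the standard powercfg scheme commands and the setting GUIDs from the lines above; the 4400/3300 MHz caps are example values, and the scheme-GUID placeholder is whatever powercfg prints for you (run from an elevated PowerShell):

```shell
# Duplicate the Balanced plan; powercfg prints the new scheme GUID.
powercfg -duplicatescheme SCHEME_BALANCED

# Cap max frequency (values in MHz) on AC power:
# efficiency class 1 (P-cores on hybrid chips)...
powercfg -setacvalueindex <new-scheme-GUID> SUB_PROCESSOR 75b0ae3f-bce0-45a7-8c89-c9611c25e101 4400
# ...and class 0 (E-cores).
powercfg -setacvalueindex <new-scheme-GUID> SUB_PROCESSOR 75b0ae3f-bce0-45a7-8c89-c9611c25e100 3300

# Switch to the new plan.
powercfg -setactive <new-scheme-GUID>
```

Same persistence behavior as plans made through the Control Panel, and you can flip between capped and uncapped schemes with two saved -setactive commands.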
On a similar note, whenever I see an Intel mobile chip throttle its iGPU to save power, I think this would fix that. Intel should really give the masses an easy tool to turn down CPU clocks when they want to save power.
That doesn't make it a 400 W+ CPU any more than my i7-920 is a 200 W+ CPU because I chose to heavily overclock it. I certainly wouldn't let Intel's stock power behavior run wild without power limits, given the way they jack up the voltage for minimal return.
OCCT is similar to running Intel Burn Test: a Linpack-based stress test. The problem is, it's a power hog; it will push CPUs to their extreme limit. It obviously cannot be compared to regular workloads, gaming and such. I'm sure the power numbers would not be off by that much anyway; 410 W for a consumer CPU is just insane.
I've found that a lot of people don't understand that the likely 410 W power draw of this CPU only occurs under stress-test-type conditions, and that it will usually draw less.
I made a nice example of this using different Windows power plans on the same power-hogging overclock: 6.0 GHz for 2 cores, 5.8 for all P-cores, 4.5 for all E-cores, 5.0 cache, HT disabled (which let me go from 5.5 all-P-core to 5.8 all-P-core), running at about 1.335 V under load depending on the CPU temp, on a 13900KF.
Is software compilation a "regular workload"? I do a lot of it throughout the day, and it's an all-core workload that can easily burn 180 W on an i9-12900 (until it gets throttled down to 65 W).