News Intel unwraps Lunar Lake architecture: Up to 68% IPC gain for E-cores, 16% IPC gain for P-Cores


baboma

Notable
Nov 3, 2022
Next up we have MSI's roll-out of its laptop line-up. I've omitted the two models w/o the AI designation. MSI is considered a 2nd-tier laptop vendor, relative to Lenovo/Dell/HP/Acer/Asus.

MSI Stealth A16 AI+ (premium gaming) - Ryzen AI w/ dGPU
MSI Creator A16 AI (premium creator) - Ryzen AI w/ dGPU
MSI Summit A16 AI+ (business) - Ryzen AI
MSI Prestige A16 AI (business) - Ryzen AI
MSI Prestige 13/14/16 AI+ Evo (business) - Lunar Lake x 3
MSI Summit 13 AI+ Evo (premium ultraportable) - Lunar Lake

LNL gets more design wins here, on par with Ryzen AI. Qualcomm is notably absent. But the same trend is seen here as with the Asus line-up: Ryzen gets wins for gaming & creator segments, while LNL serves the more conservative business segments. LNL did get the premium ultraportable win in Summit 13 AI+ Evo, boasting all-day battery use.

The demarcation line is more apparent with the MSI line-up: LNL wins the power-efficiency battle, while Ryzen wins the power battle.

https://msi.com/news/detail/MSI-Unv...ith-Latest-Processors-at-COMPUTEX-2024-143709
 

bit_user

Polypheme
Ambassador
Are you people serious here?
Yup.

It's faster than Raptor Cove in both Integer and FP.
Nope. You didn't read the slides carefully enough. The normal reaction to such outlandish notions is to double- and triple-check the source, because that should sound too good to be true!

In the lower left, this slide says the comparison is iso-frequency!

[Intel slide: Skymont vs. Raptor Cove IPC comparison, with the iso-frequency note in the lower left]

On the next slide, Intel is clear that Raptor Cove still offers better peak performance.

[Intel slide: Raptor Cove retains higher peak performance]


It was previously "Skylake-level"
You're going back 2.5 generations, to the original Alder Lake E-cores (Gracemont). At iso-frequency, those were Skylake-caliber. A 12 W quad-core Alder Lake N97 benchmarks at about the same speed as a 35 W i5-6600T.

This core is a monster for what it is. I'm absolutely ecstatic for what's coming.
That really depends on how big it is. If it's more than half the size of Raptor Cove (iso-node), then I'd say it's not too impressive, considering that Zen 4c is half the size of Zen 4 and has the exact same IPC.

The P core design's days are numbered.
They're not, because Lion Cove not only has higher IPC but also runs at faster clock frequencies. It could easily be 50% faster than Skymont. Intel would never hand AMD (or any of the ARM folks) that kind of lead.
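As a rough sanity check, here's the arithmetic behind that claim as a tiny Python sketch. The IPC and clock figures are my assumptions (Skymont at roughly Raptor Cove IPC per Intel's iso-frequency slide, Lion Cove at the claimed +14% IPC, and ~5.5 GHz vs. ~4.4 GHz peak clocks), not Intel specs:

```python
# Back-of-envelope: how much faster a Lion Cove P-core could be than a
# Skymont E-core, if the assumed figures below hold.
skymont_ipc = 1.00    # assumed ~Raptor Cove IPC at iso-frequency
lioncove_ipc = 1.14   # Intel's claimed ~14% IPC gain over Raptor Cove
skymont_clk = 4.4     # GHz, assumed E-core peak clock
lioncove_clk = 5.5    # GHz, assumed P-core peak clock

speedup = (lioncove_ipc * lioncove_clk) / (skymont_ipc * skymont_clk)
print(f"Lion Cove vs. Skymont: ~{speedup:.2f}x")
```

With those inputs it lands around 1.4x, so 50% isn't far-fetched if Lion Cove's IPC or clock advantage turns out any larger.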
 

bit_user

Polypheme
Ambassador
Those Atoms are rising so fast they're soon to be nearly the fastest thing out there. A couple of months soon.
I wonder if they're trading too much efficiency for performance, though. Perhaps Intel will need to create a new efficiency core, if the Mont series is getting too big & power-hungry!

I wonder if Intel is comparing mainstream efficiency or their lab tuned efficiency?
It should be under normal product conditions.

The Intel volt curves are so far off that I don't know what to make of this latest one you posted. There seems to be so much fat to be cut.
We have to assume Intel's voltages were chosen for a reason. Don't think that just because some people can undervolt their Intel CPUs like crazy that everyone can do it and under all circumstances.
 

rluker5

Distinguished
Jun 23, 2014
I wonder if they're trading too much efficiency for performance, though. Perhaps Intel will need to create a new efficiency core, if the Mont series is getting too big & power-hungry!
Intel definitely needs a better way to bias the clocks toward the efficient range if that is what the user chooses. Currently the bias is towards max clocks whenever there's even a slight need, which runs way outside the efficient range. It would work kind of like a balanced power plan vs. high performance. Intel would have to do it, because Windows never will; all Windows does is make the clocks idle a bit faster or a bit longer, nothing about running in the efficient range. The slides in the presentation show the efficient range growing, but there is still a lot of used frequency past it.
We have to assume Intel's voltages were chosen for a reason. Don't think that just because some people can undervolt their Intel CPUs like crazy that everyone can do it and under all circumstances.
Nobody else cares enough about power consumption to try. Even part of my motivation is from always hearing things about my chip's power consumption that I can tell aren't true just by looking and testing. It's been this way since Alder Lake. But it is nice to only hear my fans (other than the GPU's) when I wake from sleep or restart my PC. Other than that, there really isn't that much benefit from power tuning. But I bet that 4.8 GHz for the P-cores and 3.7 GHz for the E-cores is a pretty common inflection point for the efficient vs. inefficient ranges for RPL.

As far as the reason, it seems like mostly ignorance: chasing vdroop deficiencies by increasing volts and power consumption is like fighting fire with gasoline and was the first fix for that rash of instability we heard about.
 

KnightShadey

Reputable
Sep 16, 2020
That said, LNL will be bottlenecked by its low core-count to fit into a fairly narrow niche (premium ultraportable), so the memory probably won't be a defining constraint, given the few SKUs it will occupy.

While that's true in general, especially in the past, it's a big unknown for the Ai PC generation these target, which all share RAM for a task that is notoriously more SIZE-sensitive than speed-sensitive. It goes even to the point where a 16GB 4060 can stay close to a 4080 and outpace an 8GB 4070 Ti on the desktop, and VRAM size trumps bit-width & speed in the mobile segment.

Last year I would've said 8/16/32 is more than enough options for most people, but when your new pricey UltraPremium Copilot+ Ai PC can't "generate and refine AI images in near real-time directly on the device" at the full resolution of its screen (1080p, let alone higher), buyers who don't know the massive resources required might get annoyed that their "Ai PC" can't do the Ai part well.

Yes you can often upscale (not always), but now we're compensating for compromising?

Being locked in to a pre-set hard limit when we are just at the starting line of this Ai PC race/revolution seems short-sighted for the very thing that is getting them the press and increased valuations this week. Are they really just trying to sell a few percentage points of performance or power savings this generation, or are they selling a promise they might not be able to deliver on because of these very choices/limitations?

Right now I'd rather have my SSD size locked to bare minimum than my RAM capped at 'typical' when we don't know what that is yet.

At this point I doubt even intel & AMD have any idea of the impact of the RAM on these platforms as their users expect/intend to use them. Their testing is likely as limited as the benchmarks they have provided so far. They can run a plethora of tests on their pre-release builds, but that never matches the real world for any product, especially one where the software is also changing by leaps & bounds.
With on-package and soldered RAM it also increases the chance of e-waste if they/we predicted the requirements/options poorly.
But hey, I mean it's not like they've ever over-promised and under-delivered through any fault of their own before... right? 😉

It's a choice, and I get the economics and supply-chain reasoning behind it. However, as a buyer, both personal and corporate, I, like others, am concerned that these choices in one edge case become the default for similar reasons, and that it limits the 5-10% of users to the 1% workstation/ultra-high-end gaming laptops.

Yeah, I'm leaning Strix now, but even there it's not that easy, because the 7945HX will likely go away long before any real equivalent arrives to replace it (already many configurations are sold out).
 

KnightShadey

Reputable
Sep 16, 2020
The amount of RAM needed depends heavily on the user and his application. For many, the Chromebook-type users or those who use the web for streaming, 16GB should be fine for a decade. The big issue they have is hitting battery life of 24 hours or more. When I am running large circuit simulations, 128GB is tolerable and 2 to 3 hours of battery life is at the high end. My preference though is to use the server farm for these tasks. RAM mounted on the substrate may actually give the user a little leverage on the price side. It will result in a thinner, lighter, more reliable machine.

That's fine as an argument for previous generations, but if the whole point of this generation of 'Ai PCs' is specifically not going to the cloud and doing everything locally, then how is that really the better option?

I get the reasoning, but I think the arguments and assumptions are based on a last-generation use-case, and at this point we (or even intel/AMD) don't really know what the impact of a hard memory cap (one that seemed like more than enough last year) might be on the very thing that is a RAM hog and supposed to be the raison d'être.
 

baboma

Notable
Nov 3, 2022
With two vendors' line-ups as a representative sampling, we can make some tentative findings for this AI wave of laptops.

. First, I erred in saying that LNL/Ryzen AI will only be in premium segments. The reality is that LNL/Ryzen/QC X will be in EVERY laptop going forward, not just premium. I credit both MS' clout and QC's agitator role for catalyzing this drastic change.

. Even if you don't care for the AI/NPU craze, the AI laptops bring substantial benefits. The iGPU is substantially improved and is capable of 1080p gaming. The memory baseline is now 16GB, up from 8GB. The storage baseline should be larger as well, to accommodate MS' Recall requirement of 256GB drive space (though most will likely disable Recall).

. Both AMD & Intel are successful in fending off the ARM incursion, at least for this coming cycle. AMD's Ryzen AI went with the conventional approach, tacking on NPU for TOPS while improving both CPU and iGPU perf, allowing ARM the power-efficiency win. Intel's LNL went in the opposite direction, contesting on power-efficiency at the expense of regressing on CPU perf (while improving NPU & iGPU). Judging by design wins, and by their respective positioning in OEM line-ups, AMD made the wiser decision and the better product.

. As a corollary, MS' own Surface products aside, QC X (ARM)'s foray into the x86 laptop market seems very limited. MSI has no ARM models, while both Asus ARM models have close x86 counterparts. The rationale is that if the ARM models fail to catch on, Asus' line-up won't be adversely impacted. MS' exhortation for ARM seems to fall flat here. QC/ARM has a tough road ahead.

. Despite substantial progress on the hardware side, AI's progress on the software (OS) side is much less compelling. Windows' tentpole AI feature, Recall, is getting increased pushback on both the security & privacy fronts. It's not a stretch to predict that it will fail, and MS will have to scramble to find some other selling point to push AI.

. Lastly, OEMs are the actual customers for mobile parts, not end-users. The best way to gauge who wins in the AI contest is simply to count up the design wins.

. Bonus for bargain hunters: The lightning-fast AI makeover of laptops means buying opportunities this fall for non-AI laptops. Both Meteor Lake and Ryzen 7840/8840 models should hit clearance sales. The only small caveat is that their iGPUs are a bit weaker than the incoming crop's, roughly 10-20% lower. But the awesome deals should make up the difference.

It'll be a while before MS can make AI compelling, certainly not this year or likely next. Ignore NPU for now. Just focus on getting the best CPU/(i)GPU for the money.

PS: I'm hoping that the desktop can get some love. Hopefully Intel's ground-up makeover for LNL will spill over into Arrow Lake.
 

usertests

Distinguished
Mar 8, 2013
That really depends on how big it is. If it's more than half the size of Raptor Cove (iso-node), then I'd say it's not too impressive, considering that Zen 4c is half the size of Zen 4 and has the exact same IPC.
Based on this I'd say 3.2 Gracemont = 1 Golden Cove when you factor in L2 cache:
View: https://x.com/Locuza_/status/1453524285260247046


Skymont sounds like it could take a hit in E-to-P ratio. But if it doesn't, it will be even more legendary. I'm leaning towards it not being bloated, especially if plans to go to 8+32 for Arrow Lake Refresh were real.
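For a rough sense of what that area ratio buys, here's a toy perf-per-area calculation. All inputs are estimates pulled from this thread (the ~3.2x figure above, plus the Cinebench R23 per-core numbers discussed later), treating Golden Cove and Raptor Cove as comparable in size; none of it is measured die data:

```python
# Toy perf-per-area sketch: how much multithreaded throughput E-cores
# deliver per unit of P-core die area, ignoring hyperthreading and power.
cores_per_p_area = 3.2   # ~3.2 Gracemont (incl. L2) per Golden Cove area
e_core_score = 1042      # est. Cinebench R23 1T, Gracemont @ 4.4 GHz
p_core_score = 1754      # est. Cinebench R23 1T, Raptor Cove @ 4.4 GHz

throughput_ratio = cores_per_p_area * e_core_score / p_core_score
print(f"E-core MT throughput per P-core area: ~{throughput_ratio:.1f}x")
```

That's roughly double the throughput per unit area, which is exactly why the E-to-P ratio matters so much if Skymont grew.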

I wonder if they're trading too much efficiency for performance, though. Perhaps Intel will need to create a new efficiency core, if the Mont series is getting too big & power-hungry!
Intel is seemingly done with LP E-cores as of Skymont, or they have made the distinction meaningless in Lunar Lake due to its scalability. Guess I need to read the whole slide deck. I wonder if they will attempt that rumored super P-core in the future.

PS: I'm hoping that the desktop can get some love. Hopefully Intel's ground-up makeover for LNL will spill over into Arrow Lake.
How much love are you looking for? Arrow Lake will use Lion Cove and Skymont and now we know the supposed IPC gains (always have to note the margin of error) and hyperthreading strategy. We also got some leaked benchmarks for the larger iGPU in Arrow Lake months ago.
 
Jan 14, 2024
AMD gets out Zen2 with 1 or 2 CPU chiplets and an IO chip : Intel, "feh - a glued together CPU. That's ridiculous."
Intel gets Lunar Lake out : the whole CPU is literally glued together, "revolutionary".
don't forget the Pentium D, Core 2 Quad, they used glue too!
 

KnightShadey

Reputable
Sep 16, 2020
70
22
4,545
. Despite substantial progress on the hardware side, AI's progress on the software (OS) side is much less compelling. Windows' tentpole AI feature, Recall, is getting increased pushback on both the security & privacy fronts. It's not a stretch to predict that it will fail, and MS will have to scramble to find some other selling point to push AI

I agree that the development hasn't met the hype; I also find that M$ is really, REALLY terrible at exploiting and marketing to their strengths.

The easiest way for me to explain/show the utility of easy 'light AI' was to use Summarize on a spec/compliance/RFI/RFQ PDF, or use Copilot for Excel to extract & organize data and quickly make a pivot table, all using voice prompts. M$ needs to use that type of example, one that either makes people comfortable doing something they had trouble learning or speeds up something that took forever to do the slow way.
Everyone is pushing zoom & enhance... errr... generate and refine (intel just launched their version 🙄), but for most people that's fine in the cloud and on their smartphone, and not a compelling reason for a new PC with local Ai.



It'll be a while before MS can make AI compelling, certainly not this year or likely next. Ignore NPU for now. Just focus on getting the best CPU/(i)GPU for the money.

Problem is for those of us who already know where we/others can use it, we're left waiting for not only better implementations, but better guidance.

The launch order seems back-to-front here; workstations and desktop replacements should've gotten this first (in something high-margin, low-volume like Dragon Range or its refresh), then let our experiences inform how they market & optimize for the masses.

Instead it seems like even the IHVs & ISVs have no idea how to sell a product they're just starting to understand themselves to the large segment of the public that has no idea what it is/does. Their marketing just reinforces this lack of focus, and amounts to more buzz than substance.
AMD, intel, M$ & nV all tout the zoom & enhance equivalent of video call Ai ... when swapping out the terrible 720P webcams on most laptops (even those that cost $2000+) for a selfie cam found on a disposable $99 cellphone would do more to improve a Zoom call than an NPU. 😶‍🌫️

The strategy seems to be to sell people something both sides hope they can use in the future, but have them pay more (to build and to buy) in the hope that it'll one day be useful and not a barrier to entry like TPM was for Win11. Admittedly, RAM is that 'barrier concern' for me, even with swappable DIMMs.

My only concern, as stated before, is it taking so long that options close before their replacements open up, while we wait.
 

KnightShadey

Reputable
Sep 16, 2020
What you're describing, image-genAI, or multimodal AI, is still a pie-in-the-sky use case that has yet to exist on the average desktop, let alone the average laptop. Running StableDif is still very much enthusiast territory. Not mainstream.

It is mainstream, it's just cloud-based mainstream right now. However, THEY are the ones muddying the water with the examples they are showing this week, plastered all over their websites and PR dispatches.

Yes, they should be managing expectations, but at the same time an RTX 3050/3060 (30/50 TOPS) & 4050 (60 TOPS w/o sparsity) show how little power it can take to get 'good enough' results, especially if optimized and picking the right tools; it's not that big a stretch.

Especially when like I said in the previous post ALL of them are promoting "generate & refine", so THEY seem to think that's exactly what people should expect to do with this right out of the box.

Personally I'm fine using my desktop for that... or going into work and grabbing a workstation, it's not even my interest (mine is far more bland business work).
But if I'm buying an 'Ai PC' laptop, then why can't I get one today with the proper power in it if that's the PR they're selling me?
Also, I'm not saying/expecting these mid-level launches this week to do training or the heavy lifting of workstations (you can already buy a 7945HX + RTX 4090 for that), but even moderate tasks are memory-intensive, or get there quickly, so it shouldn't be a surprise that that's an issue if there's going to be an arbitrary cap.


I'm not sure if you're just waxing rhetorical, because the reality is that more RAM means increased BOM, and OEMs are loath to make big jumps on pricing for some AI hype from MS.

You're arguing BOM and then promoting locked-in RAM configurations, where they could get stuck with costly variants they can't re-use/redistribute?
You realize that swappable DIMMs don't cost them more if the platform supports 128+ GB and they only choose to sell the popular/profitable DIMM sizes, right? 🤨

Laptops were on the cusp of getting 16GB already, so 16GB as the default is not a heavy lift. 32GB is a hard cap for almost all consumer laptops except for a handful of top-end models.

On the cusp, eh? A 32GB hard cap? Seems like you missed a lot of laptops. I have a laptop from 2014 with 16GB in it and another from 2016 with 32GB, and both were mid-level entries.

Heck, the 'Industry Standard' Dell XPS 13 can be configured right now with 64GB of RAM on a 155H, and it's the one with the Snapdragon option as well... just so you don't confuse the tier we're talking about as some 'exotic' configuration:

https://www.dell.com/en-us/shop/laptops/intel/spd/xps-13-9340-laptop/usexchcto9340mtl03

Again, it's more about NOT locking in something before even they know the impact on the very thing they are going to great lengths to promote as a primary feature. 🤔
 

Giroro

Splendid
Being locked in to a pre-set hard limit when we are just at the starting line of this Ai PC race/revolution seems short-sighted for the very thing that is getting them the press and increased valuations this week. Are they really just trying to sell a few percentage points of performance or power savings this generation, or are they selling a promise they might not be able to deliver on because of these very choices/limitations?

I agree that the 32 GB cap would be a significant problem if the Lunar Lake configuration Intel described is going to be their highest-end product for the generation, but I doubt that's the case. It's a 4C/4c low-power processor designed for UltraBooks. It's probably going to replace the Core Ultra 5 135UL and below, not the Ultra 9 185H.
It would be a mistake if Intel put "9" or "7" branding on this product since this 8c/8T product would be competing with more powerful 16c/22T and 12c/14T processors from their own last-gen. I think Intel will understand that it would be too hard to sell "way less cores" as "way more betterer".

Now will Microsoft still try to drop this lower-midrange CPU into an offensively overpriced $3000+ Surface Laptop Studio? Probably. But that just means it will eventually be a decent deal when MS has to clear out nearly every unit produced at less than half its original MSRP, again.
 

KnightShadey

Reputable
Sep 16, 2020
I agree that the 32 GB cap would be a significant problem if the Lunar Lake configuration Intel described is going to be their highest-end product for the generation, but I doubt that's the case. It's a 4C/4c low-power processor designed for UltraBooks. It's probably going to replace the Core Ultra 5 135UL and below, not the Ultra 9 185H.

Definitely, and it's compounded by the question of whether this launch is just a new chip launched into a specific niche ultraportable segment, or whether it shows a design ethos, one that intel and the reviews (including this THG one) use to open the discussion with statements like:
"a strategic evolution in their mobile SoC lineup"...
"where many of the fundamental changes will carry over to Arrow Lake and will be in some of the best CPUs for gaming"...
"that will be the next generation of Core Ultra mobile processors"...

Those comments from THG and Anand imply that more than a little of this will be applied to more demanding segments. Is it just the microarchitecture or more? Considering how much of the "breakthrough power efficiency" can be attributed to packaging considerations (including the on-package memory), will that extend to those other segments, ala the M-series for Apple?


Additionally, there are two competing issues that muddy this so much (and seem to have side-tracked the conversation with baboma toward segment instead of architecture, based on limited existing examples):

* intel's most recent Ai launch product is still ill-equipped to have anything to do with the type of Ai they are promoting now, or really have been promoting for a while (like OpenVINO on Meteor Lake), and would be better suited to the lighter-load, productivity-centric examples. Sure, you could do that with these NPUs, but definitely NOT if you're going to handicap memory like they did for the previous Ultra series.

* LNL is entering product lineups like the HP Spectre, Lenovo Yoga, and MSI Prestige, where the CU5 125/135/155H as well as the Ci7 1360P currently reside, so is it a step backwards or a replacement? All of those support much larger memory too (96GB), yet the previous limiting decisions not only still apply but get lowered and locked in? 🤨


It would be a mistake if Intel put "9" or "7" branding on this product since this 8c/8T product would be competing with more powerful 16c/22T and 12c/14T processors from their own last-gen. I think Intel will understand that it would be too hard to sell "way less cores" as "way more betterer".

Now will Microsoft still try to drop this lower-midrange CPU into an offensively overpriced $3000+ Surface Laptop Studio? Probably. But that just means it will eventually be a decent deal when MS has to clear out nearly every unit produced at less than half its original MSRP, again.

I agree completely, and again, I get the cost considerations and performance positioning for other devices (especially if this made its way into handhelds); but specifically for a pretty pricey segment (even if it's premium ultraportable), one that, as you clearly understand with the Surface et al., is now being asked to do a notoriously demanding task, this seems like the wrong product for the launch of a whole new design strategy AND the new use-case they are turning it into vs. just efficiency.

As is the case for all of this, we shall see if any of this amounts to more than a new sticker next to the touchpad. 🤷🏻‍♂️
 

PCWarrior

Distinguished
May 20, 2013
Not sure what the problem some people on here have is. Intel compared the new E-cores not just against LP E-cores but also against Raptor Cove P-cores. From the graph you can clearly see that the Skymont E-cores are slightly above Raptor P-cores in single-threaded performance across pretty much the entire applicable frequency range of the E-cores. The 2% advantage probably applies at 3.5GHz, and at 4.4GHz we have parity. Of course, with more power the Raptor P-cores can boost to higher frequencies (up to 6GHz) and achieve higher performance. Also, Raptor P-cores have hyperthreading.

Anyway, let’s compile all of the new information with some known Cinebench R23 results.

1. At 5.5GHz a Raptor Cove P core does 2192 single threaded (1T).

2. At 5.5GHz with hyperthreading the overall contribution (2T) of a Raptor Cove P core is around 3000.

3. At 5.5GHz, without hyperthreading, the contribution of a Lion Cove P-core should be at least around 2500 (given the 14% general claimed IPC increase of Lion vs Raptor).

4. At 4.4GHz a Gracemont E-core does around 1042.

5. At 4.4GHz a Raptor Cove P-core does 1754.

6. At 4.4GHz a Skymont core should be able to do about the same as a Raptor P core (1T). So at 4.4GHz the Skymont core should also do around 1754.

7. So Skymont over Gracemont should have a 68% IPC uplift in workloads like Cinebench.

8. Arrow Lake, with 8 P-cores running at 5.5GHz and 16 E-cores at 4.4GHz, both without hyperthreading, should get a score of around 48000. A stock 14900K does around 38500. That's about a 25% increase in MT performance, all while probably also using at least 25% less power.
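The whole estimate above can be sketched in a few lines of Python; every input is one of the assumed Cinebench figures from the list, not a measurement:

```python
# Back-of-envelope Arrow Lake estimate from the assumed Cinebench R23
# per-core figures listed above.
raptor_p_1t = 2192                   # Raptor Cove P-core @ 5.5 GHz, 1T
lion_p_1t = raptor_p_1t * 1.14       # +14% claimed IPC -> ~2500
gracemont_1t = 1042                  # Gracemont E-core @ 4.4 GHz
skymont_1t = 1754                    # assumed ~= Raptor Cove @ 4.4 GHz

ipc_uplift = skymont_1t / gracemont_1t - 1        # ~0.68 -> 68%

# Hypothetical 8P + 16E Arrow Lake, no hyperthreading on either:
arrow_lake_mt = 8 * lion_p_1t + 16 * skymont_1t   # ~48000
gain = arrow_lake_mt / 38500 - 1                  # vs. stock 14900K, ~25%
```

Note how sensitive the total is to the Skymont assumption: the 16 E-cores contribute more than half of the estimated MT score.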
 

KnightShadey

Reputable
Sep 16, 2020
The "industry standard" XPS 13 model you refer to is Dell's flagship, which is one of the high-end exceptions I referred to. If you had bothered to look at the XPS 14, the 64GB option can only be had in tandem with the dGPU option, which is a +$1000 add. So, yes, 64GB is indeed a rarefied option reserved for the flagships and extreme high-end.

Now you're the one trying to defend a position by creating a narrative, turning a mid-level option into 'extreme high-end' in a segment with options at more than twice the price with OLED touchscreens, etc. As someone with extensive experience dealing with that segment, I'd say your characterization of it (especially the XPS 13) as a 'high-end exception' is false.

Your throwing shade is fine; it's justified in response to my exasperated tone toward your trying to frame the segment as if it were exclusively entry-level-Chromebook cost/price sensitive. I'll own that; I was tired, with little patience for it, so it was edgier than it should have been.

However, that pivot to justify surprisingly low restrictions makes no sense, especially since there are far more exotic/expensive examples in that segment (as if a $3K MSI Summit were all about the pennies instead of the Pentiums). It also ignores the fact that the mid-level options that already exist in that overall segment have higher memory options (with even higher supported) despite never being touted for this new, tougher role, and are far, FAR from the BOM-limited segment you claim them to be.

If your argument had been different, with intel targeting the low end, handheld gaming, etc., it would make more sense from a cost/performance perspective, and there the Ai would be equivalent to the Ai in my microwave; but you did mention the segment specifically as being a chip "to fit into a fairly narrow niche (premium ultraportable)" [note premium] to dismiss/downplay the concerns regarding memory limitations.

So why go to that argument when it clearly doesn't apply to that segment, nor justify the compromises/restrictions of the platform? 🤨
 

bit_user

Polypheme
Ambassador
Intel definitely needs a better way to bias the clocks toward the efficient range if that is what the user chooses. Currently the bias is towards max clocks whenever there's even a slight need, which runs way outside the efficient range.
So, both Windows and Linux have built-in power management settings that are specifically designed to give users a couple of easy knobs they can twiddle, without going to the extreme of dialing in specific frequency or power limits.

The point I was making is that even if you dialed in settings to maximize efficiency, the best it can manage is apparently only 1.2x the peak efficiency of Raptor Lake's P-core.

[Intel slide: Skymont vs. Raptor Cove power/performance curves]


All Windows does is make the clocks idle a bit faster or a bit longer, nothing about running in the efficient range.
Is there no way to set custom frequency limits in Windows or in BIOS? Sure, maybe you have to use a custom utility.

I bet that 4.8 GHz for the P-cores and 3.7 GHz for the E-cores is a pretty common inflection point for the efficient vs. inefficient ranges for RPL.
Here's some relevant data for an Alder Lake i7 that ChipsAndCheese tested:

[ChipsAndCheese chart: Alder Lake P-core and E-core frequency/power curves]

On that CPU, if someone wanted to keep it from running too inefficiently, I'd say they should limit the P-cores to 4.2 GHz and the E-cores to 3.5 GHz. You might find that latter part confusing, but the E-cores @ 3.5 GHz are just below P-cores at 4.2 GHz. So, if you're happy enough with the P-cores at 4.2 GHz, then you really don't need to dial back the E-cores all the way to their point of inflection, which is like 3.2 GHz.
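One way to make that "point of inflection" idea concrete is to walk the frequency/power curve and stop where the marginal power cost of each extra bit of frequency blows past a chosen threshold. The (GHz, watts) pairs below are made-up illustrative numbers, not the ChipsAndCheese measurements:

```python
# Toy sketch: locate the "efficiency knee" of a frequency/power curve,
# i.e. the last point where extra clocks are still cheap in power.
curve = [(3.0, 15), (3.4, 18), (3.8, 22), (4.2, 27), (4.6, 40), (5.0, 60)]

def efficiency_knee(points, max_cost=2.5):
    """Highest frequency whose marginal cost (% power increase per
    % frequency increase) stays at or below max_cost."""
    knee = points[0][0]
    for (f0, p0), (f1, p1) in zip(points, points[1:]):
        marginal = ((p1 - p0) / p0) / ((f1 - f0) / f0)
        if marginal > max_cost:
            break
        knee = f1
    return knee

print(efficiency_knee(curve))  # prints 4.2 for this made-up curve
```

On real data like the chart above you'd feed in measured points per core type; the knee lands wherever power starts growing several times faster than frequency.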

As far as the reason, it seems like mostly ignorance:
Oh, I'd never accuse Intel of that.
 
Intel is the king of marketing. AMD needs to release a 9990X Black Edition that pulls 600W and leave it to the benchers to cool it. It's what NVidia does with its GPUs.
You make it sound as if this is something that is possible to do...
AMD tried to push their CPUs higher and got a whole generation of CPUs that would blow up left and right; you can't just push as many watts through a CPU as you want.
We have to assume Intel's voltages were chosen for a reason. Don't think that just because some people can undervolt their Intel CPUs like crazy that everyone can do it and under all circumstances.
Intel doesn't choose the voltages/curve. They do have stated min/max values in their datasheets, but other than that it's up to the board maker to apply a voltage curve that is safe and adjusted for their board's capabilities (AC/DC 🤘 load line).
Edit: Because mobo makers can't be bothered to tune this, they just go with the highest possible values, since that has the smallest chance of causing issues.
 
