News: Intel's next-gen Nova Lake CPUs will seemingly use a new LGA1954 socket

Honestly, I fail to see the benefit of having the LPE cores. E-cores are meant to mitigate the higher power draw of the P-cores, but now they are introducing another layer of E-core, which also introduces another layer of software to allocate the load to the correct core(s). This may be useful for something like a laptop, but not so much for desktops.
 
Since I got stuck with a 6700K on an LGA1151 socket, though, and the “next” socket was also 1151 but just different enough that I’d have needed a new mobo to get a new CPU, I’ve decided no more Intel for me unless they make something so awesome I can’t ignore it.
Wasn't the justification for Coffee Lake having a new socket just about raising its power limits?

I can imagine a scenario where they released the Coffee Lake generations with the same socket as Skylake, but warned that it wouldn't run as fast on older boards. I'll bet they'd still have gotten some annoyed users who tried a CPU-only upgrade and were upset their machine wasn't as fast as others had posted online. Or maybe they even bought a CPU + motherboard, but got the wrong-gen motherboard.

I don't want to give Intel a complete pass on that, but I can see both sides of the argument.
 
You need to learn the market, not Intel. Intel sells the overwhelming percentage of its CPUs to OEMs, which end up in businesses.
Still, though? I think a lot of corps have moved towards thin laptops and NUC-type mini-PCs. There's no reason why most office workers need any more than that. That just leaves engineering as the main power users.

We keep the desktops 4 to 5 years and then send them to recycling and buy a new one.
We lease them. When that started, back in 2017 or so, they were initially leasing on a 3-year term. Recently, they extended it to 4.
 
Honestly, I fail to see the benefit of having the LPE cores. E-cores are meant to mitigate the higher power draw of the P-cores, but now they are introducing another layer of E-core, which also introduces another layer of software to allocate the load to the correct core(s). This may be useful for something like a laptop, but not so much for desktops.
Yes, you've put your finger on it. LPE cores were introduced in Meteor Lake and I think we were told they're on the SoC tile so the compute tile could be powered down. They said that, for something like video playback, you only needed the codec, display engine, and LPE cores. With all of those being on the SoC tile, you could power down both the GPU tile and CPU tile. I think they were scared of Qualcomm and trying to figure out ways to compete on battery life.

As for why desktops have it? I guess the LPE cores just come along with using the same SoC tile for both platforms.
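For anyone curious what that extra layer of software actually sees: on Linux, the kernel's hybrid CPU support exposes which logical CPUs are P-cores vs. E-cores through sysfs. A minimal sketch (the paths are the ones used by the hybrid perf PMU; they can vary by kernel version, and I believe the LPE cores get lumped under cpu_atom, though that last part is my assumption):

```python
# Minimal sketch: list which logical CPUs are P-cores vs. E-cores on Linux.
# On hybrid Intel parts, the kernel exposes these sysfs lists via the hybrid
# PMU (cpu_core = P-cores, cpu_atom = E-cores); paths can vary by kernel
# version, and LPE cores landing under cpu_atom is an assumption on my part.
from pathlib import Path

def read_cpu_list(path):
    """Parse a sysfs CPU list like '0-15' or '16,18-23' into a sorted list."""
    cpus = set()
    for chunk in Path(path).read_text().strip().split(","):
        if "-" in chunk:
            lo, hi = chunk.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(chunk))
    return sorted(cpus)

print("P-core CPUs:", read_cpu_list("/sys/devices/cpu_core/cpus"))
print("E-core CPUs:", read_cpu_list("/sys/devices/cpu_atom/cpus"))
```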
 
Are you sure your system is working properly? You should definitely be noticing a difference between a Zen 1 and a Zen 3, especially one with 3D V-Cache vs. one without. The only thing I can think of otherwise is that you play a lot of games that are brickwalled by the GPU.
None of my games were running poorly before the upgrade. AV1 encoding time did go down about 33%, which is nice and what I thought I was most looking forward to, but honestly, encoding in 12 hours versus 8 doesn't really change my life. It's still a long time and probably not worth doing. Nothing feels more responsive. Cities: Skylines probably would run a lot better if I played that again.
 
As for why desktops have it? I guess the LPE cores just come along with using the same SoC tile for both platforms.
Arrow Lake-S doesn't have LPE cores, so why would Nova Lake? What is claimed in the article?
Preliminary silicon configurations allege two clusters of eight Coyote Cove P-cores and 16 Arctic Wolf E-cores, complemented by four Low-Power Efficient (LPE) cores in the SoC Tile, adding up to 52 hybrid cores.
Oh, interesting. This is a little odd to me. Many gaming customers are buying an 8-core, 16-thread CPU. Why go from a 24-core, 24-thread CPU to 48-ish? Arrow Lake has a ring bus with E-cores grouped into 4-core clusters. Grouping P-cores into clusters and E-cores into larger clusters would be a significant architecture change with new tradeoffs. AMD does this, but I don't think even Intel server chips do.

[Chart: CPU idle power consumption comparison]

Lastly, Arrow Lake's idle power use is much lower than AMD's. So why use LPE cores when Intel already has idle power figured out? I think the reason might be that it's still worse than the idle power use of Raptor Lake, and possibly even of AMD's mobile-based APUs with RDNA3 graphics.
 
Arrow Lake-S doesn't have LPE cores, so why would Nova Lake? What is claimed in the article?
Because Arrow Lake's SoC tile isn't shared with Lunar Lake, so there was no good reason for them to add an LP island to it. But if Panther Lake shares Nova Lake's SoC tile, that would be a good reason for it to have an LP island.

Arrow Lake has a ring bus with E-cores grouped into 4-core clusters. Grouping P-cores into clusters and E-cores into larger clusters would be a significant architecture change with new tradeoffs. AMD does this, but I don't think even Intel server chips do.
They're just following AMD, from what I can tell. IIRC, AMD's CCDs each have a ring bus, though I doubt AMD distributes its L3 cache among its cores, the way Intel does.

Arrow Lake's idle power use is much lower than AMD's.
Largely because Ryzen 9000 uses the same inefficient, old I/O dies from Ryzen 7000. I think some of that is also due to EMIB being more efficient than the interconnect Ryzen 9000 uses.

So why use LPE cores when Intel already has idle power figured out?
"Figured out?" As your graph shows, Arrow Lake is using a lot more idle power than Raptor Lake did.

Also, idle isn't the only thing you should be looking at, but rather how power usage looks when one or more cores in an E-cluster are running. In that case, the E-cores in Arrow Lake will be hitting not just the ring bus, but all of the L3 slices on the compute tile, due to how Intel's L3 cache works. The LPE cores don't touch L3 cache, thereby not incurring that penalty.
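If you wanted to exploit that from user space, one blunt tool is CPU affinity. A rough sketch, assuming a Linux machine with an LP island (like Meteor Lake) and assuming the LPE cores show up as logical CPUs 20-21; those IDs are purely hypothetical, so check your actual topology first:

```python
# Rough sketch: pin the current process to the LPE cores so a light
# background task never wakes the compute tile's ring bus or L3 slices.
# The CPU IDs below are hypothetical; check `lscpu --extended` for the
# real topology on your machine. Linux-only API.
import os

LPE_CORES = {20, 21}  # hypothetical logical CPU IDs of the LP island

os.sched_setaffinity(0, LPE_CORES)  # 0 = the calling process
print("Now restricted to CPUs:", sorted(os.sched_getaffinity(0)))
```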
 
So you don't buy a Core Ultra 9. You get the U3 or U5 equivalent instead.
I do.
Should be much cheaper too, priced linearly for cores.

This is not the same as Intel's typical disabled-core chips, made really feeble to avoid impacting margins on the mainline chips; this is just the opposite. It just requires Intel to adopt a mindset of optimizing for price/performance and low power. Lack of that mindset chased them out of mobile 25 years ago.

You also get more bang for your buck with fewer cores, since you don't get overload of cache and IO if you actually try to employ all those cores for typical apps.
 
Basically, if you need to do a lot of computation and you're paying out of pocket, it's cheapest to have your own machine. For commercial users, the costs look a bit different, since they have no depreciating assets on the books and save a bit of money on sysadmins (although there's a certain amount of management of the cloud VMs that still has to come out of somebody's time).

This is very true. I had to crank Azure to the max database size to equal the CPU I could get on a laptop, and on a production box that was, if I recall, more than $1000/hour, but bigger shops get much lower rates, and non-production configs cost much less. So sure, if you have something that can use 50 cores without overloading cache and IO, it's going to be 100x cheaper on your own box. Yet, people internal to the hyperscalers tell these amusing stories about honking up 10,000 processors to address some task. Also, these tend to be VMs, so they're slower and affected by their neighbors ... I haven't done HPC in the cloud, that's a whole other game, but if you can really deploy 10,000++ cores, that's going to beat your laptop, and the budget is someone else's problem, LOL.
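To put rough numbers on the own-box versus cloud trade-off, here's a back-of-the-envelope sketch. Every figure in it is a made-up placeholder, not a real quote from any provider:

```python
# Back-of-the-envelope break-even: owned workstation vs. cloud instance.
# Every number below is a placeholder assumption, not a real price quote.
workstation_cost = 8_000    # one-time hardware cost, USD (assumed)
life_years = 4              # amortization period (assumed)
cloud_rate = 10.0           # comparable on-demand instance, USD/hr (assumed)

hours_per_year = 24 * 365
amortized_per_hour = workstation_cost / (life_years * hours_per_year)
print(f"Owned box: ${amortized_per_hour:.3f}/hr amortized (running 24/7)")
print(f"Cloud:     ${cloud_rate:.2f}/hr")

# Break-even: how many busy hours per year before owning is cheaper?
breakeven_hours = workstation_cost / (cloud_rate * life_years)
print(f"Cloud only wins below ~{breakeven_hours:.0f} busy hours/year")
```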
 
You also get more bang for your buck with fewer cores, since you don't get overload of cache and IO if you actually try to employ all those cores for typical apps.
So sure, if you have something that can use 50 cores without overloading cache and IO, it's going to be 100x cheaper on your own box.
What's all this about overloading cache? Intel has been stingier with L3 cache than AMD, with AMD not disabling any of the 32 MiB on a 6-core while Intel gives a little more cache every step up the ladder. And then there's X3D.

There has been talk of Nova Lake including a 144 MiB cache tile, so maybe Intel is going to become much more aggressive on this aspect too:
https://www.techpowerup.com/332232/...ith-up-to-52-cores-16p-32e-4lpe-configuration
 
that was, if I recall, more than $1000/hour,
Yikes! I wasn't thinking that much! You have to get into fairly exotic territory with GPUs or I guess exotic EBS configurations to hit those rates!

Right now, on-demand pricing for an 8-CPU Sapphire Rapids server, with 32 TB of memory, is only $361/hour. However, interestingly enough, you can get the same server with the same CPUs and only 16 TB of RAM for just half the price.

bigger shops get much lower rates, and non-production configs cost much less.
Probably, but the main ways to save money are either to use spot or reserved instances. Both are quite a bit cheaper. 1-yr reserved pricing on that 32 TB machine drops to 61.6% as much, while 3-yr reserved drops to 32.1% as much. If you add it up, running it for a year works out to $1.95M and running it for 3 years works out to $3.05M ($1.02M/yr). Spot pricing isn't available on that particular instance.
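If anyone wants to check my arithmetic, here's the math, using just the on-demand rate and the discount factors above and assuming 24/7 operation:

```python
# Checking the figures above: $361/hr on-demand, 1-yr reserved at 61.6%
# and 3-yr reserved at 32.1% of the on-demand rate, running 24 hours a day.
on_demand = 361            # USD per hour
hours_per_year = 24 * 365  # 8760

one_year_total = on_demand * 0.616 * hours_per_year        # ~$1.95M
three_year_total = on_demand * 0.321 * hours_per_year * 3  # ~$3.05M

print(f"1-yr reserved, run for 1 year:  ${one_year_total / 1e6:.2f}M")
print(f"3-yr reserved, run for 3 years: ${three_year_total / 1e6:.2f}M "
      f"(${three_year_total / 3 / 1e6:.2f}M/yr)")
```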

Can big customers negotiate even better deals? I'm sure, but probably knocking a few or maybe tens of % off those prices. Amazon has enough competition that I'm sure the list prices aren't padded that much.

The thing that freaks me out about cloud computing is that I might accidentally leave something in use and only notice when I see the bill. But, if I had some isolated tasks that needed a lot of cores for only a small % of the time, I certainly would be tempted to use a cloud instance, rather than buy a huge server or workstation.

these tend to be VMs so are slower, and affected by their neighbors ...
That mainly pertains to the Flex instances, which are priced enough cheaper that they're still worth using. The main reason not to use them is if you need a more consistent level of throughput.
 
What's all this about overloading cache? Intel has been stingier with L3 cache than AMD, with AMD not disabling any of the 32 MiB on a 6-core while Intel gives a little more cache every step up the ladder. And then there's X3D.
AMD's L3 cache is limited to 32 MiB (or 96 MiB, for X3D) per CCD. Intel's L3 cache is shared among all cores.

Plus, Arrow Lake has 3 MiB of "L2" per P-core and that extra 8 MB of "side cache", which is like an L4. If you add it all up, that's 24 + 36 + 8 = 68 MiB accessible to the P-cores (only 47 MiB accessible to each core) and 16 + 36 + 8 = 60 MiB accessible to the E-cores (only 48 MiB accessible to each E-core).

If we compare that to Zen 5, sure, a 9950X has 16 + 64 = 80 MiB of cumulative L2 + L3, but each core only has access to 33 MiB. Of course, each core on an X3D CCD has access to a whopping 97 MiB of L2 + L3.
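Since these totals are easy to get wrong, here's the arithmetic spelled out as a quick sketch, using the capacities quoted above; treating the side cache as reachable by every core is my assumption:

```python
# Quick sketch of per-core "accessible" cache, using the capacities quoted
# above (MiB). Treating the 8 MB side cache as reachable by every core is
# an assumption on my part.
def accessible(l2, l3, side=0):
    """Cache one core can hit: its private/cluster L2 + shared L3 + side cache."""
    return l2 + l3 + side

print("285K P-core: ", accessible(3, 36, 8), "MiB")  # 47
print("285K E-core: ", accessible(4, 36, 8), "MiB")  # 48 (4 MiB L2 per cluster)
print("9950X core:  ", accessible(1, 32), "MiB")     # 33
print("X3D core:    ", accessible(1, 96), "MiB")     # 97
```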

Where we really see the benefits of AMD's cache architecture is in the latency and aggregate bandwidth of its L3. The 9950X has only 57.2% of the 285K's L3 latency (i.e. 42.8% less) and 58.0% more bandwidth!
 
You need to learn the market, not Intel. Intel sells the overwhelming percentage of its CPUs to OEMs, which end up in businesses. Of the hundreds of desktops the company I work for has purchased, not a single one has been upgraded, nor has it even been considered. We keep the desktops 4 to 5 years and then send them to recycling and buy a new one. Since the socket is replaced every couple of years, no matter when the cycle comes up to replace a desktop, we're never purchasing a platform that is more than a couple of years old, which matters if we're using it for 5 years. Not a single one of those desktops has been AMD-based either, since there is no compelling reason to switch.

Intel caters to the business world, because that's where the volume/money is. OEMs want something new to sell every year. Upgradability is of zero importance to them. Lenovo shipped nearly 17 million PCs in Q4 of 2024, almost all of them Intel. You want Intel to "learn their weakness" and give a damn about the 3 people in this thread who upgraded their CPU? The DIY market is irrelevantly small.

Your comment is based on the fallacy that somehow having longer-lasting CPU sockets will hurt Intel's OEM business. Partners such as Dell are who you had in mind?

I would like to see proof for this belief.

If there is no proof, well, then having a longer-lasting CPU socket allows Intel to cater to a market, albeit perhaps a small one, but an often high-margin one, such as gamers buying 1500-dollar motherboards, while doing nothing harmful to its OEM business partners.
 
Your comment is based on the fallacy that somehow having longer-lasting CPU sockets will hurt Intel's OEM business. Partners such as Dell are who you had in mind?

I would like to see proof for this belief.
I think his point wasn't that it'd hurt Dell, but rather that there's little upside for business customers if they supported CPU-only upgrades.

And, as a matter of fact, even when two CPU generations share the same socket, that still doesn't guarantee you'll be able to upgrade the CPU. We have some Dell desktops at my job, and I wanted to upgrade some from Alder Lake to Raptor Lake. Well, it turns out that Dell's BIOS prevents you from doing that if the motherboard is the older revision (i.e. the one they shipped right up until they switched over to Raptor Lake). Someone on their forums actually tried it and reported that the BIOS displayed an error message (something to the effect of "unsupported CPU") and wouldn't boot.

However, it does seem to me like it would benefit Dell if they only had to make minor tweaks to their motherboards from one year to the next, instead of designing for an entirely new socket every 2 years.
 
However, it does seem to me like it would benefit Dell if they only had to make minor tweaks to their motherboards from one year to the next, instead of designing for an entirely new socket every 2 years.
That would make logistics a lot more complicated: they would have to buy a lot more mobos than CPUs, store the mobos (taking up space and costing money), and hope that the mobos are still relevant the next year.
It's much easier to just split the available funds into buying the same amount of everything for the season or production run or whatever; one and done, and get the profits for the next thing.
 
Plus, Arrow Lake has 3 MiB of "L2" per P-core and that extra 8 MB of "side cache", which is like an L4. If you add it all up, that's 24 + 36 + 8 = 68 MiB accessible to the P-cores (only 47 MiB accessible to each core) and 16 + 36 + 8 = 60 MiB accessible to the E-cores (only 48 MiB accessible to each E-core).
Even the lower numbers are probably charitable, given the nature of the side cache. But I'll note that's what you're getting at the top of the stack with the 285K, and they're all based on one compute die. As with previous monolithic dies, L3 cache is taken out as you go down: the 245K loses 12 MiB of L3 cache. This is a strategy of Intel's that AMD doesn't really copy outside of some very low-end products (there are APU dies that have less L3 to start with, but there are also a few low-end SKUs you can find that have some L3 disabled).

I never total the cache of different CCXs for AMD. I'm only interested in that unit of 32 MiB, rising to 96 MiB for X3D, and hopefully rising further with the rumored big gains for Zen 6.

But I need to hear from the horse's mouth what is meant by "overload of cache and IO" because that was too vague for me.
 
That would make logistics a lot more complicated: they would have to buy a lot more mobos than CPUs, store the mobos (taking up space and costing money), and hope that the mobos are still relevant the next year.
No, I was allowing for the fact that they might make design changes every year (and certainly at least one production run per year), but just that the amount they have to change is a lot less when the socket stays the same.

In my little anecdote, I pointed out how they did revise their board design between Alder Lake and Raptor Lake. So, I'm assuming they're probably going to do annual revisions, either way.
 
That's because the L3 slices are tied to each P-core. Apparently, the way they disable cores means the corresponding L3 slice is also disabled.
It's interesting that AMD doesn't use that approach (6-core, 8-core get the same L3), and that it just so happens to make the top Intel SKUs look better in gaming benchmarks, providing some upsell pressure.

If the 9600X had only 24 MiB of L3, maybe it would be -10% behind a 9700X in gaming instead of -4%.

That's all the complaining I have to do about this. I'm excited to see the bigger caches of Zen 6 and Nova Lake.
 
However, it does seem to me like it would benefit Dell if they only had to make minor tweaks to their motherboards from one year to the next, instead of designing for an entirely new socket every 2 years.
LGA1156 through LGA1200 shared the same physical socket design, and LGA1700/1851 (presumably 1954, too) do the same. I imagine this minimizes the cost of switching sockets, since it cuts down on what is changing.
 
Because Arrow Lake's SoC tile isn't shared with Lunar Lake, so there was no good reason for them to add an LP island to it. But if Panther Lake shares Nova Lake's SoC tile, that would be a good reason for it to have an LP island.
Arrow Lake-S shares its SoC tile with Meteor Lake-S (which never came to market). Arrow Lake-H might share its SoC tile with Meteor Lake-H (I never did find confirmation). On Lunar Lake, the LPE cores and P-cores are on the same die.
Largely because Ryzen 9000 uses the same inefficient, old I/O dies from Ryzen 7000. I think some of that is also due to EMIB being more efficient than the interconnect Ryzen 9000 uses.
I suspect EMIB is 100% of the reason. The Ryzen 7000 I/O die uses the TSMC N6 node. There's no reason that can't match the efficiency of the Intel 7 die Raptor Lake uses. AMD's idle power use went up a lot going from Ryzen 2000 to 3000 because of the chiplets, and to this day most Ryzen mobile processors use monolithic dies instead. And EMIB is supposed to be more efficient than chiplets and less efficient than monolithic, exactly as Tom's test shows.
"Figured out?" As your graph shows, Arrow Lake is using a lot more idle power than Raptor Lake did.

Also, idle isn't the only thing you should be looking at, but rather how power usage looks when one or more cores in an E-cluster are running. In that case, the E-cores in Arrow Lake will be hitting not just the ring bus, but all of the L3 slices on the compute tile, due to how Intel's L3 cache works. The LPE cores don't touch L3 cache, thereby not incurring that penalty.
As I said, it's worse than Raptor Lake. But in terms of competition with AMD, it's good for now.

Under light workloads above idle, I imagine the whole landscape changes. Interconnect power becomes a lower percentage. Don't Intel and AMD both power down portions of the L3 cache when not in use?
 
I think his point wasn't that it'd hurt Dell, but rather that there's little upside for business customers if they supported CPU-only upgrades.

Perhaps. But that then brings us to the more cynical, or perhaps more realistic, end result: even if the other user didn't intend to make that point, the idea is driven home by it.

If Intel is only considering what is going on for its business customers, then the fact is this:
Intel does not care about us. I mean, it really is that simple to cast this as an "us versus them" situation (Joe Consumers versus business customers).

Intel's only consideration is business customers? Then that's the unavoidable truth: Intel doesn't care about Joe Consumers. Which is just another, much darker way of re-stating what I originally said all the way back up at the top, in the first response.

If Intel did care about Joe Consumer and didn't only want to consider its business customers, it would most definitely give us all longer-lasting sockets. Where else are we supposed to go with this?
However, it does seem to me like it would benefit Dell, if they only had to make minor tweaks to their motherboards from one year to the next, instead of designing for an entirely new socket every 2 years.
It would; that is the double-sided nature of discussions about "business customers".

There are the direct-from-Intel customers, that is, Dell themselves or HP themselves or whoever. Then there's who Dell serves (and whom Intel has to consider), such as, I don't know, some insurance company with 10,000 cubicles' worth of workstations to deal with.

Those at the cubicle level most certainly will not be harmed in any way by longer-lasting sockets.
 
Intel needs to learn the weakness of its market position and stop doing this to its customers.

Sockets that don't last long are a major fail. I get that Intel has done this for a long time, and it also takes time to change culture. That's fair. I hope this new 1954 will have twice the lifetime of its predecessor, or longer.

It's fair to say that no socket lasts forever. However, Intel's socket strategy causes abandonment issues.
This is what has basically killed Intel (along with a few other things), but when your direct competition is doing 4-to-5-year socket life spans and still kicking your butt, there are serious issues.

I often see Intel lovers spewing the same BS that socket changes are required to further improve CPUs, and while this is somewhat true, it's just not needed every 2 years.

This is why AMD has taken the crown, offering consistent value for money with AM4 and AM5.

(I bought my $900 AUD ROG Crosshair X670E Gene mobo about 3 years ago now, and so far it's handled my 7600X and 7800X3D. I'm likely to skip the 9800X3D and maybe go straight to the 10k series, so it will have done me for a full generation of CPUs.)

In that time, Intel has done what, 3 generations of sockets?

How does Intel trip up and not understand people's love of simple value for money?