News Intel's next-gen Nova Lake CPUs will seemingly use a new LGA1954 socket

I suspect EMIB is 100% of the reason.
Well, back in post #64, I posted a comparison of cache & memory latency between the 285K and 9950X. If you look all the way at the right edge, you see that the 9950X offers 14.3% lower DRAM latency. Furthermore, if you know anything about overclocking on Arrow Lake, you'd know the die-to-die frequency is pretty low and apparently has a lot of headroom. Don't you suppose that might have something to do with both Arrow Lake's lower idle power and its higher DRAM latency?
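Just to make the arithmetic behind that percentage explicit, here's a trivial sketch. The nanosecond figures below are made-up placeholders chosen to reproduce the ratio, not the actual measurements from post #64:

```python
# Hypothetical DRAM latency readings (placeholders, NOT the post #64 data).
latency_285k_ns  = 105.0   # illustrative Arrow Lake (285K) DRAM latency
latency_9950x_ns = 90.0    # illustrative Zen 5 (9950X) DRAM latency

reduction = 1 - latency_9950x_ns / latency_285k_ns
print(f"9950X DRAM latency is {reduction:.1%} lower than the 285K's")
# -> 9950X DRAM latency is 14.3% lower than the 285K's
```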

Under light workloads above idle, I imagine the whole landscape changes. Interconnect power becomes a smaller percentage of the total.
You know they have a quad-core part, called the N100, with a TDP of only 6W, right? That's made on N7, BTW. So, don't assume the E-cores are necessarily burning a lot of power. On Alder Lake-N, four of them can run at about 2.1 GHz, and stay within 6W, depending on what they're doing.
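For a rough sense of scale, dividing that power envelope across the cores gives a naive per-core upper bound (it ignores uncore, memory controller, and interconnect power, which all eat into the 6W):

```python
# Naive per-core power budget implied by the Alder Lake-N numbers above.
tdp_watts = 6.0   # N100 TDP
cores = 4         # quad-core part

print(f"At most ~{tdp_watts / cores:.1f} W per core at ~2.1 GHz")
# -> At most ~1.5 W per core at ~2.1 GHz
```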

Don't Intel and AMD both power down portions of the L3 cache when not in use?
What do you mean "when not in use"? Like, if all the cores go idle? I suppose they could, but I've mainly heard of that sort of thing in the context of phones. In the case of Meteor Lake, all the L3 cache is on the compute die. So, if the compute die gets powered down, then obviously so does the L3 cache.
 
If Intel is only considering what is going on for its business customers, then the fact is this:
Intel does not care about us. It really is that simple to cast this as an "us versus them" situation (joe consumers versus business customers).

Intel's only consideration is business customers? Then that's the unavoidable truth: Intel doesn't care about joe consumers. Which is just another, much darker way of restating what I originally said all the way back up at the top, in the first response.
They obviously care about gamers and power users, which is why they send out review samples to influencers and big publications in these communities. Even if gamers aren't a huge market (I honestly don't know, but just for the sake of argument), they're influential and many gamers are either current or future tech workers and decision-makers.

So, Intel does care about this market. Intel caters to this market with "unlocked" CPUs. They do want to wear the crown of gaming performance champion, even if there are limits to how far they'll go (e.g. no 3D cache, so far).

If Intel did care about joe consumer and didn't only want to consider its business customers, it would most definitely give us all longer-lasting sockets. Where else are we supposed to go with this?
I think their socket policy goes back to a time when the platform was in much more flux, and it kept them from being tied down too much, for too long. Now that the pattern has been established and all of their partners and ecosystem have adapted to it, it's probably hard for them to change.

BTW, if you're old enough, you might remember a time when Intel sockets weren't so heavily patented; some of the x86 competitors even made CPUs that were socket-compatible!
 
As much as I appreciated AMD's long-lived sockets, it did bite me when I bought this X570S motherboard to replace my X370 one, back when AMD was adamant that first-gen Ryzen boards would not support the 5000 series CPUs, a stance it later caved on.

But really, in 2025 we're at the stage where CPUs (and, to that effect, motherboards and possibly RAM) have reached a level of performance and capability such that they should be on a 3-5 year, if not longer, upgrade cycle for most people, longer than the support period for motherboards, especially given the price and performance benefits of upgrading your GPU over your CPU. So having a socket that'll only last a year or two isn't as big of a deal as it was in the past.
 
As much as I appreciated AMD's long-lived sockets, it did bite me when I bought this X570S motherboard to replace my X370 one, back when AMD was adamant that first-gen Ryzen boards would not support the 5000 series CPUs, a stance it later caved on.

But really, in 2025 we're at the stage where CPUs (and, to that effect, motherboards and possibly RAM) have reached a level of performance and capability such that they should be on a 3-5 year, if not longer, upgrade cycle for most people, longer than the support period for motherboards, especially given the price and performance benefits of upgrading your GPU over your CPU. So having a socket that'll only last a year or two isn't as big of a deal as it was in the past.
One could argue it's just as important, if not more so, to have a long socket lifespan now..

With the ever-rising cost of PC components, replacing a mobo every 2 years to upgrade your CPU is crazy!!

If you can get away with keeping a mobo for longer, that's a win in my book..

Jan 2023 was when I bought my mobo and I'm still using it; I don't see me replacing it till AM6..

So 2027, I think, is the projected AM6 release!!
 
One could argue it's just as important, if not more so, to have a long socket lifespan now..

With the ever-rising cost of PC components, replacing a mobo every 2 years to upgrade your CPU is crazy!!

If you can get away with keeping a mobo for longer, that's a win in my book..

Jan 2023 was when I bought my mobo and I'm still using it; I don't see me replacing it till AM6..

So 2027, I think, is the projected AM6 release!!

Why would you upgrade your CPU every 2 years? There's not a stark difference between them anymore, nothing like there was between, say, Bulldozer and Zen 1, and the difference may effectively be negligible to zero if your GPU isn't up to the task (for gaming) or you aren't doing heavy multi-threaded work.
 
Why would you upgrade your CPU every 2 years? There's not a stark difference between them anymore, nothing like there was between, say, Bulldozer and Zen 1, and the difference may effectively be negligible to zero if your GPU isn't up to the task (for gaming) or you aren't doing heavy multi-threaded work.
Can be a big difference in 4 years though!!

So that's what AMD supports..

Hell, the 5800X3D on the AM4 platform is still viable, to a point!!
 
Why would you upgrade your CPU every 2 years? There's not a stark difference between them anymore,
You're saying there's not a stark difference between, say, a 7700X and a 9800X3D? There are always cases like that, even if it's not the norm.

Now, if someone already had a 7800X3D, I wouldn't necessarily expect them to jump to the 9800X3D, unless they're the type of person who's always after those few extra fps, no matter the price or hassle. That would be extremely niche, I think.

If it seems like I'm arguing both sides of the issue, I guess I am. I think it's a mistake to be completely dismissive of the value in longer-lived sockets, yet I and probably most people wait long enough between upgrades that it doesn't make much sense to keep the same mobo, even if you could.

Basically, I think AMD fans tend to exaggerate the benefits, while Intel fans tend to downplay it too much. How much of a positive it is really depends on a given person's circumstances. If you can't afford the nicest CPU or are eyeing a tasty CPU in the next gen but need a system now, then I can see not wanting to spend too much on the first CPU and then upgrading later. However, for most people, their circumstances don't change much between one upgrade and the next, in which case they would probably space out their upgrades to the point where socket life is of little or no consequence.
 
Hum, when we change the CPU, it is mainly to change the platform: going from PCIe 3/4 to 5, from DDR3/4 to DDR5, getting new USB-C, etc.
So no matter the socket, the motherboard gets changed anyway.

Do you know people who only change the CPU when they do a big upgrade, or who change the CPU every year? I don't.
I believe it's way more than you're thinking. Also, this forum has proven it time and time again.

I myself have gone through the entire Ryzen CPU line using only 3 different motherboards. I actually didn't have to change one of them, but I did just because. We're talking 5 different CPUs. I plan on dropping a 9800X3D onto my B650E motherboard. This will hold me over until the next generation. It'll be awesome to only need 3 motherboards to span a decade and 5 different CPU architectures, IMO.
 
You're saying there's not a stark difference between, say, a 7700X and a 9800X3D? There are always cases like that, even if it's not the norm.

Now, if someone already had a 7800X3D, I wouldn't necessarily expect them to jump to the 9800X3D, unless they're the type of person who's always after those few extra fps, no matter the price or hassle. That would be extremely niche, I think.

If it seems like I'm arguing both sides of the issue, I guess I am. I think it's a mistake to be completely dismissive of the value in longer-lived sockets, yet I and probably most people wait long enough between upgrades that it doesn't make much sense to keep the same mobo, even if you could.

Basically, I think AMD fans tend to exaggerate the benefits, while Intel fans tend to downplay it too much. How much of a positive it is really depends on a given person's circumstances. If you can't afford the nicest CPU or are eyeing a tasty CPU in the next gen but need a system now, then I can see not wanting to spend too much on the first CPU and then upgrading later. However, for most people, their circumstances don't change much between one upgrade and the next, in which case they would probably space out their upgrades to the point where socket life is of little or no consequence.

A 7700X and a 9800X3D are two different animals: one is aimed at gaming and one is not. If you were primarily into gaming, you would have bought a 7800X3D, and compared to a 9800X3D its performance is about 16% less at 1920x1080, decreasing to negligible at 4K. But even at 1920x1080 you're looking at 210fps vs 178fps (Tom's Hardware's numbers), and that's with an RTX 4090, which most people don't have anything close to, so their gains will be smaller. Even a 7700X or a 5800X3D is still quite capable of 120fps averages at 1920x1080, assuming you have the GPU hardware to do it. In either case, it doesn't really make much sense to pay north of $400 for a new CPU when you could put that towards a new GPU and see far higher gains.
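For what it's worth, here's how those Tom's Hardware numbers pencil out, depending on which CPU you treat as the baseline (a trivial sketch using only the two fps figures quoted above):

```python
# The two 1920x1080 averages quoted above (Tom's Hardware, RTX 4090).
fps_9800x3d = 210
fps_7800x3d = 178

deficit = 1 - fps_7800x3d / fps_9800x3d   # 7800X3D measured against the 9800X3D
uplift  = fps_9800x3d / fps_7800x3d - 1   # 9800X3D measured against the 7800X3D
print(f"7800X3D trails by {deficit:.1%}; 9800X3D leads by {uplift:.1%}")
# -> 7800X3D trails by 15.2%; 9800X3D leads by 18.0%
```

So "about 16% less" is in the right ballpark either way you frame it.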

Can be a big difference in 4 years though!!

So that's what AMD supports..

Hell, the 5800X3D on the AM4 platform is still viable, to a point!!

Can be, but it depends. A 5800X3D can still easily give 120fps at 1920x1080 ultra in games, assuming your GPU is up to the task. And while something like a Ryzen 9950X may give a large performance increase over a Ryzen 5950X, 50% or so, most people who aren't professionals won't see those gains, since they're not running multiple VMs, local LLMs, hours-long encodes, etc., especially if their GPU is typical of most people (i.e., not a 4080/5080/4090/5090) and is the prime limiting factor in game FPS.

Also, if you care that much about performance, why would you not save a few hundred dollars, wait an additional year, and buy the first generation of a new platform? The jump from the 5000 to the 7000 series was remarkable: a good 25% in games and upwards of 40% if you're comparing X3D to X3D, assuming a high-end GPU. The 5000 series itself (1800X to 5950X) may have been a good 70% faster than first-generation Ryzen, but to give up another 20% by not moving to a new platform seems to run contrary to your chasing of performance. Also, of course, 2560x1440 and above cut the performance difference by a large amount due to GPU limitations.
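To put rough numbers on how those generational steps stack up, here's a trivial sketch (the percentages are the ballpark figures from this post treated as multiplicative factors, not fresh benchmark data):

```python
# Compounding the rough generational uplifts quoted above.
uplift_1800x_to_5950x = 1.70   # ~70% faster: 1800X -> 5950X (ballpark from this post)
uplift_5000_to_7000   = 1.25   # ~25% in games: 5000 -> 7000 series (ballpark)

total = uplift_1800x_to_5950x * uplift_5000_to_7000
print(f"Implied 1800X -> 7000-series uplift: ~{total - 1:.1%}")
# -> Implied 1800X -> 7000-series uplift: ~112.5%

# Skipping the 7000 series means forgoing its entire step:
print(f"Staying on the 5950X gives up ~{1 - 1/uplift_5000_to_7000:.0%}")
# -> Staying on the 5950X gives up ~20%
```

Note how that last line is exactly the "give up another 20%" figure: staying one step behind a 25% uplift leaves you at 1/1.25 = 80% of the newer platform's performance.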
 
Well, back in post #64, I posted a comparison of cache & memory latency between the 285K and 9950X. If you look all the way at the right edge, you see that the 9950X offers 14.3% lower DRAM latency. Furthermore, if you know anything about overclocking on Arrow Lake, you'd know the die-to-die frequency is pretty low and apparently has a lot of headroom. Don't you suppose that might have something to do with both their lower idle power and higher DRAM latency?
Yeah, if the die-to-die interface is clocked a lot lower on Intel, then I agree it'd contribute to lower idle power use.
 
Yikes! I wasn't thinking that much! You have to get into fairly exotic territory with GPUs or I guess exotic EBS configurations to hit those rates!
Actually, I think it was $4,000/hour for SQL Server on 11 cores and some modest DRAM allocation, but also full HA/DR. And that didn't even count the max config for web servers, another $1,000 or so, and this was seven or eight years ago.

Big corporate users get aggregate pricing maybe 50% lower, and the really big ones probably get flat rates per year. It was very frustrating for me, working for a small company that didn't qualify.

And that didn't even guarantee that we had the whole server; we could still get interference from other users in the same or adjacent VMs.

We would probably only need that for a few hours at a time. Was it worth it? I dunno. As you stated at the top, it's a lot cheaper to run on your own platforms, but you're also paying for some impressive sysop work and global configs, etc.
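Just to sketch what that kind of usage would have cost at those recalled list prices (the hours-per-month figure below is a made-up example, and I'm assuming the web-server figure was also hourly):

```python
# Back-of-the-envelope cloud cost at the prices recalled above (~7-8 years ago).
sql_rate = 4000        # $/hour, SQL Server config with full HA/DR (as recalled)
web_rate = 1000        # $/hour, max web-server config (assumed hourly)
discount = 0.50        # rough aggregate discount for big corporate users
hours_per_month = 6    # hypothetical "few hours at a time" usage

list_cost = (sql_rate + web_rate) * hours_per_month
big_corp  = list_cost * (1 - discount)
print(f"Small shop: ${list_cost:,}/month; big customer: ~${big_corp:,.0f}/month")
# -> Small shop: $30,000/month; big customer: ~$15,000/month
```

It's easy to see why running on your own hardware looked attractive.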
 
What's all this about overloading cache?
If you're going to load up on cores, you have to talk about cache.
I think Intel has done a good job balancing cores and cache over the last ten-plus years, one of the few things they got right, but app developers, system managers, and performance tuners have to worry about this stuff in detail.
 
Only in the PC fanboy world can a company control over 70% of a market, as Intel does with the desktop market, and have people act like it can't sell anything.
Intel might control 70% of the desktop market, but what proportion of that is Arrow Lake?

E.g., Dell's current lineup of desktops and all-in-ones goes as far back as Alder Lake. If you include thin clients, they even have a Celeron N5105 option, i.e. "Tremont" Atom cores, the generation before Alder Lake's "Gracemont" E-cores.

My entirely-uninformed guess is that Raptor Lake (and its various refreshes) makes up at least a plurality of new desktop sales, with Alder Lake-N (and its Twin Lake refresh) a distant second.
 
AMD has shown the way. Intel needs to learn.

AMD likely has to keep the same chipset longer for its board partners because of the much lower overall volume, not because it's catering to the CPU-upgrade crowd, which is quite small. It also reduces its in-house development load.

OTOH, the debut of the X3D chips probably did provide the first really compelling case for upgrading a CPU on an older motherboard, since all the new improvements were packaged on the CPU and did not depend on chipset assets.
 
A 7700X and a 9800X3D are two different animals: one is aimed at gaming and one is not.
The 7700X was a fine CPU for gaming, especially if someone bought it before the 7800X3D launched. Or maybe they didn't have a fast enough GPU for it to matter much what CPU they had, but then upgraded their GPU and decided a newer CPU was in order.

It's like I said, in the last paragraph of my post. You're trying too hard to dismiss the benefits of a longer-lived socket. You just can't pretend there aren't cases where it's meaningful for some people.

while something like a Ryzen 9950X may give a large performance increase over a Ryzen 5950X, 50% or so, most people who aren't professionals won't see those gains, since they're not running multiple VMs, local LLMs, hours-long encodes, etc.
Most developers could use that extra horsepower. I have a 24-thread machine at my job, and I could still use more muscle to make those builds go faster.

if you care that much about performance, why would you not save a few hundred dollars, wait an additional year
Silly argument. A year can be a long time to wait, for some people. Whether it makes sense depends a lot on their situation. You can't apply that logic to everyone.
 
And that didn't even guarantee that we had the whole server; we could still get interference from other users in the same or adjacent VMs.
AWS has instance types where you're guaranteed to have the full server. However, I'd first check whether you could get adequate performance and QoS with cheaper instance types.

you're also paying for some impressive sysop work and global configs, etc.
Wow, it's been a long time since I heard anyone use that term!
: )

Did you ever use BBSes, or did you run across it elsewhere?
 
sysop
Wow, it's been a long time since I heard anyone use that term!
: )

Did you ever use BBSes, or did you run across it elsewhere?
I suppose I used some BBSes with my 300/1200 baud modem back in the day.
Sysop, sysadmin, devops, who cares.

I was looking at Azure SQL pricing just now; it's changed a lot in seven years, LOL. Maybe I misremember the price, but it was very high, and the max configurations were not really sufficient. Looks like the prices have come way down and the config options have gone way up.
https://azure.microsoft.com/en-us/pricing/details/azure-sql-database/single/#pricing
About time.
Bet I could have fun with these.

But these are probably about the effective prices the big boys got even in 2016/17.
 
Intel really missed out on the naming for this socket! They could have added a few dozen more pins and named it Intel socket 2k! Just imagine the stupid commercials selling that line to people! 🤣
 
Intel really missed out on the naming for this socket! They could have added a few dozen more pins and named it Intel socket 2k! Just imagine the stupid commercials selling that line to people! 🤣
They had an LGA 2011 socket, in the year 2011. So, how about that? It's what the Sandy Bridge and Ivy Bridge HEDT CPUs and E5/E7 Xeons used.

Then, confusingly, they revised it in 2014 without changing the number of contacts. To distinguish the two, they simply added a "v3" qualifier, which I guess was meant to match the Xeon model number versioning scheme. It does kinda work out, since 2011 + 3 = 2014.
 
Hum, when we change the CPU, it is mainly to change the platform: going from PCIe 3/4 to 5, from DDR3/4 to DDR5, getting new USB-C, etc.
So no matter the socket, the motherboard gets changed anyway.

Do you know people who only change the CPU when they do a big upgrade, or who change the CPU every year? I don't.
I did... from a 3700X to a 5800X3D...
And my friend went from a 4600X to a 5700X3D...

So there are people who do that. But most people buy a PC, use it, and buy a new PC when the old one is too slow...
 
I did... from a 3700X to a 5800X3D...
And my friend went from a 4600X to a 5700X3D...

So there are people who do that. But most people buy a PC, use it, and buy a new PC when the old one is too slow...
More anecdotal evidence: 2700X -> 3800XT -> 5900X, and now 8500G (while waiting) -> 9950X3D for me.

Which reminds me... I have to update the sig, but they lowered the number of characters allowed. Bummer.

Regards.
 
I always find it amusing when people go all doom and gloom over Intel changing sockets. Do I wish Intel would do longer support? Sure, but do I expect it? Absolutely not. The LGA 1156 through LGA 1200 sockets were basically arbitrary changes, and their real customers (OEMs) didn't care at all. This set the standard going forward: their engineers didn't have to worry about backwards compatibility, so they don't.
I would bet OEMs get tired of having to retool for different sockets all the time. It's cheaper to stay on a single platform longer.
 
I would bet OEMs get tired of having to retool for different sockets all the time. It's cheaper to stay on a single platform longer.
Why would they care, when Intel keeps the sockets the same size so retooling isn't required? LGA1156 launched in 2009, and until ADL launched in 2021, the sockets were the same size. Everything launched since then has also been the same size. I'd be surprised if changing sockets is more difficult than any other standard feature shift on a motherboard.
 
It looks to me like desktop updates will require new motherboards in the next few years, anyway, because of the rapidly expanding memory requirements for running AI models.

The MBs also probably need to be redesigned to space PCIe slots at 2.5x the current pitch, since the current AI-capable GPUs require the extra room for their coolers.

So, I'm going out on a limb and predicting AMD will be requiring new MBs for the next gen of AI-capable desktops, at least as an option.

The other possibility is that workstation boards will become the home for the AI boom. That's probably the only near-term solution. There are already some workstation MBs with wider spacing of the PCIe slots.
 