Intel's next-gen Nova Lake CPUs will seemingly use a new LGA1954 socket

Why would they care when Intel makes them the same size, so retooling isn't required? LGA1156 launched in 2009, and until ADL launched in 2021 the sockets were the same size. Everything launched since then has also been the same size. I'd be surprised if changing the socket is more difficult than any other standard feature shift on a motherboard.

Same size sockets, but the pinouts were different. That's still a change to tooling, as the process will be different for said pinouts, plus R&D time to make sure it works. A minor change can be a major pain to push through in the automotive diecast supplier world; I can only imagine how complicated it is for something as technically advanced as a motherboard's circuitry. Keeping the platform the same is going to be far less work to update once a new CPU gen comes out. OEMs hamstring their systems with crappy, slow RAM and bargain-basement SSDs, so all the advances in chipset and PCIe tech are something they don't even have to adopt if they don't want to: BIOS update, and drop in the new chip. For AMD, the OEMs could have stuck with 300- or 400-series chipsets on AM4, and the average user wouldn't know, or probably care.
 
Can be, but it depends. A 5800X3D can still easily give 120 fps at 1920x1080 ultra in games, assuming your GPU is up to the task. And while something like a Ryzen 9950X may give a large performance increase over the Ryzen 5950X, 50% or so, most people who aren't professionals won't see those gains, since they're not running multiple VMs, local LLMs, hours of encoding, etc., especially if their GPU is typical of most people's (not a 4080/5080/4090/5090) and is the prime limiting factor in game FPS. Also, if you care that much about performance, why would you not save a few hundred dollars, wait an additional year, and buy the first generation of a new platform? The jump from the 5000 to 7000 series was considerable: a good 25% in games, and upwards of 40% if you're comparing X3D to X3D, assuming a high-end GPU. The 5950X may itself have been a good 70% faster than first-generation Ryzen (1800X to 5950X). To lose another 20% by not moving to a new system seems to run contrary to your chasing of performance. Also, of course, 2560x1440 and above cut the performance difference by a large amount due to GPU limitations.
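A quick back-of-envelope sketch of how those figures chain together: generational uplifts compound multiplicatively, so stacking the rough percentages quoted above (which are illustrative ballparks, not benchmark data) looks like this:

```python
# Sketch: generational uplifts compound multiplicatively.
# The percentages are the ballpark figures quoted above, not benchmarks.
def compound(uplifts):
    """Chain successive generational gains, e.g. +70% then +25%."""
    total = 1.0
    for u in uplifts:
        total *= 1.0 + u
    return total

# ~70% (1800X -> 5950X) stacked with ~25% (5000 -> 7000 series in games):
gain = (compound([0.70, 0.25]) - 1.0) * 100
print(f"combined uplift over the 1800X: ~{gain:.1f}%")  # ~112.5%, not 95%
```

In other words, skipping a platform forfeits more than the headline per-generation number suggests, because the gains multiply rather than add.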
Frankly, if you're running anything above 1080p, the CPU is obviously getting more and more pointless ..

Small increases between gens are a myth, really ..

Just think: if AMD or Intel dropped a 40% IPC uplift over the previous gen, no one would buy anything for years and years.

So it's not viable to give a huge uplift gen to gen!
 
"...no one would buy anything for years and years ."

We're not in a normal upgrade cycle. Better and better AI apps are appearing almost weekly.

The new toys will probably require a workstation board and a couple of AI processing cards to run them offline.
 
It looks to me like desktop updates will require new motherboards in the next few years, anyway, because of the rapidly expanding memory requirements for running AI models.

The MBs also probably need to be redesigned to space PCIe slots at 2.5x the current spacing, since the current AI-capable GPUs require the extra room for their coolers.

So, I'm going out on a limb and predicting AMD will be requiring new MBs for the next gen of AI capable desktops, at least as an option.

The other possibility is that workstation boards will become the home for the AI boom. That's probably the only near-term solution.

Too bad not everyone runs or uses AI, so this point is kind of moot.
 
Just think: if AMD or Intel dropped a 40% IPC uplift over the previous gen, no one would buy anything for years and years.

So it's not viable to give a huge uplift gen to gen!
It sounds like you're saying they're not giving bigger IPC increases because they don't want to. However, I'd point out that AMD was delivering big IPC increases only until they basically caught up to Intel. Not because they no longer wanted to, but just because they'd basically reached the limit of what the technology would permit.

Likewise, Intel only delivered big IPC increases when going from 14 nm -> Intel 7 and now from Intel 7 -> N3B. So, both of those coincided with very large jumps in process technology. I think that's really what's required, in order to see a big IPC increase.
 
It sounds like you're saying they're not giving bigger IPC increases because they don't want to. However, I'd point out that AMD was delivering big IPC increases only until they basically caught up to Intel. Not because they no longer wanted to, but just because they'd basically reached the limit of what the technology would permit.

Likewise, Intel only delivered big IPC increases when going from 14 nm -> Intel 7 and now from Intel 7 -> N3B. So, both of those coincided with very large jumps in process technology. I think that's really what's required, in order to see a big IPC increase.
Watch things like Moore's Law Is Dead and other YouTube channels: they constantly clickbait, or in MLID's case claim 40% from their sources.

I've watched these videos for nearly 5-6 years now, and every time the rumour mill starts on the next-gen CPUs, the same old BS starts again: the 9000 series is going to be 40% better than the 7000 series, and likewise the 10000 series is going to be 40% better than the 9000!

Only to see 10-15% uplifts once the products hit shelves!

If even that: in the 9000 series' case, it fell flat against the 7000 series.

I have no doubt the 10000-series AMD CPUs were conceived two gens ago, when the 7000-series CPUs were released.

There is no need to crush the competition by generations (Intel has crushed themselves out of pure hubris and silly mistakes).

Of course they don't want to sell you their best of the best; it's a pure money machine.

Nvidia doesn't sell you their best cards first (disregard the 5090); 100% there will be a 5080 Ti and a better-VRAM something to keep buyers buying.

Ask yourself why AMD bothers with the non-X3D CPUs when the 9800X3D is now basically the best gaming CPU and the 9950X3D is basically the best gaming/production CPU.

Because money: there is more money in selling smaller incremental uplifts than in going big.

For example, given the current state of Intel's desktop CPUs, if AMD gave us a 50% uplift with the 10000 series, then what are they going to do, give us another 50% with the 11000? At some point they will only be competing with themselves, and that's bad for business!

Vice versa for Intel: if they crushed the 10000 series and sent it back to the stone age with something truly crazy good, they would only be competing with themselves!

AMD or Intel only need to be that little bit better than the competition to keep market share and mindshare while making money hand over fist.
 
Of course they don't want to sell you their best of the best; it's a pure money machine.
No, it's a delicate balance of how much IPC they can deliver at a certain clock speed, on a certain process node, at a certain area. Increasing IPC comes at the expense of die area and clock speed. It's not just a dial you can turn up or down, independent of anything else.

It also takes time to increase the sophistication of their designs, which mostly build on what they did in the previous generation. Realistically, they can only change a certain amount between one generation and the next, which goes towards explaining some of the improvements in Zen 3, which was made on mostly the same node as Zen 2 and has only like 10% more die area.

Engineering is a very incremental exercise, which a lot of people might not fully appreciate. You can do all the modeling and estimation you want of a chip, but at some point you just have to build the thing. Then, take detailed measurements, do thorough analysis, and figure out what worked and what didn't, so you can decide what to build on or scrap, in future generations.

Time is also a factor. Not only do they need to do all of the modeling and design, but still leave time for testing, debugging, and a couple respins. And they do a lot of testing, since chip bugs can be so costly. At one point in time, I think the industry average was 2 test engineers for every 1 design engineer.

Nvidia doesn't sell you their best cards first (disregard the 5090); 100% there will be a 5080 Ti and a better-VRAM something to keep buyers buying.
The reason they release the "Super" and other up-spec'd editions is because manufacturing yields improve over time, giving them margins to unlock more of the chip and push clock speeds a bit further.

Ask yourself why AMD bothers with the non-X3D CPUs when the 9800X3D is now basically the best gaming CPU and the 9950X3D is basically the best gaming/production CPU.
Because the X3D die adds cost and there are some use cases where it adds basically no performance.

Another reason is that it takes more time to do the X3D CPUs. If they ship the non-X3D version first, they have time to perfect the base CPU. Then, they have a stable platform to use for optimizing the X3D version.

Also, their chiplet desktop & laptop CPUs are sort of a byproduct of their server CPUs, which is a much bigger and more important market for them. The desktop CPUs again function as sort of a development vehicle and secondary market for those chiplets. And plenty of server workloads do not significantly benefit from the extra L3 cache (which is why they can get away with their C-cores having half the per-core L3 as their normal CCDs).

For example, given the current state of Intel's desktop CPUs, if AMD gave us a 50% uplift with the 10000 series, then what are they going to do, give us another 50% with the 11000? At some point they will only be competing with themselves, and that's bad for business!
The problem with this logic is you're assuming it's even possible to deliver that kind of improvement. Look how much trouble they had in delivering an improvement in either Meteor Lake (so much so, that the desktop version even got cancelled) or Arrow Lake!

Most of the time, the obvious answer is the right one. These companies are trying to give us as much as they realistically think they can deliver and sell. Maybe, around the time of Haswell or Skylake, Intel was holding back on IPC just a little bit, due to little real competition. That's obviously no longer a luxury they can afford. I think Golden Cove was the point where they basically gave their best effort to offer as much IPC as they could, establishing a sort of new baseline.
 
The problem with this logic is you're assuming it's even possible to deliver that kind of improvement. Look how much trouble they had in delivering an improvement in either Meteor Lake (so much so, that the desktop version even got cancelled) or Arrow Lake!
With Arrow Lake they reduced power by a lot, reduced clocks, and removed Hyper-Threading, and it's still better than the 14900K and on par with the 9950X in productivity.
Also, from early on they used E-cores instead of full cores.
Had they not reduced/removed those things and instead used full cores with Arrow Lake, they'd have leapfrogged the 9950X by a huge margin.

The rumor for next gen is a doubling of the current Intel CPU, basically having two CCDs like AMD has, which I don't believe will happen for desktop, but still, the rumor is there.

Intel could also increase cache by a huge amount to get the same boost as the X3D CPUs.
They could also easily reintroduce any of the above in an upcoming CPU.

They won't do any of this because they are selling just fine against AMD, and they don't need to make their CPUs more expensive to produce.
 
With Arrow Lake they reduced power by a lot, reduced clocks,
We have yet to see what power consumption on the 200S models looks like. You say it's more power-efficient than 14th gen, but it regressed on gaming performance. Let's see a proper generational uplift in performance, and then we can talk efficiency.

Had they not reduced/removed those things and instead used full cores with Arrow Lake, they'd have leapfrogged the 9950X by a huge margin.
Would've cost a lot more and still not been a better gaming CPU.

Intel could also increase cache by a huge amount to get the same boost as the X3D CPUs.
They did go big on L2 cache and add 8 MB of side cache (basically, L4). When you add it all together, that puts the 285K at 84 MB. A regular 9950X has only 80 MB of L2 + L3.
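Summing the published cache sizes shows where that 84 MB figure comes from (the per-core and per-cluster breakdown below is my reading of the spec sheets; treat it as illustrative):

```python
# Rough cache tally, in MB, for the Core Ultra 9 285K vs. the Ryzen 9 9950X.
# Breakdown is my reading of the spec sheets, not an official figure.
arrow_lake_285k = (
    8 * 3      # 3 MB L2 per P-core, 8 P-cores
    + 4 * 4    # 4 MB shared L2 per E-core cluster, 4 clusters
    + 36       # shared L3
    + 8        # "side" cache (basically an L4)
)
ryzen_9950x = (
    16 * 1     # 1 MB L2 per core, 16 cores
    + 2 * 32   # 32 MB L3 per CCD, 2 CCDs
)
print(arrow_lake_285k, ryzen_9950x)  # 84 80
```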

They won't do any of this because they are selling just fine against AMD, and they don't need to make their CPUs more expensive to produce.
Would've? Could've? All I see is that they didn't. I don't hear anyone applauding Intel for that move.
 
They did drop it from client CPUs. The 2x multiplier comes from there presumably being up to two compute tiles, meaning these are (supposedly) real core counts!
Ohhhhh *facepalm*, lol, I interpreted the 2x as an SMT multiplier, not dual chiplets. Now this makes sense, TY.
 
...

The reason they release the "Super" and other up-spec'd editions is because manufacturing yields improve over time, giving them margins to unlock more of the chip and push clock speeds a bit further. ...
Well, products are products first, even for hi-tech companies; the Super models in their simplest form are a product refresh, which does make marketing sense and generates some revenue, whereas otherwise the lineup would look stagnant, with sales somewhat proportionately so. It just happens to work out nicely that nodes have had that much more time to mature by then, resulting in slightly better clocks, as you mentioned, for those technical reasons.

It's completely typical in the industry at this point, e.g. AMD releases an xx5x refresh of an original xx0x release, e.g. the Radeon 6750 from the 6700. Interestingly, it even provides the opportunity to shove in some faster memory... lol. Unfortunately for AMD, this isn't as obvious to non-enthusiast consumers -- IMO, Nvidia has the better marketing with the "Super" nameplate, and yes, even though anyone can go "50 > 0".
 
I wonder if mobo makers are going to bring back 8 SATA 6Gb/s ports? Last gen, mobo suppliers said SATA was unpopular and only included 8 on $1,000 mobos. If it's so unpopular, where does that put hard drive manufacturers in just a couple of years? M.2 is still likely a bit prohibitively expensive at the moment for some folks. What are your opinions on the number of SATA ports on mobos? Should we have 8 of them? Fewer? More? Data is getting larger every year, it seems, for some businesses and at-home enthusiast editors, so I can't imagine going forward with access to only 4 SATA ports on mobos. Of course cards exist, but they aren't quite as fast, right? And when we're talking about moving or writing data, that time could really add up quickly on the large storage volumes HDDs provide on the SATA interface.
 
Typical Intel, force everyone to change the motherboard too so they make more money.
Intel and AMD actually make very little on motherboards, as they only make the chipsets, which make up a small portion of the overall cost. They DGAF about motherboard sales. It's the AIBs that like selling new motherboards. I can only assume that's why they've been charging more for similarly equipped AMD boards than Intel boards.
 
I wonder if mobo makers are going to bring back 8 sata 6 ports?
This is easily solved by cheap M.2 add-in cards. And yes, they ARE faster than SATA. I find it more concerning that many of the "affordable" boards skimp on PCIe slots, especially on Intel, which offers more lanes; and no GPU needs PCIe 5.0 x16.

When it comes to extending the life of SATA, it's really only usable for spinny boys, and many people just opt for a cheap NAS, as 10G networking is already faster than real-world SATA speeds and even onboard 2.5Gbit is adequate.

But I get the sentiment. I currently run 3x M.2 drives and twin HDDs in RAID 0 for a total of 13 TB of storage, and still find myself having to delete stuff to keep capacities below 80%. But I need more room in my case before I can use those other 2 SATA sockets for more HDDs. I probably should have built a NAS already, but I just leave my storage open on my network for everyone else in the family to use.
 
Yeah, 2.5 Gbps networking more or less solves the SATA issue. 300 MB/s is faster than hard drives can go, so you can just set up a NAS and get the same results. Even a SATA SSD wouldn't be hurt much by that. And there is always 5 Gbps and 10 Gbps networking, but that starts getting pricier on the NAS and router side.
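The arithmetic behind that, as a sketch (the ~5% protocol-overhead figure is my assumption; actual overhead varies with protocol and tuning):

```python
# Convert a network line rate in Gbit/s to rough usable MB/s.
# The 5% overhead figure is an assumption, not a measured value.
def usable_mb_per_s(gbps, overhead=0.05):
    return gbps * 1000 / 8 * (1 - overhead)

for rate in (2.5, 5, 10):
    print(f"{rate} GbE ~ {usable_mb_per_s(rate):.0f} MB/s usable")
```

So 2.5 GbE lands right around 300 MB/s, which comfortably outruns any hard drive and even approaches SATA SSD territory.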
 
Most of the time, the obvious answer is the right one. These companies are trying to give us as much as they realistically think they can deliver and sell. Maybe, around the time of Haswell or Skylake, Intel was holding back on IPC just a little bit, due to little real competition. That's obviously no longer a luxury they can afford. I think Golden Cove was the point where they basically gave their best effort to offer as much IPC as they could, establishing a sort of new baseline.
As I recall, in Haswell's case the small IPC gains were also because power efficiency gains were prioritized above IPC gains. Intel was trying to get Haswell into fanless tablets.
This is easily solved by cheap M.2 add-in cards. And yes, they ARE faster than SATA. I find it more concerning that many of the "affordable" boards skimp on PCIe slots, especially on Intel, which offers more lanes; and no GPU needs PCIe 5.0 x16.

When it comes to extending the life of SATA, it's really only usable for spinny boys, and many people just opt for a cheap NAS, as 10G networking is already faster than real-world SATA speeds and even onboard 2.5Gbit is adequate.

But I get the sentiment. I currently run 3x M.2 drives and twin HDDs in RAID 0 for a total of 13 TB of storage, and still find myself having to delete stuff to keep capacities below 80%. But I need more room in my case before I can use those other 2 SATA sockets for more HDDs. I probably should have built a NAS already, but I just leave my storage open on my network for everyone else in the family to use.
How do you build a cheap NAS if none of the more affordable motherboards have a lot of SATA ports? I guess the answer is SATA expansion cards.
 
How often do you plan on maxing out every single SSD in a NAS all at the same time?! (While also running max transfer to and from the GPU and the CPU.)
It doesn't matter if there aren't enough PCIe lanes if you never use all of them at the same time.
Well, I can't speak for the average user, but you would be surprised how many enthusiasts actually use all the lanes in their motherboards for I/O.

Want a 10Gbit NIC? Want USB4, or at least 10Gbps ports? Want RAID 5? Want a small homebrew NAS? Want more than 4 NVMe ports with add-in cards? And so on and so forth.

In my case, I have 3 NVMe and 3 SATA drives, and I ran out of "normal" lanes, so I sacrificed the x16 down to x8, and the lane splitting is real.

It's not overblown or a stretch to say AMD needs to pick up the slack for enthusiasts. Intel is not much better. The jump from the regular consumer platform to the workstation is too damn much and they need to close the gap.

Regards.
 
