News: Detailed Image of Intel's LGA7529 Socket Leaks Online

bit_user

Polypheme
Ambassador
With mainstream server sockets getting so large, Intel is really going to have to bring back a mid-sized socket for smaller servers and workstations. Our software is usually deployed on smaller servers, and I worry the platform costs of these monstrosities are getting out of hand for those who don't need quite so many cores or PCIe lanes.
 
  • Like
Reactions: thestryker

InvalidError

Titan
Moderator
Distributing uniform contact force across all of those pins is going to be challenging.

With mainstream server sockets getting so large, Intel is really going to have to bring back a mid-sized socket for smaller servers and workstations.
That socket is for Intel's top-of-the-line XSP stuff. We're talking multi-socket and optionally multi-chassis systems. We're way beyond small servers there. Intel will undoubtedly maintain an intermediate socket size between desktop and high-end server for everything else in-between.

Back in the good old mainstream vs HEDT days, desktop had ~1100 pins and HEDT had ~2000. Now, desktop sockets are 1700-1800 pins and servers have ~4000 pins. I don't imagine AMD or Intel being particularly eager to produce something that slips in between the two, if that's what you were wishing for.
 
  • Like
Reactions: bit_user and P1nky
That socket is for Intel's top-of-the-line XSP stuff. We're talking multi-socket and optionally multi-chassis systems. We're way beyond small servers there. Intel will undoubtedly maintain an intermediate socket size between desktop and high-end server for everything else in-between.
That socket will most likely be used for the entire 6th (?) Gen Xeon Scalable line. I would wager that the smallest core count will now be 16 instead of 8, simply due to the physical size. As for an intermediate, I doubt there will be anything: you will have desktop and server, unless Intel wants to get back into HEDT. For something "smaller" from Intel, we might be left with embedded systems based on either Xeon or Atom.
 

InvalidError

Titan
Moderator
That socket will most likely be used for the entire 6th (?) Gen Xeon Scalable line. I would wager that the smallest core count will now be 16 instead of 8, simply due to the physical size. As for an intermediate, I doubt there will be anything: you will have desktop and server, unless Intel wants to get back into HEDT. For something "smaller" from Intel, we might be left with embedded systems based on either Xeon or Atom.
Intel also has the Xeon W line, albeit primarily pitched at workstations instead of servers. I expect that sort of "middleground" to stick around for a while.
 

bit_user

Polypheme
Ambassador
Intel also has the Xeon W line, albeit primarily pitched at workstations instead of servers. I expect that sort of "middleground" to stick around for a while.
What has me worried is how the current generation of Xeon W uses the same socket as the Xeon Scalable server CPUs, even though they disabled & reassigned some of the pins. I hope you're right and they bring back a middle-sized socket, like we last had with LGA2066.

I truly wonder how many of the pins are active in the Xeon W-2400 series, which supports only quad-channel memory and 64 PCIe lanes.
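
For fun, here's a back-of-envelope sketch of what such a configuration actually needs. The per-interface signal-pin figures below are my own ballpark assumptions, not Intel numbers, and power/ground pins are ignored entirely:

```python
# Very rough pin-budget sketch for a W-2400-style configuration.
# The per-interface signal counts are ballpark assumptions, not
# official Intel figures; power/ground pins are ignored entirely.

ASSUMED_SIGNALS = {
    "ddr5_channel": 130,  # assumed signal pins per DDR5 memory channel
    "pcie_lane": 4,       # 2 differential pairs (TX/RX) per PCIe lane
}

def estimate_signal_pins(mem_channels: int, pcie_lanes: int) -> int:
    """Estimate the I/O signal pins a given memory/PCIe config needs."""
    return (mem_channels * ASSUMED_SIGNALS["ddr5_channel"]
            + pcie_lanes * ASSUMED_SIGNALS["pcie_lane"])

print(estimate_signal_pins(4, 64))   # W-2400-ish:  ~776 signal pins
print(estimate_signal_pins(8, 112))  # W-3400-ish: ~1488 signal pins
```

Either way, it's a small fraction of LGA4677's 4,677 lands.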
 

InvalidError

Titan
Moderator
What has me worried is how the current generation of Xeon W uses the same socket as the Xeon Scalable server CPUs, even though they disabled & reassigned some of the pins. I hope you're right and they bring back a middle-sized socket, like we last had with LGA2066.
I actually meant what I wrote: LGA4xxx is the new middle ground between desktop and 7000+ pin servers, not that I expect anything new to be introduced between ~1700-pin desktops and 4xxx-pin servers.

The W5-34xx chips support 8-channel memory and 112 lanes of PCIe 5.0, starting from $1600.
 

ezst036

Honorable
Oct 5, 2018
With mainstream server sockets getting so large, Intel is really going to have to bring back a mid-sized socket for smaller servers and workstations.

[Image: ROCKY-518HV single-board computer with Socket 7]
 

bit_user

Polypheme
Ambassador
I actually meant what I wrote: LGA4xxx is the new middle ground between desktop and 7000+ pin servers, not that I expect anything new to be introduced between ~1700-pin desktops and 4xxx-pin servers.
And I actually meant what I wrote about that being overkill. For entry-level servers and workstations, 4-channel memory is fine; that's exactly what Intel did with the W-2400 series, in fact. So, they know this quite well.

We don't need the other 4 channels' worth of pins, nor do we need pins for the extra 48 PCIe lanes, or support for a 350 W TDP. And that's just the extra baggage in the current gen.

I expect that their W-3600 workstations will use the monstrosity that is LGA7529. They'll probably need the full 12 channels to counter the upcoming Threadripper Pro.
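
For scale, the raw bandwidth math (assuming DDR5-4800 for every configuration, which is my guess at the speed grades):

```python
def ddr5_bandwidth_gbs(channels: int, mts: int, bus_bytes: int = 8) -> float:
    """Peak DDR5 bandwidth in GB/s: channels x transfers/s x bytes/transfer."""
    return channels * mts * bus_bytes / 1000

print(ddr5_bandwidth_gbs(4, 4800))   # quad-channel: 153.6 GB/s
print(ddr5_bandwidth_gbs(8, 4800))   # 8-channel:    307.2 GB/s
print(ddr5_bandwidth_gbs(12, 4800))  # 12-channel:   460.8 GB/s
```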
 

InvalidError

Titan
Moderator
And I actually meant what I wrote about that being overkill. For entry-level servers and workstations, 4-channel memory is fine; that's exactly what Intel did with the W-2400 series, in fact. So, they know this quite well.
They may know it, but neither AMD nor Intel seems to think there is enough of a market below their 4000+ pin server chips to bother with anything beyond making big-socket CPUs with half the stuff disabled or missing. If they made another HEDT/small-server-specific socket, we'd likely be back to 1/3 of people complaining about RAM limitations, 1/3 complaining about insufficient IO, and 1/3 saying it fits just right.
 

JamesJones44

Reputable
Jan 22, 2021
With mainstream server sockets getting so large, Intel is really going to have to bring back a mid-sized socket for smaller servers and workstations. Our software is usually deployed on smaller servers, and I worry the platform costs of these monstrosities is getting out of hand for those who don't need quite so many cores or PCIe lanes.

Hopefully this is where the move to chiplet/tile-based designs helps them. They have definitely grown into a one-size-fits-all model for each segment over the years.
 

1_rick

Distinguished
Mar 7, 2014
" In fact, LGA7259 can probably challenge AMD's SP5 socket with 6,096 contacts. "

First off, this is an inherently silly tautology. But second, this e-pin brandishing is going a bit overboard.
 
  • Like
Reactions: bit_user

bit_user

Polypheme
Ambassador
second, this e-pin brandishing is going a bit overboard.
Pin counts are growing in response to needs for memory bandwidth and capacity.

The industry appears to be moving towards using in-package HBM to address bandwidth needs, while CXL enables pools of memory to be decoupled from the CPU for greater capacity scaling. This approach also improves energy efficiency, while trimming pin counts somewhat.

Another contributor to the high pin counts is the "corpus callosum" connecting multiple CPU packages. As core counts continue to scale, I predict we'll also begin to see a trend towards single-CPU servers. That could eliminate the need for UPI or Infinity Fabric, as Intel and AMD respectively term their CPU interconnect bus.

Even today, I read admins posting that the main reason they use dual-CPU setups is for memory capacity. If CXL memory pools can address that issue, then the transition to single-CPU servers might come very quickly.
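
The capacity argument is easy to put numbers on. A quick sketch (the channel counts and DIMM sizes here are just illustrative):

```python
def dram_capacity_tb(channels: int, dimms_per_channel: int, dimm_gb: int) -> float:
    """Max direct-attached DRAM per socket, in TB."""
    return channels * dimms_per_channel * dimm_gb / 1024

single = dram_capacity_tb(channels=8, dimms_per_channel=2, dimm_gb=64)
print(single)      # 1.0 TB on one socket
print(2 * single)  # 2.0 TB is what the second socket buys you today
# A CXL memory pool could close that gap without the second CPU.
```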

Further out, there's talk of silicon photonics taking over from copper as the primary system interconnect medium (e.g., PCIe, CXL).

So, there are good reasons to believe pin-count inflation won't continue indefinitely.
 

InvalidError

Titan
Moderator
Further out, there's talk of silicon photonics taking over from copper as the primary system interconnect medium (e.g., PCIe, CXL).

So, there are good reasons to believe pin-count inflation won't continue indefinitely.
I'm a little surprised PCIe lasted as long as it did, managing 32Gbps through a 20-year-old slot design, albeit with only ~3" of total trace length from source to sink before requiring signal regeneration.
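
For anyone keeping score, the per-lane rates across generations work out like this (raw rates and encodings are straight from the PCIe specs; the script is just a quick sketch):

```python
# Raw rate (GT/s) and line encoding per PCIe generation, from the specs.
PCIE_GENS = {
    1: (2.5, 8 / 10),     # 8b/10b encoding
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),  # 128b/130b from Gen 3 onward
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

for gen, (gts, encoding) in PCIE_GENS.items():
    gbps = gts * encoding  # usable Gbps per lane, per direction
    print(f"PCIe {gen}.0: {gts:>4} GT/s -> ~{gbps:.1f} Gbps/lane")
```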

Some 20-odd years ago, Intel demonstrated a photonics chip capable of taking a bunch of 25Gbps channels directly from electrical to multiplexed optical and back. Back then, I imagined we'd be poking fibers into chips to connect stuff together within 10 or so years. Since then, we've had a slew of single-fiber speed records on increasingly compact hardware.

The pin-count war may be nearing its end, only to be replaced by a competition over who has the most of whatever those 1+Tbps optical ports end up being called. Imagine being able to place your CPU package anywhere in your case (e.g., strapped to the intake fans) and have it connected to the motherboard's chipset via two SMF cables.
 
  • Like
Reactions: bit_user

bit_user

Polypheme
Ambassador
Some 20-odd years ago, Intel demonstrated a photonics chip capable of taking a bunch of 25Gbps channels directly from electrical to multiplexed optical and back. Back then, I imagined we'd be poking fibers into chips to connect stuff together within 10 or so years.
Loosely related: back in 2017, Intel introduced some Skylake Xeon SP models with an integrated Omni-Path PHY, although it seems the optical transceiver was still external to the package (I recall seeing some photos that appeared to show an optical cable connected directly to it).
However, this wasn't for use as a system-level interconnect, but rather for datacenter-level networking. When Omni-Path failed to gain market traction, these models were canceled.