News Massive LGA7529 Socket for Intel's 'Sierra Forest' Pictured

More info on this leak:

There are actually even more pictures of the socket and the specific 2S motherboard that is said to be designed for both Sierra Forest and Granite Rapids-AP CPUs.

The listing posted at Goofish (discovered by HXL) hints that the socket supports up to 128-core, 256-thread Granite Rapids-AP CPUs. The motherboard appears to have a 12-channel DDR5 memory layout with 2 DIMMs per channel, for a total of 24 DIMM slots.

So I'm guessing the Sierra Forest lineup should also max out at 128 cores, or maybe even more? Anyway, we can see that the socket is a pre-production SKU from LOTES.

By the way, there have been several rumors/reports that the Intel Birch Stream platform was initially designed for the AP line of processors, which has since been canceled. This raises the question of whether Sierra Forest will be the only CPU lineup designed for this entirely new socket and platform, or whether more SKUs/chips will be added in the near future.


[Attached images: four photos of the LGA7529 socket and motherboard from the Goofish listing]


View: https://twitter.com/9550pro/status/1620413976382443523

EDIT:

As per some recent rumors, it appears the Sierra Forest Xeon chips will pack at least 344 cores across 4 compute tiles, with 86 cores per tile.

The rumors also point to an even higher-core-count variant, a 528-core part with up to 132 cores per tile, though it would more realistically ship with 512 cores, as one cluster per tile would be disabled, IMO.
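The tile math in those rumors is easy to sanity-check. A quick sketch (the tile and per-tile core counts are just the rumored figures, and the 4-core cluster size is an assumption based on Gracemont's usual cluster layout):

```python
# Sanity-check the rumored Sierra Forest core-count math.
TILES = 4
CLUSTER_SIZE = 4  # assumed: Gracemont E-cores come in 4-core clusters

base = TILES * 86          # 86 cores per tile
print(base)                # 344

full = TILES * 132         # 132 cores per tile
print(full)                # 528

# Disabling one cluster per tile (a typical yield-salvage move):
salvaged = full - TILES * CLUSTER_SIZE
print(salvaged)            # 512
```

The 528-to-512 step is consistent with exactly one 4-core cluster being fused off per tile.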
 
Sierra Forest is rumored to feature up to 300+ efficient cores (4 tiles), so a socket with a massive number of pins is no surprise.
 
After some assumptions, I am going to WAG at about 150 per CPU socket.

This assumes the E-cores take up 50 percent of the area of the P-cores, and adds the additional number of pins mentioned.

So for non-computationally-intense loads this would make sense, and it would put Intel ahead in this niche.

Tom M (no relation)
 
May I assume, given the look of that board, that some wild proprietary power supply is necessary to power it?
There's one 24-pin-like connector near the top-right corner, two ATX12V-like connectors near the top-left corner, two 12VHPWR-like connectors in the bottom-left and bottom-right corners, then another pair of EPS12V/PCIe-AUX-like connectors closer to the bottom-middle, probably pass-through 12 V for powering storage backplane boards connected to those SlimSAS-like connectors.

Definitely doesn't look like what you'd get from a typical ATX jobbie.
 
Sierra Forest is rumored to feature up to 300+ efficient cores (4 tiles), so a socket with a massive number of pins is no surprise.

Yes, as per some recent rumors, the Sierra Forest Xeon chips will pack at least 344 cores across 4 compute tiles, with 86 cores per tile.

The rumors also point to an even higher-core-count variant, a 528-core part with up to 132 cores per tile, though it would more realistically ship with 512 cores, as one cluster per tile would be disabled, IMO.
 
As per some recent rumors, it appears the Sierra Forest Xeon chips will pack at least 344 cores across 4 compute tiles, with 86 cores per tile.

The rumors also point to an even higher-core-count variant, a 528-core part with up to 132 cores per tile, though it would more realistically ship with 512 cores, as one cluster per tile would be disabled, IMO.
Yeah, that's pretty insane. I'll believe it when I see it. I wonder if it's disinformation intentionally leaked by Intel, in the hope of scaring AMD into making a competing CPU that's too large either to be priced economically or to scale efficiently.

If we do some simple math, treating a Gracemont core as 1/4 the area of a Golden Cove core, then you could fit 240 of them in the same area as a 60-core Sapphire Rapids CPU.

However, Sierra Forest being on a smaller node should help. Still, is "Intel 3" 43% denser than "Intel 7"? And that presumes the E-cores don't increase in complexity. Or, if we're comparing the 528-core variant, the density increase would be roughly 120%, presuming the same die area as Sapphire Rapids. And I'm not sure we can presume that, if Intel 3 is more expensive per mm^2 and Intel can't simply pass that price increase on to the customers.
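To make the back-of-the-envelope math above explicit, here's a sketch; the 1/4-area ratio and the 60-core Sapphire Rapids baseline are the assumptions stated in the post, not measured figures:

```python
# Back-of-the-envelope density check for the rumored core counts.
SPR_P_CORES = 60     # Sapphire Rapids P-core count (baseline)
E_PER_P_AREA = 4     # assume one Gracemont E-core ~ 1/4 of a Golden Cove

e_cores_same_area = SPR_P_CORES * E_PER_P_AREA
print(e_cores_same_area)                        # 240

# Density uplift Intel 3 would need over Intel 7 for the rumored parts,
# presuming the same total die area as Sapphire Rapids:
print(round(344 / e_cores_same_area - 1, 2))    # 0.43 -> ~43% denser
print(round(528 / e_cores_same_area - 1, 2))    # 1.2  -> ~120% denser
```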
 
Still, is "Intel 3" 43% denser than "Intel 7"?

Yes, the Intel 3 process node will definitely be denser, around 520 MTr/mm². Unless I'm mistaken, Intel 3 will improve high-density cell density versus Intel 4 by ~1.07x, and achieve ~1.4x the density of Intel 7. I think this node will be more expensive per mm², so Intel will most likely pass the cost on to customers.

If we assume the same pitches but a smaller track height for Intel 3, we get ~1.07x denser high-performance cells and ~1.4x denser high-density cells versus Intel 10/7.

Intel 3 was previously known as 7nm+ under Intel's old naming (not sure why they changed the nomenclature, though). It will see increased use of EUV and new high-density libraries. This is where Intel's strategy becomes more modular: Intel 3 will share some features of Intel 4.

Intel 3 is really a generational optimization of Intel 4: according to Intel, it delivers an 18% performance-per-watt gain, offers denser HP libraries, increases the intrinsic drive current, increases EUV use, and reduces via resistance. The Intel 4 node seems like a stopgap.

Intel 3 will also be the last FinFET process from Intel. Everything thereafter will use a new gate-all-around transistor architecture the company calls RibbonFET.
 
Unless I'm mistaken, Intel 3 will improve high-density cell density versus Intel 4 by ~1.07x, and achieve ~1.4x the density of Intel 7.
Okay, so then it passes the most basic plausibility test for 344 cores. However, that's 89.6% more threads in a socket with 12-channel memory than AMD is doing with Genoa, so scalability questions remain. Perhaps they'll be partially answered by a scaling analysis of Bergamo, which should pack up to 256 threads in the same socket as Genoa.

And yet, the claim of 512 cores remains pretty outrageous. Not only is it going to be eye-wateringly expensive, but I also just don't see good utilization by most workloads. All of the tricks to keep the cores fed should have long since run out of gas by that point. Meaning, most of those cores will spend a good amount of their time idling.
 
And yet, the claim of 512 cores remains pretty outrageous. Not only is it going to be eye-wateringly expensive, but I also just don't see good utilization by most workloads. All of the tricks to keep the cores fed should have long since run out of gas by that point. Meaning, most of those cores will spend a good amount of their time idling.
The chips are intended for workloads that benefit more from extra cores than from faster cores. Those usually have tight data locality and don't depend much on memory performance, such as most crypto algorithms, which have working data sets only a few KiB in size apart from the data being (en/de)crypted or hashed; multiples easily fit in L2$. I'm sure there are AI models that would scale well with this too.
 
The chips are intended for workloads that benefit more from extra cores than faster cores. Those usually have tight data locality and don't depend much on memory performance
I think quite the opposite. When you decrease single-thread performance (either by clockspeed, IPC, or both), it has the effect of reducing software-visible latencies!

That's actually one reason why lower-clocked, lower-IPC cores should scale better than their big brothers. But, it only gets you so far. Maybe a factor of 2, at the outside.

such as most crypto algorithms, which have working data sets only a few KiB in size apart from the data being (en/de)crypted or hashed; multiples easily fit in L2$.
Name a crypto currency which fits that profile and isn't already being dominated by ASICs.

AFAIK, people generally don't use CPUs for crypto, period. And especially not pricey server CPUs.

I'm sure there are AI models that would scale well with this too.
No. AI is massively bandwidth-intensive, and nearly all models in popular use are too big to reside in L2 or even L3 cache. That's the whole reason you see Nvidia, AMD, and Intel deploying HBM in their top server-oriented accelerators/"GPUs". Furthermore, AI gains a decent amount of benefit from AVX-512 and AMX. Have a close look at Intel's marketing materials for the HBM-equipped Xeon Max, which seems heavily targeted at AI.
[Images: Intel Xeon Max marketing slides]

The workloads Sierra Forest seems to target are traditional transaction-oriented server apps: databases, web servers, media streaming, etc.
 
Name a crypto currency which fits that profile and isn't already being dominated by ASICs.

AFAIK, people generally don't use CPUs for crypto, period. And especially not pricey server CPUs.
Get your mind out of the gutter; not all crypto is crypto-mining, especially on a server CPU! Crypto is used for other stuff too, such as all manner of online security: hashing passwords, checking certificates, generating signatures, generating derivative keys and certificates, generating session keys, RNGs and PRNGs, etc.
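For what it's worth, two of the tasks on that list (password hashing and deriving a per-session key) can be sketched with nothing but the Python standard library; the iteration count and key sizes here are illustrative, not recommendations:

```python
import hashlib
import hmac
import os

# Password hashing: deliberately CPU-expensive, with a tiny working set.
password = b"correct horse battery staple"
salt = os.urandom(16)
pw_hash = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
print(len(pw_hash))      # 32 (SHA-256 digest size)

# Deriving a per-session key from a long-term secret (HMAC-based).
master_secret = os.urandom(32)
session_id = b"session-0001"
session_key = hmac.new(master_secret, session_id, hashlib.sha256).digest()
print(len(session_key))  # 32
```

Both operations touch only a few hundred bytes of state beyond their inputs, which is exactly the "fits in L2$" profile being described.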
 
Get your mind out of the gutter; not all crypto is crypto-mining, especially on a server CPU! Crypto is used for other stuff too, such as all manner of online security: hashing passwords, checking certificates, generating signatures, generating derivative keys and certificates, generating session keys, RNGs and PRNGs, etc.
The reason I didn't think you meant cryptography is that it tends not to dominate any workload involving it. In other words, it doesn't make sense to talk about it as a workload by itself.

At one time, yes. But, it's been adequately accelerated since then.
 
The reason I didn't think you meant cryptography is that it tends not to dominate any workload involving it. In other words, it doesn't make sense to talk about it as a workload by itself.

At one time, yes. But, it's been adequately accelerated since then.
With end-to-end encryption gaining popularity, the number of encrypted transactions is going up too. In a secure database, every encrypted field would need to be decrypted on top of all the other normal database work.
 
That doesn't really make sense. You'd rather just put the database in an encrypted filesystem.
That would only protect the data against the drives getting stolen, not against ACL/security-bypass bugs that may exist in the DB and the apps that access it, as the OS has no means of differentiating a legitimate DB query from one initiated by hostile parties.