News Gigabyte twists its RAM slots to fit 24TB of DDR5 sticks into a standard server — AMD EPYC sports an impossible 48 DIMMs in new configs

Saw this when STH posted the news, and it still boggles my mind that they managed to keep the trace lengths even and still squeeze in all 48 slots.
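For a sense of scale, here's what the headline numbers imply per slot. This is just my arithmetic, assuming binary terabytes and an even fill across all 48 slots; the exact module SKUs aren't stated here:

```python
# What the headline capacity implies per slot (assuming 1 TB = 1024 GB and an even
# fill across all 48 slots -- the module SKUs are my assumption, not stated in the news).
total_capacity_gb = 24 * 1024   # 24 TB expressed in GB
dimm_slots = 48

per_dimm_gb = total_capacity_gb / dimm_slots
print(f"~{per_dimm_gb:.0f} GB per DIMM")   # -> 512 GB modules
```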

I'm waiting to see if AMD supports MRDIMMs on Zen 5, as that should mostly negate the need for this type of design. The initial numbers Micron is putting out surprisingly show MRDIMMs with lower latency, but that may also be tied to capacity/throughput, so I don't want to make any assumptions there.

Anandtech posted about the Micron release yesterday: https://www.anandtech.com/show/21470/micron-mrdimm-lineup-expands-datacenter-dram-portfolio
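To put rough numbers on why MRDIMMs are interesting here, a back-of-the-envelope per-channel bandwidth comparison. Both data rates are my assumptions for illustration (8800 MT/s is the speed Micron has been quoting for its first-gen parts, DDR5-6400 is a common RDIMM baseline), not figures from this thread:

```python
# Rough peak bandwidth per memory channel: transfers/s * bus width in bytes
# (64-bit data bus, ECC lanes ignored). Data rates below are assumptions for
# illustration only.
def peak_gb_per_s(mt_per_s: int, bus_bytes: int = 8) -> float:
    return mt_per_s * bus_bytes / 1000

for name, rate in [("DDR5-6400 RDIMM", 6400), ("DDR5-8800 MRDIMM", 8800)]:
    print(f"{name}: ~{peak_gb_per_s(rate):.1f} GB/s per channel")
# -> ~51.2 GB/s vs ~70.4 GB/s
```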
 
With how common it is to add DIMM slots in sets of 4 (and occasionally 2) to a board, it still surprises me that singlet sockets remain the standard. Monolithic DIMM 'slot blocks' would allow for mechanically more robust sockets and reduced parts-count. Even if colour-coding is desired, multi-shot injection moulding is hardly a rarity.
 
With how common it is to add DIMM slots in sets of 4 (and occasionally 2) to a board, it still surprises me that singlet sockets remain the standard.
In 24/7 servers, DIMMs probably fail at a sufficiently high rate that I think they'd want to be able to replace them in singles or doubles. Replacing failed components is probably the main reason why server components are still socketed. Even Ampere Altra is socketed!
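As a toy illustration of why that matters at fleet scale (the failure rate below is a made-up placeholder, not a measured figure):

```python
# Hypothetical fleet math: how often individual DIMMs would need swapping.
# The 0.5%/year failure rate is an invented placeholder for illustration only.
dimms_per_server = 48
servers_in_fleet = 1000
annual_failure_rate = 0.005

expected_swaps_per_year = dimms_per_server * servers_in_fleet * annual_failure_rate
print(f"~{expected_swaps_per_year:.0f} DIMM replacements per year across the fleet")
# -> ~240/year; each swap is far cheaper when modules are individually socketed
```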

Monolithic DIMM 'slot blocks' would allow for mechanically more robust sockets and reduced parts-count.
What I think we're going to see is backplanes primarily for CXL.mem devices, not unlike storage backplanes. Imagine 48x M.2 boards plugged into one, perpendicularly.
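If each of those boards hung off a x4 link (purely my assumption for the sketch), the lane budget alone suggests why a CXL switch layer would likely sit between the host and such a backplane:

```python
# Lane-budget sketch for a hypothetical 48-device CXL.mem backplane.
# The x4 link width per M.2-style board is an assumption for illustration.
boards = 48
lanes_per_board = 4

total_lanes = boards * lanes_per_board
print(f"{total_lanes} lanes of CXL connectivity needed")
# -> 192 lanes, more than a single CPU typically exposes, hence a switch in practice
```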
 