News Samsung: 1TB DDR5 RAM in 2024, DDR5-7200 in 2025

Right now 64GB RDIMMs are the most popular size for use in servers. Those are all manufactured using 16Gb chips. Going to 32Gb chips will allow for 128GB RDIMMs to become the most popular all while costing the same as the current 64GB RDIMMs.
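The capacity math here is simple enough to sketch (my own back-of-envelope, not from the article): a 64GB module built from 16Gb dies uses 32 dies, and the same 32 dies at 32Gb per die yield 128GB.

```python
# Back-of-envelope RDIMM capacity math (illustrative only):
# module capacity = die density (Gbit) * die count / 8 bits per byte.
def dimm_capacity_gb(die_gbit, dies_per_dimm):
    """Total module capacity in GB from per-die density and die count."""
    return die_gbit * dies_per_dimm // 8

# 32 dies of 16Gb -> 64GB; the same 32 dies at 32Gb -> 128GB.
print(dimm_capacity_gb(16, 32))  # 64
print(dimm_capacity_gb(32, 32))  # 128
```

So doubling the die density doubles the module capacity without changing the die count, which is why the cost per module can stay roughly flat.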
 

TJ Hooker

Titan
Ambassador
Does anyone know why DDR5 ECC DIMMs are still nearly nonexistent (and seemingly not really being discussed either)? I mean DIMMs that have sideband ECC, in addition to the on-die ECC that all DDR5 has.

Are the big memory players just waiting until DDR5 server chips (Sapphire Rapids and/or Genoa) are shipping in bulk? I would have thought you'd want the memory to be available before that happens.
 
Does anyone know why DDR5 ECC DIMMs are still nearly nonexistent (and seemingly not really being discussed either)? I mean DIMMs that have sideband ECC, in addition to the on-die ECC that all DDR5 has.

Are the big memory players just waiting until DDR5 server chips (Sapphire Rapids and/or Genoa) are shipping in bulk? I would have thought you'd want the memory to be available before that happens.
I would assume the server vendors are buying all the L/RDIMMs right now so they will have enough stock at launch. ECC UDIMMs are really only used in low end embedded servers from what I've found. Perhaps the on-die ECC will be enough.
 

spongiemaster

Admirable
Does anyone know why DDR5 ECC DIMMs are still nearly nonexistent (and seemingly not really being discussed either)? I mean DIMMs that have sideband ECC, in addition to the on-die ECC that all DDR5 has.

Are the big memory players just waiting until DDR5 server chips (Sapphire Rapids and/or Genoa) are shipping in bulk? I would have thought you'd want the memory to be available before that happens.
Sapphire Rapids has been available for a while for some of Intel's major customers. As Intel said, the delay for SR is for fully functional CPUs. Not all of their customers need that and are fine using what already works. There's no reason for memory makers to sell DDR5 ECC DIMMs at retail when there are no "DIY" platforms for sale that can use them. They're selling what they produce directly to the companies that need it.
 

TJ Hooker

Titan
Ambassador
I would assume the server vendors are buying all the L/RDIMMs right now so they will have enough stock at launch. ECC UDIMMs are really only used in low end embedded servers from what I've found. Perhaps the on-die ECC will be enough.
Hmm, I was under the impression that the majority of L/RDIMMs were also ECC, such that most servers used ECC. Don't really have a source for that though, I could be mistaken.
 

TJ Hooker

Titan
Ambassador
Sapphire Rapids has been available for a while for some of Intel's major customers. As Intel said, the delay for SR is for fully functional CPUs. Not all of their customers need that and are fine using what already works. There's no reason for memory makers to sell DDR5 ECC DIMMs at retail when there are no "DIY" platforms for sale that can use them. They're selling what they produce directly to the companies that need it.
Do you have a source for SPR already being widely available? There was a bit about them planning to start shipping some chips for the Aurora supercomputer near the end of last year, but nothing but reports of delays since. Even Intel's own Xeon page makes no reference to 4th gen (SPR) processors.

https://www.intel.com/content/www/us/en/products/details/processors/xeon/scalable.html

And technically all Alder Lake chips support ECC DDR5. You just need a W680 motherboard (which admittedly seems pretty rare itself, at least in consumer channels).
 

bit_user

Polypheme
Ambassador
Does anyone know why DDR5 ECC DIMMs are still nearly nonexistent (and seemingly not really being discussed either)?
I noticed the same thing. Once Supermicro's W680 boards started shipping, I started pricing an upgrade for my 10-year-old workstation. This was the first stumbling block I hit.

 

bit_user

Polypheme
Ambassador
Perhaps the on-die ECC will be enough.
No. The on-die ECC is low-density and only used to paper over DDR5's higher intrinsic error rate due to smaller cell sizes, longer refresh intervals, and possibly lower voltage.

ECC DDR5 DIMMs exist because there's still a need for them. And that's in spite of the fact that they now have to be 80 bits, due to bifurcation of DDR5 DIMMs into 2x 32-bit subchannels.
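The width arithmetic works out like this (my own sketch; the 8 ECC bits per subchannel is the standard sideband layout): DDR4 carries one 64-bit channel plus 8 ECC bits for 72 bits total, while DDR5 splits the channel into two 32-bit subchannels, each needing its own 8 ECC bits.

```python
# Illustrative data-bus width arithmetic for sideband-ECC modules.
def module_width(data_bits, subchannels, ecc_bits_per_subchannel):
    """Total bus width: each subchannel gets its own ECC bits."""
    per_sub = data_bits // subchannels + ecc_bits_per_subchannel
    return per_sub * subchannels

print(module_width(64, 1, 8))  # DDR4 ECC DIMM: 72 bits
print(module_width(64, 2, 8))  # DDR5 ECC DIMM: 2 x 40 = 80 bits
```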
 

bit_user

Polypheme
Ambassador
Do you have a source for SPR already being widely available?
It's not. It's just being made available on a special-arrangement basis to big customers who are either beta-testing it or who don't care about the specific errata in earlier spins.

Typical for Intel's new Xeons. The same happened with Ice Lake SP and probably Cascade Lake before it.
 

bit_user

Polypheme
Ambassador
The main thing about this announcement that boggles my mind is the cost of replacing a defective DIMM. At these kinds of prices, I'd imagine manufacturers are certainly going to be replacing the defective packages on returned modules and re-selling them as refurb (or maybe taking a page out of HDD makers' playbook and shipping them as warranty replacements).

I also have to wonder about what CXL is going to do to the server memory landscape. The main benefit that Samsung gets from such high densities is going to be watered down with CXL.mem devices, because there won't be the same constraints on physical size or quantity as we have with directly-connected memory. Nor will they need to support such high speeds.
 

InvalidError

Titan
Moderator
The main thing about this announcement that boggles my mind is the cost of replacing a defective DIMM. At these kinds of prices, I'd imagine manufacturers are certainly going to be replacing the defective packages on returned modules and re-selling them as refurb (or maybe taking a page out of HDD makers' playbook and shipping them as warranty replacements).
Or start treating memory a bit like SSDs: failures are normal up to some number of defective pages per DIMM before RMA is warranted, log those bad pages in SPD-EEPROM upon detection and confirmation, then let the OS exclude them from its memory map.
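A hypothetical sketch of what that exclusion step could look like on the OS side: pages logged as bad (wherever they came from, e.g. an SPD-style store) get carved out of the usable memory map before the allocator ever sees them. All names here are invented for illustration; this mirrors how Linux's `badram`/`memmap` boot options punch holes in physical memory.

```python
# Hypothetical bad-page exclusion: split a physical region into usable
# ranges that skip pages flagged as defective. Names are illustrative.
PAGE_SIZE = 4096

def usable_ranges(total_bytes, bad_pages):
    """Return (start, end) byte ranges of [0, total_bytes) minus bad pages."""
    ranges, start = [], 0
    for page in sorted(bad_pages):
        bad_start = page * PAGE_SIZE
        if bad_start > start:
            ranges.append((start, bad_start))
        start = bad_start + PAGE_SIZE  # resume after the bad page
    if start < total_bytes:
        ranges.append((start, total_bytes))
    return ranges

# A 16-page region with pages 2 and 5 marked bad splits into three ranges.
print(usable_ranges(16 * PAGE_SIZE, [2, 5]))
# [(0, 8192), (12288, 20480), (24576, 65536)]
```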

I also have to wonder about what CXL is going to do to the server memory landscape. The main benefit that Samsung gets from such high densities is going to be watered down with CXL.mem devices, because there won't be the same constraints on physical size or quantity as we have with directly-connected memory. Nor will they need to support such high speeds.
Depends on the application. CXL makes sense when your active data set is a subset of relatively fresh data worth keeping somewhere in memory. If you go into supercomputer simulation territory where the PB-scale data set gets processed at practically every step, there is little stale data that can be shifted to slower memory without degrading performance and we'll still need CPUs with either fast low-latency external memory or a lot more on-package memory than we have today.