News: Micron Unveils 128GB and 256GB CXL 2.0 Expansion Modules

I get the sense they're holding back while they test the waters. I'd think you should be able to pack more capacity into that form factor, even before resorting to chip-stacking.

BTW, I think both Genoa and Sapphire Rapids only go as high as CXL 1.1. I wonder if Emerald Rapids will move up to CXL 2.0. The difference isn't raw speed, but rather features and functionality; you have to go all the way to CXL 3.0 before there's a bump in speed (i.e. a PCIe 6.0 PHY).
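To put rough numbers on the speed point (raw signalling per lane, ignoring encoding and FLIT/protocol overhead, so real throughput is lower):

```c
/* Back-of-the-envelope link bandwidth for the PHYs behind each CXL
 * generation. Ignores encoding and protocol overhead. */
#include <stdio.h>

int main(void) {
    const double gt_per_lane[] = { 32.0, 32.0, 64.0 };   /* GT/s per lane */
    const char  *gen[]         = { "CXL 1.1 (PCIe 5.0 PHY)",
                                   "CXL 2.0 (PCIe 5.0 PHY)",
                                   "CXL 3.0 (PCIe 6.0 PHY)" };
    const int lanes = 16;

    for (int i = 0; i < 3; i++) {
        /* ~1 Gb/s of raw signalling per GT/s per lane; divide by 8 for GB/s */
        double gbps = gt_per_lane[i] * lanes / 8.0;
        printf("%-24s x%d: ~%.0f GB/s raw, per direction\n",
               gen[i], lanes, gbps);
    }
    return 0;
}
```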
 
> I get the sense they're holding back while they test the waters. I'd think you should be able to pack more capacity into that form factor, even before resorting to chip-stacking.
I'm betting this is likely the best they can do with 16Gb chips, even using both sides of the PCB. I would assume a move to 24Gb would happen before stacking, and maybe even 32Gb, depending on whether they hit manufacturing timelines.
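Just to put numbers on it, here's the die count each density implies, assuming single-die packages and ignoring any ECC overhead (my simplifications, not anything Micron has stated):

```c
/* How many DRAM dies a given module capacity implies at each die density.
 * Assumes one die per package and no ECC overhead -- both simplifications. */
#include <stdio.h>

int main(void) {
    const int module_gb[] = { 128, 256 };     /* module capacity, GB */
    const int die_gbit[]  = { 16, 24, 32 };   /* die density, Gbit   */

    for (int m = 0; m < 2; m++) {
        for (int d = 0; d < 3; d++) {
            int total_gbit = module_gb[m] * 8;
            int dies = (total_gbit + die_gbit[d] - 1) / die_gbit[d]; /* round up */
            printf("%3d GB module with %2d Gb dies: %3d dies\n",
                   module_gb[m], die_gbit[d], dies);
        }
    }
    return 0;
}
```

256GB out of 16Gb dies works out to 128 of them, which is why 24Gb (86 dies) looks like the obvious next step before anyone bothers with stacking.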
> BTW, I think both Genoa and Sapphire Rapids only go as high as CXL 1.1. I wonder if Emerald Rapids will move up to CXL 2.0. The difference isn't raw speed, but rather features and functionality; you have to go all the way to CXL 3.0 before there's a bump in speed (i.e. a PCIe 6.0 PHY).
Intel has been pretty cagey about EMR specs, so I'd assume it's 1.1 again, while Granite/Sierra should be 2.0. With any luck there will be details at Hot Chips, but if not, I'd imagine there should be some at the Innovation event.
 
> Intel has been pretty cagey about EMR specs, so I'd assume it's 1.1 again, while Granite/Sierra should be 2.0.
I think I might have seen a leaked roadmap, and it's as you say.

IIRC, 2.0 adds switching (single-level only; multi-level switching comes with 3.0), and that will enable much larger memory capacities by allowing more of these devices than there are CXL lanes to support them directly. 3.0 also adds support for fabrics, and that should be an enabler for shared RAM pools at rack scale.
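For anyone wondering what that looks like from the software side: Linux generally exposes a CXL Type-3 memory expander as a CPU-less NUMA node, so the standard NUMA APIs can target it. A minimal sketch with libnuma (node 1 is just a placeholder; check `numactl -H` on your machine):

```c
/* Allocate a buffer on a specific NUMA node -- e.g. one backed by a CXL
 * memory expander, which Linux exposes as a CPU-less node.
 * Build with: gcc cxl_alloc.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not supported on this system\n");
        return 1;
    }

    int cxl_node = 1;           /* placeholder: whatever your system enumerates */
    size_t len = 1UL << 30;     /* 1 GiB */

    void *buf = numa_alloc_onnode(len, cxl_node);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0, len);        /* touch the pages so they're actually placed */
    printf("1 GiB allocated on node %d\n", cxl_node);

    numa_free(buf, len);
    return 0;
}
```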
 
CXL would have been the perfect use for 3D XPoint (Optane) media:
  • Denser than RAM, and thus higher-capacity modules;
  • But also byte-addressable like RAM; and
  • Both faster and lower latency than NAND
Too bad we're stuck with either proprietary DIMMs that only work in certain Intel systems, or slower PCIe-limited SSDs.
 
> CXL would have been the perfect use for 3D XPoint (Optane) media:
Intel was supposedly working on such a product when Optane got canceled.

> • Denser than RAM, and thus higher-capacity modules;
> • But also byte-addressable like RAM; and
> • Both faster and lower latency than NAND
You can do almost as well (in some respects, better) with NAND-backed DRAM plus power-loss capacitors. Optane was denser and cheaper than DRAM, but not by much. It wouldn't offer a capacity advantage over die-stacked DDR5.

The biggest problem with Optane DIMMs was the extremely limited software support for using them as persistent memory. If you use them like a fast SSD, you lose the benefits of byte-addressability, and kernel overhead kills most of the performance advantage vs. just having an Optane NVMe SSD.
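To make the distinction concrete: "using it as persistent memory" means mapping the device (e.g. via a DAX-mounted filesystem) and doing plain loads and stores, with durability from a user-space cache flush rather than a trip through the kernel block layer. A minimal sketch with PMDK's libpmem (the mount path is hypothetical):

```c
/* Byte-addressable persistent memory via PMDK's libpmem: map a file on a
 * DAX-mounted pmem filesystem, write with ordinary stores, and make the
 * data durable with a user-space cache flush -- no kernel block I/O.
 * Build with: gcc pmem_hello.c -lpmem */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    size_t mapped_len;
    int is_pmem;

    /* /mnt/pmem0 is a hypothetical DAX mount point */
    char *addr = pmem_map_file("/mnt/pmem0/hello", 4096,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    strcpy(addr, "persistent hello");     /* an ordinary CPU store */

    if (is_pmem)
        pmem_persist(addr, mapped_len);   /* user-space cache flush + fence */
    else
        pmem_msync(addr, mapped_len);     /* fall back to msync() */

    pmem_unmap(addr, mapped_len);
    return 0;
}
```

The pmem_persist() path is exactly what the SSD model can't give you: durability from a cache-line flush, with no syscall in between.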