News: Intel celebrates the arrival of MRDIMMs — a plug-and-play solution for ultrafast memory that offers double the memory bandwidth of standard DRAM

These "MRDIMMs" are actually MCR-DIMMs, which is a proprietary scheme Intel worked up with SK Hynix, as reported here:

Meanwhile, AMD has been working with JEDEC to develop a standard MRDIMM spec, which is why they don't yet have a competing solution on the market.

Intel's rebranding of MCR-DIMMs is just creating market confusion. Saying it offered its MCR-DIMM tech to JEDEC just further muddies the waters.

I predict Intel will switch to JEDEC MRDIMMs once they're finalized. I wouldn't count on the current generation of rebranded MCR-DIMMs being supported in future products. This view is underscored by this (paywalled) op-ed:
 
I predict Intel will switch to JEDEC MRDIMMs once they're finalized. I wouldn't count on the current generation of rebranded MCR-DIMMs being supported in future products.
The horse is out of the barn. There is a power in being first to market, especially if Intel has memory makers already producing the modules and calling them MRDIMM.

Remember, this isn't done; it's just the first iteration, spec 1. There are certainly plans to increase the bandwidth further, and this version seems to fulfill the performance target of the first tier. So this will continue to be spec 1.

I'm not sure what is holding back current AMD processors or chipsets from using these modules, but convincing SK Hynix and Micron to make an AMD-only version of what the largest memory producers are already calling MRDIMM might be a tough task. AMD would probably be better off turning their efforts toward being first to market with spec 2 support.

This kind of thing isn't new; it's quite common. Intel made a chip that can benefit from a new kind of memory, and it convinced manufacturers to make it. Waiting on a standards committee to wave a magic wand would negate its technological advantage. (See also Microsoft waiting on OpenGL to get off their ass and simply releasing DirectX instead.) Expect the memory manufacturers to dig in here, because they've already tooled up.
 
I'm not sure what is holding back current AMD processors or chipsets from using these modules, but convincing SK Hynix and Micron to make an AMD-only version
AMD is working to support the official JEDEC standard, not something AMD-specific. Once that's out, it's what memory makers will naturally prefer to manufacture. The computing world consists of more than just Intel and AMD. In the server memory market, there's the growing ranks of ARM and upcoming RISC-V CPUs, which will also prefer to implement a JEDEC standard.

Nvidia has a history of taking Intel's approach and jointly developing custom memory specs, like GDDR6X. The downside is that it ties them to sourcing only from a single manufacturer, rather than being able to harness open market competition to achieve better pricing and availability.
 
AMD is working to support the official JEDEC standard, not something AMD-specific.
What I'm saying is that real standards are what become standards. It's like a discussion I had 15 years ago, when some open source evangelist was ranting that the document he was trying to upload wasn't supported. My GOD! He used LibreOffice to create that file, and 20 nerds gathered in the conference room at the Peoria Holiday Inn had declared it a standard, damnit. He'd burn in hell before he used .doc. But, at that time, what was the real standard?

Even though Microsoft was, by far, the largest financial backer of OpenGL, that standards committee was terrified that anything it did might actually benefit Microsoft. The result was that DirectX 9 was released before OpenGL 2.0, and OpenGL was simply pushed to the back burner. Imagine if the only games you could play in the last 25 years had to be OpenGL compliant. So how can it be a "standard"?

One certainly can't blame Intel here. They have a capability, they offered the spec to JEDEC two years ago and, what? Where is JEDEC here? We don't know. We don't know if AMD has a better standard. We don't know if it has manufacturing challenges even if it is a better standard. But, by doing nothing, JEDEC runs the risk of no longer controlling the standard at all, because it's out in the wild.

Standards like GDDR6 are still standards even if some governing body didn't approve them, simply because they are the most used. This idea that some group of 20 nerds at the Peoria Holiday Inn has to bless them might look good on paper, but it doesn't necessarily make them standard.
 
So is this memory tech poised to replace CUDIMMs? Are CUDIMMs and MRDIMMs compatible with standard DDR5 tech, or is a new IMC and slot standard required? Will MRDIMM be coming to the consumer market?
 
Intel's rebranding of MCR-DIMMs is just creating market confusion.
This is the part I don't really understand at all, because I fail to see the benefit to Intel. They donated the MCR design to JEDEC in what I imagine is a fashion similar to Dell and CAMM, which makes sense. Rebranding MCR to MR doesn't really make sense, though, unless it's going to be compatible. I don't see how it would benefit Intel in any way, because it would also confuse their own customers.
 
These "MRDIMMs" are actually MCR-DIMMs, which is a proprietary scheme Intel worked up with SK Hynix.

MRDIMM and MCRDIMM are both types of high-performance DDR5 memory modules designed to improve the efficiency and bandwidth of server systems. While they share similarities, they have distinct characteristics and are backed by different industry alliances.
MCRDIMM (Multiplexer Combined Ranks DIMM):
* Backed by: Intel and SK Hynix
* Key feature: Uses a multiplexer buffer to access both ranks of the DIMM simultaneously, doubling the data transfer rate to the CPU.
* Target speed: 8800 MT/s (first generation), with potential for higher speeds in future generations.
* Compatibility: Designed to work with Intel's 6th Generation Xeon Scalable "Granite Rapids" platforms.
MRDIMM (Multi-Ranked Buffered DIMM):
* Backed by: JEDEC, AMD, Google, Microsoft, and Intel
* Key feature: Similar to MCRDIMM, it uses a multiplexer buffer to access both ranks simultaneously, increasing bandwidth.
* Target speed: 8800 MT/s (first generation), with plans to reach 12,800 MT/s and 17,600 MT/s in subsequent generations.
* Compatibility: More widely supported by various vendors and platforms.
In summary:
Both MCRDIMM and MRDIMM aim to enhance server performance by doubling the data transfer rate from the DIMM to the CPU. MCRDIMM is currently more specific to Intel's platform, while MRDIMM is more broadly supported by the industry. As technology advances, both types of DIMMs are expected to offer even higher speeds and capacities, driving the performance of future server systems.
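Those MT/s targets map directly to nominal per-DIMM bandwidth: multiply the transfer rate by the standard 64-bit (8-byte) DDR5 data bus. A quick sketch of that arithmetic (decimal GB/s):

```python
# Nominal per-DIMM bandwidth for a standard 64-bit (8-byte) DDR5 data bus.
def nominal_bandwidth_gbps(mt_per_s: int, bus_bytes: int = 8) -> float:
    """Transfer rate (MT/s) x bus width (bytes) -> GB/s (decimal)."""
    return mt_per_s * bus_bytes / 1000

# The three MRDIMM speed tiers mentioned above:
for gen, speed in [("gen1", 8800), ("gen2", 12800), ("gen3", 17600)]:
    print(f"{gen}: {speed} MT/s -> {nominal_bandwidth_gbps(speed):.1f} GB/s per DIMM")
# gen1: 8800 MT/s -> 70.4 GB/s per DIMM
# gen2: 12800 MT/s -> 102.4 GB/s per DIMM
# gen3: 17600 MT/s -> 140.8 GB/s per DIMM
```

These are peak figures per module; real-world effective bandwidth depends on bus efficiency and access patterns.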

 
What I'm saying is that real standards are what become standards.
<example of Libre Office vs. MS Office>
The hardware world works a bit differently than software. As it's very capital-intensive, has long lead times, and is full of litigation (Rambus chief among them), everyone generally likes to stick with JEDEC, at least as a baseline.

Standards like GDDR6 are still standards even if some governing body didn't approve them simply because they are the most used.
GDDR6 is a JEDEC standard.

It's GDDR6X, that Nvidia privately developed with Micron, that's nonstandard. Micron is the only producer and (as far as I know) Nvidia is the only customer for it.
 
MRDIMM and MCRDIMM are both types of high-performance DDR5 memory modules designed to improve the efficiency and bandwidth of server systems.
Thank you for that generative AI summary.

Is it just a matter of Intel originally internally branding it as MCRDIMM but the JEDEC standard named it MRDIMM? In other words, MCRDIMM is a synonym at this point for MRDIMM, i.e. there's no real technical differences in the technology?
 
GDDR6 is a JEDEC standard.

It's GDDR6X, that Nvidia privately developed with Micron, that's nonstandard. Micron is the only producer and (as far as I know) Nvidia is the only customer for it.
Would AMD and/or Intel be allowed to buy GDDR6X parts (assuming they modified their respective GPUs to work with it)?
 
Would AMD and/or Intel be allowed to buy GDDR6X parts (assuming they modified their respective GPUs to work with it)?
Yes. Micron said right from the initial announcement, that the memory was available to everyone.

https://investors.micron.com/news-r...iscrete-graphics-memory-micron-powers-nvidias

GDDR6X is now available as part of Micron’s new Ultra-Bandwidth Solutions portfolio. Micron delivers GDDR6X memory in 8 gigabits (Gb) density, with speeds of 19 to 21 Gb/s. Starting in 2021, 16Gb density units will be added. Partners and customers interested in exploring GDDR6X for their high-performance solutions — whether for gaming, artificial intelligence inference or professional visualization — can find out more here.
 
These "MRDIMMs" are actually MCR-DIMMs, which is a proprietary scheme Intel worked up with SK Hynix.
A really insightful piece of information
 
The hardware world works a bit differently than software. As it's very capital-intensive, has long lead times, and is full of litigation (Rambus chief among them), everyone generally likes to stick with JEDEC, at least as a baseline.
And this kind of goes back to my point. Micron has spent the money tooling up and producing what Micron calls MRDIMM and has released it to the market as such. That's what I mean about the power of being first to market. It has inertia. JEDEC has lost control of it, or at least the name. So this becomes MRDIMM, or at least MRDIMM 1.0.

Do we get JEDEC MRDIMM 1.0 as whatever AMD and JEDEC come up with? MRDIMM 1.01?

I know the answer, BTW. AMD will skip 8800, go straight to 12,800, and make a huge PR hulapalooza about how much better they are by being second to market! But the companies don't matter; it could just as well have been the other way around.
GDDR6 is a JEDEC standard.

It's GDDR6X, that Nvidia privately developed with Micron, that's nonstandard. Micron is the only producer and (as far as I know) Nvidia is the only customer for it.
But Nvidia got the benefit from it just the same. Is it considered proprietary? Does AMD not have access to it if they want it or have they just chosen not to use it? (Not rhetorical, I don't know.) Obviously if Micron won't sell it to them (at the same terms as Nvidia), then it's proprietary.

EDIT: I just found the answer above. I'd call GDDR6X a de facto standard then.
 
And this kind of goes back to my point. Micron has spent the money tooling up and producing what Micron calls MRDIMM and has released it to the market as such.
Which are they? Are they the Granite Rapids-compatible flavor or the JEDEC flavor?

But Nvidia got the benefit from it just the same.
By working with a single supplier, Nvidia was captive to their pricing and supply constraints. The benefit of a standard is that customers have interoperability with different suppliers, turning the object into a commodity and providing all the benefits of an open market.

I'd call GDDR6X a de facto standard then.
A de facto standard is when something that's not an official standard fills the same role as one (i.e. to enable multiple producers and consumers). So long as Micron was the sole producer of GDDR6X memory, it cannot be considered a de facto standard. Micron's announcement only said that consumers weren't limited to Nvidia - it didn't say they're giving the rights for other DRAM makers to produce competing chips.
 
Anyone know how this compares to DDR5-7200 (PC5-57600) modules in terms of performance & latency? Ignoring server features (ECC etc.), is (the JEDEC version of) this technology a path forward for faster off-package memory (on consumer platforms)?

Also, is the benefit here coming from alternating reads from striped RAM, or is there some other magic in the multiplexing? That's the only thing I can think of that could actually increase the memory clock.
 
Anyone know how this compares to DDR5-7200 (PC5-57600) modules in terms of performance & latency? Ignoring server features (ECC etc.), is (the JEDEC version of) this technology a path forward for faster off-package memory (on consumer platforms)?
There is nothing beyond 6400 available in the server market right now, and that's what Intel shipped with GNR alongside the 8800 MRDIMMs. Both STH and Phoronix have run tests with both memory speeds. I don't recall either doing much latency testing, but the bandwidth is significantly higher, and in tests that can utilize it, the difference is very apparent.

Micron's official claims:
  • Up to 39% increase in effective memory bandwidth*
  • Greater than 15% better bus efficiency*
  • Up to 40% latency improvements compared to RDIMMs**
* Empirical data comparing 128GB MRDIMM 8800MT/s against 128GB RDIMM 6400MT/s using the Intel Memory Latency Checker (MLC) tool.

** Empirical Stream Triad data comparing 128GB MRDIMM 8800MT/s against 128GB RDIMM 6400MT/s.
Also, is the benefit here coming from alternating reads from striped RAM, or is there some other magic in the multiplexing? That's the only thing I can think of that could actually increase the memory clock.
I don't know that it's actually increasing the memory clock, as that would potentially mess with the IMC operation quite a bit. I get the impression it's doubling the transfer rate, so these would likely be 2200 MHz modules. The following is the best information I'm aware of regarding MCR/MR specifics, but I'd expect a lot more detail once JEDEC actually publishes the MRDIMM specification:
https://www.anandtech.com/show/18683/sk-hynix-reveals-mcr-dimm-up-to-8gbps-bandwidth-for-hpc
https://www.anandtech.com/show/2132...mcr-dimms-massive-modules-for-massive-servers
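Both the 2200 MHz estimate and Micron's 39% figure from earlier in the thread can be sanity-checked with back-of-the-envelope arithmetic. A sketch, assuming the two halvings come from DDR signalling (two transfers per clock) and the 2:1 rank mux:

```python
# Sanity-check the clock reasoning and the claimed bandwidth uplift.

# 8800 MT/s at the host, halved once for DDR (two transfers per clock)
# and once more for the 2:1 rank multiplexer, gives the per-rank clock.
host_mt_s = 8800
dram_core_clock_mhz = host_mt_s / 2 / 2
print(dram_core_clock_mhz)  # 2200.0 -> matches the "2200 MHz modules" estimate

# Nominal uplift of MRDIMM-8800 over RDIMM-6400 (same 64-bit bus width).
uplift = 8800 / 6400 - 1
print(f"{uplift:.1%}")  # 37.5% nominal; Micron's 39% "effective" figure
                        # presumably also folds in bus-efficiency gains
```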
 
Anyone know how this compares to DDR5-7200 (PC5-57600) modules in terms of performance & latency?
The MT/s number in the DIMM speed tells you its nominal bandwidth. Latency is a bit worse than unbuffered memory, because MRDIMMs are always registered. However, these servers would be using registered DIMMs anyhow.

Also, is the benefit here coming from alternating reads from striped RAM, or is there some other magic in the multiplexing? That's the only thing I can think of that could actually increase the memory clock.
The benefit is coming from doing reads on two ranks concurrently, then multiplexing them at double the speed. It's as simple as that. Where things get complicated is if the minimum burst length is pushed beyond 64 bytes, which is the size of a cache line in modern CPUs. I think the CPU's memory controller can chop a DDR5 burst short, but this comes at a penalty. So, if you're doing fully random reads of individual cache lines, maybe the bandwidth scaling isn't very good.
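The burst-length concern can be made concrete with a small sketch, assuming DDR5's two 32-bit subchannels and its standard BL16 burst; the BL32 case is the hypothetical "pushed beyond 64 bytes" scenario described above:

```python
# DDR5 burst arithmetic: bytes delivered by one burst on one subchannel.
SUBCHANNEL_BITS = 32      # DDR5 splits the 64-bit bus into two 32-bit subchannels
CACHE_LINE_BYTES = 64     # cache line size in modern CPUs

def burst_bytes(burst_length: int) -> int:
    """Bytes delivered per burst on a single subchannel."""
    return burst_length * SUBCHANNEL_BITS // 8

print(burst_bytes(16))  # 64 -> standard BL16 delivers exactly one cache line
print(burst_bytes(32))  # 128 -> a doubled burst would span two cache lines,
                        # hence the chopped-burst penalty for random reads
```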

So, those are reasons it might not be a pure win. As @thestryker mentioned, Phoronix benchmarked it and notably found some regressions.

Compilation benchmarks showed either a slight regression or insignificant benefit. A package called BRL-CAD showed a significant regression. Several other benchmarks showed negligible benefit. Then, the rest showed a fairly substantial benefit. On the whole, the MRDIMM configuration was a solid overall win, but this clearly shows that it does have downsides.

Also, Phoronix didn't measure power efficiency. I think efficiency will probably be worse on workloads that didn't get a significant performance benefit, but maybe those which did would also use less energy with the MRDIMMs.