News: AMD Confirms Twelve DDR5 Memory Channels For Zen 4 EPYC CPUs

Not exactly a surprise. Milan already has 64 cores, and if Genoa is to top out at 96 cores, it's going to need 12 channels to keep the 8:1 core-to-channel ratio.
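Just to spell out the ratio arithmetic, here is a minimal Python sketch; the 8:1 cores-per-channel target is taken from the sentence above, nothing more:

```python
# Trivial sanity check of the core-to-memory-channel ratio mentioned above
# (the 8:1 cores-per-channel target is taken from the post, not a spec).

CORES_PER_CHANNEL = 8

for name, cores in [("Milan (Zen 3)", 64), ("Genoa (Zen 4)", 96)]:
    channels = cores // CORES_PER_CHANNEL
    print(f"{name}: {cores} cores -> {channels} channels at 8:1")

# Milan (Zen 3): 64 cores -> 8 channels at 8:1
# Genoa (Zen 4): 96 cores -> 12 channels at 8:1
```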

Also, since this was leaked in August, it's not really new news.

Zen 4 Madness: AMD EPYC Genoa With 96 Cores, 12-Channel DDR5 Memory, and AVX-512 | Tom's Hardware (tomshardware.com)
With DDR5, AMD could have gotten away with fewer channels due to the increased bandwidth; staying at 8 channels would still have given a 62.5% increase in RAM bandwidth per socket. I think the bigger factor is that AMD wanted more RAM density. I have said on forums that I feel Sapphire Rapids will be behind the curve since it is only going to have 8-channel DDR5.

For virtualization you need RAM. While having a lot of CPU helps, the shared nature of virtualization makes it pretty easy to over-provision CPU. However, when you over-provision RAM you will very quickly run into performance issues. For example, I have some server hosts that have 128 logical CPUs but have 158 vCPUs allocated, or about 23% over-provisioning. I could easily go to 50% CPU over-provisioning and not have any performance issues (right now those hosts are showing 14% CPU usage).
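A back-of-the-envelope sketch of the numbers above. The DDR5-5200 data rate is an assumption that reproduces the 62.5% figure; at Genoa's announced DDR5-4800 spec the gain over DDR4-3200 would be 50%. The over-provisioning figure is just the ratio from the post:

```python
# Rough per-socket memory bandwidth, assuming 64-bit (8-byte) data channels.
# DDR5-5200 is an assumption that matches the ~62.5% figure in the post;
# at the official DDR5-4800 spec the gain over DDR4-3200 is 50%.

def bandwidth_gbps(mt_per_s, channels, bytes_per_transfer=8):
    return mt_per_s * bytes_per_transfer * channels / 1000  # GB/s

milan_8ch  = bandwidth_gbps(3200, 8)    # ~204.8 GB/s
genoa_8ch  = bandwidth_gbps(5200, 8)    # ~332.8 GB/s
genoa_12ch = bandwidth_gbps(5200, 12)   # ~499.2 GB/s

print(genoa_8ch / milan_8ch - 1)    # ~0.625  -> +62.5% at 8 channels
print(genoa_12ch / milan_8ch - 1)   # ~1.4375 -> +143.75% at 12 channels

# vCPU over-provisioning example from the post:
print(158 / 128 - 1)                # ~0.23 -> ~23% over-provisioned
```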

The key word is that it was leaked; you cannot take leaks as proof. This report means the leak was accurate, which is always nice. That said, there were rumors back in March saying it would be 12 channels.
 

JayNor

Reputable
Intel added 4 stacks of HBM to the version of SPR used in HPC; it is being used in Aurora. Those are 16GB per stack. I believe those can be configured as 8 × 128-bit-wide memory channels per stack ... so, effectively adding 32 memory channels to the 8 existing DDR5 channels.
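Spelling out the arithmetic in that paragraph as a small sketch; the 8 × 128-bit channel layout per stack is the poster's assumption, not a confirmed Intel spec:

```python
# Rough totals for the HBM-equipped SPR configuration described above.
# 8 x 128-bit channels per stack is an assumption taken from the post.

stacks = 4
gb_per_stack = 16
channels_per_stack = 8
channel_width_bits = 128

hbm_capacity_gb = stacks * gb_per_stack               # 64 GB of on-package HBM
hbm_channels = stacks * channels_per_stack            # 32 HBM channels
total_bus_width = hbm_channels * channel_width_bits   # 4096-bit aggregate HBM bus

ddr5_channels = 8
print(hbm_capacity_gb, hbm_channels, hbm_channels + ddr5_channels)  # 64 32 40
```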

One interesting option with PCIe 5.0/CXL is that large external memory pools will become available via memory pool controllers sitting on that bus. It will also allow memory package power and physical formats different from DIMMs to exist in those pools. Intel is already working on support for Optane via that path, which would reduce the complexity and timing constraints of their DDR5 memory accesses.

Another interesting configuration would be to use only the HBM stacks as local memory and to replace all the external memory pins with more PCIe 5.0 lanes.
 
JayNor said:
Intel added 4 stacks of HBM to the version of SPR used in HPC ... effectively adding 32 memory channels to the 8 existing DDR5 channels.
I believe the HBM Intel has added acts like an L4 cache and is not viewed as RAM by the OS/hypervisor. While that will help alleviate main RAM bandwidth constraints (much like AMD's V-Cache), Intel will still be at a disadvantage when it comes to RAM capacity, like it was with Xeon Scalable until Ice Lake. Right now the most popular RDIMMs are 64GB DIMMs; you get good capacity and they are relatively cheap. My guess is that 128GB DDR5 DIMMs, maybe 256GB, will be the go-to for next-gen servers. At 1 DIMM per channel with 128GB DIMMs you get 1TB of RAM per socket with 8 channels vs 1.5TB per socket with 12 channels. Being at an absolute RAM capacity disadvantage really hurts when it comes to servers.

Intel can try pushing next-gen Optane DIMMs to "fix" the issue compared to AMD, but Optane DIMMs, while cheaper than RAM, aren't that much cheaper for the added complexity. Not to mention that if this is a production environment, your software needs to support Optane. For example, SAP HANA 2 SPS3 (which came out in 2018) supports Optane in App Direct mode and NOT Memory mode. However, there are a lot of restrictions on that, especially if you are running your DB on VMware, as most companies will be doing. In VMware you are relegated to Cascade Lake only with P100 DIMMs on 6.7 U3 EP14 or 7.0 P01 or later. On top of that you need regular DIMMs plus Optane DIMMs, so the savings aren't huge, like 10% just a few months ago, for a lot of complexity. Does that really make any sense?
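A minimal sketch of the per-socket capacity math above, at 1 DIMM per channel, using the DIMM sizes discussed in the post:

```python
# Per-socket RAM capacity at 1 DIMM per channel, for the DIMM sizes
# discussed above (128GB and 256GB DDR5 RDIMMs).

for dimm_gb in (128, 256):
    for channels in (8, 12):
        total_tb = dimm_gb * channels / 1024
        print(f"{dimm_gb}GB DIMMs x {channels:2d} channels = {total_tb:.2f} TB/socket")

# 128GB DIMMs x  8 channels = 1.00 TB/socket
# 128GB DIMMs x 12 channels = 1.50 TB/socket
# 256GB DIMMs x  8 channels = 2.00 TB/socket
# 256GB DIMMs x 12 channels = 3.00 TB/socket
```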
 

wifiburger

Distinguished
And... still no DDR5 on the market.

Moving the power delivery onto the memory sticks was the biggest fail ever for DDR5.

I don't think any of these memory producers will recover in 2022, and AM5 being DDR5-only is dead on arrival.
 

Eximo

Titan
Ambassador
Certainly will be interesting. If DDR5 becomes scarce enough, they might keep making DDR4 motherboards for 13th gen.

AMD will have a hard decision to make with AM5. Or has their timing already paid off, since they can wait for the DDR5 market to settle before releasing a CPU that depends on it?