News SK Hynix says new high bandwidth memory for GPUs on track for 2024 - HBM4 with 2048-bit interface and 1.5TB/s per stack is on the way

"Demand for also customized HBM is growing, driven by generative AI and so we're also developing not only a standard product, but also a customized HBM optimized performance-wise for each customer by adding logic chips. Detailed specifications are being discussed with key customers."
For me, this is perhaps the most intriguing part. I wonder if they'll be able to overcome the performance limitations of standard HBM4, or if the hybrid stack designs are really just about simplifying routing, saving energy, and/or reducing cost.
 
"According to Micron, HBM4 will use a 2048-bit interface to increase theoretical peak memory bandwidth per stack to over 1.5 TB/s. To get there, HBM4 will need to feature a data transfer rate of around 6 GT/s, which will allow to keep the power consumption of next-generation DRAM in check"
How does HBM affect DRAM?
 
How does HBM affect DRAM?
That's a good question. My understanding is that HBM isn't fundamentally different from regular DRAM at the cell level; it just uses a much wider interface (more parallelism) and compensates by running each pin at a lower speed.
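Roughly speaking (a quick Python sketch using commonly cited per-pin rates as ballpark assumptions, not exact specs):

```python
# Width-vs-clock tradeoff: HBM goes wide and slow per pin, GDDR goes narrow and fast.
memories = {
    #  name          (bus width in bits, per-pin rate in Gb/s)
    "HBM3 stack":    (1024, 6.4),   # very wide, relatively slow per pin
    "GDDR6X chip":   (32,   21.0),  # narrow, very fast per pin
}

for name, (width_bits, rate_gbps) in memories.items():
    bandwidth_gbs = width_bits * rate_gbps / 8
    print(f"{name}: {width_bits}-bit @ {rate_gbps} Gb/s -> {bandwidth_gbs:.0f} GB/s")
# HBM3 stack: 1024-bit @ 6.4 Gb/s -> 819 GB/s
# GDDR6X chip: 32-bit @ 21.0 Gb/s -> 84 GB/s
```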

However, I haven't kept up to date on all the various developments, as HBM has continued to evolve.
 