News: AMD and JEDEC Develop DDR5 MRDIMMs With Speeds Up To 17,600 MT/s


InvalidError

Titan
Moderator
Kamen Rider Blade said:
That's why my proposed solution for stacking is mounting the DRAM or SRAM stacks on the Opposite side of the PCB, directly behind the CPU/GPU.
That isn't possible. The whole point of HBM is to practically eliminate chip-to-chip wiring so transmission line effects can be ignored altogether and only bare-bones driver-receivers can be used across the interface for minimal latency and extremely low energy per bit. Once traces are more than ~5mm long at 2GHz, you need all of that controlled impedance goodness that will cause the interface to burn 20X more power and die area. Also, the area directly under CCDs, GCDs, MCDs, etc. is where the BGA balls or LGA pads are, putting chips there is going to be problematic not only for mechanical reasons but also for power delivery with all of the displaced power/ground pads.
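Back-of-the-envelope on where a number like ~5mm comes from (my assumptions: an FR-4-ish board and the usual lambda/10 rule of thumb, so treat it as a sketch rather than a hard limit):

# When does a trace stop being "just a wire"? Rough estimate only;
# the dielectric constant and the lambda/10 threshold are assumptions.
C = 3.0e8       # speed of light in vacuum, m/s
ER_EFF = 4.0    # assumed effective dielectric constant (FR-4-ish)
F = 2.0e9       # 2 GHz signalling

wavelength_mm = C / (ER_EFF ** 0.5 * F) * 1000   # wavelength inside the board material
critical_mm = wavelength_mm / 10                 # lambda/10 rule of thumb

print(f"wavelength in board: ~{wavelength_mm:.0f} mm")              # ~75 mm
print(f"transmission-line treatment above: ~{critical_mm:.0f} mm")  # ~7-8 mm

Below that single-digit-millimetre range you can get away with the bare-bones unterminated drivers HBM uses; much beyond it you are back to terminated, controlled-impedance links and the power and die-area cost that comes with them.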

If by "opposite side of the PCB" you meant going through the socket and motherboard or GPU card, that wouldn't be realistically feasible either when each HBM-like interface would add ~2500 balls or pads (half of which grounds for signal integrity reasons) and you would probably want at least one of those per compute die. Hello LGA/BGA 12000+. I doubt normal PCB materials can reliably accept a ball pitch fine enough to physically manage this.
 

Kamen Rider Blade

Distinguished
InvalidError said:
That isn't possible. The whole point of HBM is to practically eliminate chip-to-chip wiring so transmission line effects can be ignored altogether and only bare-bones driver-receivers can be used across the interface for minimal latency and extremely low energy per bit. Once traces are more than ~5mm long at 2GHz, you need all of that controlled impedance goodness that will cause the interface to burn 20X more power and die area. Also, the area directly under CCDs, GCDs, MCDs, etc. is where the BGA balls or LGA pads are, putting chips there is going to be problematic not only for mechanical reasons but also for power delivery with all of the displaced power/ground pads.

If by "opposite side of the PCB" you meant going through the socket and motherboard or GPU card, that wouldn't be realistically feasible either when each HBM-like interface would add ~2500 balls or pads (half of which grounds for signal integrity reasons) and you would probably want at least one of those per compute die. Hello LGA/BGA 12000+. I doubt normal PCB materials can reliably accept a ball pitch fine enough to physically manage this.
If we're not going to mount it:
  1. directly on top, for thermal management reasons, or
  2. directly on the opposite side of the PCB, for interface complexity reasons,
that leaves mounting it directly next to the die, adjacent to it, like what Apple has done with their M1 silicon.
Since Apple's M1 uses System-in-Package mounting, that's the next best option.
The traces should meet your ~5 mm rule when the memory is physically right next door, almost touching: directly on the same PCB substrate or an interposer of some sort, with tiny connections routing straight to the memory controller next door.
 

InvalidError

Titan
Moderator
Kamen Rider Blade said:
That leaves mounting it directly next to the die, adjacent to it, like what Apple has done with their M1 silicon.
Why does Apple get credited for everything? HBM is always immediately adjacent to the chip it connects to out of necessity and the first commercial HBM product was AMD's Fury X back in 2015 and I bet there are even earlier multi-chip packages with bare-bones die-to-die interfaces if we look beyond HBM.
 

Kamen Rider Blade

Distinguished
InvalidError said:
Why does Apple get credited for everything? HBM is always immediately adjacent to the chip it connects to out of necessity and the first commercial HBM product was AMD's Fury X back in 2015 and I bet there are even earlier multi-chip packages with bare-bones die-to-die interfaces if we look beyond HBM.
True, AMD did it back in 2015 with the Radeon Fury, and again with the Vega line.

But they never brought it back to consumers after that financial fiasco.

HBM has been relegated to enterprise-level products ever since.

Apple has regular, bog-standard LPDDR DRAM packages mounted next to their SoC, something your average consumer actually gets to enjoy.

I'm sure there are other multi-chip packages out there, but when was the last time a die-to-die interface came to consumers like that?

How many years ago was it, and was it even for consumers?
 

InvalidError

Titan
Moderator
Kamen Rider Blade said:
I'm sure there are other multi-chip packages out there, but when was the last time a die-to-die interface came to consumers like that?
In recent history, that would go to AMD's dual-chip Ryzen 1700-1800X up through the present Zen 2-4 parts, along with AMD's RX 7800-7900s. In more distant memory, I can think of Slot-1/A back when CPUs relied on external L2 cache because large caches were too risky to bake directly into the CPU die, and then we also had "dual-core" and "quad-core" CPUs made by slapping two 1C or 2C CPU dies on one substrate. In the imminent future, we also have Intel's Meteor and Arrow Lake lineups. Multi-chip packages have been around for a very long time in the consumer space if we include all variants of them.

The trend has always been to bring more stuff on-die or at least on-package as performance scaling requirements dictate and technological progress allows. DRAM is the next most logical step.
 

Kamen Rider Blade

Distinguished
InvalidError said:
In recent history, that would go to AMD's dual-chip Ryzen 1700-1800X up through the present Zen 2-4 parts, along with AMD's RX 7800-7900s. In more distant memory, I can think of Slot-1/A back when CPUs relied on external L2 cache because large caches were too risky to bake directly into the CPU die, and then we also had "dual-core" and "quad-core" CPUs made by slapping two 1C or 2C CPU dies on one substrate. In the imminent future, we also have Intel's Meteor and Arrow Lake lineups. Multi-chip packages have been around for a very long time in the consumer space if we include all variants of them.

The trend has always been to bring more stuff on-die or at least on-package as performance scaling requirements dictate and technological progress allows. DRAM is the next most logical step.
I was thinking more of an adjacent-mounted interface like EMIB to connect two chips together.

But if we expand the definition to any chiplet-connection technology and extend the meaning of a die-to-die interface to any form of chiplet, then yes: all the modern examples of AMD's chiplet solutions count, along with all future solutions from everybody out there.