News Micron ships production-ready 12-Hi HBM3E chips for next-gen AI GPUs — up to 36GB per stack with speeds surpassing 9.2 GT/s

I'll be a monkey's uncle. I thought the much larger numbers, 192GB and so on, were single stacks; nope, that's eight smaller stacks.
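To make the stack math concrete, here's a quick sketch in Python. The eight-stack layout and the 36GB 12-Hi figure come from the post and the headline; the 24GB per-stack size for current 8-Hi parts is an assumption for illustration.

# Rough on-package capacity math: total DRAM = stacks x capacity per stack.
def total_capacity_gb(stacks: int, gb_per_stack: int) -> int:
    return stacks * gb_per_stack

# 192GB-class accelerators use eight smaller stacks, not one big one
# (assuming 24GB 8-Hi stacks).
print(total_capacity_gb(stacks=8, gb_per_stack=24))  # 192
# The same eight-stack layout with the new 36GB 12-Hi stacks.
print(total_capacity_gb(stacks=8, gb_per_stack=36))  # 288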
I'm even less enamored of the whole idea now than I was before.
Moving dynamic RAM inside the package?
That's like letting the chickens roost in your bedroom.
 
Why? It's a GPU; there are only benefits to placing DRAM closer to the GPU die.
GPU DRAM isn't user-serviceable anyway.
You gain lower latency and higher bandwidth, with less power spent maintaining signal integrity.
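To put the bandwidth side in rough numbers (a back-of-the-envelope sketch: the 9.2 GT/s pin rate is from the headline, and the 1024-bit-per-stack interface is the standard HBM width):

def stack_bandwidth_gbs(pin_rate_gtps: float, bus_width_bits: int = 1024) -> float:
    # Peak bandwidth of one HBM stack in GB/s: pin rate x bus width / 8 bits per byte.
    return pin_rate_gtps * bus_width_bits / 8

print(stack_bandwidth_gbs(9.2))      # ~1177.6 GB/s per stack
print(stack_bandwidth_gbs(9.2) * 8)  # ~9.4 TB/s across an eight-stack package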
 
All that is true, but this is the same old battle we've all been fighting since the 1950s.
The conventional answer has long been cache.
36GB of DRAM might be slower than 1GB of static cache.
Dynamic RAM has to stop and refresh itself too often.
You can put it in your hat or in your shoe, but wherever you put it, it's still DRAM.
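For a rough sense of what the refresh penalty actually costs (a sketch using assumed, typical DRAM timings, not figures from the article):

def refresh_overhead(trfc_ns: float, trefi_ns: float) -> float:
    # Fraction of time the DRAM is busy refreshing instead of serving accesses.
    return trfc_ns / trefi_ns

# Assumed typical values: tRFC ~350 ns per refresh command, tREFI ~3.9 us at
# the doubled high-temperature refresh rate, ~7.8 us at the standard rate.
print(f"{refresh_overhead(350, 3900):.1%}")  # ~9.0%
print(f"{refresh_overhead(350, 7800):.1%}")  # ~4.5%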
 
1-2 stacks on a consumer GPU could be dope, but only after an AI bubble bursts, and expensive HBM on a consumer card is not a good look in the recession that would probably follow.