SK Hynix hires logic production specialists to integrate HBM4 memory directly on logic.
SK Hynix Plans to Stack HBM4 Directly on Logic Processors : Read more
> "Within 10 years, the 'rules of the game' for semiconductors may change, and the distinction between memory and logic semiconductors may become insignificant," an industry insider told Joongang.co.kr.

I'm not too sure about that. HBM isn't scaling to higher capacities like DIMMs can. I think there will always be a need for some plain, old DRAM that lacks any integrated computation. It could be a slower tier, connected via CXL, that sits in between the HBM tier and the SSD tier.
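A toy model of that tiering idea; the latency numbers are loose order-of-magnitude assumptions, not vendor figures:

```python
# Toy model of the suggested memory hierarchy.
# Latencies are order-of-magnitude assumptions, not vendor figures.
TIERS = [
    ("HBM (on-package)", 100e-9),   # ~100 ns
    ("DRAM over CXL",    300e-9),   # ~300 ns
    ("NVMe SSD",         50e-6),    # ~50 us
]

for name, latency in TIERS:
    print(f"{name:<18} ~{latency * 1e9:>8,.0f} ns per access")
```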
> tbh anything that pushes advancement (and hopefully drip down to consumer) of submersion cooling is all good in my book :|

Yuck. Submersion cooling is messy and expensive. I'm not interested.
> it's literally the one tech that's been mentioned for a decade but hasn't really advanced to market.

I think it's continuing to make inroads into HPC and some datacenters. That's where it should stay, IMO.
> Wasn't this what Vega was about?

AMD's Fury was the first GPU to use HBM, but it worked by placing the stacks next to the compute die. This is fusing the HBM atop the compute die. That's the key difference.
> Hmm. We're already seeing a rise in power draw, enough to bring water-cooled servers into consideration. Perhaps this sort of thing will accelerate that?

That's not germane to the issues discussed in the article. They're concerned about how to get heat from the die to the exterior of the package (or, I guess, the top of the stack, if you're doing direct-die cooling).
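For a rough sense of why die-to-exterior heat extraction is the worry, here is a toy junction-temperature estimate; every thermal-resistance value is a made-up placeholder, not a figure from the article:

```python
# Toy model: junction temperature rise across a stacked package.
# All thermal resistances (K/W) are illustrative placeholders.

def junction_temp(power_w, resistances_k_per_w, ambient_c=30.0):
    """Sum the series thermal resistances and return the hot-spot temperature."""
    return ambient_c + power_w * sum(resistances_k_per_w)

# Conventional package: die -> TIM -> heat spreader -> cooler.
conventional = junction_temp(500, [0.02, 0.05, 0.03])

# HBM stacked on top: heat must also cross the DRAM stack and its bonds.
stacked = junction_temp(500, [0.02, 0.04, 0.05, 0.03])  # extra ~0.04 K/W

print(f"conventional: {conventional:.0f} C, stacked: {stacked:.0f} C")
```

Same power, same cooler, but the extra layers in the stack raise the hot-spot temperature, which is exactly the die-to-exterior problem the article describes.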
I see this as the logical follow-on from prior HBM PIM (Processing In Memory) efforts.

Samsung's New HBM2 Memory Has 1.2 TFLOPS of Embedded Processing Power
Injecting brains into memory (www.tomshardware.com)

SK Hynix announced a version, also. I'm too lazy to look it up.

AMD also seems headed in this direction, as it's in line with their Zettascale roadmap.

> I see this as the logical follow-on from prior HBM PIM (Processing In Memory) efforts.

Exactly. I'm really excited to see where this tech goes. I could see memory, storage, and compute all in the same package. 3D XPoint was in theory trying to merge RAM and storage, right?
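A toy sketch of why processing in memory is attractive: for a simple reduction, almost nothing has to cross the memory bus if the computation happens inside the stack. The counts below are illustrative assumptions, not figures from Samsung's or SK Hynix's designs:

```python
# Toy model: data movement for summing N values, host vs. processing-in-memory.
N = 1_000_000   # elements (assumed workload size)
ELEM = 2        # bytes per FP16 element

host_traffic = N * ELEM   # every operand crosses the bus to the CPU/GPU
pim_traffic = 8           # only the final scalar result leaves the stack

print(f"host reduce moves {host_traffic:,} bytes; PIM reduce moves {pim_traffic} bytes")
```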
> They are taking the Apple approach where if your memory dies you have to replace the entire processor.

These days PC RAM has a lifetime warranty. It's not exactly fragile.
> That's not germane to the issues discussed in the article. They're concerned about how to get heat from the die to the exterior of the package (or, I guess, the top of the stack, if you're doing direct-die cooling).

60W incandescent bulbs were pretty common indoors. Lamps were often limited to using 60W bulbs. Besides, if you counted the power draw of a CRT monitor, which I'll bet the quoted person was, that could easily reach around 100W.
> I think it was around the Pentium MMX time that someone I worked with would respond to customers that their PC would use a similar amount of power to an incandescent light bulb.

According to this, no Pentium MMX version used more than 17 W.
Pentium (original) - Wikipedia (en.wikipedia.org)
I remember seeing the inside of a Compaq desktop, with a Pentium MMX, and the thing still had a passive heatsink! It was big and aluminum, but definitely had no integrated fan.
Nothing else inside that PC should've used very much power. Graphics cards of the day had tiny fans, if any, and HDDs rarely burned more than 10 W. That would put even 60 W (which is pretty low for a conventional screw-in incandescent lightbulb) as an overestimate.
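Summing the era-appropriate figures cited above bears this out; the GPU and motherboard numbers are rough assumptions, the rest come from the thread:

```python
# Rough late-90s desktop power budget (all values are ballpark assumptions).
cpu = 17        # W, top Pentium MMX figure cited above
hdd = 10        # W, generous for a desktop drive of the era
gpu = 5         # W, 2D/early-3D card with a tiny fan or none (assumed)
board_etc = 15  # W, chipset, RAM, fans, PSU losses (assumed)

box_total = cpu + hdd + gpu + board_etc
crt = 100       # W, mid-size CRT monitor, per the figure above

print(f"PC alone: ~{box_total} W (under a 60 W bulb)")
print(f"PC + CRT: ~{box_total + crt} W (well past a 100 W bulb)")
```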
> They are taking the Apple approach where if your memory dies you have to replace the entire processor.

Even today, if your GPU has bad memory, most people would just replace it - especially if it's still under warranty. Yes, you can potentially have a competent tech locate the bad chip and replace it, but I think people rarely take this route.
> The easiest approach that I can see is putting the HBM on the opposite side of the PCB and having a direct connection through the PCB to the HBM; then you cool the ASIC on one side and the memory on the other side.

And how is that better than routing through the interposer, like we do today??? GPUs like the H100 have a 6144-bit memory interface. It's nuts to route that off package, if you even can!
> I could see memory, storage, and compute all in the same package.

Any GPUs with HBM already are that. They're just going a step further and putting it all in the same stack.
> 3D XPoint was in theory trying to merge RAM and storage, right?

No, totally different. 3D XPoint was a storage-class memory that was meant to be fast enough and have sufficient endurance that you could use it as RAM or storage. However, it's the same memory chip and the only thing that changes would be how you use it.
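A loose illustration of that "same chip, two uses" point: a single byte-addressable device can be driven through a block-style storage API or mapped and treated like memory. The file below is only a stand-in for a persistent-memory device, and the path is hypothetical:

```python
import mmap, os

PATH = "xpoint_sim.bin"  # hypothetical stand-in for a pmem device

# Create a small backing "device".
with open(PATH, "wb") as f:
    f.write(b"\x00" * 4096)

# Storage-style access: explicit block writes through the file API.
with open(PATH, "r+b") as f:
    f.seek(512)
    f.write(b"block write")

# Memory-style access: map it and do load/store-like byte operations.
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mem:
        mem[0:4] = b"RAM?"           # looks like a plain memory store
        print(bytes(mem[512:523]))   # b'block write' - same underlying cells

os.remove(PATH)
```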
> These days PC RAM has a lifetime warranty. It's not exactly fragile.

Source?
> 60W incandescent bulbs were pretty common indoors. Lamps were often limited to using 60W bulbs.

I was saying 60 W was toward the low end of normal-sized incandescent bulbs, not that they were rare.
> Besides, if you counted the power draw of a CRT monitor, which I'll bet the quoted person was, that could easily reach around 100W.

They didn't specify they were including the monitor. I know how much power CRT monitors used, and it could range into the hundreds of watts, depending on size and brightness. Monitors used so much more power than CPUs of that era that the CPU's power consumption hardly mattered. That's why I assumed they meant the computer without the monitor. Otherwise, a PC with a 21" monitor could easily surpass the power of a standard incandescent light bulb!
> Source?

Corsair, G.Skill, Crucial, and Team Group all have limited lifetime warranties (I'd assume most consumer brands do), but the duration can vary depending on region, which I assume is why they use "limited". I wouldn't say this inherently speaks to longevity, though, given the nature of DRAM and how often errors go undetected in consumer workloads.
I'm pretty sure every DIMM I've ever bought had a limited warranty. At work, we see DIMMs start to develop errors after years of continuous use.
> Even today, if your GPU has bad memory, most people would just replace it - especially if it's still under warranty. Yes, you can potentially have a competent tech locate the bad chip and replace it, but I think people rarely take this route.

Not to mention the memory controller isn't isolated from the VRAM, so it can be taken out at the same time, depending on the type of failure.
> And how is that better than routing through the interposer, like we do today??? GPUs like the H100 have a 6144-bit memory interface. It's nuts to route that off package, if you even can!

How do you plan on cooling both packages? It doesn't matter if the GPU is on the bottom and the HBM on top, or vice versa.
If you read the article, HBM4 is rumored to double the width per stack. So, the equivalent GPU would have a 12288-bit databus off-package. That's why they don't even want to route through the interposer anymore. You should do a better job of sanity-checking these ideas before you post them.
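The arithmetic behind those figures, assuming six HBM sites per GPU and taking the rumored per-stack doubling (1024-bit to 2048-bit) at face value:

```python
# Bus-width arithmetic for the figures above (stack count assumed, width rumored).
stacks = 6                   # H100-class GPUs carry six HBM sites
hbm3_width = 1024            # bits per HBM3 stack
hbm4_width = hbm3_width * 2  # rumored doubling for HBM4

print(f"HBM3: {stacks * hbm3_width}-bit total interface")   # 6144
print(f"HBM4: {stacks * hbm4_width}-bit total interface")   # 12288
```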
> Yuck. Submersion cooling is messy and expensive. I'm not interested.

But it is so 'cool' 😀.