News SK Hynix Plans to Stack HBM4 Directly on Logic Processors

That's not germane to the issues discussed in the article. They're concerned about how to get heat from the die to the exterior of the package (or, I guess the top of the stack, if you're doing direct-die cooling).


According to this, no Pentium MMX version used more than 17 W.

I remember seeing the inside of a Compaq desktop, with a Pentium MMX, and the thing still had a passive heatsink! It was big and aluminum, but definitely had no integrated fan.

Nothing else inside that PC should've used very much power. Graphics cards of the day had tiny fans, if any, and HDDs rarely burned more than 10 W. That would make even 60 W (which is pretty low for a conventional screw-in incandescent lightbulb) an overestimate.
Inside a PC of the MMX era there was also a discrete audio card (often a Sound Blaster), a 2D video card and the ubiquitous separate 3dfx accelerator, an HDD, a floppy disk drive, a CD-ROM drive and of course the motherboard.
Considering that power supplies were not as efficient as today's (remember the typical 150-250 W wall power draw), comparing a PC with a lightbulb is absolutely realistic.
 
Having it mounted on the other side of the PCB and having a direct route to the GPU is the best choice that I can think of, regardless of how many bits you have to route.

That's what TSVs are for, along with advanced packaging or some other way of routing the data through the PCB to the GPU.
TSV stands for "Through-Silicon Via", with emphasis on the word Silicon. It's a die-stacking technology.

The whole point of HBM is to exploit being on the same interposer to use more width than would be feasible with off-package memory. Again, consider that the widest GDDR6 GPUs currently shipping max out at 384-bit. There have been some 512-bit GPUs in the past. However, Nvidia's H100 has a 6144-bit interface to its HBM. You can't do that if it ceases to be on-package.
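To put rough numbers on why that width matters, here's a back-of-the-envelope sketch. The per-pin data rates below are assumed, ballpark figures (not from the article), just to illustrate the wide-and-slow vs. narrow-and-fast trade-off:

```python
# Rough peak bandwidth: bus_width_bits * per_pin_rate_Gbps / 8 = GB/s.
# Per-pin rates are assumed, illustrative values only.
def peak_bandwidth_gbs(bus_width_bits: int, per_pin_gbps: float) -> float:
    return bus_width_bits * per_pin_gbps / 8

gddr_384bit = peak_bandwidth_gbs(384, 21.0)   # ~21 Gbps/pin assumed -> ~1000 GB/s
hbm_6144bit = peak_bandwidth_gbs(6144, 5.0)   # ~5 Gbps/pin assumed  -> ~3800 GB/s

print(f"384-bit GDDR-class interface: ~{gddr_384bit:.0f} GB/s")
print(f"6144-bit HBM-class interface: ~{hbm_6144bit:.0f} GB/s")
```

Even at a much lower per-pin rate, the on-package width buys several times the aggregate bandwidth, which is exactly what you give up if the memory moves off the interposer.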
 
Inside a PC of the MMX era there was also a discrete audio card (often a Sound Blaster), a 2D video card and the ubiquitous separate 3dfx accelerator, an HDD, a floppy disk drive, a CD-ROM drive and of course the motherboard.
And those used how much power? None of the sound cards I owned ever had so much as a heatsink!

Floppy drives and CD-ROM drives would burn almost no power when you weren't using them, which was most of the time. When you were, probably not more than a couple Watts, max. Keep in mind that portable CD players could run for several hours on a set of AA batteries.

You're grasping at straws, here.
 
Better to raise efficiency and reduce power consumption than to carry on as they (GPU manufacturers) are, needing to introduce ever more elaborate, bulky, heavy and expensive cooling solutions.
 
Better to raise efficiency and reduce power consumption than to carry on as they (GPU manufacturers) are, needing to introduce ever more elaborate, bulky, heavy and expensive cooling solutions.
Dennard Scaling ended around 2006.

So, the only way power consumption stays constant or gets reduced (i.e. from one generation to the next) is if you make a conscious decision not to maximize the performance of a chip. As long as competition remains so fierce, that's unlikely to happen. Perhaps energy costs or cooling challenges will ultimately be the limiting factor.
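As a rough illustration of why that follows, here's a toy model of dynamic switching power (P ≈ α·C·V²·f). The scaling factors are assumptions chosen only to contrast the two regimes, not real process data:

```python
# Toy model: dynamic power per unit area ~ C * V^2 * f / area.
# Under Dennard scaling, capacitance, voltage and area shrank together, so
# power density stayed roughly flat even as clocks rose. Once voltage
# stopped scaling (~2006), pushing frequency/density raises power density.
def relative_power_density(cap_scale, volt_scale, freq_scale, area_scale):
    return (cap_scale * volt_scale**2 * freq_scale) / area_scale

s = 0.7  # assumed linear shrink per node, for illustration only

dennard_era = relative_power_density(cap_scale=s, volt_scale=s, freq_scale=1/s, area_scale=s**2)
post_dennard = relative_power_density(cap_scale=s, volt_scale=1.0, freq_scale=1/s, area_scale=s**2)

print(f"Dennard era:  ~{dennard_era:.2f}x power density per shrink (flat)")
print(f"Post-Dennard: ~{post_dennard:.2f}x power density per shrink (roughly doubles)")
```

That's the sense in which keeping power flat now means deliberately leaving frequency or transistor count on the table.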
 
And those used how much power? None of the sound cards I owned ever had so much as a heatsink!

Floppy drives and CD-ROM drives would burn almost no power when you weren't using them, which was most of the time. When you were, probably not more than a couple Watts, max. Keep in mind that portable CD players could run for several hours on a set of AA batteries.

You're grasping at straws, here.
At the time nobody cared about green blabla and power usage, a 200 W power supply alone probably wasted 5-20 watts, and every single component was built without considering energy consumption. Most importantly, there were so many more components compared to today.
And you think that all those components did not use at least 40-60 watts?
Really?
 
Submersion cooling is messy and expensive.
Currently usable ones, yes.

The cost is due to it not having made much progress over all this time.

Messy is not really any different from any other home PC. You trade dust for the solution (and the solution has much greater thermal dissipation).
That's where it should stay, IMO.
Most of the great tech advances home users have are due to trickling down from server and enterprise products.

"imo" is your view & even if it ever becomes mainstream you can choose not to use it however doesnt mean others think someway and would like having the option.
 
Wouldn't stacking the memory mean that it would be harder to repair/upgrade? I have seen articles here where someone will take like a 3070 and double its memory to 16GB. Not to mention the fact that the memory modules might need liquid cooling, since HBM in general is power hungry.
 
Messy is not really any different from any other home PC. You trade dust for the solution (and the solution has much greater thermal dissipation).
I dunno, man. The chemicals used aren't awesome.

For single-phase cooling, there's Fluorinert, which, if vaporized, is 9000x as potent a greenhouse gas as CO2. It's also not regarded to be without risk to human health.


You could use mineral oil, but it's smelly, messy, flammable, and not considered 100% safe to humans.


For 2-phase cooling, the chemicals involve PFAS, which are known endocrine disruptors, and it looks like 3M is bailing out of that business at full speed.


By comparison, any mess or hazards involving water cooling are peanuts.

Most of the great tech advances home users have are due to trickling down from server and enterprise products.
I don't want this stuff trickling anywhere near me.

"imo" is your view
Yes, it literally stands for "In My Opinion". I tend to use words rather deliberately. I'm fully aware that it's just one opinion, although it wasn't formed in a vacuum.

even if it ever becomes mainstream you can choose not to use it, but that doesn't mean others think the same way, and some would like having the option.
I'm not trying to stop you from doing anything, but people should be aware of the risks. This isn't the normal sort of situation where something exists only in servers simply because it's too expensive or energy-intensive. A lot changes, when you start talking about potentially hazardous chemicals.
 
Wouldn't stacking the memory mean that it would be harder to repair/upgrade?
Yes, any GPUs with HBM (which have existed since AMD launched the R9 Fury back in 2015) are fully non-upgradable! In that regard, hybrid die-stacking will be no different.


I have seen articles here where someone will take like a 3070 and double its memory to 16GB.
This is very uncommon, since it involves the time of a very skilled soldering technician, which is probably expensive enough to make you think twice and just buy the workstation version of the card that already has 2x the memory.

It's also not always possible. If there aren't double-density chips available, then you're talking about adding another set to the backside of the card, which probably isn't possible unless the card uses Nvidia's reference design for its PCB.

Not to mention the fact that the memory modules might need liquid cooling, since HBM in general is power hungry.
HBM, itself, is lower-power than GDDR memory. Or, at least that's traditionally been the case. Not sure about some of these newer HBM versions.
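For a rough sense of scale, here's a sketch using illustrative energy-per-bit figures. The pJ/bit values below are assumptions in the range often quoted in vendor slides, not measurements:

```python
# Interface power ~= energy_per_bit * bits_per_second.
# Both pJ/bit figures are assumed, illustrative values only.
def interface_power_watts(energy_pj_per_bit: float, bandwidth_gb_s: float) -> float:
    bits_per_second = bandwidth_gb_s * 1e9 * 8
    return energy_pj_per_bit * 1e-12 * bits_per_second

bandwidth = 1000  # GB/s, arbitrary working point
print(f"GDDR-class (~7.5 pJ/bit assumed): ~{interface_power_watts(7.5, bandwidth):.0f} W")
print(f"HBM-class  (~4.0 pJ/bit assumed): ~{interface_power_watts(4.0, bandwidth):.0f} W")
```

The catch is that all of the HBM's power is dissipated in a small stacked footprint right next to (or, with this proposal, on top of) the logic die, which is why cooling it is still a concern even if the per-bit energy is lower.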
 
Yes, any GPUs with HBM (which have existed since AMD launched the R9 Fury back in 2015) are fully non-upgradable! In that regard, hybrid die-stacking will be no different.
What is different about HBM that makes it fully non-upgradable? Even if it were soldered (like you said), you might be able to upgrade it, but it would be expensive.
 
It's attached to the interposer, which is something you can't solder to, except with highly-specialized equipment. In a small area of a few mm by a few mm, there are thousands of contacts that have to be lined up exactly.
Ah, I see. It's too bad, seeing as how the 3070's performance was improved massively with 16GB of VRAM.
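For a rough sense of the contact density being described, here's a back-of-the-envelope sketch. The 55 µm microbump pitch and the footprint are assumptions used only for illustration:

```python
# Rough count of microbump contacts on a square grid at an assumed 55 um pitch.
pitch_um = 55
footprint_mm = (5, 7)  # assumed size of the interface region, in mm

bumps_x = int(footprint_mm[0] * 1000 / pitch_um)
bumps_y = int(footprint_mm[1] * 1000 / pitch_um)
print(f"~{bumps_x} x {bumps_y} = ~{bumps_x * bumps_y} contacts "
      f"in a {footprint_mm[0]} x {footprint_mm[1]} mm area")
```

At that pitch there's no hand-soldering anything; alignment and attach have to be done by machine on the interposer.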
 
The chemicals used
Somehow you seem to have skipped over my original bit about how the tech hasn't really advanced.

i.e., making a safer method.

Nobody is gonna sell a toxic, fuming solution, as that wouldn't fly legally.

and not considered 100% safe to humans.
You aren't chugging the stuff :|
The only real issue with it externally (which would be the use case) is that some people can have an allergic reaction (which, again, can be avoided with gloves).

Your posts are all "I don't want it, therefore it doesn't matter if it advances."

Yes, it's your view, but your view shouldn't claim it's bad entirely.

Traditional watercooling & air cooling will reach their limits as we pump more and more power & heat into small areas.

Immersion is a way past that limit (at least until something better is designed).
 
I was saying 60 W was toward the low-end of normal-sized incandescent bulbs, not that they were rare.


They didn't specify they were including the monitor. I know how much power CRT monitors used, and it could range into the hundreds of Watts, depending on size and brightness. Monitors used so much more power than CPUs of that era that the CPU's power consumption hardly mattered. That's why I assumed they meant the computer without the monitor. Otherwise, a PC with a 21" monitor could easily surpass the power of a standard incandescent light bulb!
You specifically said 60W was "pretty low" for an incandescent bulb. That could make someone think it was at least somewhat unusual rather than it truthfully being the most or second most common indoor bulb rating.

The original quote was vague on the subject of a monitor being there or not but considering the original quote referred to an employee talking to customers, I think it's likely they were referring to a whole computer as the customer would be using it, so the monitor was probably included. A modestly sized CRT like a 15" or 17" could use around 60W, which could make the total pretty similar to a 100W incandescent bulb. Even if the monitor used around 80W, if the computer was using maybe 50W under load, I think most would agree that 130W is "a similar amount of power to an incandescent light bulb" as said in the original quote.
 
Your posts are all "I don't want it, therefore it doesn't matter if it advances."
No, I'm saying it seems to involve serious tradeoffs and isn't very consumer-friendly, therefore I don't expect it will ever reach consumers.

Trust me: nobody from the industry is making decisions about this stuff, based on my posts. My opinion changes nothing. You can debate me on the merits of my points, or simply ignore me. It doesn't matter, either way.

Yes, it's your view, but your view shouldn't claim it's bad entirely.
I posted what information I had. If you have further information which supports your position, you're welcome to provide it. However, essentially telling me to shut up isn't constructive.

BTW, I never said or implied it's "bad entirely". I took the trouble to spell out my position in detail, including that it's being used for specialized applications and I thought that's where it should stay. Mischaracterizing my position is also not constructive.

Traditional watercooling & air cooling will reach their limits as we pump more and more power & heat into small areas.

Immersion is a way past that limit (at least until something better is designed).
Understood, however home PCs really won't be able to go above about 1.6 kW. In US homes, most outlets are 15 Amp. So, I don't know how much more exotic we really need to get than maybe chilled water.
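A quick sanity check on that ceiling, assuming a standard US 120 V / 15 A branch circuit, the usual 80% continuous-load derating, and an assumed PSU efficiency (none of these figures are from the thread):

```python
# Rough power ceiling for a PC on a standard US household outlet.
volts, amps = 120, 15              # assumed standard 15 A circuit
nominal_w = volts * amps            # 1800 W absolute circuit limit
continuous_w = nominal_w * 0.8      # ~1440 W under the common 80% continuous-load rule
psu_efficiency = 0.90               # assumed, illustrative
dc_budget_w = continuous_w * psu_efficiency

print(f"Circuit limit: {nominal_w} W, continuous: {continuous_w:.0f} W, "
      f"usable DC budget: ~{dc_budget_w:.0f} W")
```

Somewhere in the 1.4-1.8 kW range is the practical wall for a single outlet, long before exotic cooling chemistry becomes necessary in a home PC.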
 