Huawei backs development of HBM memory in China — new consortium aims to sidestep US sanctions

usertests

Distinguished
Mar 8, 2013
969
858
19,760
It would be amazing if China started increasing the global HBM supply and the AI bubble subsequently burst. Let's get HBM in consumer GPUs/CPUs/APUs already (VRAM, L4 cache).
 

edzieba

Distinguished
Jul 13, 2016
589
594
19,760
It would be amazing if China started increasing the global HBM supply and the AI bubble subsequently burst. Let's get HBM in consumer GPUs/CPUs/APUs already (VRAM, L4 cache).
Even if bit-for-bit HBM were at price parity with DRAM, cards using it would be substantially more expensive due to the need for the silicon interposer.

Now, if China eschewed HBM and revived HMC...
 

Lewinator56

Distinguished
Jan 8, 2017
51
0
18,545
Even if bit-for-bit HBM were at price parity with DRAM, cards using it would be substantially more expensive due to the need for the silicon interposer.

Now, if China eschewed HBM and revived HMC...
May I remind you of the R9 Fury series and Vega?

HBM works on consumer GPUs, but HBM2 had its limitations. GDDR6 just made more sense, as it is so much easier and cheaper to work with.
 

edzieba

Distinguished
Jul 13, 2016
589
594
19,760
May I remind you of the R9 Fury series and Vega?
Very overpriced and underperforming?

They also acted as a preview of the extremely variable power-delivery demands that have now become commonplace amongst GPUs, and were a surprise wake-up call for SFX PSU makers: the R9 Nano found a niche there, but was nearly unusable because it tripped PSU overcurrent-protection circuits under load. PSUs that met the spec-sheet power draw AMD provided could not meet the transient current draw the card actually demanded. The Vega/Fury series is why GPUs started vastly overstating power-supply requirements — to handle those transient current spikes — even after PSU manufacturers tweaked their designs to tolerate them.
 

Lewinator56

Distinguished
Jan 8, 2017
51
0
18,545
Very overpriced and underperforming?

They also acted as a preview of the extremely variable power-delivery demands that have now become commonplace amongst GPUs, and were a surprise wake-up call for SFX PSU makers: the R9 Nano found a niche there, but was nearly unusable because it tripped PSU overcurrent-protection circuits under load. PSUs that met the spec-sheet power draw AMD provided could not meet the transient current draw the card actually demanded. The Vega/Fury series is why GPUs started vastly overstating power-supply requirements — to handle those transient current spikes — even after PSU manufacturers tweaked their designs to tolerate them.
The R9 Fury series didn't perform poorly, but the cards weren't significantly faster than their predecessors; they were also quite expensive and used first-generation HBM.

The Vega series, however, performed extremely well: the Vega 64 released (a year-ish later) as a contender to the GTX 1080, and did particularly well in that role, maintaining an average 5-10% lead in performance while also costing less.

Transient power spikes won't come from the memory, though, but from the core design. GCN was really on its way out with the R9 Fury, and HBM was, I guess, an attempt to get every little bit of performance out of it. Polaris, while still GCN, was a massive step, and Vega built on that. AMD had pretty much been using the same core, with tweaks, from the HD 7000 series up until Polaris.
 

edzieba

Distinguished
Jul 13, 2016
589
594
19,760
The Vega series, however, performed extremely well: the Vega 64 released (a year-ish later) as a contender to the GTX 1080, and did particularly well in that role, maintaining an average 5-10% lead in performance while also costing less.
'Launch MSRP' matched the GTX 1080, but launch MSRP was not available for long (the first few hundred cards per retailer), with real-world pricing rapidly rising above the GTX 1080's.
Performance was, for all intents and purposes, matched with the GTX 1080 (a few percent faster in some games, slower in others), but it was clear from the power consumption being ~1.5x that of the GTX 1080 that the cards shipped at the highest voltage AMD could release with, to push the clocks necessary to meet that performance goal. This also put the Vega series in the unique situation where any meaningful overclock first required undervolting to open up power-limit headroom — plus a healthy dose of silicon lottery to have a card that could operate at even stock clocks when undervolted.
 

Lewinator56

Distinguished
Jan 8, 2017
51
0
18,545
'Launch MSRP' matched the GTX 1080, but launch MSRP was not available for long (the first few hundred cards per retailer), with real-world pricing rapidly rising above the GTX 1080's.
Performance was, for all intents and purposes, matched with the GTX 1080 (a few percent faster in some games, slower in others), but it was clear from the power consumption being ~1.5x that of the GTX 1080 that the cards shipped at the highest voltage AMD could release with, to push the clocks necessary to meet that performance goal. This also put the Vega series in the unique situation where any meaningful overclock first required undervolting to open up power-limit headroom — plus a healthy dose of silicon lottery to have a card that could operate at even stock clocks when undervolted.
Luckily, I wasn't exposed to the launch pricing. I got my V64, a Nitro+, about a year after launch for £399, which was a pretty damn good deal if you ask me. But launch pricing went a bit wacky.

The power consumption is an interesting one. I never really fiddled with the power limit on my card; I left it at the default, which I think was 250W. I never really gained much from increasing it (I think I only went as high as 300W), but I was running ~50MHz off the boost clocks at the stock limit, so maybe I won the silicon lottery? (With a Nitro+, I guess that makes sense.) It was never thermally limited either. The only thing mine didn't like was a memory overclock; I think I got 30MHz over the base before it became unstable.