AMD's HBM Promises Performance Unstifled By Power Constraints


jtd871

Distinguished
Jan 26, 2012
Now if the red team can get their overall power use, pricing and performance competitive with the green team at the mainstream level, we all win.
 
"Four times the capacity sounds outrageous, but the first generation is limited to a maximum of 1 GB per stack, and the tech only allows for four stacks to be used. This means the first generation of graphics cards will have frame buffers that max out at 4 GB."
This is a confirmation of what we knew.
So I guess 390 and 390X would be 4GB cards?
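For anyone who wants to sanity-check the 4 GB ceiling, here's a quick back-of-the-envelope sketch in Python using the first-generation HBM figures quoted above (1 GB and a 1024-bit bus per stack, four stacks, 500 MHz with double-data-rate signaling). The bandwidth numbers are the commonly cited first-gen figures, not something stated in the article.

```python
# Back-of-the-envelope numbers for first-generation HBM.
# Assumptions: 1 GB and a 1024-bit bus per stack, four stacks max,
# 500 MHz clock with double-data-rate signaling (1 Gbps per pin).

STACKS = 4
GB_PER_STACK = 1
BUS_WIDTH_BITS = 1024
CLOCK_HZ = 500e6
DDR_FACTOR = 2  # data transferred on both clock edges

capacity_gb = STACKS * GB_PER_STACK
per_stack_gbs = BUS_WIDTH_BITS * CLOCK_HZ * DDR_FACTOR / 8 / 1e9
total_gbs = STACKS * per_stack_gbs

print(f"Max capacity: {capacity_gb} GB")                  # 4 GB
print(f"Bandwidth per stack: {per_stack_gbs:.0f} GB/s")   # 128 GB/s
print(f"Total bandwidth: {total_gbs:.0f} GB/s")           # 512 GB/s
```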
 

plasmastorm

Distinguished
Oct 18, 2008
I'm fully expecting the solder-joint failure issue Nvidia had a while back with its GPUs to hit HBM.

Hope it doesn't, but something is telling me to be wary.
 
"AMD started work on this project over 7 years ago. When the engineers first sat down and started thinking about what problems lay ahead, it became apparent to them that bandwidth per watt would quickly become an issue. As bandwidth demands go up, the traditional solution has been to integrate more into the die. DRAM doesn't have that luxury, and yet its bandwidth demands rise as CPUs and GPUs get faster and faster. For GDDR5 to keep up, more power is required, and at a certain point that is no longer feasible."


So are AMD's next-generation GPU chips going to be so powerful that they need to get HBM into the mainstream?
Seeing that the R9 300 series is going to be, for the most part, refreshed/tweaked R9 200 chips, will those be used to work the bugs out of this new memory array, and then "next year" [lol] there's a new die for an R9 400 lineup that's twice as powerful, to the point that today's GDDR5 layout won't be able to keep up with it?

That quoted statement above is interesting in terms of what may be coming down the road. Is the stage being set for that?

 

Larry Litmanen

Reputable
Jan 22, 2015
AMD always promises and usually underdelivers. Until I see a card on the market that costs the same as an Nvidia card, eats just as much power, and gives meaningfully higher FPS, I don't want to hear from AMD.

I don't care about the technology, whether it's stacked or whatever. Is it better?
 
What about latencies? At a 500 MHz clock, it seems impossible to improve latencies...
Latency in graphics is all about completing the task and exporting a frame every 1/60th (or 1/144th for high-refresh panels) of a second. As long as it can render a frame at 60 or 144 Hz, it is fine. At 500,000,000 Hz and super-high bandwidth, I really don't think latency is going to be a noticeable issue. On the GPU side they may need to increase their buffer size a little to compensate for the lower clock rate and the much wider input, but that goes up with each generation anyway.
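To put rough numbers on that, here's a minimal sketch assuming a hypothetical end-to-end access latency of a few hundred memory-clock cycles; the 300-cycle figure is purely illustrative, not a published HBM spec.

```python
# Frame budget vs. memory latency at a 500 MHz interface clock.
# The 300-cycle access latency is an assumed illustrative figure,
# not a published HBM spec.

MEM_CLOCK_HZ = 500e6
ASSUMED_ACCESS_CYCLES = 300

access_latency_s = ASSUMED_ACCESS_CYCLES / MEM_CLOCK_HZ  # ~0.6 microseconds

for refresh_hz in (60, 144):
    frame_budget_s = 1 / refresh_hz
    ratio = frame_budget_s / access_latency_s
    print(f"{refresh_hz} Hz: {frame_budget_s * 1e3:.2f} ms per frame, "
          f"about {ratio:,.0f}x one assumed memory access")
```

Even with those assumed numbers, a single access is tens of thousands of times shorter than a 60 Hz frame budget, which is why raw latency is unlikely to be the bottleneck.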
 

InvalidError

Titan
Moderator

HBM is coming around because GDDR is no longer able to push the GB/W expected in modern applications. With all GPU designers likely to switch over to HBM, it seems unlikely anyone will bother pushing for GDDR6/7. The problem with off-package memory is that you end up wasting tons of power on bus termination, de-skew, signal equalization, etc. and this gets worse as speeds go up. With HBM, the bus lines are too short for most of these to matter. At least for now.

The next step after HBM would be direct TSV connection between the RAM and GPU/CPU/whatever - skip the interposer and intermediate glue logic altogether.
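To illustrate the width-versus-speed trade-off being described, here's a small hedged comparison. The GDDR5 side assumes a 512-bit bus running 5 Gbps memory (roughly R9 290X class), which is my own illustrative example rather than anything from the article.

```python
# Aggregate bandwidth: a wide GDDR5 setup vs. four first-gen HBM stacks.
# The GDDR5 figures assume a 512-bit bus at 5 Gbps per pin (roughly
# R9 290X class); both sets of numbers are illustrative.

def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Aggregate bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_width_bits * gbps_per_pin / 8

gddr5 = bandwidth_gbs(512, 5.0)       # 320 GB/s at a high per-pin speed
hbm = bandwidth_gbs(4 * 1024, 1.0)    # 512 GB/s at a low per-pin speed

print(f"GDDR5 (512-bit @ 5 Gbps): {gddr5:.0f} GB/s")
print(f"HBM (4 x 1024-bit @ 1 Gbps): {hbm:.0f} GB/s")
```

The point is that HBM gets its bandwidth from a very wide, very short bus at a low per-pin rate, which is exactly where the savings on termination, de-skew, and equalization power come from.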
 
I also like the space savings on the card; that may get them away from all these 13"-long monsters. I've passed up a few cards due to size and weight.

I do hope AMD succeeds in this. They sure need a boost in something to get back on track; all the promises, rumors, and delays have me thinking "next year" is their new trademark. It's time to put something on the table.
 
I too hope this lives up to at least some of the hype. I also worry about the solder interconnects between the layers. Will the added weight or pressure of a heatsink cause problems for the solder joints? How well will a heatsink work on the chip packages when they're thicker and can hold heat inside instead of transferring it to the surface?
 

InvalidError

Titan
Moderator

I do not think they will be any worse than the solder interconnects between dies and their carrier substrate. The concept of multi-chip packages has been around for several years already. Stacked dies have also been in use in flash chips for several years already. The HBM stacks are fairly low power and located about as far from the GPU die as they possibly can be, so I would not be too worried about heat being that much of an issue - especially not with the 14/16nm die shrink, which should bring considerably lower power across the board.

I'm expecting to see performance on par with a 295X2 under 200W next year.
 
also worry about the solder

I guess it comes down to how cheap they go. Will they use high-quality silver solder, or the cheapest lead or tin China can provide? Are the parts going to get hot enough to melt or stress-crack the solder?
If it's an issue, it just goes to show what kind of engineers they have to determine whether the materials used hold up to the extreme specs of use. But nothing surprises me these days with the "cheaper, better product" approach: invest the least you can and charge the most you can get for it.
 

InvalidError

Titan
Moderator

Most solder alloys do not even reach their eutectic phase until 180C, so the chip would fail long before the solder joints across the stack might melt. Silver and tin are much stiffer than lead and lead is not as susceptible to tin pest so were it not for RoHS compliance, you would actually want them to use leaded solder. Since the HBM stacks are relatively low power and tightly coupled thermally, it should be fairly easy to match thermal expansion coefficients and temperature across the stack to mostly eliminate mechanical failure. The heat output from the HBM stacks will also be fairly constant, so the HBM stacks will not need to put up with the sort of temperature gradients present across the GPU die.

I bet thermal management accounts for a large chunk of the reasons why AMD decided to go with a silicon interposer instead of putting the RAM directly under the GPU.

Look at it this way: before HBM, the path from GPU to GDDR dies was: GPU die -> micro-BGA balls to GPU substrate -> BGA balls to PCB -> BGA balls to GDDR package substrate -> uBGA balls to GDDR die. With HBM, you have the same path, except the PCB gets replaced by the silicon interposer and there is a logic chip added to the HBM stack. The total potential points of failure remains mostly unchanged. They have only been relocated.
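Purely as an illustration of the "relocated, not removed" argument, here's a hedged sketch that just enumerates the hops in each path as described above; the HBM-side labels (micro-bumps, TSVs) are my own informal shorthand, not taken from the post.

```python
# Interconnect hops in each memory path, following the description above.
# Labels are informal; the HBM-side naming (micro-bumps, TSVs) is my own
# shorthand, not an official packaging term list.

gddr5_hops = [
    "GPU die -> micro-BGA balls -> GPU substrate",
    "GPU substrate -> BGA balls -> PCB",
    "PCB -> BGA balls -> GDDR package substrate",
    "GDDR substrate -> micro-BGA balls -> GDDR die",
]

hbm_hops = [
    "GPU die -> micro-bumps -> silicon interposer",
    "interposer -> micro-bumps -> HBM logic/base die",
    "logic die -> TSVs -> stacked DRAM dies",
    "interposer -> bumps -> package substrate",
]

print(f"GDDR5 path: {len(gddr5_hops)} hops")
print(f"HBM path:   {len(hbm_hops)} hops")
```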
 

Mitrovah

Honorable
Feb 15, 2013



How will this affect cost? Will HBM offer the same performance at a lower price?
 

chopscissors

Reputable
Apr 9, 2015
also worry about the solder

I guess it comes down to how cheap they go. Will they use high-quality silver solder, or the cheapest lead or tin China can provide? Are the parts going to get hot enough to melt or stress-crack the solder?
If it's an issue, it just goes to show what kind of engineers they have to determine whether the materials used hold up to the extreme specs of use. But nothing surprises me these days with the "cheaper, better product" approach: invest the least you can and charge the most you can get for it.
FWIW, solder isn't "high quality" because it's silver alloy as opposed to tin/lead. In truth, silver alloy solder presents more challenges in manufacturing than tin/lead.

The only reason for the switch to silver alloys is to meet RoHS requirements.
 

notsleep

Distinguished
Jan 19, 2010
How about replacing system RAM with HBM as well, not just on graphics cards? Wouldn't that make the overall system much, much faster and less power hungry?

just imagine the system spec:

ram: 8 gb hbm
gfx: 8 gb hbm

:p
 

Shankovich

Distinguished
Feb 14, 2010
How about replacing system RAM with HBM as well, not just on graphics cards? Wouldn't that make the overall system much, much faster and less power hungry?

just imagine the system spec:

ram: 8 gb hbm
gfx: 8 gb hbm

:p

That's what AMD is trying to do with HSA. Once they put HBM on an APU die, it'll pretty much be exactly what you said :)
 

InvalidError

Titan
Moderator

It will likely start with somewhat of a premium pricing from the novelty factor before becoming cheaper once it becomes standard across nearly all GPUs. Stacking naked dies and putting them on an interposer should be cheaper once the process matures than routing all those traces out on the PCB with all the extra chip packaging effort and engineering that goes with that.

On the flip side, this does mean all GPU cards will be little more than chip carriers for the HBM GPU, mostly distinguishing themselves by VRM and PCB dressing.
 

Mitrovah

Honorable
Feb 15, 2013


That is going to be one seriously stacked chip XD

 
I'm wondering just how it's going to be priced myself. Will it be in line with the 980, or when it releases will Nvidia come out with a 980 Ti at a higher price, with the top AMD card [I assume the 390X] priced in line with that? Then where will the non-HBM R9 300 cards sit in price? The same as the R9 200s are today, more or less?
 