AMD's Radeon 300-Series Desktop GPUs Feature HBM Memory; M300 Teased, Too


PaulBags

Distinguished
Mar 14, 2015
199
0
18,680
So, how much GPU power right now is eaten up by GDDR5? If HBM is supposed to use 50% less power, how much will that impact the GPU's full power draw?
It'll mean more for mobile graphics, but IMO any extra efficiency is welcome. It'll also produce less heat, improving lifespan and reliability; that might not be an issue now, but again, it's always welcome.
 
So, how much GPU power right now is eaten up by GDDR5? If HBM is supposed to use 50% less power, how much will that impact the GPU's full power draw?

That is why I hate percentages. They can be extremely deceiving. Say the GDDR5 uses 20 W; then the new memory uses 10 W. It's very deceiving for performance charts as well: a chart will say GPU A is 20% faster than GPU B, but the reality turns out to be 14 more FPS for a price difference of $100.
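To put rough numbers on that (the wattages here are just assumptions for illustration, not measured figures), here is a quick sketch of how a 50% memory power cut looks against the whole card:

```python
# Rough illustration of why a "50% less power" memory claim can sound
# bigger than it is. All figures are assumed for the example.
board_power_w = 250.0                 # assumed total board power of a high-end card
gddr5_power_w = 20.0                  # assumed power drawn by the GDDR5 subsystem
hbm_power_w = gddr5_power_w * 0.5     # the "uses 50% less power" claim

savings_w = gddr5_power_w - hbm_power_w
savings_pct_of_board = 100.0 * savings_w / board_power_w

print(f"Absolute saving: {savings_w:.0f} W")                      # 10 W
print(f"Share of the whole card: {savings_pct_of_board:.1f} %")   # 4.0 %
```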
 

kyuuketsuki

Distinguished
May 17, 2011
267
5
18,785
Isn't Arctic Islands coming out in 2016? If so, why bother with Fiji?
... That same justification can be used for never buying anything. There's always something new around the corner.

Also, ever heard of delays? They happen. Often. For every company. Arctic Islands may not come out as soon as you think.
 

iPanda

Reputable
Mar 25, 2015
87
0
4,640
Wow, nice to see all the promotional lingo being questioned on here. #AttentionToDetail I agree; how much does this actually translate to in practice? Get 'er done, Tom's.
 

Mitrovah

Honorable
Feb 15, 2013
144
0
10,680




So you just jump and spend every time a "newer" thing is released? I wish I had that kind of money to buy one series for one or two years and then just suddenly buy the new thing. Yes, I have heard of delays, but nothing is ever set in stone, especially when it comes to technology; see what I did there, haha. You are already assuming a delay before it even has a chance to miss the first deadline, and besides, why not wait and endure a delay? It's not like being stuck at the airport.

Oh, and one more detail: neither announcement, Fiji or Arctic Islands, is counting down to an exact month and minute in 2016 either, are they?
 




So you just jump and spend every time a "newer" thing is released? I wish I had that kind of money to buy one series for one or two years and then just suddenly buy the new thing. Yes, I have heard of delays, but nothing is ever set in stone, especially when it comes to technology; see what I did there, haha. You are already assuming a delay before it even has a chance to miss the first deadline, and besides, why not wait and endure a delay? It's not like being stuck at the airport.

Oh, and one more detail: neither announcement, Fiji or Arctic Islands, is counting down to an exact month and minute in 2016 either, are they?

I don't think he is saying that, rather that if you just wait, you will always be waiting. Pretty much every year something new is coming out. I bought my Q6600 G0, and a few months later out came the newer 45nm parts that overclocked way better and performed better.

Sometimes you need to just jump, but only when it is a good time. I went from a 9700 Pro to a 9800 XT but didn't move from that until the X1800, and then a 2900 Pro. Then I went from the HD 2900 Pro to an HD 4870 and then to an HD 7970 GHz Edition. Next, I plan to go to either an R9 390X or a GTX 980, depending on price and performance.
 

InvalidError

Titan
Moderator
The question nobody asked: how large will the HBM be? Large enough to obviate the need for external memory altogether (2-4GB), or only large enough to keep frequently used resources on-package (512-1024MB) to break the GPU's heavy dependence on high-bandwidth on-board GDDR?
 

plasmastorm

Distinguished
Oct 18, 2008
726
0
19,160
I don't spend upwards of £300 on a GPU, £100 on a PSU, etc., to save a tiny percentage on an electricity bill each year.

Just wish they would bring out a card now and then where they said screw the power requirements, let's just make it as fast as we can.
 

InvalidError

Titan
Moderator

They cannot do that. There is no point in making GPUs any more powerful than what the memory interface can effectively feed, and the density of processing power is increasing much faster than memory bandwidth. If you wanted a more powerful single-chip GPU, you would end up having to pay a huge premium for the 12+ layer PCB required to route a 768+ bit wide memory interface and the huge substrate under the GPU to fan out those 3000+ pads. It just would not be economically viable.
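For a sense of the scale involved, peak GDDR5 bandwidth grows linearly with bus width, and so does the pad and routing cost; here is a quick sketch (the 7 Gbps per-pin rate is an assumed, typical 2015-era figure):

```python
# Peak theoretical GDDR5 bandwidth = (bus width in bits / 8) * per-pin data rate.
# The 7 Gbps data rate is an assumed, typical 2015-era figure.
def gddr5_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float = 7.0) -> float:
    return bus_width_bits / 8 * data_rate_gbps

for width in (256, 384, 512, 768):
    print(f"{width:4d}-bit bus: {gddr5_bandwidth_gbs(width):6.0f} GB/s")
# A 768-bit interface doubles a 384-bit one, but also needs roughly twice the
# memory pads and PCB routing, which is the cost being described above.
```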
 

alextheblue

Distinguished
I don't spend upwards of £300 on a GPU, £100 on a PSU, etc., to save a tiny percentage on an electricity bill each year.

Just wish they would bring out a card now and then where they said screw the power requirements, let's just make it as fast as we can.

The reduced draw of the memory boosts their efficiency and lets them spend more of their power budget on the GPU. It's not about "spending X dollars to save electricity".
 

alextheblue

Distinguished
So you just jump and spend every time a "newer" thing is released? I wish I had that kind of money to buy one series for one or two years and then just suddenly buy the new thing.
So what about everybody that needs an upgrade (or a new build) after Fiji is released? Do they just buy last-gen and ignore Fiji? Do they wait for the next gen just because some guy on the internet said why bother, there will be something new next year? The point is not to "buy the new thing" for no reason. The point is that when you are ready to buy, for whatever reason, you buy whatever best suits your needs at that point. If there's something just a couple months down the road and you want to wait, that's fine too, but if you are always waiting for the "next big thing" you'll be waiting forever.
 

endeavour37a

Honorable

They cannot do that. There is no point in making GPUs any more powerful than what the memory interface can effectively feed, and the density of processing power is increasing much faster than memory bandwidth. If you wanted a more powerful single-chip GPU, you would end up having to pay a huge premium for the 12+ layer PCB required to route a 768+ bit wide memory interface and the huge substrate under the GPU to fan out those 3000+ pads. It just would not be economically viable.

I'm not sure I agree with you about GPUs being bound by memory performance today. Take the 280 (Tahiti PRO) and the 285, for example: they have a 384-bit and a 256-bit bus respectively, yet run about the same. If the Tonga PRO in the 285 were being held back, I would guess AMD would have kept the 384-bit bus for it. From what I gather, they are utilizing memory more efficiently and don't need such a wide bus to keep the GPU busy.

Same for the NV 780 and 970, with a 384-bit and a 224-bit bus respectively: the 970 performs better with the narrower bus than the 780 because of better usage and does not need more bandwidth. The 980, on the other hand, needed more than 224 bits to keep it busy, so it got a 256-bit bus; still not as large as the 780's, because it did not need it, and it has a much faster core.

So I don't think memory is holding these cores back nowadays anyway. The HBM thing is a whole different animal for cards, a new way of doing things that NV is going to follow as well. That's just my take on this, of course. HBM should serve newer and faster cores for some time to come.
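A back-of-the-envelope comparison of those two cards (memory speeds below are assumed reference values, so treat the numbers as a sketch rather than spec-sheet figures):

```python
# Raw-bandwidth comparison of the R9 280 and R9 285, using assumed
# reference memory speeds, to show how much the 285's compression
# has to make up for its narrower bus.
def raw_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

r9_280 = raw_bandwidth_gbs(384, 5.0)    # ~240 GB/s (assumed 5.0 Gbps GDDR5)
r9_285 = raw_bandwidth_gbs(256, 5.5)    # ~176 GB/s (assumed 5.5 Gbps GDDR5)

# For the 285 to keep pace despite the narrower bus, its color compression
# needs to stretch effective bandwidth by roughly this factor:
print(f"Required effective gain: {r9_280 / r9_285:.2f}x")   # ~1.36x
```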
 

InvalidError

Titan
Moderator

You need to ask yourself how the newer AMD and Nvidia GPUs managed to get the same performance out of a narrower memory bus, and the answer is new texture compression algorithms. But that sort of gain is a one-time thing; it does not scale with processing power, so you are back to having to add raw memory bandwidth to push performance further. On graphics settings that stress memory bandwidth more heavily, the wider old models often pull ahead. Being clever to reduce costs is nice, but it is not always sufficient to overcome what can be achieved through brute force.

Other problems with compression algorithms are that they cost transistors, die area, power, and latency, they add scheduling complexity, they add potential failure points, etc. If memory bandwidth were not a critical bottleneck, neither AMD nor Nvidia would throw transistors and everything that goes with them at things that add unnecessary overhead. They had to throw transistor budget at texture compression because memory bandwidth was a bottleneck to producing lower-cost, higher-performance GPUs.

With the 14/16nm shrink, they will be able to cram more than twice as much processing power into the same die area and power budget, but to sustain that processing power, they will need twice as much effective bandwidth. You cannot double bandwidth by being clever alone, especially not when most of the cost-effective clever tricks are already behind you.
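A toy calculation of that scaling point (every number here is an assumption, just to show how the ratio moves):

```python
# If a node shrink doubles shader throughput but the memory interface
# stays the same, the bandwidth available per unit of compute halves.
# All numbers are assumed for illustration.
compute_tflops_28nm = 5.6            # assumed 28 nm GPU compute throughput
bandwidth_gbs_28nm = 320.0           # assumed GDDR5 board bandwidth

compute_tflops_16nm = compute_tflops_28nm * 2   # "more than twice" the compute
bandwidth_gbs_16nm = bandwidth_gbs_28nm         # unchanged memory interface

print(f"{bandwidth_gbs_28nm / (compute_tflops_28nm * 1000):.3f} bytes/FLOP at 28 nm")     # 0.057
print(f"{bandwidth_gbs_16nm / (compute_tflops_16nm * 1000):.3f} bytes/FLOP at 14/16 nm")  # 0.029
```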
 

You're comparing completely different silicon and contradicting yourself. No one is saying that you can't achieve similar levels of overall performance by using different balances of raw GPU power and memory bandwidth.

Try this: take those same chips you just mentioned and compare them to themselves, albeit with a halved memory pipe. Would the R9 280 perform nearly as well as it does now if it was operating on a 192-bit bus? What about the 980 on a 128-bit bus, or the 290X on a 256-bit bus?

What we're saying is that it's pointless to make the absolute most powerful GPU chip possible if you can't keep the beast fed with instructions. It's the same reason you don't run a classic muscle car on a low-flow carburetor or a Ferrari on 85 octane.
 

Rumors suggest up to 8GB (total capacity) with first-gen HBM memory; 4GB is a safe bet. I think AMD withheld the RAM size because of Nvidia. The current Titan and upcoming GTX 980 Ti will have the VRAM size advantage.
 

wtfxxxgp

Honorable
Nov 14, 2012
173
0
10,680
I don't spend upwards of £300 on a GPU, £100 on a PSU, etc., to save a tiny percentage on an electricity bill each year.

Just wish they would bring out a card now and then where they said screw the power requirements, let's just make it as fast as we can.

Power = heat = reduced life expectancy, which affects warranty and therefore price. I don't want to purchase a card that is blisteringly fast, heats up my whole house, and then needs to be replaced every winter. No thanks.
 

Vlad Rose

Reputable
Apr 7, 2014
732
0
5,160
I don't spend upwards of £300 on a GPU, £100 on a PSU, etc., to save a tiny percentage on an electricity bill each year.

Just wish they would bring out a card now and then where they said screw the power requirements, let's just make it as fast as we can.

Sorry, but I don't feel like having to run multiple 1200W power supplies to power a video card, or have a computer case that weighs a ton due to cooling requirements.

The market is steering more and more toward the mini-ITX form factor, meaning lower power and heat requirements. AMD already gets ragged on for its heat/power versus Nvidia at the moment; coming out with something even more demanding would be suicide on their part.
 
HBM is one of those *disruptive* technologies -- smaller and faster with less power draw. Substantially ... if you go out on the internets looking for information. There are some amazing claims about bandwidth and efficiency against GDDR5.

This should be a monster for AMD's mobile APUs.

And this is *HBM1* ... HBM2, reportedly, doubles up on the HBM1 goodness, with rumors of the follow-up coming at the node shrink next year.

edit:
Rumors suggest up to 8GB (total capacity) with first-gen HBM memory; 4GB is a safe bet.

My understanding is no 8GB.
4GB (purportedly) brings higher bandwidth than the 12GB of GDDR5 found on the new Titan.
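Running the commonly reported first-gen HBM figures (stack count and per-pin rates below are those rumored numbers, assumed here rather than confirmed specs) shows how 4GB of HBM can out-run 12GB of GDDR5 on bandwidth:

```python
# First-gen HBM as rumored: 4 stacks, each with a 1024-bit interface
# at about 1 Gbps per pin. Titan X: 384-bit GDDR5 at 7 Gbps per pin.
# These are assumed figures for illustration.
hbm1_bandwidth_gbs = 4 * 1024 / 8 * 1.0     # ~512 GB/s across 4 GB
titan_bandwidth_gbs = 384 / 8 * 7.0         # ~336 GB/s across 12 GB

print(hbm1_bandwidth_gbs, titan_bandwidth_gbs)   # 512.0 vs 336.0 GB/s
```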

 