Discussion: Why don't more GPUs use HBM for VRAM?

HBM is expensive, though exactly how much is up for debate. The only figure I could find was that HBM2 cost something like up to $150 in 2017. It could have dropped since, but we're now on HBM3, and I'm seeing reports that its cost is high due to demand from high-performance applications. Most of that cost is likely down to the manufacturing complexities involved.

EDIT: For more info about HBM2 and whatnot: https://www.gamersnexus.net/guides/3032-vega-56-cost-of-hbm2-and-necessity-to-use-it

The tl;dr: 8GB of GDDR5 ran around $55-$65, while HBM2 was about $150, plus another $25 for the interposer.
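Roughly speaking (just plugging in those GamersNexus figures, and assuming 8GB on both sides), the memory subsystem alone ends up close to three times as expensive with HBM2:

Code:
# Back-of-the-envelope comparison using the GamersNexus figures above.
# Assumes 8GB of memory on both cards.
gddr5_cost = (55 + 65) / 2      # ~$60 for 8GB of GDDR5
hbm2_cost = 150 + 25            # $150 for HBM2 plus ~$25 interposer

print(f"GDDR5: ~${gddr5_cost:.0f}, HBM2 + interposer: ~${hbm2_cost}")
print(f"Premium for HBM2: ~${hbm2_cost - gddr5_cost:.0f} per card")   # ~$115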
 

Order 66

Probably cost?

What research have you done regarding this?
I haven't done much research, but I saw the recent TH article about Samsung's HBM4, and that's what prompted the question of why GPUs don't use it (besides cost), since the bandwidth is so much higher than GDDR6. Is HBM3e slower in terms of clock speed than GDDR6(X)?
 

Order 66

Then start...;)

You can be our researcher, and discover why or why not.
Okay, I found that HBM runs at about half the clock speed of GDDR, and apparently, according to this thread, games aren't limited by memory bandwidth. I don't think that's necessarily true, though, considering games could be limited by the 4060 Ti 16GB's lack of memory bandwidth.
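To put rough numbers on the speed difference (a quick sketch with typical rated figures, not any specific card: HBM3 at 6.4 Gbit/s per pin on a 1024-bit stack, GDDR6X at 21 Gbit/s per pin on a 32-bit chip), the per-pin rate is a lot lower, but each HBM stack is so much wider that its total bandwidth still comes out way ahead:

Code:
# Per-pin speed vs. total bandwidth: HBM trades clock speed for bus width.
# Figures are typical rated numbers, not from any specific card.
def bandwidth_gbs(pins, gbit_per_pin):
    return pins * gbit_per_pin / 8   # GB/s

print(bandwidth_gbs(1024, 6.4))   # HBM3 stack:  ~819 GB/s
print(bandwidth_gbs(32, 21.0))    # GDDR6X chip: ~84 GB/s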
 
games aren't limited by memory bandwidth. I don't think that's necessarily true, though, considering games could be limited by the 4060 Ti 16GB's lack of memory bandwidth.
Depending on how much cache is on the GPU (Ada has around 10x the amount compared to Ampere), what data is actually needed and when, and how much compression is being used, the need for bandwidth is lessened.

For instance, I ran Cyberpunk 2077 and Crysis Remastered at 4K output (I think Crysis was running native, Cyberpunk at DLSS Quality) and monitored the "Memory Controller Load" in GPU-Z. Supposedly this measures how much bandwidth is being used. My card has a bandwidth of 504 GB/sec, and for these games the load hovered around 35%-40%. Assuming this is a percentage of total bandwidth, that means only ~202 GB/sec was used, which is well within the 288 GB/sec the 4060 Ti is capable of.
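If you want to sanity-check the math (again assuming GPU-Z's "Memory Controller Load" really is a percentage of peak bandwidth, which I'm not 100% sure of):

Code:
# Rough estimate of bandwidth actually in use, assuming the GPU-Z
# "Memory Controller Load" reading is a percentage of peak bandwidth.
peak_bandwidth = 504        # my card's rated bandwidth, GB/s
controller_load = 0.40      # ~35-40% observed in Cyberpunk / Crysis

used = peak_bandwidth * controller_load
print(f"Estimated bandwidth in use: ~{used:.0f} GB/s")              # ~202 GB/s
print(f"Headroom on a 4060 Ti (288 GB/s): ~{288 - used:.0f} GB/s")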
 

Order 66

Depending on how much cache is on the GPU (Ada has around 10x the amount compared to Ampere), what data is actually needed and when, and how much compression is being used, the need for bandwidth is lessened.

For instance, I ran Cyberpunk 2077 and Crysis Remastered at 4K output (I think Crysis was running native, Cyberpunk at DLSS Quality) and monitored the "Memory Controller Load" in GPU-Z. Supposedly this measures how much bandwidth is being used. My card has a bandwidth of 504 GB/sec, and for these games the load hovered around 35%-40%. Assuming this is a percentage of total bandwidth, that means only ~202 GB/sec was used, which is well within the 288 GB/sec the 4060 Ti is capable of.
I'm pretty sure some laptop GPUs have a 64-bit memory bus. Assuming the GPU itself were, for some reason, powerful, wouldn't that be a bottleneck?
 
I'm pretty sure some laptop GPUs have a 64-bit memory bus. Assuming the GPU itself were, for some reason, powerful, wouldn't that be a bottleneck?
How wide the channel is says nothing by itself about its bandwidth. And I've only seen 64-bit memory channels on the bottom or near-bottom tier of GPUs.

Also, again, it depends on what's going on and how the GPU was built to handle it. Yes, people have proven that increasing memory bandwidth does improve performance, but that doesn't mean it's the end-all-be-all. In another thread I went digging around and found the GT 1010 was a thing; it also used GDDR5, and it was still slower than an Intel Iris Xe despite the iGPU having less bandwidth to work with.
 

Order 66

How wide the channel is says nothing by itself about its bandwidth. And I've only seen 64-bit memory channels on the bottom or near-bottom tier of GPUs.

Also, again, it depends on what's going on and how the GPU was built to handle it. Yes, people have proven that increasing memory bandwidth does improve performance, but that doesn't mean it's the end-all-be-all. In another thread I went digging around and found the GT 1010 was a thing; it also used GDDR5, and it was still slower than an Intel Iris Xe despite the iGPU having less bandwidth to work with.
I think having extra VRAM (to a point) increases performance more than having more memory bandwidth (to a point) with the same amount of VRAM (I know it sounds obvious). For example, if a GPU had 8GB of VRAM on a 64-bit bus (I know this isn't possible in reality), then increasing the VRAM to 10, 12, or 16GB wouldn't help much, whereas if you took that same 8GB of VRAM and put it on a 128-bit or 256-bit bus, performance would increase a lot.
 

USAFRet

I think having extra VRAM (to a point) increases performance more than having more memory bandwidth (to a point) with the same amount of VRAM (I know it sounds obvious). For example, if a GPU had 8GB of VRAM on a 64-bit bus (I know this isn't possible in reality), then increasing the VRAM to 10, 12, or 16GB wouldn't help much, whereas if you took that same 8GB of VRAM and put it on a 128-bit or 256-bit bus, performance would increase a lot.
Cost
Software
Production

If it were as simple as "throw more hardware at it"... it would be done.
 
I think having extra VRAM (to a point) increases performance more than having more memory bandwidth (to a point) with the same amount of VRAM (I know it sounds obvious). For example, if a GPU had 8GB of VRAM on a 64-bit bus (I know this isn't possible in reality), then increasing the VRAM to 10, 12, or 16GB wouldn't help much, whereas if you took that same 8GB of VRAM and put it on a 128-bit or 256-bit bus, performance would increase a lot.
That still depends on the GPU. Also, again, the bus width alone says nothing about the bandwidth. A 1024-bit bus operating at 100MHz SDR is going to perform worse than a 64-bit one operating at 1.4GHz DDR.
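Quick sketch of the usual peak-bandwidth math (bus width in bytes × clock × transfers per clock), which is all that comparison is:

Code:
# Theoretical peak bandwidth: (bus width / 8) bytes x clock x transfers per clock.
def peak_bandwidth_gbs(bus_width_bits, clock_hz, transfers_per_clock):
    return (bus_width_bits / 8) * clock_hz * transfers_per_clock / 1e9

print(peak_bandwidth_gbs(1024, 100e6, 1))   # 1024-bit @ 100MHz SDR: 12.8 GB/s
print(peak_bandwidth_gbs(64, 1.4e9, 2))     # 64-bit @ 1.4GHz DDR:   22.4 GB/s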

The whole spec sheet of a video card is only pertinent to that video card, because there are diminishing returns if you bump up only one spec. Throwing the bandwidth of a 4090 at a 4060 likely isn't going to make it perform significantly faster than, say, a simple 50-100MHz bump to the memory controller clock speed.

In between posts, I decided to throw on the Resident Evil 4 Chainsaw Demo and ran it with max quality settings at 1440p. Despite there being hitches at times, the memory controller load floated around 40%. So all this really tells me rendering graphics doesn't need a lot of bandwidth per se.
 

USAFRet

I would love to see benchmarks for a 4090 class card with HBM3e or HBM4. The price would be ridiculous even for a high-end Nvidia card.
People don't play "benchmarks".

Well, rational people don't.

For a GPU, we are well into 'count the nose hairs' level.

It makes little sense for a GPU to be able to do that at 1,000 FPS when there isn't a CPU that can feed it frames that fast, nor can the human eye and brain detect the difference.
 

Order 66

People don't play "benchmarks".

Well, rational people don't.

For a GPU, we are well into 'count the nose hairs' level.

It makes little sense for a GPU to be able to do that at 1,000 FPS when there isn't a CPU that can feed it frames that fast, nor can the human eye and brain detect the difference.
You are 100% correct. The reason I said I would like to see benchmarks is that even if such a GPU existed, it would probably be so expensive it would essentially replace the Titan-class cards, not to mention it would probably need three of the melting 12VHPWR connectors to power it (I am aware there is a newer version of the connector that does not melt, but the amount of power required for a card like this would probably be a fire hazard by itself). Also, if Nvidia were to make such a card, I definitely wouldn't buy it (for gaming anyway; for professional work, sure), because Nvidia GPUs (especially the 40 series) are not the best value for gaming.
 
