News Report: Big Navi Engineering Sample Points to 16 GB GDDR6 Memory

wr3zzz

Distinguished
Dec 31, 2007
Considering AMD put 16GB of HBM on the $700 Radeon VII, it's probably not a cost killer to put 16GB on Big Navi. Nvidia must have gotten a really good deal on GDDR6X from Micron, given that HBM is already on the DGX Ampere.
 
The only thing I could think about once I saw that picture was, "What the hell kind of GPU cooler setup IS THAT?" It looks like they strapped a Cooler Master Hyper 212 directly to the GPU, but it hangs down, so it wouldn't work very well; trying to draw heat downward is never an efficient cooling solution.

Crazy man, crazy!
 
Considering AMD put 16GB of HBM on the $700 Radeon VII, it's probably not a cost killer to put 16GB on Big Navi. Nvidia must have gotten a really good deal on GDDR6X from Micron, given that HBM is already on the DGX Ampere.
HBM2 is still very expensive. The RAM alone probably costs twice as much as GDDR6, maybe even twice as much as GDDR6X, and then you have to add in the silicon interposer as well. There are rumors that Vega 56 / Vega 64 were basically selling at near cost when they were in the $300-$400 range. Basically, HBM2 only makes sense financially when you can sell a lot of cards at $500 or more each.
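As a rough sanity check on that cost argument, here's a back-of-envelope sketch. All dollar figures below are purely illustrative assumptions (not actual BOM prices); the point is just how the interposer adds a fixed cost on top of a higher per-GB price:

```python
# Back-of-envelope memory cost sketch. All prices are illustrative
# assumptions for the sake of the argument, NOT actual BOM figures.
GDDR6_PER_GB = 6.0    # assumed $/GB for GDDR6
HBM2_PER_GB = 12.0    # assumed ~2x GDDR6, per the post above
INTERPOSER = 25.0     # assumed flat cost for interposer + packaging

def gddr6_cost(gb):
    """Memory-only cost of a GDDR6 configuration."""
    return gb * GDDR6_PER_GB

def hbm2_cost(gb):
    """Memory cost of an HBM2 configuration, including the interposer."""
    return gb * HBM2_PER_GB + INTERPOSER

for gb in (8, 16):
    print(f"{gb} GB: GDDR6 ~ ${gddr6_cost(gb):.0f}, HBM2 ~ ${hbm2_cost(gb):.0f}")
```

With these made-up numbers, 16GB of HBM2 lands well over twice the GDDR6 cost, which is why it only pencils out on high-margin cards.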
 
I'm not buying it; look how filthy that case is. I can't see that being their test rig.
That PSU looks like an older Thermaltake Grand 1000W, which incidentally is what AMD sent out to reviewers for the initial Threadripper launch, if memory serves. If so, it's been used for at least three years, and a bit of dust (even in a testing lab) isn't out of the question. But there are plenty of reasons to be skeptical.
 
I feel AMD adds more RAM than is required to some of their GPUs as a pure marketing gimmick, even if the card has no hope of ever using it all.
Sometimes they're a bit too... forward-thinking, but that only helps the card age better as VRAM requirements increase. Nvidia still uses VRAM more "efficiently"; AMD has generally taken a more "brute-force" approach to GPU design thus far.
 
Sometimes they're a bit too... forward-thinking, but that only helps the card age better as VRAM requirements increase. Nvidia still uses VRAM more "efficiently"; AMD has generally taken a more "brute-force" approach to GPU design thus far.
Maybe. For example, look at the 1060 6GB vs. the 580 8GB: very similar performance, but has there ever been a scenario with playable FPS where the extra 33% of RAM on the 580 was beneficial? Not that I have seen.

I still think it's more about giving their GPUs a selling point over Nvidia that can be used for marketing. Just my opinion, though.
 
HBM2 is still very expensive. The RAM alone probably costs twice as much as GDDR6, maybe even twice as much as GDDR6X, and then you have to add in the silicon interposer as well. There are rumors that Vega 56 / Vega 64 were basically selling at near cost when they were in the $300-$400 range. Basically, HBM2 only makes sense financially when you can sell a lot of cards at $500 or more each.
I'm not even sure it's worth it even then, Jarred. I've seen very strong evidence that FS2020 absolutely hates HBM. Consider that five years ago, the R9 Fury was the third most powerful gaming card in the world, after the GTX 980 Ti and R9 Fury X. In FS2020, the R9 Fury gets absolutely demolished by the (relatively) lowly RX 470: by a whopping 37.5% at 1080p and an unbelievable 60% at 1440p:
[FS2020 benchmark chart]

This is the exact opposite of what should be happening, because in another big DX11 title, Far Cry Primal, at 1440p the R9 Fury is a whopping 52.4% faster than the RX 470, and the other games I've looked at show similar results.
[Far Cry Primal 1440p benchmark chart]

The R9 Fury is just in another world of performance compared to the RX 470, so the only thing that could be kneecapping it so badly in FS2020 must be the HBM, because there are 5+ cards on that list that would lose to the R9 Fury in other titles, all of which also have 4GB of VRAM, so it's not VRAM quantity that's the problem.
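For reference, the "X% faster" figures quoted from these charts come from simple relative-FPS arithmetic. The FPS values in the example below are hypothetical, chosen only to show how the quoted percentages are computed:

```python
# "X% faster" = (faster - slower) / slower * 100, rounded to one decimal.
def pct_faster(faster_fps, slower_fps):
    """Percentage by which faster_fps beats slower_fps."""
    return round((faster_fps - slower_fps) / slower_fps * 100, 1)

# Hypothetical FPS pairs that would reproduce the quoted gaps:
print(pct_faster(33, 24))  # 37.5 (e.g. the 1080p gap)
print(pct_faster(24, 15))  # 60.0 (e.g. the 1440p gap)
```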
 
I feel AMD adds more RAM than is required to some of their GPU’s as a pure marketing gimmick even if the card has no hope of ever using it all.
They don't always do that. The HD 7970 was somewhat hamstrung after a while by the fact that it only had 3GB of VRAM, and the R9 Fury was DEFINITELY kneecapped by having only 4GB of HBM when it really needed at least 6GB; even 6GB of GDDR5 would've been better than 4GB of HBM for that card. I owned both of these cards, so I know what I'm talking about. If my Fury had 6GB of VRAM, I wouldn't have purchased my RX 5700 XT yet.

As for putting in too much, I have an OLD Palit GeForce 8500 GT that has 1GB of DDR3 on it. Hell, that was the larger of the VRAM amounts offered on the HD 4870! Even the standard GTX 260 didn't have that much VRAM, so Nvidia has done it as well. I only chose the 8500 GT because it was the last Nvidia card I ever owned, so I know for sure that it's true. LOL
 

TJ Hooker

Titan
Ambassador
The R9 Fury is just in another world of performance compared to the RX 470, so the only thing that could be kneecapping it so badly in FS2020 must be the HBM, because there are 5+ cards on that list that would lose to the R9 Fury in other titles, all of which also have 4GB of VRAM, so it's not VRAM quantity that's the problem.
The Fury is the only card in the FS2020 benchmark you posted that only comes in a 4GB model. All the other 4GB cards are also available as 8GB cards. Unfortunately, Guru3D doesn't specify which ones were used (except to note that the 5500 XT is the 8GB model). If it were an issue inherent to HBM, you'd expect the Vega cards and the Radeon VII to be similarly crippled, but they perform about as expected.

Edit: Oops, the 1650 Super is 4GB only. I've heard Nvidia tends to be more efficient with their VRAM (better compression or whatnot?). And maybe AMD improved their own compression/VRAM usage since GCN3 (Fiji)? I don't know.
 
I'm not even sure it's worth it even then, Jarred. I've seen very strong evidence that FS2020 absolutely hates HBM. Consider that five years ago, the R9 Fury was the third most powerful gaming card in the world, after the GTX 980 Ti and R9 Fury X. In FS2020, the R9 Fury gets absolutely demolished by the (relatively) lowly RX 470: by a whopping 37.5% at 1080p and an unbelievable 60% at 1440p:
[FS2020 benchmark chart]

This is the exact opposite of what should be happening, because in another big DX11 title, Far Cry Primal, at 1440p the R9 Fury is a whopping 52.4% faster than the RX 470, and the other games I've looked at show similar results.
[Far Cry Primal 1440p benchmark chart]

The R9 Fury is just in another world of performance compared to the RX 470, so the only thing that could be kneecapping it so badly in FS2020 must be the HBM, because there are 5+ cards on that list that would lose to the R9 Fury in other titles, all of which also have 4GB of VRAM, so it's not VRAM quantity that's the problem.
In my testing, I noticed that even at minimum settings, FS2020 benefits from having more than 4GB of VRAM. There are other factors as well, like geometry throughput. IIRC, Fiji (and other R9 and earlier GCN cards) didn't have great geometry throughput compared to the GTX 980 / Maxwell. AMD improved that aspect quite a bit with Polaris. Plus, you're showing charts for 1440p ultra quality; the 4GB Fiji GPUs are going to tank hard at ultra settings in FS2020.
 
I've seen very strong evidence that FS2020 absolutely hates HBM.
It doesn't make sense to me that a particular form of memory would affect a software application. As far as I know, the only performance characteristics you have to worry about with memory are bandwidth and latency. Since HBM-equipped cards aren't really lacking in the former, and the latter would've been noticeable elsewhere (latency affects everything), there has to be something else.
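For what it's worth, raw bandwidth clearly favors the Fury here. Peak memory bandwidth is just bus width times effective per-pin data rate; using the public specs of the two cards (4096-bit HBM at 1 Gbps/pin for the R9 Fury, 256-bit GDDR5 at 6.6 Gbps/pin for the RX 470):

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * data rate in Gbps.
def bandwidth_gbs(bus_bits, gbps_per_pin):
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_bits / 8 * gbps_per_pin

print(f"R9 Fury: {bandwidth_gbs(4096, 1.0):.1f} GB/s")   # 512.0 GB/s
print(f"RX 470:  {bandwidth_gbs(256, 6.6):.1f} GB/s")    # 211.2 GB/s
```

So the Fury has well over double the RX 470's bandwidth, which is why a bandwidth shortfall can't explain the FS2020 result.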
 

Chung Leong

Reputable
Dec 6, 2019
The description "typical Samsung 16 Gb" seems to suggest there's another card with atypical Samsung 16 Gb memory. Perhaps high-end Big Navi will have Samsung's version of GDDR6X? A 512-bit bus would be expensive and power hungry, so it makes sense that AMD would look for a better solution; 16GB of regular GDDR6 over a 256-bit bus doesn't make a whole lot of sense.
 

Chung Leong

Reputable
Dec 6, 2019
I feel AMD adds more RAM than is required to some of their GPUs as a pure marketing gimmick, even if the card has no hope of ever using it all.

I think it's a calculated move to make the card attractive to cryptocurrency miners. It's an insurance policy against being badly beaten by Nvidia. The card will still sell at a high price point even if gaming performance isn't anywhere near the RTX 3070's.
 
I think it's a calculated move to make the card attractive to cryptocurrency miners. It's an insurance policy against being badly beaten by Nvidia. The card will still sell at a high price point even if gaming performance isn't anywhere near the RTX 3070's.
They've been doing this since long before crypto was even a thing. The earliest I can peg this sort of practice to was the X800 XL, which launched with a 256MB version; six months later they released a 512MB variant. Literally all they did was double the VRAM, and it didn't really do much for performance.
 

Chung Leong

Reputable
Dec 6, 2019
They've been doing this since long before crypto was even a thing. The earliest I can peg this sort of practice to was the X800 XL, which launched with a 256MB version; six months later they released a 512MB variant. Literally all they did was double the VRAM, and it didn't really do much for performance.

It's one thing to bolt on useless extras; it's quite another to deliberately cripple a product. I just don't see AMD opting for 16GB over 256-bit instead of 10GB over 320-bit or 12GB over 384-bit. From a marketing standpoint, having less memory bandwidth than a console would be a disaster.
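For comparison, here's the bandwidth math for the configurations being debated. The per-pin data rates below are plausible GDDR6/GDDR6X speeds of the era, assumed for illustration:

```python
# Peak bandwidth in GB/s = (bus width in bits / 8) * data rate in Gbps.
def bw(bus_bits, gbps):
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_bits / 8 * gbps

print(f"{bw(256, 16):.0f} GB/s")  # 512 - 16GB over 256-bit GDDR6 @ 16 Gbps
print(f"{bw(320, 19):.0f} GB/s")  # 760 - 10GB over 320-bit GDDR6X @ 19 Gbps
print(f"{bw(384, 16):.0f} GB/s")  # 768 - 12GB over 384-bit GDDR6 @ 16 Gbps
print(f"{bw(320, 14):.0f} GB/s")  # 560 - Xbox Series X's fast 10GB pool
```

Under these assumptions, the 256-bit 16GB configuration does indeed land below the Xbox Series X's 560 GB/s fast pool, which is the point about console comparisons.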
 

eklipz330

Distinguished
Jul 7, 2008
I feel AMD adds more RAM than is required to some of their GPUs as a pure marketing gimmick, even if the card has no hope of ever using it all.
Oh, they absolutely do that, and they have been doing it for a very long time.

Aaaaand it always works. The uneducated don't know any better. They'll show a total of two benchmarks in an unrealistic scenario where the 3080 throttles against their card, and fanboys will eat it up.

12GB should be good for years to come. By the time we've thoroughly saturated 8GB, a new generation will have already been released.
 
In my testing, I noticed that even at minimum settings, FS2020 benefits from having more than 4GB of VRAM. There are other factors as well, like geometry throughput. IIRC, Fiji (and other R9 and earlier GCN cards) didn't have great geometry throughput compared to the GTX 980 / Maxwell. AMD improved that aspect quite a bit with Polaris.
That's a fair point that I hadn't considered, but a 60% difference is really hard for me to blame on polygon throughput alone, although it may very well be so. When I looked at the two cards, the Fury seemed to completely dominate the 470 in every metric, but of course geometry throughput isn't something that's often shown, so it didn't even occur to me.
Plus, you're showing charts for 1440p ultra quality; the 4GB Fiji GPUs are going to tank hard at ultra settings in FS2020.
I only showed 1440p because the R9 Fury can game reasonably well at 1440p while the RX 470 emphatically cannot. In Kevin & Igor's review, the R9 Fury reference card did the following (Avg/Min FPS):
  • 62/50 in Battlefield 4
  • 71/46 in Far Cry 4
  • 73/54 in GTA V
  • 82/55 in Metro: Last Light
  • 73/54 in Middle-earth: Shadow of Mordor
  • 73/39 in Tomb Raider
https://www.tomshardware.com/reviews/sapphire-amd-radeon-r9-fury-tri-x-overclocked,4216-4.html
At 1080p, the Fury still finished dead last, with the RX 470 beating it by 37.5% (still an absolute massacre of its far more powerful older cousin):
[FS2020 1080p benchmark chart]

Could polygon throughput alone really account for massacres of these proportions? I ask this in all honesty because, while I have a more than fair amount of video card expertise, I know that I'm nowhere close to omniscient and am always looking to learn more. Since it isn't discussed all that much (relative to other metrics like FLOPS, VRAM capacity/speed/bandwidth, PCI Express latency, manufacturing node, etc.), I had assumed that geometry throughput had a relatively minor performance impact (unless FS2020 is an outlier in which it has a major one).
It doesn't make sense to me that a particular form of memory would affect a software application. As far as I know, the only performance characteristics you have to worry about with memory are bandwidth and latency. Since HBM-equipped cards aren't really lacking in the former, and the latter would've been noticeable elsewhere (latency affects everything), there has to be something else.
The Fury is the only card in the FS2020 benchmark you posted that only comes in a 4GB model. All the other 4GB cards are also available as 8GB cards. Unfortunately, Guru3D doesn't specify which ones were used (except to note that the 5500 XT is the 8GB model). If it were an issue inherent to HBM, you'd expect the Vega cards and the Radeon VII to be similarly crippled, but they perform about as expected.
I don't completely agree with you, because those cards use HBM2, which is quite different from the first-gen HBM on Fiji (which has half the per-pin bandwidth), but it could, as Jarred pointed out, be a case of polygon throughput.
Edit: Oops, the 1650 Super is 4GB only. I've heard Nvidia tends to be more efficient with their VRAM (better compression or whatnot?). And maybe AMD improved their own compression/VRAM usage since GCN3 (Fiji)? I don't know.
I did check whether there was any significant performance difference between the 4GB and 8GB models, but there isn't any. In The Witcher 3, ROTTR, FC Primal, TW:WH, Hitman, and Anno 2205, the PowerColor RX 470 Red Devil 4GB and the MSI RX 470 Gaming X 8GB are in a dead heat at 1080p, 1440p, and 2160p. The extra 4GB of VRAM has literally no effect whatsoever because Polaris isn't powerful enough to even use 4GB, which means the 8GB versions would never have become mainstream cards; the extra cost was a complete waste. Also, Guru3D tends to indicate when a card has a non-standard memory configuration, as shown by the fact that the 8GB Polaris cards in this chart have "(8GB)" next to them:
[benchmark chart with 8GB Polaris cards marked "(8GB)"]

I've been trying to understand the problem with the Fury since this review came out, because I personally own two of them and have always been quite pleased with their performance at 1080p and even 1440p. Not as pleased as I am with my RX 5700 XT, of course, but there's a four-year gap between them.
 

Shadowclash10

Prominent
May 3, 2020
You know how we get all these leaked pictures (not renders, actual pictures)? How come people are able to take them? Sometimes it makes sense, like when you get those pictures of the Xbox Series X "in the wild." But how on Earth do people manage to take pictures of unreleased GPUs, let alone ones installed in a system?