News AMD's Azor: No Ampere-Like Shortages for RX 6000

hotaru.hino

Prominent
Sep 1, 2020
$10 says there won't be Ampere-like demand for these either. Regardless, it's still not a smart thing for AMD to be saying; it just sounds like a challenge to the bot users. Unless retailers have suddenly figured out how to combat bots, it won't be any different from Ampere.
I have a feeling since AMD cards were historically better for mining, if word gets out that RDNA2 is much better than Ampere, the miners will flock to these cards. If not the scalpers looking for yet another "investment"
 

d0x360

Reputable
Dec 15, 2016
Nvidia didn't take a gamble using Samsung, lol... Samsung gave them a huge discount because everyone wants TSMC, and Samsung is basically running at 25% capacity while TSMC is booked solid through 2022. Samsung NEEDED someone to use their fab, and they could easily have made enough 3080s for everyone, but Nvidia kept the order small because Samsung's 8nm sucks.
 

Chung Leong

Upstanding
Dec 6, 2019
Hard to see how their cards can reach stores in time for Black Friday/Cyber Monday. That does reduce the chance of shortages, I suppose.
 

russell_john

Commendable
Mar 25, 2018
d0x360 said:
Nvidia didn't take a gamble using Samsung, lol... Samsung gave them a huge discount because everyone wants TSMC, and Samsung is basically running at 25% capacity while TSMC is booked solid through 2022. Samsung NEEDED someone to use their fab, and they could easily have made enough 3080s for everyone, but Nvidia kept the order small because Samsung's 8nm sucks.
As an electrical engineer who has been working and designing with devices from Samsung's 8nm node for over 18 months: you are full of sh*t. Every Samsung phone since Q1 2019 has used an SoC from that node.

As you note, TSMC's 7nm node is booked solid. They can't even keep up with SoCs for Sony and Microsoft or Renoir APUs for AMD, and the Zen 3 line is releasing on top of that, so where the hell do you think AMD is going to find production time for a yet-unproven GPU? Pull it out of Azor's behind?

As an AMD stockholder since 2009, I think AMD would be fools to push aside their proven money makers (console SoCs and Zen CPUs) to make room for what is still a niche market for them. Maybe next year, after they have satisfied demand for their proven technology, but this year? Don't count on it.
 

tslot05qsljgo9ed

Distinguished
May 22, 2009
Update this article (and title) to show that Frank has denied making any supply guarantees.

Update: Frank has denied making any supply guarantees. The bet was only about it not being a paper launch. Because of this, we aren't sure what the bet was all about because NVIDIA launched with some volume as well - and technically wasn't a paper launch either.

Quotes:


Frank Azor

Lol, I didn't say all that. You're hilarious. Appreciate the coverage and compliments though.



Usman Pirzada

Not sure I follow

RTX 30 isnt a paper launch too. It's just supply constrained. So what exactly did you bet for? PM? Ps: I'll add this tweet in as well.


Frank Azor

Not exactly. We'll discuss more after we actually launch. Talk is cheap right now, back to work.

I disagree. But sure. By that logic Nvidia isn't a paper launch either. The distinction lies only in volume, and that is clearly what the OG tweeter intended as well. If you bet it won't be a paper launch "like Nvidia," then you are emphasizing volume, not the launch type.
 
tslot05qsljgo9ed said:
Update this article (and title) to show that Frank has denied making any supply guarantees. ... If you bet it won't be a paper launch "like Nvidia," then you are emphasizing volume, not the launch type.
I look at it this way: the RTX 30 series might be a paper launch, or it might not. They seem to have had very low stock, which is indicative of a paper launch. But if you buy into tinfoil-hat rumors, this was intentional and prices will be jacked up shortly.
 
russell_john said:
As an electrical engineer who has been working and designing with devices from Samsung's 8nm node for over 18 months: you are full of sh*t. ... I think AMD would be fools to push aside their proven money makers (console SoCs and Zen CPUs) to make room for what is still a niche market for them.
Samsung 8nm compared to TSMC's 7nm is like comparing a Corvette to a Lambo. They are both damn fast cars, but you have to admit the Lambo is sexier and a tad quicker, thanks to Audi's tech and production lines.

AMD makes razor-thin margins on console SoCs. There is considerably more profit in discrete GPUs, and Zen CPUs are the most profitable of all. That said, I believe AMD is contractually obligated to supply Sony and Microsoft with a certain number of units by the Christmas time frame and per year; otherwise there will likely be penalties involved.
 

JarredWaltonGPU

Senior GPU Editor
Editor
Feb 21, 2020
Samsung 8nm compared to TSMC's 7nm is like comparing a Corvette to a Lambo. They are both damn fast cars, but you have to admit the Lambo is sexier and a tad quicker, thanks to Audi's tech and production lines.

AMD makes razor-thin margins on console SoCs. There is considerably more profit in discrete GPUs, and Zen CPUs are the most profitable of all. That said, I believe AMD is contractually obligated to supply Sony and Microsoft with a certain number of units by the Christmas time frame and per year; otherwise there will likely be penalties involved.
You also have to admit that if there's no room at the 'Lambo' factory to make more cars, being able to manufacture 'Corvettes' is a reasonable alternative. :p

TSMC N7 is definitely better than Samsung 8N, and N7P, N7+, N6, and N5 are all better than N7. But I don't think any of those could hit the volumes and prices Nvidia wanted. Which brings up an interesting point: If AMD is paying for the more expensive N7 wafers for RDNA2, it will have to either sell at higher prices or make substantially less profit per sale. Neither is desirable from a business standpoint, obviously, and even if the latter makes people 'like' AMD more, I'm not sure that goodwill is worth the cost either.
 

InvalidError

Titan
Moderator
JarredWaltonGPU said:
Which brings up an interesting point: If AMD is paying for the more expensive N7 wafers for RDNA2, it will have to either sell at higher prices or make substantially less profit per sale.
Not necessarily: if TSMC's 7nm is so much better than Samsung in every way like it appears to be, then AMD should be able to cram similar performance into less silicon with less overall waste and cancel out the higher wafer price handicap.
 

JarredWaltonGPU

Senior GPU Editor
Editor
Feb 21, 2020
InvalidError said:
Not necessarily: if TSMC's 7nm is so much better than Samsung in every way like it appears to be, then AMD should be able to cram similar performance into less silicon with less overall waste and cancel out the higher wafer price handicap.
AMD is doing 10.3 billion transistors in 251mm^2 for Navi 10. That's 41 million per mm^2.
Nvidia is doing 28.4 billion transistors in 628mm^2 for GA102. That's 45.2 million per mm^2.
Nvidia is doing 54.2 billion transistors in 826mm^2 for GA100. That's 65.6 million per mm^2.
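Those densities are just transistor count divided by die area; a quick sanity check, using only the figures above:

```python
# Transistor density = transistor count / die area, from the figures above.
chips = {
    "Navi 10 (TSMC N7)":  (10.3e9, 251),   # transistors, die area in mm^2
    "GA102 (Samsung 8N)": (28.4e9, 628),
    "GA100 (TSMC N7)":    (54.2e9, 826),
}
for name, (transistors, area) in chips.items():
    print(f"{name}: {transistors / area / 1e6:.1f} million transistors per mm^2")
```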

Two things are apparent. First, Nvidia crammed in way more transistors per mm^2 on N7 than AMD did. Second, Nvidia got similar densities to AMD's N7 parts using Samsung 8N. But we don't know exactly why the densities are the way they are. AMD often uses machine-optimized layouts for GPUs, which might be easier but less optimal from a density standpoint. Nvidia may hand-tune things more. Or it might be more cache vs. less cache.

Anyway, if AMD can achieve closer to GA100 levels of logic density on RDNA2, that would be great. But considering the wafers on N7 potentially cost 50-100% more than Samsung 8N, that's a big jump. Let's just go with a few hypotheticals, though.

GA102 is 628mm2, so Nvidia gets at most 89 chips per wafer. If the wafer start costs $6000 (possibly $5000), that's $56-$67 per chip as the base price.
Navi 10 is 251mm2, so AMD can get at most 230 chips per wafer. If the wafer start costs $9300 (possibly less, but that's the estimate), that's $40 per chip.

Navi 21 will likely be at least 400mm2, possibly as large as 500mm2. Let's go with 450mm2 though. That's about 124 chips per wafer, and at $9300 per wafer start it would be $75 per chip.

You can try and juggle the numbers, but basically if the wafer start price estimates are right (and they're almost certainly not), AMD would need Navi 21 to be more like 400mm2 or less to match the cost of GA102. And that's not even factoring in memory -- though 16GB GDDR6 14Gbps maybe costs the same as 10GB of GDDR6X 19Gbps? Maybe the cooler costs less as well, assuming 6900 XT isn't a 300+ watt part.
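For anyone who wants to juggle the numbers themselves, here's a small sketch using the wafer-price estimates and die counts from this post. The die counts presumably came from a die-yield calculator; the classic gross-die approximation below lands close to, but not exactly on, the same figures, and it ignores defect yield entirely.

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic gross-die estimate: wafer area divided by die area,
    minus an edge-loss term. Ignores defect yield entirely."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Per-chip silicon cost using the die counts quoted above.
print(6000 / 89)    # GA102 on Samsung 8N at $6,000/wafer: ~$67
print(5000 / 89)    # GA102 at $5,000/wafer: ~$56
print(9300 / 230)   # Navi 10 on TSMC N7 at $9,300/wafer: ~$40
print(9300 / 124)   # hypothetical 450mm^2 Navi 21 on N7: $75

# The approximation is in the same ballpark as those die counts:
print(gross_dies_per_wafer(251))  # 239 gross dies for Navi 10
print(gross_dies_per_wafer(450))  # 125 gross dies for a 450mm^2 chip
```

None of this factors in yield, memory, board, or cooler costs; it's only the raw silicon arithmetic.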

It will be interesting to see how it all shakes out. Regardless, you can see Nvidia intentionally goes after higher profit margins. Just like Intel in that sense. The only place Nvidia uses TSMC N7 is on GPUs that it can sell at "price doesn't even matter" levels. Nvidia A100 are basically $15,000-$18,000 each in the DGX A100.
 

Chung Leong

Upstanding
Dec 6, 2019
JarredWaltonGPU said:
Or it might be more cache vs. less cache.
The Ryzen 3700X has 51.4 million per mm². Cache definitely seems to be where you can get the most out of the density on paper. That's why GA100 is on TSMC, I think: its ratio of cache to compute units is more similar to a CPU's.

Rumor has it that Intel is going with Samsung for Xe HPG. I think they're realizing that a process optimized for CPUs doesn't work so well for GPUs.
 

InvalidError

Titan
Moderator
JarredWaltonGPU said:
Nvidia crammed in way more transistors per mm^2 on N7 than AMD did.
That may change with Zen 3 and RDNA2, thanks to process refinements and the associated primitive-library optimizations over the year and change since the last round. Also, RDNA2 is supposed to be a fair bit more efficient than RDNA1, which could mean an increase in GPU performance per gigatransistor.
 

vt1

Sep 26, 2020
russell_john said:
As an electrical engineer who has been working and designing with devices from Samsung's 8nm node for over 18 months: you are full of sh*t. Every Samsung phone since Q1 2019 has used an SoC from that node.
New process nodes are typically used first for mobile, where clock speed (and historically core count) demands are low. Then they move to laptops, where demands are a bit higher, then to desktops, where demands are quite a bit higher, and finally to GPUs, where the demands are highest.

TSMC 7nm was able to buck this trend because AMD uses chiplets, which allow throwing together many terrible, 'unnaturally' up-clocked mobile-grade cores with one (or two) good laptop-grade ones. Otherwise, AMD would be stuck slowly introducing laptop-grade Zen CPUs, the way Intel is with 10nm. This is why Zen 2 could not catch up with Intel in single-core performance, though Zen 3 may.

While the process itself may improve over time, a major reason this trend exists is that wide binning is impossible at the start of a node; as more dies are produced, more golden samples can be saved up to eventually find their way into the (historically) more demanding CPUs and GPUs.

Therefore Samsung's 8nm, which, as you rightly pointed out, has been used extensively, but only in mobile, is still quite a fresh node with no use (as far as I am aware) in laptops or desktops. Jumping straight from mobile to GPUs is a great stretch: the yields and binning cannot be good, and there must be great waste. This lack of golden samples good enough for GPUs is also likely the reason power usage is so high.
 
hotaru.hino said:
I have a feeling since AMD cards were historically better for mining, if word gets out that RDNA2 is much better than Ampere, the miners will flock to these cards. If not the scalpers looking for yet another "investment"
Nvidia massively increased the FP32 compute performance of Ampere compared to their prior architectures, though. If the mining fad takes off again, then unless AMD happens to have done something similar with RDNA2, it will likely be Nvidia's cards that are more attractive to miners this time around.
 
