News AMD Teases RX 6000 Performance: Big Navi Looks Close to the RTX 3080


Kamen Rider Blade

Distinguished
Dec 2, 2013
1,292
824
20,060
Obviously it would have to change if AMD has a far lower cost card that can come close to Nvidia's $700 card. But be real: Not. A. Chance. This is the top model RX 6000. It's 80% faster than RX 5700 XT. I just can't see that happening with anything less than the 5120 shader core model. Especially since you don't get perfect scaling of performance -- the RTX 2080 Ti for example has twice as many cores as the RTX 2060 Super, and yet it's nowhere near twice as fast. (It's 50-70% faster, at 1440p and 4K, depending on the game and settings used.) Considering we expect the 6900 XT to have 16GB of GDDR6, but still on a 256-bit memory bus, even an 80% improvement over the 5700 XT is damn impressive.
If those performance #'s aren't from the 6900 XT but from either the 6800 XT or 6700 XT, will you buy me a non-alcoholic drink as a friendly wager?
 
I don't think the shown performance was from the topmost card!
There are several reasons:
  1. AMD never seems to start by showing off the best if there's more than one. (They didn't start by showing the Ryzen 9 5950X, but the 5900X.)
  2. They won't give Nvidia that much time to see the actual competition before the official presentation on the 28th.
  3. They're unable to show the performance of the release cards because the clocks, power limits and voltages haven't been finally decided on yet.
Just like the RTX 3080 and RTX 3090 are almost realities when you visit a store to buy one off the shelf...
  1. AMD only provided somewhat detailed performance charts for 5900X and 5950X -- and the 5900X is the faster CPU for gaming. 5800X and 5600X were barely discussed.
  2. There's not much Nvidia can do at this point. Ampere GA102 is basically maxed out in the 3090, and Nvidia can't produce enough to meet demand as it stands.
  3. Final clocks might not be dialed in, but at this stage AMD is going to be within a few percent. Also, AMD isn't going to screw things up with last-minute clock adjustments like it did with RX 5600 XT.

As I note above, for AMD to get an 80% boost in performance over the 5700 XT is extremely impressive. Unless it has more than 5120 shader cores in Big Navi, that's far better scaling than anything Nvidia got from Ampere. Of course, Ampere's architectural changes are pretty extensive, so it's not that easy to compare. Let me give a few other data points, though.

RTX 2070 Super (2560 cores, 256-bit GDDR6) vs. GTX 1060 6GB (1280 cores, 192-bit GDDR5): 2070 Super is 125-150% faster. Double the cores, higher clocks, over double the memory bandwidth.
Titan RTX (4608 cores, 384-bit GDDR6) vs. GTX 1070 Ti (2432 cores, 256-bit GDDR5): Titan RTX is 90-115% faster. Not quite double the cores, but nearly triple the bandwidth.
RTX 2080 Ti (4352 cores, 352-bit GDDR6) vs. RTX 2060 Super (2176 cores, 256-bit GDDR6): 2080 Ti is 55-65% faster. Double the cores, but bandwidth is only 38% higher.
RTX 3080 (8704 cores, 320-bit GDDR6X) vs. RTX 2080 Ti (4352 cores, 352-bit GDDR6): 31% faster at 4K. Double the cores, roughly the same INT32 performance, and 23% more bandwidth.

Basically, if you want to double performance (or close to it), you have to double everything -- not just the core counts. We don't know for sure what AMD is doing with the RDNA 2 architecture, and it's possible AMD will have separate INT and FP datapaths to boost performance (similar to Turing vs. Pascal). Still, assuming higher clocks, it looks like AMD is basically getting scaling similar to 2080 Ti vs. 2060 Super on the cores and RAM, and then the higher GPU clocks add maybe another 20%. That would be consistent with a 5120 shader core implementation. I think anyone who really believes AMD showed RX 6800 XT numbers will be in for a rude awakening at the end of the month.
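To put rough numbers on that, here's a back-of-the-envelope sketch using the core counts and speedup ranges quoted above. Fitting a single "scaling exponent" per pair is purely illustrative, not how any of these GPUs actually work:

```python
import math

# (comparison, core ratio, observed speedup range) from the data points above
pairs = [
    ("2070 Super vs 1060 6GB", 2560 / 1280, (2.25, 2.50)),
    ("Titan RTX vs 1070 Ti",   4608 / 2432, (1.90, 2.15)),
    ("2080 Ti vs 2060 Super",  4352 / 2176, (1.55, 1.65)),
]

for name, cores, (lo, hi) in pairs:
    # Implied exponent e in: speedup = core_ratio ** e
    e_lo = math.log(lo) / math.log(cores)
    e_hi = math.log(hi) / math.log(cores)
    print(f"{name}: {cores:.2f}x cores -> exponent {e_lo:.2f}-{e_hi:.2f}")

# Apply the weakest case (2080 Ti vs 2060 Super, the most bandwidth-starved,
# much like a 256-bit Big Navi would be) to a doubled 5120-core part, plus
# roughly 20% from higher clocks:
for speedup in (1.55, 1.65):
    print(f"2x cores at that scaling, +20% clocks: ~{speedup * 1.2:.2f}x the 5700 XT")
```

That lands at roughly 1.85-2.0x the 5700 XT, which is why an 80% claim lines up with a 5120-core part rather than something smaller.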

To be clear: Matching RTX 3080 is a huge deal for AMD. Last generation, at launch the RTX 2080 was 30-40% faster than AMD's best GPU of the time (Vega 64). And it used less power. When AMD pushed out the Radeon VII, the 2080 was still 5-10% faster on average. With the RX 5700 XT, AMD improved performance per watt but the 2080 was still 10-15% faster. And by that time, the 2080 Super added another 10% to the Nvidia lead. So AMD had to compete on price.

This time, AMD appears to be able to come close to the RTX 3080 -- win some benchmarks, lose some others, probably using less power. That's an excellent improvement over the previous AMD GPUs. It would put AMD in the Zen 2 vs. Coffee Lake Refresh category as opposed to Zen 1 vs. Coffee Lake.
 
Even if this is the 6800 XT, I've no reason to believe a "larger" 6900 XT is around the corner. Assuming that AMD is keeping the 64 shader count per CU, the supposed specification of 5120 shaders puts it at 80 CUs, which is pretty huge for AMD (the highest they've done is 64). There would need to be a significantly higher count on top of this to actually make a difference. And even if there's a significant uptick in CU count, it'll likely just be the same situation as the 3080 is with the 3090: not much of a performance boost for the price difference you have to pay.

In the scenario where they showed the 6800 XT and there's going to be a 6900 XT, then I believe the 6900 XT will just be binned 6800 XTs that can be clocked higher and will sit between the 3080 and 3090 (though closer to the 3080) as far as the price/perf ranking goes.
 

BILL1957

Commendable
Sep 8, 2020
59
17
1,535
TSMC cannot make enough of these to fulfill the demand! It's just impossible. Just look how good the new Zen 3 is... AMD is going to sell tons of those, and there will still be more demand than TSMC can ever produce. This Big Navi will be a halo product with much bigger demand than there is production! I expect most people will have to wait until 2021 to get their Zen 3. The same is true for Big Navi. TSMC 7nm is a really good node, but the demand will be so huge that TSMC would have to cancel all its other business partners and it still wouldn't be enough... and TSMC is making other things besides AMD's chips this year and next.

Exact reason I believe that Nvidia went with the Samsung option.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,292
824
20,060
Exact reason I believe that Nvidia went with the Samsung option.
I believe the real reason Jensen Huang & nVIDIA went with Samsung, from what I've heard, is that Jensen wanted a discount on 7nm wafers from TSMC and used Samsung's foundry as a bargaining chip to leverage against TSMC. TSMC wasn't having any of that and told Jensen to go to Samsung. TSMC has more demand than supply, and it's more than happy to send Jensen Huang away to Samsung given the acrimonious history between nVIDIA and TSMC. TSMC still remembers all the times in the past that Jensen Huang has publicly badmouthed and blamed TSMC. Despite nVIDIA being an OG partner of TSMC, the history between the two companies isn't rosy at all. So there was no real reason to keep Jensen Huang and nVIDIA on TSMC.

And the rest is history.
 
Obviously it would have to change if AMD has a far lower cost card that can come close to Nvidia's $700 card. But be real: Not. A. Chance. This is the top model RX 6000. It's 80% faster than RX 5700 XT. I just can't see that happening with anything less than the 5120 shader core model. Especially since you don't get perfect scaling of performance -- the RTX 2080 Ti for example has twice as many cores as the RTX 2060 Super, and yet it's nowhere near twice as fast. (It's 50-70% faster, at 1440p and 4K, depending on the game and settings used.) Considering we expect the 6900 XT to have 16GB of GDDR6, but still on a 256-bit memory bus, even an 80% improvement over the 5700 XT is damn impressive.
I'm not so certain of that, Jarred. We can't forget that AMD still has a pretty good silicon advantage over nVidia (TSMC vs Samsung), so they should have more voltage headroom to play with. Do I think that this makes a difference? I have no idea, but it definitely could.

Here's what's really puzzling though; if it is the 6900 XT, then why wouldn't they just say so? It's what a lot of people are thinking anyway and it's not like it would've been some huge surprise to anyone. I can't see any advantage of AMD not saying what it is if it's the best that they have. Showing the results of the top card while being noticeably "mum" on what it is could actually have a negative result if there's nothing else to show. It would cause rampant speculation (Let's be honest, it's US we're talking about here and that's what enthusiasts do) that would end with a bit of a dejected sigh (not good). If I were AMD and that was ATi's best offering, I would have said so. The numbers are good enough that there was no need to be coy about which card it was.

Now, since I'm at work with nothing to do ATM, I'm going to jump down this rabbit hole and conspiracy theory the crap out of this because I'm bored.

Ok, so AMD claimed that the top Big Navi card would have a 100% performance increase over the 5700 XT. So, they literally doubled the CUs and are using a newer process node. Now, we all know that shaders don't scale perfectly linearly but they DO scale. It is possible that just the doubling of the CUs and the use of a new process node ALONE could cause an 80% increase in performance.

Remember that for the RTX 3090, 20% more shaders meant 15% more performance (in some cases). That's an example of shaders scaling at about 75%. If AMD achieved 75% scaling with its shaders, that would mean that they could have gained 80% just with the doubling of the CUs and using the new process node while still using RDNA 1. Then add the new architecture with the IPC increase and new cache system that ATi came up with and you could be looking at a 20% performance increase on top of that.
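A minimal sketch of that arithmetic; the 75% figure comes from the 3090-vs-3080 example above, while the small node/clock bump is my own assumed number for illustration:

```python
# Linear shader-scaling model: each added shader contributes ~75% of its
# nominal throughput ("20% more shaders meant 15% more performance").
scaling = 0.75
cus_and_node = (1 + scaling * 1.0) * 1.05   # doubled CUs, assumed +5% clocks
print(f"CUs + node alone: ~{cus_and_node:.2f}x the 5700 XT")      # ~1.84x, i.e. ~+80%
print(f"with +20% IPC/cache on top: ~{cus_and_node * 1.2:.2f}x")  # comfortably past 2x
```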

It is not impossible that what AMD showed really was a 6800 XT and that ATi really DID manage a full 100% improvement on the 5700 XT with the 6900 XT. It would also explain why they were being coy about which card they were showing off. Sure, they showed the Vega 64 first and sure, they showed the 5700 XT first, but neither of those cards had Earth-shattering performance and showing anything else would have made them look even worse.

"Well, it's just a thought. Y'all have a good night!"
- Beau of the Fifth Column
 
If this GPU really is as good as these leaks show and the price is even semi-reasonable... TSMC cannot make enough of these to fulfill the demand! It's just impossible. Just look how good the new Zen 3 is... AMD is going to sell tons of those, and there will still be more demand than TSMC can ever produce. This Big Navi will be a halo product with much bigger demand than there is production! I expect most people will have to wait until 2021 to get their Zen 3. The same is true for Big Navi. TSMC 7nm is a really good node, but the demand will be so huge that TSMC would have to cancel all its other business partners and it still wouldn't be enough... and TSMC is making other things besides AMD's chips this year and next.
All of this would seem to be correct, except that AMD publicly stated that they WILL have stock and won't have the same situation as nVidia. If that weren't true, they would've kept their mouths shut, because then, even if people complained that AMD also had a terrible launch, nobody could say they were any worse than nVidia. By saying that it won't happen, AMD has changed that dynamic. Now if it happens, we can call AMD a bunch of liars. If AMD doesn't have stock, they'll have needlessly put themselves behind the 8-ball with that statement, and I don't think that they're that stupid.
Looks good for the 6900, but things we do not know yet are power consumption and VRAM size. If AMD can offer the 6900 with 12+ GB VRAM and less power consumption for $650, then this would be serious competition for the RTX 3080.
Well one thing's for sure, it can't be any worse than Ampere. Ampere makes Fermi look like a firefly being compared to a dragon.
Your charts assume that's the 6900 XT; what if those figures turn out to be from the 6700 XT?

How will your analysis change?
There's no way that it's a 6700, but a 6800 is definitely a possibility.
Not really; the rumored 20GB 3080 Ti is almost a reality. Nvidia is waiting for AMD's move.
And just how much do you think that nVidia will be asking for a 20GB version of the RTX 3080, like $1000? If the RX 6900 XT (which IS 16GB) is priced below the 10GB RTX 3080, a 20GB version would quickly become irrelevant.
Their reveal of the last generation hardware (or current generation until next month?) showed performance numbers for the 5700 XT. Similar story with the Vega 64 reveal.
That is very true, but neither of those cards offered Earth-shattering performance, especially when compared to Pascal and Turing. If they were able to get near-3080 performance with the 6800 XT, it would make sense NOT to show the 6900 XT or XTX (rumoured liquid-cooled version).
I believe the real reason Jensen Huang & nVIDIA went with Samsung, from what I've heard, is that Jensen wanted a discount on 7nm wafers from TSMC and used Samsung's foundry as a bargaining chip to leverage against TSMC. TSMC wasn't having any of that and told Jensen to go to Samsung. TSMC has more demand than supply, and it's more than happy to send Jensen Huang away to Samsung given the acrimonious history between nVIDIA and TSMC. TSMC still remembers all the times in the past that Jensen Huang has publicly badmouthed and blamed TSMC. Despite nVIDIA being an OG partner of TSMC, the history between the two companies isn't rosy at all. So there was no real reason to keep Jensen Huang and nVIDIA on TSMC.

And the rest is history.
Yep, nVidia really burned its bridges with that. I'm sure that TSMC took great delight in telling nVidia how short a pier off of which to take a long walk. It happened ten years ago but smart people don't forget and TSMC is chock-full of smart people. I remember good ol' Charlie's take on it:
 
Here's what's really puzzling though; if it is the 6900 XT, then why wouldn't they just say so? It's what a lot of people are thinking anyway and it's not like it would've been some huge surprise to anyone. I can't see any advantage of AMD not saying what it is if it's the best that they have. Showing the results of the top card while being noticeably "mum" on what it is could actually have a negative result if there's nothing else to show. It would cause rampant speculation (Let's be honest, it's US we're talking about here and that's what enthusiasts do) that would end with a bit of a dejected sigh (not good). If I were AMD and that was ATi's best offering, I would have said so. The numbers are good enough that there was no need to be coy about which card it was.
It's to keep the hype train going.

https://www.youtube.com/watch?v=iBky_XyuetM
 

Turtle Rig

Prominent
BANNED
Jun 23, 2020
772
104
590
After reading all these posts, it looks like Big Navi is a generation behind nVidia, lol. As fast as a 2080 Ti, I just read in a post. I can't wait for the 3090 Ti to show up; then we will see how small Navi is... no pun intended. I'm not a fanboy of any company, but sometimes you must accept second place. AMD tries and tries and I credit them for a great effort, but just like in the days of my Riva TNT card decades ago, nVidia is simply a graphics card company, while AMD is trying to do it all: CPU, APU, and GPU, and that is what messes them up, IMHO.👶🎗💯👽
 
Oh sure, it COULD be but too much hype is a BAD thing and AMD should have learnt this by now. Of course, that doesn't mean that they have.

In any case, the scenario that I described doesn't really sound all that crazy. I have seen that video BTW; I watch GamersNexus religiously, along with Hardware Unboxed, RedGamingTech and AdoredTV. Steve Burke is awesome because he just says what he thinks (and is usually right) and doesn't give a damn what anyone thinks about it.
 

Turtle Rig

Prominent
BANNED
Jun 23, 2020
772
104
590
So, 8% is a generation? You haven't been around tech very long, have you? A difference of 8% isn't even enough to be the difference between tiers of the same GPU family. Posts like this don't do you any favours when it comes to building credibility.

EDIT: I just read your tech specs. DEFINITELY new to tech.
No no no, so sorry, I got you confused. What I meant was: comparing a 3950X vs a 10900K, the 10900K wins big time in gaming. So yes, it's a 19 percent IPC gain, and they need that 19 percent to come closer to the 10900K's performance, but there is no proof it will be faster than a 10900K at 5.2GHz all-core. Sorry about the confusion. Yes, it's 19 percent, but half of that is just to reach the 10900K's level (and actually a little behind). The extra percentage will make it a close call with the 10900K if you compare gaming performance. I don't know if this made sense, but yeah. If it were an 8 percent IPC gain it would barely tie a 10900K, but since it's 19 percent, they get an additional 11 percent to compete with the 10900K and possibly overtake it. We won't know until benchmarks start showing up, and legit ones at that.👽💯✌👶
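For what it's worth, the trade being described is roughly "performance scales with IPC times clock." A tiny sketch, with the clock speeds as illustrative assumptions rather than confirmed figures:

```python
# Rough single-thread model: performance ~ IPC * clock.
# The clocks below are assumptions for illustration, not confirmed specs.
def relative_perf(ipc_gain: float, clock_ghz: float, rival_clock_ghz: float) -> float:
    return (1 + ipc_gain) * clock_ghz / rival_clock_ghz

# Zen 3 at an assumed ~4.9GHz boost vs a 10900K at 5.2GHz all-core:
print(f"+19% IPC: ~{relative_perf(0.19, 4.9, 5.2):.2f}x the 10900K")  # ~1.12x
print(f"+8% IPC:  ~{relative_perf(0.08, 4.9, 5.2):.2f}x the 10900K")  # ~1.02x, barely a tie
```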
 
Oh sure, it COULD be but too much hype is a BAD thing and AMD should have learnt this by now. Of course, that doesn't mean that they have.

In any case, the scenario that I described doesn't really sound all that crazy. I have seen that video BTW, Steve Burke is awesome because he just says what he thinks (and is usually right) and doesn't give a damn what anyone thinks about it.
I'm not denying the possibility that the figures they showed are for a 6800 XT either, but I'm in the same boat as Steve when it comes to RTG's marketing. They really need to stop acting like a bunch of dudebros. So I treat anything the RTG marketing team puts out as if they're just trying to hype things up, until they appear to have gotten their act together.
 

mihen

Honorable
Oct 11, 2017
466
54
10,890
With a 256-bit bus, I imagine AMD will use a power-of-2 VRAM amount. So 8 or 16 GB.
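That follows from how GDDR6 is wired up; a quick sketch (32 bits of bus per chip, with 1GB and 2GB being the common chip densities):

```python
# A 256-bit bus is built from 32-bit channels, one GDDR6 chip each.
BUS_WIDTH_BITS = 256
CHIPS = BUS_WIDTH_BITS // 32            # 8 chips
for chip_gb in (1, 2):                  # common GDDR6 densities
    print(f"{CHIPS} x {chip_gb}GB chips = {CHIPS * chip_gb}GB total")
# -> 8GB or 16GB; anything else means an asymmetric or cut-down config
```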
If console ports translated to gains in PC performance, then it would have shown itself last generation. Most game studios redo the rendering pipeline for PC because nVidia pays them to do so, which usually means that AMD cards suffer without their own rendering pipeline. In games with limited changes from console to PC, AMD cards have smoked nVidia, but these tend to be limited to Microsoft Game Studios titles.
 
I'm not denying the possibility that the figures they showed are for a 6800 XT either, but I'm in the same boat as Steve when it comes to RTG's marketing. They really need to stop acting like a bunch of dudebros. So I treat anything the RTG marketing team puts out as if they're just trying to hype things up, until they appear to have gotten their act together.
On that we can agree 100%. I have no idea how AMD can have actually good marketers on the CPU side but their GPU marketers come across as having only finished grade school. Putting the render of the card in Fortnite really made me facepalm. It's like watching the old Keystone Cops screwing things up left and right. Hopefully, Lisa brought them into her office and told them to lay off of their meth pipes because when marketing becomes completely nonsensical, it's time to clean house. I'll never forget how infantile their marketing was for Vega and how I had hoped that Lisa fired the lot of them.

I would've expected it to be common sense that if you have a product that isn't competitive, generating more hype only makes things worse.
 

BeedooX

Reputable
Apr 27, 2020
71
53
4,620
...You usually put your best foot forward when giving benchmarks. 8% slower on two of three is discouraging.
You categorically do NOT put your best foot forward during a 'tease' for a product which is not related to the event in which it is teased. Especially given the teased product has an official event that's happening in 2-3 weeks time.

To tease a product that will fall short of the competition puts a dampener on the success of the main event - i.e. the CPUs look great, but the positive vibe is dampened by an underwhelming GPU. AMD would have been better off showing nothing at all!

You must be aware of the mind games currently being played out by the various manufacturers at the moment. If Lisa Su just openly showed off AMD's best GPU offering - and that's all she's got - then she just showed herself to be a real amateur, and there's nothing left to show for the official GPU launch later this month.
 

nofanneeded

Respectable
Sep 29, 2019
1,541
251
2,090
And just how much do you think that nVidia will be asking for a 20GB version of the RTX 3080, like $1000? If the RX 6900 XT (which IS 16GB) is priced below the 10GB RTX 3080, a 20GB version would quickly become irrelevant.

We don't know the RX 6900 XT price yet, and we don't know the performance of the RTX 3080 "Ti" at all.
 
You categorically do NOT put your best foot forward during a 'tease' for a product which is not related to the event in which it is teased. Especially given the teased product has an official event that's happening in 2-3 weeks time.

To tease a product that will fall short of the competition puts a dampener on the success of the main event - i.e. the CPUs look great, but the positive vibe is dampened by an underwhelming GPU. AMD would have been better off showing nothing at all!

You must be aware of the mind games currently being played out by the various manufacturers at the moment. If Lisa Su just openly showed off AMD's best GPU offering - and that's all she's got - then she just showed herself to be a real amateur, and there's nothing left to show for the official GPU launch later this month.

Armchair amateur? Interesting. So you think engineering and running a company like this is easy? And if you make mistakes you are somehow an amateur?

RTG's past has been littered with overhype, which led to disappointment.

Fury lacked memory and was too expensive.

GCN was slower than Nvidia's top 3 cards. It also ate power.

Vega had teething issues with cooling, was slower than a 1080, and was way late to the game.

I'm not trying to slam AMD here. I owned a 7970, a 580, and a 5700 XT. But I have seen the bad taste these launches have left in people's mouths.

AMD has been silent thus far. I think they are learning from past mistakes in terms of marketing.

But hype was slowly brewing that Big Navi would match a 3080. I thought this was possible also.

That said, this may be a controlled "realistic expectations" move to prevent another letdown. And it's also a teaser as they polish the final numbers. But I wouldn't expect miracles. It's more about stability at this point.

Releasing information early doesn't really affect Nvidia that much. The 3080 and 3090 are out, and the 3070 is set to be released after Big Navi, so Nvidia can adjust plans whether an announcement is made now or on the 28th. It really doesn't matter there.

You also have to remember they showed these numbers based off the Ryzen 5000 series. That's faster than Intel's best. That also means Nvidia might get a boost too, albeit a very small one at 4K.

Either way, this is conjecture until cards are in reviewers' hands. But from these numbers, I'm guessing the 3080 will be a tad faster.

That said, Paul at RGT thinks the lack of bandwidth at 4K is what is holding the 3080 back. I'm not so sure this is the case, based on how GPUs break down renders into small chunks for each CU. If anything, a large onboard cache benefits more from a large scene. But that's getting a little too deep into the technical details of how GPUs work.
 
Here's what's really puzzling though; if it is the 6900 XT, then why wouldn't they just say so? It's what a lot of people are thinking anyway and it's not like it would've been some huge surprise to anyone. I can't see any advantage of AMD not saying what it is if it's the best that they have. Showing the results of the top card while being noticeably "mum" on what it is could actually have a negative result if there's nothing else to show. It would cause rampant speculation (Let's be honest, it's US we're talking about here and that's what enthusiasts do) that would end with a bit of a dejected sigh (not good). If I were AMD and that was ATi's best offering, I would have said so. The numbers are good enough that there was no need to be coy about which card it was.
The biggest reason: Keeping the final (official) name a secret, so that AMD doesn't spoil the Oct. 28 reveal. It seems silly, but I've seen so many "silly" shenanigans from tech companies over the years that it's the most likely case.

Hypothetically, AMD could be showing RX 6800 XT numbers and sandbagging for the Oct. 28 reveal. But it would have to have some absolutely amazing gains to be able to do that -- gains that we have probably never seen from AMD in the past. Double the cores, same memory bandwidth (maybe 16Gbps gives 14% more bandwidth), double the memory capacity. It's that second one that will be a massive sticking point. Unless AMD has another ace up its sleeve and it will actually use GDDR6X or GDDR6 clocked at 20Gbps?

But I'd be very surprised if AMD used GDDR6X -- just like GDDR5X was only ever used by Nvidia on the 1080 and 1080 Ti, I suspect GDDR6X will be an Nvidia-Micron exclusive. Not because Micron wouldn't sell to other companies, but just that no one else is willing to pay for it.

Even massive improvements in caching won't fully negate the need for more memory bandwidth. The RTX 3080 has 760GBps of bandwidth, and based on what we've heard, RX 6900 XT will only have 512GBps ... and yet AMD is coming close to (or even matching) the 3080 performance, albeit in an AMD-promoted game. Caching and other architectural enhancements could allow AMD to achieve that, sure. But if you then cut the memory interface down to 192-bit, even at 20Gbps it would still only be 480GBps, and at the more likely 16Gbps it would be 384GBps. That's basically half the bandwidth of the RTX 3080.
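All of those figures fall out of one formula: bus width (bits, divided by 8 for bytes) times the per-pin data rate. A quick sketch with the numbers above (19Gbps is the 3080's published GDDR6X rate):

```python
def bandwidth_gb_s(bus_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(320, 19))   # RTX 3080, GDDR6X:       760 GB/s
print(bandwidth_gb_s(256, 16))   # rumored RX 6900 XT:     512 GB/s
print(bandwidth_gb_s(192, 20))   # 192-bit at 20Gbps:      480 GB/s
print(bandwidth_gb_s(192, 16))   # 192-bit at 16Gbps:      384 GB/s
```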

Traditionally (like, for the past decade or more), Nvidia has had superior memory controllers and made apparently better use of available bandwidth than AMD. Big Navi would need to actually flip the tables and have AMD delivering more effective usable bandwidth relative to Ampere for the 6800 XT to match the 3080. Plus, credible rumors put the 6800 XT at 60 CUs and 3840 shader cores, which is also less than half of the FP32 performance of the 3080. AMD might have concurrent FP + INT paths, but that's still not enough to match 8704 FP32 + 4352 INT32 cores.

I could see maybe one of the above things panning out -- like AMD improving the FP32 and INT32 performance, or AMD improving usable bandwidth, or AMD having faster memory than expected. But for all three to be true, which is what would be required for the mystery RX 6000 card to be close to the 3080? That's just too much wishful thinking in my book. I'll be happy if I'm wrong, as it will mean RX 6900 XT could take on RTX 3090 in traditional gaming performance (still skeptical of ray tracing performance). That would be amazing! And it would also mean Nvidia would be in second place for a bunch of gaming tests for the first time in nearly a decade.
 
The biggest reason: Keeping the final (official) name a secret, so that AMD doesn't spoil the Oct. 28 reveal. It seems silly, but I've seen so many "silly" shenanigans from tech companies over the years that it's the most likely case.

Hypothetically, AMD could be showing RX 6800 XT numbers and sandbagging for the Oct. 28 reveal. But it would have to have some absolutely amazing gains to be able to do that -- gains that we have probably never seen from AMD in the past. Double the cores, same memory bandwidth (maybe 16Gbps gives 14% more bandwidth), double the memory capacity. It's that second one that will be a massive sticking point. Unless AMD has another ace up its sleeve and it will actually use GDDR6X or GDDR6 clocked at 20Gbps?

But I'd be very surprised if AMD used GDDR6X -- just like GDDR5X was only ever used by Nvidia on the 1080 and 1080 Ti, I suspect GDDR6X will be an Nvidia-Micron exclusive. Not because Micron wouldn't sell to other companies, but just that no one else is willing to pay for it.

Even massive improvements in caching won't fully negate the need for more memory bandwidth. The RTX 3080 has 760GBps of bandwidth, and based on what we've heard, RX 6900 XT will only have 512GBps ... and yet AMD is coming close to (or even matching) the 3080 performance, albeit in an AMD-promoted game. Caching and other architectural enhancements could allow AMD to achieve that, sure. But if you then cut the memory interface down to 192-bit, even at 20Gbps it would still only be 480GBps, and at the more likely 16Gbps it would be 384GBps. That's basically half the bandwidth of the RTX 3080.

Traditionally (like, for the past decade or more), Nvidia has had superior memory controllers and made apparently better use of available bandwidth than AMD. Big Navi would need to actually flip the tables and have AMD delivering more effective usable bandwidth relative to Ampere for the 6800 XT to match the 3080. Plus, credible rumors put the 6800 XT at 60 CUs and 3840 shader cores, which is also less than half of the FP32 performance of the 3080. AMD might have concurrent FP + INT paths, but that's still not enough to match 8704 FP32 + 4352 INT32 cores.

I could see maybe one of the above things panning out -- like AMD improving the FP32 and INT32 performance, or AMD improving usable bandwidth, or AMD having faster memory than expected. But for all three to be true, which is what would be required for the mystery RX 6000 card to be close to the 3080? That's just too much wishful thinking in my book. I'll be happy if I'm wrong, as it will mean RX 6900 XT could take on RTX 3090 in traditional gaming performance (still skeptical of ray tracing performance). That would be amazing! And it would also mean Nvidia would be in second place for a bunch of gaming tests for the first time in nearly a decade.
You could be right, I won't say that I know for sure but... I still don't buy it. Teasing numbers like that for the RX 6900 XT(X) would only serve to spoil the launch on the 28th. AMD knows that tech enthusiasts fall into two categories. The first category is the one that doesn't look at the big picture and it is by far the larger category. These are users that are either too new or don't have the necessary neuronic (if that's a word) processing power to see the patterns of the industry over time. These are the people who make assumptions based solely on what is shown to them. They don't try to "read between the lines" because they don't even know what "between the lines" are or could be.

Then there's category #2. Category #2 includes people like you, me, Jim, Linus, Steve Burke, Steve Walton, Paul('s Hardware), Kyle Bitwit, Paul RGT and everyone else who has watched the industry change and evolve over time. This usually requires AT LEAST fifteen years and is often based on when someone did their first build (in my case, 1988). We've seen every trick in the book already. We recognised that the nVidia RTX 30 series was actually one of their worst launches ever and (most of us) take one look at the way the new RX 6000 tease was handled, listen to the language used and immediately know that something else is out there. The internet has become abuzz with the exact theory that I postulated in my previous response to you.

I want to say it now (before someone else does), that after the RDNA 2 launch, we'll look back on the month we're in right now and call it "Red October".
 
Another thing to consider is that Nvidia undoubtedly has informants inside AMD, giving them an idea of what their competitor is planning. That's likely why they decided to price the 3070 and 3080 at a level they couldn't adequately supply, rather than marketing those cards as something like a 3080 and 3080 Ti at higher price points, and making more money off of them.

Of course, if that's the case, then their uncompetitive pricing of the 3090 should be telling, as it's indicative that they don't feel AMD will have a card quite at that performance level. And considering the 3090 typically only manages to be 10-15% faster than the 3080 at 4K, we can kind of surmise that they don't expect AMD to have a card much faster than the 3080. Performance right behind a 3080 seems right in line with that. And if I had to guess, that's also why they needed to push the 3080 and 3090 up to 320 and 350 watt TDPs with abnormally large coolers. Perhaps they originally planned the 3080 to have about a 250 watt TDP, and the 3090 to be around 300 watts when they were designing them, but after catching wind of AMD's plans, felt they needed to push the clock rates higher to keep AMD from outperforming the 3080.

So, Nvidia's actions surrounding their Ampere cards give me the impression that these performance numbers probably are for AMD's top-end card. Or a card right near the top end, at the very least. I suppose it's possible they could have a slightly faster card, but even if that were the case, it would probably only manage to be marginally faster than a 3080, seeing as Nvidia doesn't appear to be concerned about making the 3090's pricing competitive with it.

After reading all these posts, it looks like Big Navi is a generation behind nVidia, lol. As fast as a 2080 Ti, I just read in a post. I can't wait for the 3090 Ti to show up; then we will see how small Navi is... no pun intended. I'm not a fanboy of any company, but sometimes you must accept second place. AMD tries and tries and I credit them for a great effort, but just like in the days of my Riva TNT card decades ago, nVidia is simply a graphics card company, while AMD is trying to do it all: CPU, APU, and GPU, and that is what messes them up, IMHO.👶🎗💯👽
Did you actually read the article? Evidence is suggesting performance not far behind a 3080, while the 3070 is expected to perform close to a 2080 Ti. As for the prospects of a "3090 Ti", what would it offer? The 3090 is only around 10-15% faster than a 3080 while having 24GB of VRAM and costing $1,500+. With the 3090 already pushing its graphics chip to its limits, there isn't much room to go up from there. Likewise, with only a 10-15% performance difference between the 3080 and 3090, there isn't much room for something like a 3080 Ti either, aside from maybe adding more VRAM and perhaps pushing performance slightly higher.

In any case, the performance compared to $1000+ cards doesn't actually tend to matter all that much to the vast majority of people. What matters is the performance at the price points that are common for people to actually buy, and it seems like AMD should be covering that range rather thoroughly, even if we don't know what pricing will be like quite yet.
 

Olle P

Distinguished
Apr 7, 2010
720
61
19,090
  1. There's not much Nvidia can do at this point. ...
  2. Final clocks might not be dialed in, but at this stage AMD is going to be within a few percent. Also, AMD isn't going to screw things up with last-minute clock adjustments like it did with RX 5600 XT.
To be clear: Matching RTX 3080 is a huge deal for AMD.
1. Nvidia seems to be rushing in the 20GB RTX 3080 to meet the threat, and there might be some price adjustments.

2. We're still weeks away from the official presentation, and then more weeks before the review embargo lifts and sales start. There's still time to optimize clocks/voltage (aiming to get the best possible reviews), and still more time left than in the RX 5600 XT debacle.
The "few percent" might be the difference between being beaten by the RTX 3080 on average (bad for AMD) and beating the RTX 3090 in a few benchmarks (great for AMD). At the same time, it's also the difference between great and lousy power efficiency, which can't be neglected given that efficiency has been a major argument in the marketing thus far.

The rumors I've heard are in line with:
AMD's development objective: Beat the RTX 3070.
Initial test results: Roughly on par with RTX 3080.
With some tuning: Closing in on RTX 3090.

This is why I think that what was shown was at most an RX 6900 XT at the lowest clock speed considered. I'd be very surprised if there's not an increase in performance between what was shown and what will be presented in two weeks.
 
Of course, if that's the case, then their uncompetitive pricing of the 3090 should be telling, as it's indicative that they don't feel AMD will have a card quite at that performance level. And considering the 3090 typically only manages to be 10-15% faster than the 3080 at 4K, we can kind of surmise that they don't expect AMD to have a card much faster than the 3080.
Under normal circumstances, I would agree 100% but having worked at Tiger Direct taught me something that I found astonishing but also true. There are some people who will pay whatever nVidia wants for their top card no matter what it is.

I sometimes wonder if nVidia's pricing of the RTX 2080 Ti wasn't a social experiment to see just how much people were willing to bend over just to get the "best". If people were willing to pay $1200 for the RTX 2080 Ti, then they'd be willing to pay $1500 for the RTX 3090. NEVER underestimate the stupidity of the average human.
1. Nvidia seems to be rushing in the 20GB RTX 3080 to meet the threat, and there might be some price adjustments.

2. We're still weeks away from the official presentation, and then more weeks before the review embargo lifts and sales start. There's still time to optimize clocks/voltage (aiming to get the best possible reviews), and still more time left than in the RX 5600 XT debacle.
The "few percent" might be the difference between being beaten by the RTX 3080 on average (bad for AMD) and beating the RTX 3090 in a few benchmarks (great for AMD). At the same time, it's also the difference between great and lousy power efficiency, which can't be neglected given that efficiency has been a major argument in the marketing thus far.

The rumors I've heard are in line with:
AMD's development objective: Beat the RTX 3070.
Initial test results: Roughly on par with RTX 3080.
With some tuning: Closing in on RTX 3090.

This is why I think that what was shown was at most an RX 6900 XT at the lowest clock speed considered. I'd be very surprised if there's not an increase in performance between what was shown and what will be presented in two weeks.
I'm not sure that AMD did want to just beat the RTX 3070, because that would be the same as targeting nVidia's last-generation flagship, which would've made ATi look really weak. I believe that the difference between this launch and previous launches is BUDGET. In the past, AMD was so cash-strapped that ATi was forced to have only one architecture for everything instead of the two that they had previously (Radeon and FirePro).

That unified architecture was GCN, and it's why Radeons were strangely better than GeForce cards at GPUCompute and worse at gaming, while RadeonPro cards were worse than Quadro cards at GPUCompute and better at gaming (bass-ackwards). This is what caused the massive shortage and jacked-up prices of Polaris cards during the mining craze. This is why the vast majority of Steam users have a GTX 1060. This is also why the RX 5700 XT was good, but not great. Don't get me wrong, GCN was a good architecture and the engineers at ATi performed some miracles with it, but it was in production for far too long.

Now, with RDNA 2 and CDNA 2, ATi was able to separate the architectures and optimise them for different tasks, RNDA for gaming and CDNA for GPUCompute. I truly believe that this is the reason for ATi's fall from grace after they were on top of the world with the Evergreen HD 5xxx series. The Northern Islands HD 6xxx series was a decent refresh of the HD 5xxx series (the HD 6850 is still known as one of the best video card values in history along with the HD 4870) but a lot of the lower numbers were just re-brands (HD 6450 was just an actively-cooled HD 5450).

Then, with the Southern Islands HD 7000 series, GCN was born. Since it was a brand-new architecture, it still had some upside. The first total victory for GCN came in the form of Tahiti XT, the GPU that powered the HD 7970. Then ATi hit a grand slam with the stunning Hawaii XT-powered R9 290X and the Hawaii Pro-powered R9 290: not only the fastest card in the world, but also another video card value Hall-of-Famer, respectively.

Unfortunately, the HD 7970 and R9 290X were as good as it was going to get for GCN with regard to competition with nVidia. With Fiji XT and Fiji Pro (Fury series), GCN did a swan-song of sorts and while the performance was great, the power draw showed just how tapped out that iteration of GCN was. Polaris and Vega would follow but neither would challenge nVidia because GCN was way past its "best before date" by then. I believe that having some elements of GCN in RDNA 1 greatly hampered its performance.

Now, without being hampered by anything from GCN, RDNA 2 has what it takes to make a great gaming card, and I think that all we saw as a teaser was the RX 6800 XT, maybe with 60 CUs instead of 80. I don't think that it's 72 like some people say, because there would be little point in having two SKUs that are only 8 CUs apart at that level. Having the 6700 at 40 CUs, the 6800 at 60 CUs and the 6900 at 80 CUs would make the most sense from a market segmentation standpoint. Other than having obvious performance differences, there would be room in the middle to grow and react the way nVidia does with their Ti and Super cards. This would leave room for XTX variants.
 

Olle P

Distinguished
Apr 7, 2010
720
61
19,090
... There are some people who will pay whatever nVidia wants for their top card no matter what it is.
... NEVER underestimate the stupidity of the average human.
The willingness to pay was something that also surprised Christian von Koenigsegg (founder of the car maker Koenigsegg) when he began his career.
He was almost embarrassed to charge a reasonable (based on the actual production cost) $350,000 for his first car model. Now he can charge five times as much and still sell out in no time. One customer was willing to pay an additional $1.5M to have a special aerodynamic feature developed for one single car.

I'm not sure that AMD did want to just beat the RTX 3070...
Perhaps a matter of semantics...
I think that
  1. AMD expected Ampere to be even better performing.
  2. AMD underestimated their own abilities.
So beating just the RTX 3070 wasn't what they really wanted, but what they saw as a realistic goal.
As Jarred pointed out:
The 80% performance increase they've achieved is almost crazy, and thus probably beyond their own expectations (at the start of the project)!
 