News AMD's Big Navi Benchmarks: 4K and 1440p Numbers for Radeon RX 6900 XT, 6800 XT, 6800


hannibal

Distinguished
So maybe 10% more performance is not that big of a deal? Folks are still willing to put up with crazy high CPU power usage from Intel for maybe an even smaller performance gain over AMD.

On average it seems more like 2-3% more speed. Double digits seem to be quite exceptional...
But 3% is 3%! Why not use it if it is available?
I don't personally care which GPU is the fastest. What I really care about is that there is competition, and that is what we get this time!
Good for customers, good for me!
 
I look at it this way:

If you do NOT have a 500-series chipset with a 5000-series CPU, it is at best a draw value-wise at 4K. RX 6000 ties at rasterization, and loses horribly at ray tracing. Super Resolution is inferior.

AMD is no longer the value leader in CPUs either. They charge a premium over Intel. $550 for a 5900X and $175 for a new B550 board is a lot of scratch to get those extra frames. And I have to wonder why Ryzen 3000 chips don't support this? They are PCIe 4.0 compliant and have the same memory I/O chip.

Sorry AMD. Lady Loyalty can be a real bitter mistress when value leaves town.
 

bwohl

Distinguished
Haters will be haters... and yes, I'm an Intel hater. :)
I bought AMD stock 20 years ago and have watched it languish in the IT cellar due to Intel's unscrupulous business practices.
I thought it was a mistake for AMD to sell/split off their foundries - I can now admit it was a brilliant decision - if TSMC and GlobalFoundries can keep up with AMD's orders.
Intel is stuck on its current node in its current foundries.

Datacenter, consoles, graphics, CPUs - this bunch of engineers deserves every dime coming their way...
 

VforV

Respectable
BANNED
Why are so many people so rigid? What's with all this blind loyalty or hating crap?

I don't get this type of thinking. Even though I now own an AMD CPU and I do like the new Radeon 6000 series, I'm still considering the best perf/$ GPU from both AMD and Nvidia when I buy at the end of the year.
And again in 1 or 2 years when I upgrade my CPU, I'll do the same: whichever product from whichever company gives me the best perf/$ gets my money, AMD or Intel.

Availability is also very important lately, so even if I would prefer one product, if it's not in stock and I would have to wait another 3-4 months to get it, I will probably not wait that long and will buy what's available now from the competition.

I may like one company over the other at any given time, but that can change fast with one great new product. I don't hate the opposition, nor am I blindly loyal to anyone.
 
Sep 13, 2020
Unless non-AMD benchmarks can verify those high numbers, and unless we get believable power consumption, cooling, and noise stats, I do not believe AMD a single thing, much the same as I didn't believe team green without neutral tests. For the moment it seems that AMD did indeed cheat, insofar as they presented benchmarks without ray tracing. Second, no real power consumption charts or heat maps were presented, though, to be fair, NVIDIA didn't either. But back then everyone said "take the results with a grain of salt", while now most people hype AMD for no reason. I own a Ryzen 3900X and it does not perform the way it was hyped. It is a powerful CPU, but it is not as AMD advertised it.
 

EatMyPizza

Distinguished
Smart Access Memory. "It shouldn't be considered cheating, though, since chipmakers have the liberty to develop new technologies to give them an edge over their competitors." It isn't cheating, but it is unfortunate that it will only work on an all-AMD system. If Smart Access Memory significantly lifts performance of the new RX 6000 cards over the base performance of the same card but can't be used in an Intel system, it will look bad for AMD and push some buyers toward Nvidia. I am waiting for supply to catch up to demand to buy a new GPU, and if I'm leaving 5-10% of my performance on the table because I have an Intel system, then I will buy Nvidia. Otherwise, if RX 6000 is close to RTX 3000, it will come down to cost/performance/efficiency for which card I buy, RX 6800 XT or RTX 3080.

Are these new benchmarks with Smart Access Memory + Rage Mode on? Even if they are, it seems it's usually less than a 5% boost anyway, so the numbers will still be good, but a few of those close wins would be losses on a different system.
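To put rough, made-up numbers on that last point (purely hypothetical, not AMD's figures):

```python
# Hypothetical numbers purely to illustrate the point, not AMD's figures.
fps_with_sam = 100.0    # assumed RX 6800 XT result with Smart Access Memory on
sam_uplift = 0.05       # assume SAM itself is worth ~5%
fps_rtx_3080 = 97.0     # assumed competing result in the same test

fps_without_sam = fps_with_sam / (1 + sam_uplift)
print(f"Without SAM: {fps_without_sam:.1f} FPS vs {fps_rtx_3080:.1f} FPS")
# A ~3% "win" with SAM on becomes a ~2% loss on a system that can't use it.
```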
 
For the moment it seems that AMD did indeed cheat, in such as presenting benchmarks without ray tracing.
One thing to consider is that the current RT-enabled games didn't come out until well after Nvidia launched the RTX 20-series. They've been optimized specifically for Nvidia's implementation of RT, since no RDNA2 cards were available for the developers to test with. It's very possible that AMD's implementation of RT could be faster at some things and slower at others, and existing games might not be optimized to take advantage of the areas where the cards perform well. So, even if they did provide RT performance numbers in some existing games, it wouldn't mean much, unless the developers had time to go through and optimize those existing games for the specific performance characteristics of AMD's new hardware.

It's also possible that at least some of these games may have implemented RT in a way that uses proprietary Nvidia libraries, since Nvidia worked closely with the developers to add these effects, so it might not be possible to enable RT effects in those titles, at least unless they get an update following the release of the RX 6000 series.

So, I can see why AMD didn't give too many details about RT, aside from mentioning some games in development being built with their hardware in mind. If I had to guess, the RT performance in existing titles probably isn't going to be at the same level, at least at launch, otherwise they would have given some performance numbers, but that could certainly improve in the future as developers get their hands on the new hardware. Even once independent reviews come out, it might be a bit vague how the performance of AMD's RT implementation will compare in the long-term, at least until we start seeing games optimized for it.

Taking advantage of being a CPU as well as a GPU manufacturer is just good sense, it's shocking they've taken so long to do so. It's only logical that systems built around their hardware exclusively should provide some benefit. Maybe Intel and Nvidia will join forces to combat a common enemy and make similar attempts.
AMD did launch "Hybrid CrossFire" around 10 years ago, which allowed some of their graphics cards to be combined with the integrated graphics in some of their APUs to improve performance. In practice, I don't think it worked that well though, and it was mostly just beneficial to some low-end cards, since the integrated graphics weren't exactly fast enough to notably improve the performance of better cards, while in some cases causing performance anomalies due to the asymmetrical nature of the multi-GPU setup.

As for Intel and Nvidia teaming up, why would they do that? Intel has their own line of dedicated graphics cards coming out sometime next year that will be competing directly with Nvidia, and Nvidia is in the process of buying ARM, a processor company that competes with Intel in some markets, even if they don't make x86 processors.

No, SAM improves communication between the CPU and the GPU by giving the CPU access to the full frame buffer on the GPU. The purpose of RTX I/O is to bypass the CPU and allow the GPU to pull compressed data straight from a storage device and decompress it itself, reducing work for the CPU. They are two different technologies addressing two different bottlenecks.
And of course, RTX I/O is just Nvidia's implementation of Microsoft DirectStorage, something the Radeon RX 6000 series supports as well. It's anyone's guess how the DirectStorage performance might compare between cards though, or if such performance differences even matter, since no games support it yet, and apparently won't until at least sometime next year.
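If it helps, here's a tiny back-of-the-envelope model of why those are different bottlenecks; every figure in it is an assumption for illustration, not a measurement from Nvidia, AMD, or Microsoft:

```python
# Every number here is an assumption for illustration, not a measurement.
asset_compressed_gb = 2.0    # compressed asset data streamed from the SSD
compression_ratio = 2.0      # assume 2:1, so 4 GB once decompressed
cpu_decompress_gbps = 1.5    # assumed CPU decompression throughput
gpu_decompress_gbps = 20.0   # assumed GPU decompression throughput

# Traditional path: SSD -> RAM -> CPU decompresses -> raw data copied to the GPU.
cpu_seconds = asset_compressed_gb / cpu_decompress_gbps
cpu_path_pcie_gb = asset_compressed_gb * compression_ratio

# DirectStorage / RTX I/O style path: SSD -> GPU (still compressed) -> GPU decompresses.
gpu_seconds = asset_compressed_gb / gpu_decompress_gbps
gpu_path_pcie_gb = asset_compressed_gb

print(f"CPU path: {cpu_seconds:.2f}s of decompression, {cpu_path_pcie_gb:.1f} GB over the bus")
print(f"GPU path: {gpu_seconds:.2f}s of decompression, {gpu_path_pcie_gb:.1f} GB over the bus")
```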

If they really want to put some FUD into NVidia's release, just lift the NDA early for cards sent to reviewers!
I doubt reviewers even have cards yet, seeing as the release is still a few weeks away.

Are these new benchmarks with smart access memory + rage mode on?
This is just with Smart Access Memory enabled, not the auto-overclocking feature.
 
If you look at the AMD footnotes on the slides, you will see AMD's RT performance in Microsoft's ray tracing test.

The performance roughly equals that of a 3070, which is about 40% behind the 3080.

With games like Watch Dogs: Legion, Cyberpunk 2077, and Control, and no DLSS equivalent available, I would mark the 6000 series as a hold and wait.
 

Johnpombrio

Distinguished
For both NVidia and AMD, these fast, expensive graphics cards, the battle between Titans so to speak, are more for bragging rights than for making profits. The vast majority of discrete GPU card sales are for the low-end cards, say $300 or less. The same goes for CPU sales for AMD and Intel. Another point. The number of laptops being sold in 2019 is double that of desktops and increasing yearly so having good integrated graphics on the CPU is important. It does make for nice reading tho!
 
For both NVidia and AMD, these fast, expensive graphics cards, the battle between Titans so to speak, are more for bragging rights than for making profits. The vast majority of discrete GPU card sales are for the low-end cards, say $300 or less. The same goes for CPU sales for AMD and Intel. Another point. The number of laptops being sold in 2019 is double that of desktops and increasing yearly so having good integrated graphics on the CPU is important. It does make for nice reading tho!

While I agree, they would rather sell lots of high-end cards because the margins are insanely better. That's why you have companies like Nvidia crippling cards with less memory. It's to push you higher. The 2060 should have never been 6GB. Never ever ever.

I don't want to say the 2070 is crippled at 8GB, but it's no 4K card like the 2080 Ti was with its extra memory.
 

EridanusSV

Notable
If someone ever had to complain about SAM requiring an all-red platform build, that's the same as complaining that Tesla's self-driving technology, designed and manufactured by Tesla, is only available in Tesla vehicles... I don't even know anymore. I'm glad for this competition and new tech. Kudos, AMD.
 

TechLurker

Reputable
Smart Access Memory isn't exactly new tech; it's just AMD's take on using the Resizable BAR capability from the PCI SIG spec, and isn't proprietary (just like how their Radeon Image Sharpening wasn't proprietary). According to Phoronix discussions, Linux has had the feature supported for a while, allowing some combinations like older Intel + Radeon GPUs; it just requires a certain amount of MMIO space to work properly. Even Windows, according to coders on the AMD Reddit, has the capability to use it, but it's never been used, as MS leaves it to the hardware developers to implement and maintain instead of maintaining it themselves at the core OS level. It would be entirely up to the CPU and GPU makers to provide support and compatibility.
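As a concrete (Linux-only) illustration of the Resizable BAR side of this, here's a minimal diagnostic sketch that just walks the stock PCI sysfs tree and flags any display-device memory window larger than the traditional 256 MiB aperture; it's not an AMD tool and assumes nothing beyond the standard /sys/bus/pci layout:

```python
# Minimal sketch (Linux only): walk the standard PCI sysfs tree, find
# display-class devices, and flag any memory window larger than the
# traditional 256 MiB aperture - a hint that Resizable BAR is in effect.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    with open(os.path.join(dev, "class")) as f:
        if not f.read().startswith("0x03"):   # 0x03xxxx = display controllers
            continue
    with open(os.path.join(dev, "resource")) as f:
        for index, line in enumerate(f):
            start, end, _flags = (int(field, 16) for field in line.split())
            if end <= start:                   # unused entries are all zeros
                continue
            size_mib = (end - start + 1) // (1024 * 1024)
            note = "  <-- larger than 256 MiB" if size_mib > 256 else ""
            print(f"{os.path.basename(dev)} window {index}: {size_mib} MiB{note}")
```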

AMD just so happens to be building both CPUs and GPUs, and can provide support and validation for the Ryzen+Radeon pairing as it's entirely within their control. And for the time being, they only certify that 5000-series Ryzens + 500-series chipsets + 6000-series Radeons will fully work. Whether or not AMD will at least open up SAM to allow older combinations remains to be seen. But it's possible they will only validate older GPUs with 5000-series Ryzens and 500-series chipsets. Considering that the feature is PCI-based, it's possible that AMD's particular implementation requires PCIe 4.0. If it doesn't, they could possibly validate 400-series chipsets later on, given that 400-series boards will be able to support 5000-series Ryzens. Hopefully, given the vague wording, AMD will at least work to validate Vega and Polaris GPUs as well. Give them one last kick of FineWine, even if it's limited to working alongside 5000-series Ryzens.

Meanwhile, Intel is only now getting into the GPU game, but could follow AMD into using their own version of SAM to help squeeze out a bit more Xe GPU performance, which they will definitely need in their struggle to catch up. In the interim, if they want to try to get a leg up on AMD and do something like "promoting wider support" for a SAM-like feature, they would have to put down some extra money on validating Intel+Radeon or Intel+Nvidia setups. It's quite possible, given that it IS Intel we're talking about, that they would limit SAM validation to their top-end SKUs and chipsets, and maybe even drop support for non-Xe GPUs once their Xe GPUs mature (assuming they even attempt to play nice with Radeon and Nvidia in the first place).

Nvidia is kind of left out; they will have no choice but to put down money to validate a similar feature with their GPUs, and ensure it works with both Ryzen and Intel CPUs. It's quite possible that they too might only provide support for select high-end CPUs and chipsets, if only to reduce how much validation testing they would have to do.

Assuming the feature becomes more common in future CPU+GPU wars (if not later on in the current CPU+GPU wars), we're going to see a crapshoot mess of drivers from Intel and Nvidia. AMD is likely to only stick to their ecosystem and let the other two work out validation with AMD hardware (CPU or GPU).
 

King_V

Illustrious
Ambassador
I wonder if those who call AMD "cheating" for using SAM also called Nvidia "cheating" for GSync?

Or for their "support" of FreeSync, which doesn't work with all the FreeSync monitors that AMD's own implementation of FreeSync has ZERO problem supporting.

I don't seem to recall any such accusations of Nvidia cheating in that regard from the same people... but maybe my memory is just a little spotty.
 

EatMyPizza

Distinguished
I wonder if those who call AMD "cheating" for using SAM also called Nvidia "cheating" for GSync?

Or for their "support" of FreeSync, which doesn't work with all the FreeSync monitors that AMD's own implementation of FreeSync has ZERO problem supporting.

I don't seem to recall any such accusations of Nvidia cheating in that regard from the same people... but maybe my memory is just a little spotty.

I don't really think it's "cheating", but it's certainly misleading, as many people looking to buy these cards will not have 5000-series AMD CPUs and some may never plan to. A more relevant benchmark comparison to Nvidia cards would be with Smart Access Memory turned off, just like Nvidia should turn off DLSS for comparisons. I will have my 9900K for another 2 years, so SAM is useless to me; I want to see benchmarks without it.

On the 28th, AMD appeared to show the 6800 XT benchmarks without SAM turned on, and it was still matching the 3080. For the rest of their benchmarks, and for their other cards, they had it turned on though.
 
If you look at the AMD footnotes on the slides, you will see AMD's RT performance in Microsoft's ray tracing test.
Sure, I heard about that already, but we don't even know if that synthetic benchmark will be in any way representative of real-world gaming performance with these cards. We certainly don't see around 500 FPS in actual raytraced games. There are a variety of raytraced lighting effects, each with their own performance characteristics, and there is a lot more to realtime raytracing than just casting rays. You have things like denoising taking place after the rays get cast, and building up data structures about the scene beforehand that may not be accurately represented by this test, which appears to only draw an extremely basic scene. We could even be running into CPU limitations resulting from how the drivers handle those things at high frame rates like that.

The performance roughly equals that of a 3070, which is about 40% behind the 3080.
The slide didn't even mention which card was being tested, though it might have been the 6800 XT, since that seemed to be the focus of the bulk of their presentation, with the 6900 XT being an extra bit tossed in at the end. However, while an article I read placed that number as being nearly 40% behind what the 3090 achieves in that test, it was apparently only around 25% behind typical results for the 3080. Also, keep in mind that today's RT-enabled games are only utilizing RT for part of the rendering process, with the other 50-75% or so being traditional rasterization. So, if a 6800 XT did happen to be around 25% slower at the raytracing part compared to a 3080, that might only amount to around a 10% difference in overall frame rates with RT enabled once all rendering is taken into account.
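To spell out that back-of-the-envelope arithmetic (the percentages below are the same rough assumptions as in the paragraph above, not measured numbers):

```python
# Rough assumptions taken from the paragraph above; not benchmark results.
rt_share_of_frame = 0.40   # assume ~40% of frame time goes to RT work
rt_slowdown = 1.25         # assume the RT portion runs 25% slower

relative_frame_time = (1 - rt_share_of_frame) + rt_share_of_frame * rt_slowdown
overall_deficit = 1 - 1 / relative_frame_time
print(f"Overall frame-rate deficit: {overall_deficit:.1%}")   # ~9%, i.e. "around 10%"
```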

Again though, this is all assuming Microsoft's test is even representative of the relative performance differences between the raytracing hardware of these different architectures, which it may not be. It's not even really intended as a benchmark, but rather one of a handful of tutorial samples they provide, and I don't know of any reviews that have used it to compare the performance of graphics cards. So as I said before, it's still anyone's guess what typical RT performance will be like, and it may be difficult to provide an accurate comparison until we start seeing games optimized specifically with the hardware in mind.

As for DLSS, it's ultimately an upscaling and sharpening algorithm using AI inferencing to make better guesses when filling in details. Many consider AMD's existing upscaling and sharpening implementation to already work relatively well, and they are apparently in the process of improving it to better compete with DLSS 2.0. Either way, I suspect most would struggle to notice the pixel-level differences between these techniques at the resolutions these cards are intended for.
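Just to illustrate the basic "upscale, then sharpen" idea in the abstract (a toy sketch only: this is emphatically not DLSS, not AMD's sharpening filter, and has no AI inferencing in it):

```python
import numpy as np

def upscale_and_sharpen(img, scale=2, amount=0.5):
    """Toy 'upscale then sharpen' sketch: NOT DLSS and NOT AMD's filter."""
    # Nearest-neighbour upscale by an integer factor.
    up = np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)
    # Crude 3x3 box blur used as a low-pass filter.
    padded = np.pad(up, 1, mode="edge")
    blur = sum(padded[i:i + up.shape[0], j:j + up.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    # Unsharp mask: add back a fraction of the high-frequency detail.
    return np.clip(up + amount * (up - blur), 0.0, 1.0)

low_res = np.random.rand(4, 4)              # stand-in for a low-res luma plane
print(upscale_and_sharpen(low_res).shape)   # (8, 8)
```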
 

spongiemaster

Admirable
I don't really think it's "cheating", but it's certainly misleading, as many people looking to buy these cards will not have 5000-series AMD CPUs and some may never plan to. A more relevant benchmark comparison to Nvidia cards would be with Smart Access Memory turned off, just like Nvidia should turn off DLSS for comparisons. I will have my 9900K for another 2 years, so SAM is useless to me; I want to see benchmarks without it.

On the 28th, AMD appeared to show the 6800 XT benchmarks without SAM turned on, and it was still matching the 3080. For the rest of their benchmarks, and for their other cards, they had it turned on though.
Any feature that isn't available to everyone who buys the product should only be shown in addition to standard benchmarks. No issue enabling Rage mode as everyone can use that, but the only benchmarks AMD showed of the 6900XT used SAM. Assuming the benchmarks were legit, not sure why anyone would call that cheating, it's not. It is, however, highly deceptive when 0% of the market is currently able to use that feature, without also showing how it performs for the remaining 100% of the market that doesn't have a 5000 series CPU. Features that 100% of owners can use should always be enabled.
 

BeedooX

Reputable
Any feature that isn't available to everyone who buys the product should only be shown in addition to standard benchmarks. No issue enabling Rage mode as everyone can use that, but the only benchmarks AMD showed of the 6900XT used SAM. Assuming the benchmarks were legit, not sure why anyone would call that cheating, it's not. It is, however, highly deceptive when 0% of the market is currently able to use that feature, without also showing how it performs for the remaining 100% of the market that doesn't have a 5000 series CPU. Features that 100% of owners can use should always be enabled.
It's not highly deceptive at all, c'mon - it was written right there on the slide when it was used.

AMD and NVidia are building entirely different architectures to power their video cards; this isn't like building an x86 processor.

If they (AMD) want to demonstrate with a feature that you can't use, hard luck? Wait for others' benchmarks to give you what you need. Perhaps we should have AMD apologise for still working on their drivers, as it's unfair to compare beta AMD drivers vs refined NVidia drivers... Let's show all the cards only on PCIe 3.0 for all those that can't use PCIe 4.0.

IMO, all the high end video cards from NVidia and AMD look great on paper, from an FPS perspective - and who wins doesn't matter; if you want to buy an NVidia Data-Center card, I'm happy for you.
 

EatMyPizza

Distinguished
It's not highly deceptive at all, c'mon - it was written right there on the slide when it was used.

I didn't see it written there at all on their new set of benchmarks. It was obvious though, because the 6800 XT (which they showed with it turned off, as well as on, the first time) was suddenly doing better in the same games compared to when it was off.
 

Pitbull Tyson

BANNED
They come up with a 2080 Ti killer a little too late there. There should be a 3090 Ti killer, not a single video card that touts better speeds than the 2080 Ti and whatnot.
 

BaRoMeTrIc

Honorable
I know people will buy it, but I just cannot fathom spending $350 more for a 10-15% performance boost. Obviously people are willing to spend up to $700 more for 15% with the 3090, but that comes with a huge jump in memory bandwidth that makes it an excellent workstation card for those who do not necessarily need a Quadro. But with AMD I just do not see the value in the 6900 XT, other than that you must have the best card AMD has to offer.
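To put rough numbers on it, using the announced MSRPs ($649 for the 6800 XT, $999 for the 6900 XT) and the 10-15% estimate above:

```python
# Announced MSRPs and the 10-15% estimate above; rough value math only.
price_6800xt, price_6900xt = 649, 999
premium = price_6900xt / price_6800xt - 1
print(f"Price premium: {premium:.0%}")   # ~54%
for gain in (0.10, 0.15):
    dollars_per_percent = (price_6900xt - price_6800xt) / (gain * 100)
    print(f"{gain:.0%} faster: about ${dollars_per_percent:.0f} per extra percent of performance")
```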
 

mac_angel

Distinguished
Also, DLSS is looking promising if it gets implemented properly.

Yeah, that's what I was thinking, about the other stuff you mentioned as well. AMD might have caught up on performance in some titles, but what about the other technologies that have been implemented lately, mostly by NVidia? The big one is ray tracing, but then there are other, proprietary NVidia things in many games: HairWorks, Turf Effects, NVidia Flow. While some of the APIs are going to be opened up more, NVidia still has a huge head start on being able to process ray tracing. And you know it will be a big thing in Cyberpunk 2077. And they are going to be adding ray tracing to The Witcher 3 (there are already ray-tracing mods you can install yourself, but this will be proper and fully implemented by the developers).
As it is, my setup is a very niche and rare one, and I'm generally playing titles that are at least a couple of years old to be able to run close to maxed-out settings. Otherwise I have to dumb down the graphics, which basically puts it on par with, or below, graphics from a few years ago maxed out.
Side note, personal "rant": I really wish more developers would implement mGPU support in their games. While these new GPUs are showing benchmarks of AAA games that are out there right now, that in no way guarantees they'll be able to do the same with upcoming games, Cyberpunk 2077 being a great example. And with the re-release of Crysis, you can be sure the old moniker "but can it play Crysis?" will be coming back.
 

BeedooX

Reputable
I know people will buy it, but I just cannot fathom spending $350 more for a 10-15% performance boost. Obviously people are willing to spend up to $700 more for 15% with the 3090, but that comes with a huge jump in memory bandwidth that makes it an excellent workstation card for those who do not necessarily need a Quadro. But with AMD I just do not see the value in the 6900 XT, other than that you must have the best card AMD has to offer.
Have you not been around very long? You must have seen the sheer number of people that were willing to spend extra on the Intel 9900K over an AMD CPU for just a tiny handful of frames advantage in games.

I'm sure there are plenty of people like me who don't count frames or performance/$; it's irrelevant. Just pick an amount you want to spend and spend it. I'd pick a 6900 XT or 6950 XT just because it's the best one... Just waiting for Zen 3 Threadrippers so I can grab one of those too...
 

mac_angel

Distinguished
Proprietary technology is rarely wonderful for consumers. But I don't see any company giving away its edge because it seems unfair.
Intel and Nvidia would not even flinch if they had a way to pull something like this.
So, while I may not like it, I feel it's hard to judge companies fairly in this market - they're all a little bit of aholes.
As long as there is competition, I am happy since I can choose the best solution or the least evil one 😉.

I think Intel and NVidia are a lot worse for this than AMD. I don't hold it against AMD for doing this, and I doubt it was meant to be proprietary; it's more along the lines of added programming on each side to work well with the other.
AMD wants to push Vulkan, which is an open standard. They also fought against G-Sync with FreeSync. NVidia is still purposely holding back on compatibility here with the newer TVs: a lot of newer TVs have FreeSync on them, but NVidia still won't let it be compatible.
FYI, I'm not an AMD fanboy, just giving praise where it's deserved. I actually have an Intel CPU in all my systems, and NVidia GPUs in them all. This might be changing with this generation though.

I also think it's very hardware related. AMD is the only one offering PCIe gen4, which I'm sure makes a big difference if we're talking about memory transfer rates.
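For reference, the theoretical per-direction x16 bandwidth (counting only the 128b/130b line encoding, not the rest of the protocol overhead) works out roughly like this:

```python
# Theoretical per-direction PCIe x16 bandwidth, counting only 128b/130b encoding.
lanes = 16
for gen, gt_per_s in (("PCIe 3.0", 8.0), ("PCIe 4.0", 16.0)):
    gb_per_s = gt_per_s * (128 / 130) * lanes / 8   # GT/s per lane -> GB/s total
    print(f"{gen} x16: ~{gb_per_s:.1f} GB/s per direction")
# ~15.8 GB/s vs ~31.5 GB/s: PCIe 4.0 doubles the raw pipe to the GPU.
```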
 
They come up with a 2080 Ti killer a little too late there. There should be a 3090 Ti killer, not a single video card that touts better speeds than the 2080 Ti and whatnot.
Why do you keep posting this stuff? It's obviously wrong, or at least doesn't align at all with the results they've shown. They are showing the 6800 as being faster than the 2080 Ti and 3070 by a decent margin, at least in raster performance. The 6800 XT is shown to be even faster, and competitive with the 3080. And they are showing the 6900 XT as being competitive with the 3090. There is no 3090 Ti.
 
