News AMD Unveils Big Navi: RX 6900 XT, RX 6800 XT and RX 6800 Take On Ampere


TJ Hooker

Titan
Ambassador
Not a single mention of "hardware level ray tracing". DX12 ray tracing works on any GPU. Much like FreeSync vs GSync though, an open API (DX12) will always overtake a proprietary one given enough time. Especially when AMD is in both consoles.
Huh? "Hardware level ray tracing" is DX12 RT. Or rather, DX12 RT takes advantage of "hardware level RT" to accelerate RT (although it does have a fall back layer that doesn't require dedicated RT hardware, but presumably has lower fidelity and/or worse performance). Same thing goes for Vulkan RT (don't know about the fall back compatibility layer with Vulkan though). Both AMD and Nvidia's RT hardware implementations are proprietary RT accelerators. Just like AMD and Nvidia GPUs as a whole are proprietary rendering (and GPGPU) accelerators.

As an aside, DirectX really isn't an "open" API. Vulkan is.

IIRC the XBSX uses a tweaked DX12 API, while Sony uses some custom API for their consoles.
 
Last edited:

Giroro

Splendid
The 6900XT looks pretty attractive compared to the 3090 for gaming. For $500 less, you can forgive the shortcomings in the feature set of the 6900XT. The 6800XT, on the other hand, does not look particularly appealing compared to the 3080. $50 cheaper isn't enough for basically the same performance with inferior ray tracing and no DLSS counter.

Ray tracing and DLSS only matter when a game supports them. Most current games support neither, despite the tech being two years old. Future games that support ray tracing are probably going to be optimized specifically for the Xbox and PS5.
 

askyron

Reputable
May 2, 2020
6
0
4,510
Disappointing to see the lack of talk on Ray Tracing. Seems like Navi2 doesn't have the same ray tracing capability/performance as Nvidia. I'm going to be watching for ray tracing performance testing VERY closely.

Also, Radeon Boost is fine, but DLSS/DLSS2.0 is far superior.
They did mention full DX12 support, including ray tracing.
 

Giroro

Splendid
I don't like that AMD's benchmarks are relying so heavily on "Smart Access Memory", seeing how it is a proprietary game-specific technology that most customers won't be able to use - assuming it ever gets developer support.
But I look forward to spending the next 3 years constantly hearing about whichever game is the first/only one to fully support AMD, much like how Control is for Nvidia.
 

Awev

Reputable
Jun 4, 2020
89
19
4,535
I almost want to run out and build a new AMD system from scratch, yet I'll wait for the AM5 socket that will most likely be released at the end of next year, or 2022 at the latest. In the meantime I am looking at the RTX 3080 and 6800 XT. Time, and availability, will tell what I end up with.
 

Giroro

Splendid
Really? I'm definitely AMD-leaning and I currently own an AMD GPU....

I use Radeon Boost. I understand the differences between it and DLSS 2.0

Not a single mention of "hardware level ray tracing". DX12 ray tracing works on any GPU. Much like FreeSync vs GSync though, an open API (DX12) will always overtake a proprietary one given enough time. Especially when AMD is in both consoles.

I'm not sure if AMD's ray tracing solution is well understood at this point. I remember a lead engineer at AMD saying something to the effect that fixed ray-tracing hardware (like RTX) was not an ideal solution and that AMD would need their ray tracing to be reprogrammable (like shaders).
I don't know if anything came of that, but if that is AMD's design philosophy, then a block of dedicated ray-tracing hardware might not actually exist in the chip. They did, however, make that 10x ray tracing "speedup" claim and cite "one ray accelerator per CU".
 

Awev

Reputable
Jun 4, 2020
89
19
4,535
I found an interesting article over on VideoCardz on Ray Tracing (RT) - you can see the benchmarks here. The short of it is that the 6800 XT is about 35% slower than the 3080, and 50% slower than the 3090, while being about 14x faster than just trying to do it in software. I think we need to see a number of tests - such as non-Zen 3 CPUs, and then the best combo of an X570 motherboard, 5950X CPU, and 6900 XT GPU, etc.
 

wr3zzz

Distinguished
Dec 31, 2007
108
45
18,610
I think it's only fair for AMD to turn on Rage Mode in comparisons with the 3090. I have an RTX 3090, and Nvidia GPU Boost is always on. I only found out about GPU Boost when, out of the box, the 3090 CTD'd all my gaming sessions. It CTD'd at 2050 MHz when the default factory OC boost clock is 1725 MHz. I think the CTD in my case is due to lack of power, and that's a 21% delta from the non-OC boost clock of 1695 MHz with GPU Boost. Why should AMD handicap itself if Nvidia GPU Boost is ON in every chip?
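For anyone wondering where that 21% figure comes from, a quick sanity check using the two clocks above (nothing more than percentage math):

```cpp
#include <cstdio>

int main()
{
    // Clocks from the post above, in MHz.
    const double observed = 2050.0; // clock at the moment of the crash
    const double rated    = 1695.0; // non-OC boost clock
    std::printf("delta = %.1f%%\n", (observed - rated) / rated * 100.0); // prints ~20.9%
    return 0;
}
```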
 

jasonf2

Distinguished
I for one am glad to see some real competition in the video card game at the halo-card level. Even if they don't take top dog, the close-enough nature is remarkably like first-generation Ryzen. I think statements like "Nvidia is done" are pretty overstated, but video card prices have been pretty much whatever Nvidia has wanted to charge for the last couple of generations, and Nvidia has wanted a lot. AMD's playbook here is easy to read from the Intel story: pull into performance range, undercut price, gain market share, keep innovating, raise prices when innovation outruns the competition. Nvidia's only advantage here is that, unlike Intel, they are not in a process-node rut, only a subcontractor production issue which it sounds like they are working on. The only way to compete going forward is to actually compete, not act like a monopoly. Nvidia will not only have to drop prices to AMD levels, but also continue to push performance hard. This should be a good year for consumers when the "super" variants emerge. If prices aren't brought back to parity, even with minor clock increases, Nvidia risks the kind of market share losses Intel has faced.
 
Last edited:

Zeifle

BANNED
Oct 28, 2020
6
1
15
RIP Nvidia ...
I'm actually thinking RIP AMD...

Let's look at what AMD actually showed us:

Power consumption: AMD quoted 250W for the RX 6800 XT, but the actual benchmark shows 300W in order to pull a little ahead of the RTX 3080, which is 320-330W. The difference is fairly insignificant, though not irrelevant. At current they could be rated as nearly identical, but this may not stay so irrelevant in the long run, depending on overclocks, driver improvements, etc., which can have a huge impact (as seen with the Ampere line's initial crash bug and the driver fix that actually improved performance). Big Navi's example is overclocked while Ampere's is not... otherwise there's a bigger power consumption difference, but also a greater performance increase for Ampere.

Infinity Cache could be a really smart move for the company's future GPUs, cutting corners on hardware while keeping them competitive, but what we are seeing in terms of cost and specs doesn't exactly show that cost has gone down a lot or that the savings were invested elsewhere in the GPU's specs. It could be a good move going forward, but it is proving questionable at current. Unfortunately, its real impact, or lack thereof, is unknown, as we don't know anywhere near enough about how it is being utilized.

VRAM: AMD has noticeably more memory at 16GB of GDDR6, while Nvidia has 10GB of the superior GDDR6X. AMD also has Infinity Cache, which will help, though its exact usage is unknown, so this is still very much up in the air. Most of the time 10GB should easily be enough, so this difference will be nearly non-existent, at least when excluding the still-uncommon 4K texture packs. In the case of 4K texture packs, or the rare game scenarios where streaming of assets and other elements benefits from more than 10GB in a meaningful way, AMD lacks DLSS and thus may, despite having more VRAM, actually end up not using it in its best-case scenario against Ampere. The benchmarks offered by AMD (already to be questioned until we get third-party testing from Digital Foundry and others) keep up with the RTX 3080 when not factoring in DLSS or ray tracing, and obviously not 4K texture packs. If we include ray tracing, it simply won't be able to keep up with Ampere, which I will delve into more below. If we include 4K texture packs, then odds are DLSS is still going to pull significantly ahead, especially with ray tracing, which effectively mandates DLSS in games to date. We could say that DLSS isn't common, but the RTX 3080's 10GB is enough for any current game and DLSS support is, finally, beginning to grow quickly. Last, a very important topic: next-gen IO. AMD lacks a solution, while Nvidia has RTX IO plus Microsoft's DirectStorage, which would virtually single-handedly negate the major benefit of Big Navi's VRAM lead and may even render Infinity Cache moot (take that with a grain of salt: we don't know its exact usage, and even as an L4-style cache it would be faster than typical memory fetches, which can matter, so its value may not be negated by this). For next-gen, IO-intense games, the VRAM difference and Infinity Cache will be totally irrelevant; only a next-gen API solution will help (see below for more).
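Just to put the 4K-texture point into rough numbers, here's a back-of-envelope sketch; the 1-byte-per-texel figure assumes BC7-style block compression and the 8 GiB budget is a made-up example, so treat it as illustrative only:

```cpp
#include <cstdio>

int main()
{
    // Back-of-envelope only: BC7-class block compression stores roughly
    // 1 byte per texel, and a full mip chain adds about a third on top.
    const double bytesPerTexel = 1.0;
    const double mipOverhead   = 4.0 / 3.0;

    const double texSide  = 4096.0;                                   // one "4K" texture
    const double texMiB   = texSide * texSide * bytesPerTexel * mipOverhead
                            / (1024.0 * 1024.0);                      // ~21 MiB each

    // Hypothetical budget: how many such textures fit in ~8 GiB of a 10 GiB
    // card after framebuffers, geometry, etc. take their share.
    const double budgetMiB = 8.0 * 1024.0;
    std::printf("~%.0f MiB per 4K texture, ~%.0f of them in an 8 GiB budget\n",
                texMiB, budgetMiB / texMiB);
    return 0;
}
```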

Next Gen IO Solutions: AMD's Big Navi has no answer for this at all. There could be something in the works that just hasn't been publicly disclosed or leaked (and I seriously hope there is, or they really will be in for an uphill battle). But IO is an incredibly relevant factor for next gen: it fundamentally alters how games may be designed, and any system without sufficient IO, unless the game ships a fallback, may simply be incapable of running such a game at all. That matters... a lot! Nvidia offers RTX IO plus DirectStorage (Microsoft's API, which is also part of the Xbox Series X's IO solution) to handle decompression and make use of DirectStorage. Without RTX IO, the cost of decompression on PC, due to the lack of the specialized hardware the PS5 and Xbox Series X have, would bring even an AMD Threadripper CPU to its knees in the worst case and prevent it from doing much else, given the staggering demand decompression would place on the entire CPU. Even in the best case, the performance hit in IO-intense games would be immense. RTX IO, which offloads decompression to the GPU, solves this problem for Nvidia GPUs and lets them keep pace with such games. This isn't a matter of simply waiting two seconds at a loading screen. There are going to be games built in a different way, where streaming must be effectively instant; without a next-gen IO solution you're going to deal with unplayable stuttering, excessive load screens, disconnects, and possibly even processing issues or crashes, because next-gen IO performance shapes game design. Nvidia's own solution isn't really available until 2021, and until developers and the accompanying games truly start utilizing it this won't matter, but it will eventually create a divide between what IS playable and what simply CANNOT be played on a platform without appropriate IO performance.
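To make the CPU-decompression concern concrete, here's a rough sketch; the drive speed, compression ratio, and per-core decompression throughput are assumptions for illustration, not measured figures:

```cpp
#include <cstdio>

int main()
{
    // Assumed numbers, for illustration only.
    const double nvmeGBps          = 7.0;  // raw Gen4 NVMe read speed
    const double compressionRatio  = 2.0;  // assumed ~2:1 game asset compression
    const double perCoreDecompGBps = 1.0;  // assumed CPU decompression output per core

    // To keep the drive saturated, the CPU must produce decompressed data at:
    const double decompressedGBps = nvmeGBps * compressionRatio;
    const double coresNeeded      = decompressedGBps / perCoreDecompGBps;

    std::printf("~%.0f GB/s decompressed output -> roughly %.0f cores busy "
                "just feeding the GPU\n", decompressedGBps, coresNeeded);
    return 0;
}
```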

Ray-tracing: AMD hasn't shown it, despite showing everything else. Don't you find this odd? They show a host of benchmarks, but no ray-tracing benchmarks or examples. They state 10x faster than a software shader implementation. What does this mean? Well, it means it isn't good enough. For Turing, Nvidia quoted its RT cores as offering roughly that same baseline, and potentially higher (10x+ to be precise), ray-tracing speedup over a software shader solution. This puts Big Navi's ray tracing at last-gen levels, and we know Ampere's is a massive step up. In fact, early benchmarks and tests by GamersNexus and others have indicated we haven't even tapped the full extent of the RT cores (see GamersNexus' testing with ray tracing in Quake II RTX), which suggests that as ray tracing matures it will only widen the gap between Big Navi and Ampere significantly. Ampere achieves its current performance while also maintaining dedicated RT and Tensor cores, while AMD will have to make continued improvements to its specialized hardware and give it more die space going forward, which indicates it will likely not mature as adeptly as Nvidia's GPUs will. Fortunately for AMD and owners of Big Navi, ray tracing is still immature, to put it mildly. Even the more ray-tracing-oriented marketing is for games that may just have basic reflections or the occasional diffuse pass (cough, Watch Dogs Legion; Cyberpunk 2077 does much better but is still relatively ray-tracing-lite).

DLSS: AMD still has no publicly disclosed answer to DLSS, which is a technique that will only continue to mature. DLSS 1.0 was awful no matter how you look at it. DLSS 2.0 has shown an immense improvement and can often provide not only outright incredible performance gains, especially with ray tracing (which is largely considered to require DLSS at higher settings to even be reasonably playable), but can even potentially offer better-than-native results in many cases. At the same time, it still isn't perfect. It does produce the occasional artifact, though over at GamersNexus they found that aside from someone intensely focused on discovering them and not moving at all... aka almost no one, the negatives are typically nearly impossible to notice. Nonetheless, it still has room to improve, and that is precisely what this type of technology, which relies on neural networks trained over time, entails. We've even seen it allow the RTX 3090 to achieve incredible results at 8K, which should otherwise be unplayable in demanding modern titles. That said, not only does it have room to improve, it also has to be implemented per game, so support will need to be more widespread to truly matter. Most of the notable major titles coming out later this year and next year will feature it, however, so it seems it is finally gaining traction with the games that are actually demanding enough to necessitate it. As AMD didn't show comparisons with DLSS, or with ray tracing (which it is weaker at, AND which basically requires DLSS even with Ampere's more potent ray-tracing capability), it should be noted that if ray tracing is included or DLSS is supported, AMD couldn't honestly be said to be competing at that point. With DLSS and Ampere's superior ray-tracing performance in games that support both, even weaker GPUs like the RTX 3070 may very well outperform AMD's higher offerings, and possibly not by a small margin. This also means that in terms of long-term value, Nvidia's Ampere cards are going to mature more gracefully than AMD's Big Navi.
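The size of the potential DLSS win is easiest to see as raw pixel counts; the sketch below uses the commonly cited per-axis render scales for DLSS 2.0's Quality/Balanced/Performance modes, so treat the exact factors as approximate:

```cpp
#include <cstdio>

int main()
{
    // Output target: 4K.
    const double outW = 3840.0, outH = 2160.0;

    // Commonly cited per-axis render scales for DLSS 2.0 modes (approximate).
    struct Mode { const char* name; double scale; };
    const Mode modes[] = { {"Quality", 0.667}, {"Balanced", 0.58}, {"Performance", 0.5} };

    for (const Mode& m : modes) {
        const double shaded = (outW * m.scale) * (outH * m.scale);
        std::printf("%-12s renders ~%.0f%% of the 4K pixel count\n",
                    m.name, 100.0 * shaded / (outW * outH));
    }
    return 0;
}
```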

Performance and Cost: The reality is that AMD's GPUs don't really compete in a meaningful way at the high end, be it the RX 6800 XT versus the RTX 3080, or above. At that point AMD may be a tad cheaper for virtually identical performance in default scenarios, but odds are that if you're spending that kind of money you want ray tracing as well. In addition, if the game supports DLSS it's a pure one-sided stomp in Nvidia's favor, especially if ray tracing also gets factored in, but also as DLSS matures and starts producing better-than-native results on details like billboard text, emblems, background wiring and cables, etc. The RTX 3070 tier and below, for more budget-oriented builds, is where AMD will fight its real fight.
 
  • Like
Reactions: Solidjake

nofanneeded

Respectable
Sep 29, 2019
1,541
251
2,090
I'm actually thinking RIP AMD...


RIP Nvidia ...
 
  • Like
Reactions: Pes2020
Also, AMD is definitely skewing charts in their favor on the RX 6900 XT by enabling the one-touch Rage Mode overclocking, plus Smart Access Memory. I don't mind the latter so much -- it's the benefit of running AMD on AMD, basically -- but I do wonder if there are ways other platforms could enable larger apertures, and why it hasn't been done up until now.
Really? Wouldn't it be the opposite? Nvidia GPU Boost 3.0 is an auto-overclocking feature that's enabled by DEFAULT.
SAM is the more "exclusive" feature. I mean... great on them for enabling more performance AND giving themselves a marketing avenue, but unlike Rage, SAM isn't something EVERYONE is going to be able to use.
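For what it's worth, you can actually see whether a platform exposes the large CPU-visible aperture that SAM relies on. A rough Vulkan sketch (it assumes you already have a VkPhysicalDevice, and it only checks the BAR side; AMD's SAM additionally needs the right CPU, board, and BIOS):

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>

// Sketch only: on most GPUs without a resizable BAR, the DEVICE_LOCAL |
// HOST_VISIBLE heap is a small ~256 MiB window; with it enabled, that heap
// can cover most of VRAM, which is what SAM-style CPU access exploits.
void ReportCpuVisibleVram(VkPhysicalDevice gpu)
{
    VkPhysicalDeviceMemoryProperties mem = {};
    vkGetPhysicalDeviceMemoryProperties(gpu, &mem);

    for (uint32_t i = 0; i < mem.memoryTypeCount; ++i) {
        const VkMemoryPropertyFlags flags = mem.memoryTypes[i].propertyFlags;
        if ((flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) &&
            (flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)) {
            const VkDeviceSize heapSize =
                mem.memoryHeaps[mem.memoryTypes[i].heapIndex].size;
            std::printf("CPU-visible VRAM heap: %llu MiB\n",
                        (unsigned long long)(heapSize / (1024ull * 1024ull)));
        }
    }
}
```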
 
  • Like
Reactions: King_V

LOLROFL

Reputable
Jan 3, 2016
15
3
4,515
The 6900XT looks pretty attractive compared to the 3090 for gaming. For $500 less, you can forgive the shortcomings in the feature set of the 6900XT. The 6800XT, on the other hand, does not look particularly appealing compared to the 3080. $50 cheaper isn't enough for basically the same performance with inferior ray tracing and no DLSS counter.

Ngreedia shill detected.
 

TechLurker

Reputable
Feb 6, 2020
163
93
4,660
What were the specs of the PC they were testing with? With the new Ryzen 5000 CPUs, you can OC the RAM to 6536 MHz. Paired with Smart Access Memory, it would be interesting to see some benchmarks.

According to the press slides (courtesy of AnandTech), all testing was done on a Ryzen 9 5900X in an engineering AM4 motherboard with 3200MHz DDR4 memory. So there's some possible performance left on the table, considering Ryzen scales well with higher RAM speeds (although the rumor was that the newest sweet spot is around 4000 for 1:1 timings), and that extra performance could help SAM indirectly.
 

joeblowsmynose

Distinguished
lol they had to use their cute OC button to get the charts to work right x3

Er .... not for the 6800 XT they didn't. And for $500 savings, I would definitely click the rage button and use the SAM option to be level with a 3090 ... wouldn't you? Or are you too much an Nvidia fanb ... I mean "enthusiast", to want to save $500? Asking for a friend.
 

Gurg

Distinguished
Mar 13, 2013
515
61
19,070
overclocks, driver improvements, etc. which can have a huge impact (as seen with the Ampere line with the initial crash bug then driver fix which actually improved performance)
The new Nvidia drivers actually improved the performance of my 2080 Super. I like to run MSI Afterburner in the background and monitor CPU and GPU performance and temps after gaming. After the last driver update, the core clock started registering at 2100-2115 MHz and above, whereas before it would max out at 2070-2085. It appears the power limit was raised and is now running up to 118%.
 
Last edited:
  • Like
Reactions: Zeifle

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
810
20,060
AMD's 6000 series is the first GPU that I can remember in GPU history to get a "Set Bonus" by buying all AMD Hardware.

AMD 5000 Series CPU + AMD 500 Series MoBo + AMD 6000 Series GPU = SAM (Smart Access Memory)!
 

Zeifle

BANNED
Oct 28, 2020
6
1
15
Ray tracing and DLSS are still going to be important battlegrounds, especially with the next-gen consoles supporting RT. AMD may have an advantage, since it provides the console GPUs. Then again, 52 CUs is a decent step down from RX 6800, and the 36 CU PS5 and 20 CU Xbox Series S won't be anywhere close to Navi 21 performance. So, console games probably won't push RT as much as PC games, since we have PC GPUs that are already potentially 50% faster than the best console hardware. Anyway, exciting times!
Sadly, we already know ray-tracing performance will be a weaker showing for Big Navi, which leaves us even more concerned about console ray tracing, since the consoles will perform at an even lower level. AMD stated it will be approximately 10x the software shader approach, and that is also what Nvidia cited as Turing's ray-tracing baseline (Turing's RT cores in the RTX 2080 were actually 10x+). I also recall reading a comment from Phil Spencer about his own Xbox Series X (with the assumption the PS5 is in the same boat) trying to temper expectations, suggesting devs will typically shy away from ray tracing or apply it in limited ways, hinting at underwhelming performance. Unfortunately, I can't find the precise statement right now, but maybe someone else has come across it and can share it.

Ray Tracing is going to be my purchasing point. If AMD does it well, it's lights out for Nvidia
Based on the above information, it will do roughly as well as Turing. If you want ray tracing this generation, you will want to go to Nvidia. In addition, ray tracing virtually mandates DLSS to get acceptable framerates at higher resolutions and settings, and AMD has no such solution at current. In short, AMD's ray-tracing offering will unquestionably be a failure at anything but lower resolutions this generation. Then again, most gamers are still sub-4K, so with enough tweaking of settings it just might prove a little passable in less intense cases. Just don't expect good results in something like Cyberpunk 2077; that seems unlikely. The proof is in the fact that they will not offer it on consoles for this game and have even explicitly stated it will only be offered on Nvidia GPUs, i.e. they are likely aware AMD won't be able to provide acceptable performance with ray tracing enabled in their game, even on the XTX.

Er .... not for the 6800 XT they didn't. And for $500 savings, I would definitely click the rage button and use the SAM option to be level with a 3090 ... wouldn't you? Or are you too much an Nvidia fanb ... I mean "enthusiast", to want to save $500? Asking for a friend.
Yes, they did for the 6800 XT. If you look at the chart, they are comparing an overclocked 300W 6800 XT (its default power consumption is approximately 250W, as stated by AMD) to a stock RTX 3080, which typically draws around 320W, so we know it isn't overclocked like the 6800 XT is. They pulled the same shady stunts by using games known not only to favor AMD GPUs but also to be unusually impacted by CPU performance, and then showed them on the Ryzen 5000 series. If that doesn't raise flags, then I don't know what to tell you.

Unfortunately, I had hoped AMD would offer something more competitive, but this doesn't look like it will have much in the way of longevity. For the here and now it may appear competitive, but it simply won't age gracefully, for far too many reasons: substantially inferior ray tracing; no DLSS alternative, and thus even worse ray tracing and eventually subpar visuals at identical settings once AI upscaling produces better-than-native results; and no disclosed IO solution to keep pace with RTX IO/PS5/XSX (though this can be added mid-life, it's just a question of how late they will be).