AMD Radeon RX 6800 XT and RX 6800 Review

Page 4

Chung Leong

Reputable
Dec 6, 2019
493
193
4,860
It's going to be a long, long time before that happens. If you look at the Steam hardware survey, the top GPUs are 1060s, 1050 Tis, 1050s, etc. There are more GTX 970s still in use than any ray tracing cards other than the 2060/2070s. I'd be shocked if GPUs capable of ray tracing hit 50% of the gaming community within the next 2 years. Game companies are going to need to support non-ray-tracing hardware for a very long time. Ray tracing is going to need to be common on sub-$200 and sub-$300 cards for at least one full upgrade cycle before we get mass adoption, let alone anyone dropping the ability to run without RT.

People who aren't upgrading are probably not spending much on AAA games either. The added effort required to support older hardware is simply not worth it. Far easier and cheaper to address the non-enthusiast segment of the market with the help of cloud gaming.
 
What on earth makes you think they'd force all users to use ray tracing, when no other video setting has been locked in and taken away from the user?
Because there's precedent: Shader Model 1.0.

I can also remember hardware T&L in the early days eventually becoming mandatory for games.

It comes with time, as adoption and, more importantly, support become widespread.

In all honesty, I wouldn't mind games using ray tracing exclusively in their engines. That would drive optimization towards it rapidly.

Cheers!
 

TJ Hooker

Titan
Ambassador
1- Waay better cooler design
Also, Nvidia's solution allows some blow-out and pass-through, which I can tell you as an engineer is superior: it vents some heat and gives less restrictive flow.
Can one or both of you elaborate on why exactly you consider Nvidia's reference coolers superior? The results I'm seeing show that the Ampere cards and Big Navi cards run at about the same temperature, about the same noise level, and about the same fan speed (although fan speed by itself doesn't really matter, IMO).

Sure, Nvidia's approach is 'innovative', but why does that matter if it doesn't result in any meaningful difference in performance? They also used an 'innovative' GPU power connector, but I don't think anyone is treating that as being a plus for Ampere over other cards.
 
Can one or both of you elaborate on why exactly you consider Nvidia's reference coolers superior? The results I'm seeing show that the Ampere cards and Big Navi cards run at about the same temperature, about the same noise level, and about the same fan speed (although fan speed by itself doesn't really matter, IMO).

Sure, Nvidia's approach is 'innovative', but why does that matter if it doesn't result in any meaningful difference in performance? They also used an 'innovative' GPU power connector, but I don't think anyone is treating that as being a plus for Ampere over other cards.

There is little difference between the 3080 and 6800 because there is little difference in heat; at the top end there's only a 10-watt difference. The benefit comes from system thermals and noise, which theoretically should be better on Nvidia.

Nvidia has an advantage for three reasons:

1. The front half of the card exhausts hot air, which means it doesn't get recirculated or reused.

2. Straight airflow is always superior to bent airflow: it achieves higher flow rates with less noise, and you aren't forced to blow the air back down at the motherboard, where it will just be picked up again. One of the things that makes centrifugal (blower-style) fans so noisy is that the moving air hits the housing; this is known as the cabinet effect, and it reduces efficiency.

3. And in the case of the 3090, a larger fan is usually superior for longer drafts. Since the swept area is bigger, you don't have to run the fan as fast to achieve the same volumetric flow (CFM), which creates less jet wash/turbulence and less pressure drop. It's the exact same reason the bypasses on modern jet engines keep getting bigger (rough numbers in the sketch below).
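
To put rough numbers on point 3, here's a minimal sketch using the standard fan affinity laws (for geometrically similar fans, flow scales with speed times diameter cubed, and sound power shifts by roughly 50*log10 of the speed ratio). The fan sizes and RPM are illustrative values, not measurements of either card:

```python
import math

# Fan affinity laws for geometrically similar fans:
#   flow:  Q ~ n * d^3  ->  matching flow with a bigger fan needs n2 = n1 * (d1/d2)^3
#   noise: sound power level shifts by roughly 50 * log10(n2/n1) dB (rule of thumb)

def rpm_for_same_flow(rpm_small: float, d_small_mm: float, d_large_mm: float) -> float:
    """RPM the larger fan needs to move the same volume of air per minute."""
    return rpm_small * (d_small_mm / d_large_mm) ** 3

def noise_delta_db(rpm_before: float, rpm_after: float) -> float:
    """Approximate change in sound power level from a fan speed change."""
    return 50.0 * math.log10(rpm_after / rpm_before)

# Illustrative only: a 100 mm fan at 2000 RPM vs a 110 mm fan moving the same air
rpm_large = rpm_for_same_flow(2000, 100, 110)
print(f"Larger fan: ~{rpm_large:.0f} RPM for the same flow")      # ~1503 RPM
print(f"Noise delta: ~{noise_delta_db(2000, rpm_large):.1f} dB")  # ~-6.2 dB
```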

But as you can see, it takes an overhang of specially carved-out PCB for a blow-through design. A partial blow-through is still superior to a standard blocked-flow design, but a total blow-through like Nvidia's will produce less noise.

This is from a purely academic aerospace and thermodynamics perspective. Things like the fan's type, balance, motor, bearings, and the quality of the blade design are also factors.

As Steve of GN noted, the AMD fans had a bit of axial flex to them, which leads to harmonic vibration at higher speeds. I suspect they sourced Cooler Master, which just loves those dang sleeve bearings (axial flex being a common symptom), but it might be the frame spider's fault.

Now take an Arctic PWM P12 or P14 fan: less than $10 each, superbly balanced, with no flex. And they just about match any Noctua for a fraction of the price.
 
Last edited:
Nov 19, 2020
1
0
10
I may have missed it, but were the ray tracing results retested after the high overclocks? Surely that would do a lot to close the gap?
 
Nov 19, 2020
1
0
10
Big performance difference between the two tests. Is Intel better than Ryzen 5000 for gaming?

[attached benchmark screenshots]
 
Last edited:

nofanneeded

Respectable
Sep 29, 2019
1,541
251
2,090
Can one or both of you elaborate on why exactly you consider Nvidia's reference coolers superior? The results I'm seeing show that the Ampere cards and Big Navi cards run at about the same temperature, about the same noise level, and about the same fan speed (although fan speed by itself doesn't really matter, IMO).

Sure, Nvidia's approach is 'innovative', but why does that matter if it doesn't result in any meaningful difference in performance? They also used an 'innovative' GPU power connector, but I don't think anyone is treating that as being a plus for Ampere over other cards.


RTX 3080 FE vs RX 6800 XT

The RTX 3080 cooler is superior because it uses one less fan (2 vs 3) and a 2-slot design rather than 2.5 slots like AMD, yet it cools an Nvidia GPU at 320 W TDP versus AMD's 300 W.

A vapor chamber with fins alone can never outperform an open-air heat pipe design.

People with mini-ITX cases that only fit 2-slot cards won't be able to fit the RX 6800 XT (2.5 slots).
 
Last edited:
  • Like
Reactions: Gurg

Soaptrail

Distinguished
Jan 12, 2015
302
96
19,420
lol, I wasn't competing. In this particular case, at least for me, it was easier to find the newer devices. But of course it could be the other way around for you, and maybe for other people who don't even write on the forum.

Not a competition, but among the voting population on our two posts, people agree with you, and therefore I have to concede.
 
  • Like
Reactions: RodroX
Big performance difference between the two tests. Is Intel better than Ryzen 5000 for gaming?

[attached benchmark screenshots]
I haven't actually looked closely at the tests where Paul and I overlap, but I do know we're using different RAM kits, potentially different mobo BIOSes on X570 (we're both on the MSI X570 Godlike, though), and definitely different Z490 boards. I suspect Paul might also use a different method for testing Shadow of the Tomb Raider. I take the full ~170-second benchmark sequence, from the first scene through the third scene, but the black loading screens may skew things slightly.

Update: Confirmed that Paul is only using the last of the three test sequences in the benchmark. Performance there is higher, and he gets to omit the black loading screens. I could parse out that portion for a comparison chart, but basically our results aren't using the same sequence.
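
For what it's worth, parsing that out of a per-frame log would look something like the hypothetical sketch below. It splits the capture wherever the frametime spikes into a loading screen and keeps only the final sequence; the file name, the 'ms' frametime column, and the 250 ms spike threshold are all made-up assumptions for illustration, not our actual capture format:

```python
import pandas as pd

# Hypothetical per-frame log with one frametime-in-milliseconds column, 'ms'.
df = pd.read_csv("frametimes.csv")

# Treat very long frames as loading screens; each spike starts a new segment.
LOAD_SPIKE_MS = 250  # assumed threshold, tune for the actual capture
df["segment"] = (df["ms"] > LOAD_SPIKE_MS).cumsum()

# Keep only the last segment (the third test scene) and drop the spike frames.
last = df[(df["segment"] == df["segment"].max()) & (df["ms"] <= LOAD_SPIKE_MS)]
print(f"Final sequence avg: {1000 / last['ms'].mean():.1f} FPS")
```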

Paul also pointed out to me a BIOS setting on X570 that I previously didn't enable (CPPC -- I left it on "auto" which in this case is apparently "off"), so that could be part of the difference. If so, I'll be retesting at some point. :'(

Edit #2: At least in Far Cry 5 (which is pretty CPU limited at 1080p/1440p), I didn't see a significant change in a quick retest of RX 6800. Got 1% faster at 4K, 1% slower at 1080p/1440p.
 
Nov 20, 2020
2
1
15
I think it's pretty clear what the paths are here after reading/watching a number of benchmarks.

Game at <=1440p and don't care about ray-tracing: AMD
Game at 4k and/or care about ray-tracing: Nvidia

AMD's SAM offers a marginal improvement, at best, in most games (excluding titles like Hitman 2, where it overperforms and adds a significant amount of FPS: ~10, IIRC). Rage Mode improvements generally don't seem to be worth mentioning (and are sometimes even detrimental).

For most people, the reference cooler probably won't matter, because you'll grab an AIB card if you're a serious buyer and/or lucky. Some of those AIB cards may be overclocked up to 2.5 GHz (the max rumored overclock for an AIB vendor at the moment), which would likely put them consistently on par with, or even slightly ahead of, the 3080 at <=1440p resolutions, though you'll also definitely pay more than MSRP for those factory-OC cards with custom coolers. There have even been a few custom, stable OC benchmarks for the 6800 XT that beat the 3090 by several FPS when used in conjunction with SAM.

There is an important caveat to note, however: all consoles use AMD APUs now, which means cross-platform games will likely be designed for AMD tech, so we could see an improvement in those titles in the future. Likewise, ray-tracing performance could change dramatically if developers move over to using DirectX's relatively new pre-baked RT tech instead of what they've been doing to support RTX at present (a similar situation to G-Sync vs. FreeSync).

That said, the differences between the two are not monumental (unless you consider RT), which is only a good thing for the GPU market, because there should always be multiple options. Extra competition may also apply enough pressure to prevent another 2000-series pricing debacle from Nvidia's end, but who knows.
 
Last edited:
  • Like
Reactions: digitalgriffin
I think it's pretty clear what the paths are here after reading/watching a number of benchmarks.

Game at <=1440p and don't care about ray-tracing: AMD
Game at 4k and/or care about ray-tracing: Nvidia
That should read: gaming at any resolution and don't care about ray tracing: Radeon or GeForce.
Gaming at 1080p and using ray tracing: Radeon or GeForce, with a good push towards GeForce.
Gaming at 1440p or higher and using ray tracing: GeForce
 

King_V

Illustrious
Ambassador
You're both wrong.

Want (relatively) high speed and/or resolution with ray-tracing? GeForce.
Want better bang for the buck and don't need maximum ray-tracing performance? Radeon.

And, the most important note: Subject to availability.
 
  • Like
Reactions: jeremyj_83
Nov 20, 2020
2
1
15
You're both wrong.

Want (relatively) high speed and/or resolution with ray-tracing? GeForce.
Want better bang for the buck and don't need maximum ray-tracing performance? Radeon.

And, the most important note: Subject to availability.

Availability is obviously important, especially so because the cards are largely comparable.

I don't think your first point is necessarily true, however, because the OC benches that I've seen so far consistently place the 6800 XT above the 3080 for 1440p with no RT, whereas it's much more up in the air on the reference card. So I suppose it will largely depend on which AIB card you can get your hands on (and what its boost clock is) and whether or not you're willing to wait to get the card you want (and pay for the AIB premium).

Likewise, you ignore the maneuvering behind the scenes that AMD has been doing to make sure developers optimize for their hardware in the future; however, I'll concede that is largely conjecture at this point, much the same as future RT or DLSS adoption in general.

I should mention that if DLSS 2.0 does take off, then that will obviously give Nvidia a large competitive advantage because the performance increases are generally very significant (20-30 FPS increases). DLSS 2.0 should be easier for developers to implement because it uses the same learning model for every game (compared to having to train a new model for each game as in 1.0).

It's really a matter of hedging your bets on future developments, which is what makes it difficult to declare a clear winner at this point (not that that's a bad thing, necessarily).
 
Last edited:

TJ Hooker

Titan
Ambassador
There is little difference between the 3080 and 6800 because there is little difference in heat; at the top end there's only a 10-watt difference. The benefit comes from system thermals and noise, which theoretically should be better on Nvidia.
I'm comparing the 6800 XT and 3080 as they're competing in performance and price. Once the 6900 XT comes out I think it would be a better option for comparison with the 3090.

Again, why does theoretical performance matter when we know the actual performance? The actual performance being that there is no real difference in GPU temp or noise (TechPowerUp actually found the 3090 to be 5 dB louder under load, at the same temp). If there is some real-world difference in cooler performance that I'm not aware of, please let me know.
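
For scale, using the common rule of thumb that perceived loudness roughly doubles for every +10 dB (an approximation, not a measurement), that 5 dB gap works out to:

```python
# Perceived loudness ratio for a +5 dB difference,
# assuming the rough 2x-per-10-dB psychoacoustic rule
print(2 ** (5 / 10))  # ~1.41, i.e. about 1.4x as loud
```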

Nvidia has an advantage for three reasons:

1. The front half of the card exhausts hot air, which means it doesn't get recirculated or reused.

[...]
I get you're really trying to make this sound super complicated and technical, but it can pretty much all be boiled down to: 'Minimizing obstructions is good, as it improves air flow volume and reduces turbulence'. I think everyone understands that. I will say that I think the way they achieved that, i.e. with the PCB cutout, is neat though.

This is from a purely academic aerospace and thermodynamics perspective. Things like the fan's type, balance, motor, bearings, and the quality of the blade design are also factors.

As Steve of GN noted, the AMD fans had a bit of axial flex to them, which leads to harmonic vibration at higher speeds. I suspect they sourced Cooler Master, which just loves those dang sleeve bearings (axial flex being a common symptom), but it might be the frame spider's fault.
Yes, fans obviously affect cooling. But we already know the cooling performance (temp & noise), which takes into account fans along with the rest of the cooler design.

We can all wax poetic about how Nvidia's cooler is a more elegant or sophisticated design on paper. But when it comes to a consumer choosing between one card and another, I don't see why they should care when we know that it doesn't make any real-world difference.

As an aside, although Nvidia's solution may be superior in theory, the consensus seems to be that it is likely costlier. Designing a new cooler that costs more and doesn't perform any better doesn't really seem all that impressive to me.
 
Last edited:

TJ Hooker

Titan
Ambassador
RTX 3080 FE vs RX 6800 XT

The RTX 3080 cooler is superior because it uses one less fan (2 vs 3) and a 2-slot design rather than 2.5 slots like AMD, yet it cools an Nvidia GPU at 320 W TDP versus AMD's 300 W.

A vapor chamber with fins alone can never outperform an open-air heat pipe design.

People with mini-ITX cases that only fit 2-slot cards won't be able to fit the RX 6800 XT (2.5 slots).
Ok, so this makes sense to me. The number of slots is an objective reason to go with one over the other, although it seems like it would be a pretty cut-and-dried decision: either you only have 2 slots available, in which case the 6800 XT simply isn't an option, or you have more than 2 slots available, in which case 2 vs 2.5 slots is more or less irrelevant.

I don't see why a user would care how many fans the card has though, if the cooling performance is the same.
 
  • Like
Reactions: Zevkk
I'm comparing the 6800 XT and 3080 as they're competing in performance and price. Once the 6900 XT comes out I think it would be a better option for comparison with the 3090.

Again, why does theoretical performance matter when we know the actual performance? The actual performance being that there is no real difference in GPU temp or noise (TechPowerUp actually found the 3090 to be 5 dB louder under load, at the same temp). If there is some real-world difference in cooler performance that I'm not aware of, please let me know.


I get you're really trying to make this sound super complicated and technical, but it can pretty much all be boiled down to: 'Minimizing obstructions is good, as it improves air flow volume and reduces turbulence'. I think everyone understands that. I will say that I think the way they achieved that, i.e. with the PCB cutout, is neat though.


Yes, fans obviously affect cooling. But we already know the cooling performance (temp & noise), which takes into account fans along with the rest of the cooler design.

We can all wax poetic about how Nvidia's cooler is a more elegant or sophisticated design on paper. But when it comes to a consumer choosing between one card and another, I don't see why they should care when we know that it doesn't make any real-world difference.

As an aside, although Nvidia's solution may be superior in theory, the consensus seems to be that it is likely costlier. Designing a new cooler that costs more and doesn't perform any better doesn't really seem all that impressive to me.

Okay, I'll simplify it a bit for you. I'm an aerospace engineer and have worked with heat exchange systems, including coils and heat exchangers, for over 10 years now. So I get a bit technical.

If you take two cards, run them both at x fps (say 100 fps, which is about the truth here), and they consume similar power (they do), then the NVIDIA cooler will win in two ways:

  1. It will run quieter because the fans don't recirculate hot air or hit obstructions
  2. If you have a CPU air cooler or an exhaust-mounted radiator, the CPU will run cooler, because part of the hot air is vented outside the case, where it cannot raise internal case temps.
It's not just about the card; it's the noise and inside-case temps that are affected (rough numbers in the sketch below).
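
As a back-of-the-envelope illustration of point 2, here's a minimal well-mixed-case estimate. The 50 CFM of case airflow and the 50/50 vented-vs-recirculated split are made-up numbers for the sketch, not measurements of either card:

```python
# Steady-state temperature rise of case air from GPU heat that stays inside,
# using delta_T = P / (m_dot * cp) with m_dot = air density * volume flow.
RHO = 1.2                 # air density, kg/m^3
CP = 1005.0               # specific heat of air, J/(kg*K)
CFM_TO_M3S = 0.000471947  # 1 CFM in m^3/s

def case_delta_t(watts_in_case: float, case_cfm: float) -> float:
    """Rise of case exhaust air over intake air, in kelvin."""
    m_dot = RHO * case_cfm * CFM_TO_M3S  # mass flow of case air, kg/s
    return watts_in_case / (m_dot * CP)

print(case_delta_t(300, 50))  # all 300 W recirculated: ~10.5 K rise
print(case_delta_t(150, 50))  # half vented out the bracket: ~5.3 K rise
```

Halving the heat dumped into the case roughly halves the air temperature rise that an air-cooled CPU has to fight against.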

Also, Steve @ GN noted how they are using multiple plates to cool the VRMs. Each conductive barrier decreases efficiency, which means your VRMs run hotter. AMD also missed a chance to use the backplate as a heat sink. To AMD's credit, though, they have always over-engineered their VRMs.

He was spot on with these observations and I would have said the same thing as an engineer.

Now, is the AMD design a BAD design? No, not at all. It's certainly more maintenance-friendly than NVIDIA's and still effective, and it's the best design they have ever created by far. The fans look good. But that fan flex, as noted, looks concerning at higher RPMs; it will lead to more imbalance and buzzing. It is an inferior design (maintenance aside) to NVIDIA's reference, which should keep the case cooler and quieter.

Look at it this way: AMD used to make the Chevy Spark of blower coolers. Now they make the Corvette of coolers. NVIDIA is making a Lambo. It's a huge step up, but not the same. If you are into speakers, think of it like a Vizio soundbar (AMD blower) -> Klipsch Reference separates (AMD now) -> B&W 800 series (NVIDIA).

If a Corvette or Klipsch is good enough for you, then be happy with that.

And that you can take to the bank.
 
Last edited:

nofanneeded

Respectable
Sep 29, 2019
1,541
251
2,090
Ok, so this makes sense to me. The number of slots is an objective reason to go with one over the other, although it seems like it would be a pretty cut-and-dried decision: either you only have 2 slots available, in which case the 6800 XT simply isn't an option, or you have more than 2 slots available, in which case 2 vs 2.5 slots is more or less irrelevant.

I don't see why a user would care how many fans the card has though, if the cooling performance is the same.

I was talking about why Nvidia has the superior cooling design, which reflects on the pricing as well. Imagine if Nvidia had used 3 fans...

Moreover, 3-slot cards are a huge problem for people using many cards in a system. They also restrict motherboard choices, since the third slot is wasted, and on a micro-ATX motherboard with 4 slots the third slot is always the other x16 or x8 PCIe slot.

When you move to HEDT motherboards, in the case of micro-ATX with all 4 slots running 16 or 8 lanes, you lose a lot of lanes just because the card you are using is a 3-slot design.

On ATX motherboards it is far better, unless you are one of those people who wants to add, say, three x8 SSD cards and a 10 Gbit card; then you can't do it with a 3-slot GPU.

By the way, vapor chamber coolers are very cheap... Nvidia paid at least $50 more for that cooler.
 
Last edited:
That will never happen. This is PC gaming, not locked-down consoles.

hoho, are you sure about that? Remember when we had the option to use DX9 or DX11 in the early days of DX11? Do we still see anything like that now? Even the majority of indie games have transitioned completely to DX11. The idea behind using RT is not just much better graphics, but a straightforward lighting method that is physically correct, unlike the various baked/faked effects being heavily used right now. Moving forward, those effects will be removed from game engine code, which cleans up engine bloat and will make fixing bugs and issues much easier. So yes, in the future an RT effect will no longer be an option you can turn on and off; it will be the sole choice, with no way to disable it.

Another example is FP16 vs FP32 use in games. Half-Life 2 was among the first games to support an FP32 mode. Using FP32 as opposed to FP16 would cut the frame rate in half, and back then there was no image-quality benefit to Half-Life 2 using FP32. But look at our games right now: is there any game that can switch between FP16 and FP32? In fact, it is impossible to build a modern game with complex graphics without FP32, and FP16 usage has become extremely limited because of it.
 
hoho, are you sure about that? Remember when we had the option to use DX9 or DX11 in the early days of DX11? Do we still see anything like that now? Even the majority of indie games have transitioned completely to DX11. The idea behind using RT is not just much better graphics, but a straightforward lighting method that is physically correct, unlike the various baked/faked effects being heavily used right now. Moving forward, those effects will be removed from game engine code, which cleans up engine bloat and will make fixing bugs and issues much easier. So yes, in the future an RT effect will no longer be an option you can turn on and off; it will be the sole choice, with no way to disable it.

Another example is FP16 vs FP32 use in games. Half-Life 2 was among the first games to support an FP32 mode. Using FP32 as opposed to FP16 would cut the frame rate in half, and back then there was no image-quality benefit to Half-Life 2 using FP32. But look at our games right now: is there any game that can switch between FP16 and FP32? In fact, it is impossible to build a modern game with complex graphics without FP32, and FP16 usage has become extremely limited because of it.
The current modern APIs tend to be improved and maintained, while older APIs fade away. If you were a game developer, trying to write a new game using DirectX 9 would mean also trying to write the game with development tools from a decade back or whatever. It's theoretically possible, but it would be a mess, and DirectX 11 is mostly a superset of DX9.

As for FP32, it's used because, for all the blending involved (HDR and such included), FP16 lacks the precision to avoid banding. There most certainly is a difference between FP16 and FP32, but FP16 is supported for operations that don't actually need the precision. Again, you could probably go for full FP16 if you wanted to try that, but then all the GPUs that lack native FP16 support would simply convert to FP32 and run at the same speed. Only RX Vega and later AMD GPUs and RTX 20-series and later Nvidia GPUs have double-rate FP16 fast math (maybe a few other GPUs as well, but definitely not Pascal; in fact, Pascal runs FP16 at something like 1/64 of FP32 performance).
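
Here's a minimal NumPy sketch of the precision problem (illustrative only; real engines blend in shaders, but the rounding behavior is the same idea). Accumulating many small contributions in FP16 stalls once the running total's ULP spacing grows past twice the increment, while FP32 keeps going:

```python
import numpy as np

inc = np.float16(1e-4)                  # tiny per-sample light contribution
acc16, acc32 = np.float16(0.0), np.float32(0.0)
for _ in range(10_000):
    acc16 = np.float16(acc16 + inc)     # rounded back to FP16 every step
    acc32 += np.float32(inc)

print(acc16)  # stalls at 0.25: adding 1e-4 no longer changes the FP16 value
print(acc32)  # ~1.0, as expected
```

Banding is the visual version of the same thing: smooth gradients snap to the nearest representable FP16 step.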
 
  • Like
Reactions: clsmithj

King_V

Illustrious
Ambassador
hoho, are you sure about that? Remember when we had the option to use DX9 or DX11 in the early days of DX11? Do we still see anything like that now? Even the majority of indie games have transitioned completely to DX11. The idea behind using RT is not just much better graphics, but a straightforward lighting method that is physically correct, unlike the various baked/faked effects being heavily used right now. Moving forward, those effects will be removed from game engine code, which cleans up engine bloat and will make fixing bugs and issues much easier. So yes, in the future an RT effect will no longer be an option you can turn on and off; it will be the sole choice, with no way to disable it.

Yes, but look at what you're saying. Even the majority of indie games are only now transitioning to DX11. Which came out in, what, 2009? 11 years ago. And, as Jarred stated, it's a superset, not an entirely new, separate thing.

I mean, I was installing DX 9 on a Windows 98 PC... and had to do a bit of hacking of the CAB file to get DX 9.0c to work in Windows 98 (9.0b would work fine, though).

So, by the time Ray Tracing becomes non-optional, if ever, who's still going to be using Ampere or Big Navi cards?

Ray Tracing was brand-new with the release of Turing, in late 2018. So, it's 2 years old now. When DX 11 was 2 years old, we had the top dogs of, what? The GTX 590 and Radeon HD 6990? Sliding in under them were the GTX 580 and HD 6970.

I'd estimate that if RT ever becomes REQUIRED, we're looking at about another 5 years minimum before that happens. And remember, DX11 wasn't a new feature, it was an improvement/expansion over DX9.
 
  • Like
Reactions: JarredWaltonGPU