News AMD Radeon RX 6800 XT and RX 6800 Review

So many people saying 'omg muh framerate' when ray tracing gets mentioned, and yet they play at 2K/4K? If frames matter to you so much, you should be playing at 1080p.

Also, what's the point of high framerates in a game like Shadow of the Tomb Raider? You are missing out on greater graphical fidelity for no reason, and turning on DXR doesn't dip you below 60 fps if you own a 3080.
 
I'm torn on this release of the 6000 series. I've been waiting patiently for a long while now for both companies to have their offerings out. I've had my Vega 56 for almost two years. At the time I got it I had three 1080p monitors in Eyefinity, so it wasn't too bad to drop to one screen for most games. However, last year I got a Samsung CRG9 49" monitor and have been waiting for the new generations to release to get acceptable performance at its native resolution (5120x1440). While close to 4K, it's a little under, and I was hoping both companies' releases would be basically even so I could go with either offering.

I was rooting for AMD to have a slight edge so it could improve competition and hopefully drive prices down some. I think AMD succeeded in that respect while going for the high end. But now there is the question of DXR and how much to value it in my purchase decision. I'm someone who forgoes FPS in games to get additional quality or eye candy (unless it's a competitive shooter or the like).

The results from AMD's DXR offerings suggest they'll be behind by a decent margin overall, but I'm also curious, because the titles tested are mostly older ones that have already had Nvidia driver optimization. The newer titles that just came out with AMD optimization fare better, but some screenshots also show a lack of the same quality. Time will tell whether this is just early support/drivers with issues, or bugs in the adoption on the AMD side of things.

Lastly we have DLSS vs. AMD's upcoming Super Resolution. I wish AMD had brought this out at launch so we could do a side-by-side comparison, as I believe this will be my deciding factor in which company I go for. It's not in a huge number of games, or really many games I own, but in the next two years it might be in a decent percentage of them, with both companies having solutions. If AMD's Super Resolution can be enabled without anything really needed on the developer side and gives similar performance/quality to Nvidia's, then that is the win in my book. As much as I want DXR in all games for the visual improvement, it still takes too much of a hit to be worthwhile the way I want it, without any sort of upsampling.

I realize I said I'm more interested in the visual quality of a game, but there is a limit to how much I can appreciate that. I want at least 60 frames per second as a bare minimum while pushing the highest settings I can get. Of course I would like 300 fps with DXR in games, but that's not realistic for a long stretch. So for now I can settle for a minimum of 60 frames with whatever DXR settings I can get.
 
A couple years from now the option to turn off RT will be gone. I just don't see game developers putting in the same level of effort supporting outdated hardware.

It's going to be a long, long time before that happens. If you look at the Steam HW survey, the top GPUs are 1060s, 1050ti, 1050, etc. There are more GTX 970s still in use than any ray tracing cards other than the 2060/2070s. I'd be shocked if GPUs capable of ray tracing hit 50% of the gaming community within the next 2 years. Game companies are going to need to support non-ray tracing hardware for a very long time. Ray tracing is going to need to be common on the <$200 and <$300 cards for at least one full upgrade cycle before we get mass adoption, much less anyone getting rid of the ability to run without RT.
 
Would never take that bet as this is a wild assumption with nothing to back it.

And unless you have a crystal ball I would avoid making projections like this.

Sounds like something an NV fanboy would say.

Nope. I've just studied computer algorithms, computer architecture and engineering, 3D algorithms, and AI.

I also own an RX 580, a 7970, and a 5700 XT.

That cache is a bottleneck at 4K today. The cache works effectively because individual draw calls and tiles take little to no memory, so only a small chunk of it is used at a time. But that is highly dependent on the complexity of each chunk.

Polygon counts have been shooting through the roof over the last couple of years. We are now looking at scenes with millions of polygons dynamically lit with Lumen (Unreal Engine 5: https://logicsimplified.com/newgames/unreal-5-nanite-and-lumen-technology-for-next-gen-games/ )

The question is what will be the bigger bottleneck in the future: 10GB of VRAM, or a 128MB cache that is already showing its limits at 4K?
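A rough back-of-envelope on what a single 4K frame's render targets can add up to against a 128MB cache (my own numbers, using common render-target formats, not anything AMD has published):

```cpp
#include <cstdio>

// Back-of-envelope: render-target data for one 4K frame vs a 128 MiB cache.
// The formats below are common examples, not AMD specifics.
int main() {
    const double w = 3840.0, h = 2160.0;
    const double MiB = 1024.0 * 1024.0;

    const double color_rgba8   = w * h * 4 / MiB;      // 8-bit color target
    const double color_rgba16f = w * h * 8 / MiB;      // HDR color target
    const double depth32       = w * h * 4 / MiB;      // 32-bit depth buffer
    const double gbuffer4x     = w * h * 4 * 4 / MiB;  // e.g. four 32-bit G-buffer targets

    std::printf("RGBA8 color  : %6.1f MiB\n", color_rgba8);    // ~31.6 MiB
    std::printf("RGBA16F color: %6.1f MiB\n", color_rgba16f);  // ~63.3 MiB
    std::printf("32-bit depth : %6.1f MiB\n", depth32);        // ~31.6 MiB
    std::printf("4x G-buffer  : %6.1f MiB\n", gbuffer4x);      // ~126.6 MiB
    std::printf("HDR + depth + G-buffer: %6.1f MiB vs 128 MiB cache\n",
                color_rgba16f + depth32 + gbuffer4x);
    return 0;
}
```

Even before you count any geometry or texture traffic, a deferred-style 4K frame can outgrow 128MB, so the cache has to rely on reuse rather than holding everything at once.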
 
I said once "RIP Nvidia"; I take that back ...

Nvidia wins this round, but not performance-wise. No, performance-wise they are close; Nvidia wins on extras:

1- Way better cooler design
2- Better image quality (not performance)
3- Way better-looking card
4- More ports

I am 100% sure that Nvidia's cooler is at least $50 more expensive to make than AMD's (on the 3080) ...

So for AMD to convince me to buy their cards, they should:

1- Reduce their prices by $50
2- Work at the software level to improve image quality

Nvidia wins!

I've decided to get the RTX 3080 Ti when it comes out. I don't trust that 10GB is good enough for the next three years.
 
A couple of games that I play have gone from DirectX 9c to version 11. Because of this, and the fact that a lot of their customers live where they cannot afford a computer that is only five years old, they are losing a good number of people with each update. Be glad that we can afford something with more than four cores and no multi-threading, even if DirectX 11/12 was not designed properly to take full advantage of 16 cores/32 threads (it's still a heck of a lot better than DirectX 9 or older).

If everyone liked the exact same thing, I would be out of a job. You would be able to go into a nationwide lumber yard chain, pick one of three styles, and have them install it for you in your home, just like everyone else's. Since we all have different likes and tastes, I get to keep my job, as every installation is a custom job, even if a lot of it seems the same on the surface. You have different textures and materials, different shapes and sizes, and that is just what I help install; and it gets installed into spaces with different shapes, sizes, materials, textures, and locations.

Right now a 1440p monitor offers the right combination of field of view (FoV), at a resolution that I like, at a good framerate. I prefer smoothness over quality, yet I don't have to make too many compromises of one for the other. And that is just me: I like being able to crank everything up to Ultra at 1440p and manage to get up to 144 Hz, the best my gaming monitor can do. I look forward to the day I can go VR and not have to worry about whether the quality and fidelity are there.

Remember, your mileage may vary, and to each his (or her) own.
 

Sounds like a reasonable choice. But I bet the 3080 Ti will be at least $950; it is a cut-down 3090, after all. March or April will be the earliest announcement, with May availability.

The 3080 10GB will likely kind of disappear -- not enough margin for Nvidia. It also won't make sense once the 3070 Ti 16GB comes out; that will likely be the $700 replacement.

Neither AMD nor Nvidia will lower prices till May at the earliest. And AMD couldn't care less if they make 25% or less of the unit volume; they are at capacity. GPUs are much lower margin than Zen 3, so AMD will run at full TSMC capacity till May, with the main allocation going to Zen 3 for that reason. Also, Rocket Lake will give AMD CPU gaming competition by March(?) in the 8-core-and-under arena (most gamers), but AMD will retain the core-count performance crown (workstation people).

If AMD were really smart they would make mining versions of the cards and sell them at 4x the price. They would raise margins to greater than Zen 3, limit the number of miners, and get some cards into gamers' hands. All they would have to do is hobble the consumer cards so they can't mine, via burnable fuses.
 
Why is ray tracing prominent in your headline when you later agree with everyone else that this is no more than a gimmick today? What about price? And power dissipation?

Taken with the dismissive "nipping at Ampere's heels" anyone - especially a newcomer - will assume from the beginning that AMD didn't really measure up. Some will skip all the detail and leave with that.

Why don't you guys ever think about what your headlines will convey? You're a big influence and they matter.
 
"AMD Radeon RX 6800 XT and RX 6800 Review: Nipping at Ampere's Heels"

Let's see ... nope. No mention of ray tracing. But these GPUs are definitely nipping at Ampere's heels. The RX 6800 is almost universally faster than RTX 3070, and RX 6800 XT trades blows with RTX 3080. Or maybe you're referring to the sub-heading?

"AMD's best GPUs to date are strong in rasterization but fall behind in ray tracing."

Which part of that is incorrect? AMD is competitive and often faster in rasterization, but falls behind in virtually every meaningful ray tracing test. Nipping at someone's heels implies AMD is close enough to Nvidia to be able to bite. And it is. If you want to try and read more into it than that, it's on you and your personal interpretation. I've never heard someone say something is "nipping at [something else's] heels" and felt it meant the product was clearly behind or inferior. Usually, it implies one product/company has routinely been in the lead and is now in danger of losing that lead. Which is pretty much the case here.

I don't agree that ray tracing is a gimmick. It's a well-known algorithm that produces superior rendering results, but it's extremely complex and was previously out of reach for real-time gaming (prior to the RTX launch). Even now, looking at DXR, there are many games that have done a pretty bad job of incorporating ray tracing, but a few show the potential. Control and probably Watch Dogs Legion are two of the best examples, and I'm looking forward to seeing Cyberpunk 2077 with ray tracing enabled. But the ray tracing in Dirt 5, or Shadow of the Tomb Raider, or Call of Duty (both Modern Warfare and Black Ops Cold War)? Yeah, it doesn't do much/enough to make me think it was a great choice. Better shadows alone are not as important as improved lighting, reflections, and shadows -- all three together, not just one.
 
  • Like
Reactions: digitalgriffin

Yes, but this is a long delay for me, and I am really pissed off ... Nvidia should never have released any card with less than 16GB of VRAM (AMD is smarter in this). A five-month delay is not acceptable.
 
I realize ray tracing is important to some people, and that is fine, but I think the vast majority of people are, in all honesty, really not that worried about it. Maybe for some specific games that don't depend on being fast-paced but are instead more visually detailed and story-focused -- and that's fine, because the performance hit probably doesn't even matter there. For everything else, RT is probably a non-starter for most people anyhow.

Like a lot of things, it has its place, and then there are places where it's unwanted.
 
I'm more than happy with 30 fps in those types of games with maximum eye candy, also because 30 fps looks more "cinematic." (Do you remember all the criticism Peter Jackson got for shooting The Hobbit at 48 fps, and how it looked like a "soap opera" rather than a big-budget movie because of it? Well, I think most people agree, as the experiment has not been repeated and we are still watching movies at 24 fps.)
Films and games at 30fps are not at all comparable, since they work differently. When recording a film, the camera is almost continuously recording motion to each frame before moving onto the next, resulting in natural motion blur that provides a smooth transition between frames. With games, each frame is rendered at one point in time, without those smooth transitions, as accurately simulating them would require rendering many frames and merging them together. As a result, the transition from one frame to the next tends to appear quite choppy at 30fps, unlike films where it can appear smooth.

Some games provide the option to add a motion blur effect, but it's only roughly simulated and doesn't look nearly as good, nor does it completely remove the choppiness. Films will also specifically control the movement of cameras and the objects they are recording to prevent anything important from getting blurred, while that's often not the case in games, where the user has control of the camera. For anything with significant camera movements, like an FPS game, even a perfect motion blur implementation would result in things appearing rather blurry when looking around at 30fps.
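To illustrate what "rendering many frames and merging them together" would mean, here's a minimal accumulation-style sketch; renderSubframe() is a hypothetical stand-in for a real renderer and just draws a one-pixel "object" moving across a single row:

```cpp
#include <cstdio>
#include <vector>
#include <cmath>

// Accumulation-style motion blur sketch: render several sub-frames across the
// shutter interval and average them, approximating what a film camera captures
// while its shutter is open. renderSubframe() is a hypothetical stand-in.
static std::vector<float> renderSubframe(int width, float objectX) {
    std::vector<float> row(width, 0.0f);
    int x = static_cast<int>(std::round(objectX));
    if (x >= 0 && x < width) row[x] = 1.0f;   // a 1-pixel-wide "object"
    return row;
}

int main() {
    const int width = 16;
    const int subframes = 8;                  // samples across one 1/30 s frame
    const float startX = 2.0f, endX = 10.0f;  // object moves 8 pixels this frame

    std::vector<float> accum(width, 0.0f);
    for (int s = 0; s < subframes; ++s) {
        float t = static_cast<float>(s) / (subframes - 1);
        std::vector<float> sub = renderSubframe(width, startX + t * (endX - startX));
        for (int i = 0; i < width; ++i) accum[i] += sub[i] / subframes;
    }

    // The averaged frame smears the object along its path (motion blur),
    // instead of the single sharp position a normal game frame would show.
    for (float v : accum) std::printf("%.2f ", v);
    std::printf("\n");
    return 0;
}
```

The catch, of course, is that doing this for real means rendering eight frames' worth of work for every frame you display, which is why games approximate blur in post instead.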

A shame about their reference cooler though.
What about the reference cooler? From the couple reviews I've read so far, it's been very well-received, and the other review even suggested that the 6800XT was quieter than the RTX 3080 at stock settings, while maintaining very reasonable temperatures and putting out less heat due to the card's lower power draw.

With games ever increasing in complexity the 128MB of infinity cache will age quickly. Its showing its limits at 4k now.
If much of the performance gains are a result of keeping frequently accessed framebuffer data in the cache, that's not likely to change much unless resolution increases further. And if their new "Super Resolution" upscaling feature proves to be a good alternative to DLSS, which I suspect it probably will, then the amount of framebuffer data being accessed during the rendering process might even become smaller when that is active. Games will likely require more VRAM in the future, but that could just as easily hurt performance more on Nvidia's current cards, as they are at a VRAM disadvantage and more likely to have to shuffle data out to system RAM.
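As a quick illustration of how much smaller that working set gets (pure pixel-count arithmetic on my part, assuming a 1440p internal resolution upscaled to 4K; real savings depend on the engine):

```cpp
#include <cstdio>

// How much smaller the per-target footprint becomes if a game renders
// internally at 1440p and upscales to 4K, versus rendering 4K natively.
int main() {
    const double native4k      = 3840.0 * 2160.0;
    const double internal1440p = 2560.0 * 1440.0;
    std::printf("4K pixels    : %.1f M\n", native4k / 1e6);          // ~8.3 M
    std::printf("1440p pixels : %.1f M\n", internal1440p / 1e6);     // ~3.7 M
    std::printf("Reduction    : %.2fx\n", native4k / internal1440p); // 2.25x
    return 0;
}
```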

Man.... I had a crack at a $280 RX5700 at the start of 2020. The new cards are certainly better, but they're far above that level of price/performance. For instance, the RX6800 is 2x the cost, but only 60-70% faster....
Moreover, Comparing, say, the RX6800 to the RTX3070 in price/performance, AMD is almost completely negating any ray tracing performance differences. In that battle, the 3070 wins in my opinion, hands down.
$280 for an RX 5700 was not exactly typical pricing though, and chances are that it was one of the models with questionable cooling, hence why they were trying to get rid of it. Going by more typical RX 5700 pricing, the suggested price of the 6800 is around 60-75% higher than that card, which is right about in line with it from a price to performance standpoint. Considering cards in this higher-end price bracket generally fare significantly worse on a performance-per-dollar basis, that's not too bad considering the 5000-series cards just launched a little over a year ago.
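For what it's worth, plugging in the launch MSRPs ($349 for the RX 5700, $579 for the RX 6800) against that rough 60-70% performance gap lands at roughly the same performance per dollar -- a quick sanity check, not measured data:

```cpp
#include <cstdio>

// Perf-per-dollar sanity check using launch MSRPs: RX 5700 at $349 (mid-2019)
// vs RX 6800 at $579, with the ~60-70% performance gap mentioned above.
// List prices and a rough perf estimate, not benchmark results.
int main() {
    const double price5700 = 349.0, price6800 = 579.0;
    const double perfRatioLow = 1.60, perfRatioHigh = 1.70;  // 6800 vs 5700

    const double priceRatio = price6800 / price5700;
    std::printf("Price ratio          : %.2fx\n", priceRatio);        // ~1.66x
    std::printf("Perf-per-dollar ratio: %.2fx to %.2fx\n",
                perfRatioLow / priceRatio, perfRatioHigh / priceRatio); // ~0.96x to ~1.02x
    return 0;
}
```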

A couple years from now the option to turn off RT will be gone. I just don't see game developers putting in the same level of effort supporting outdated hardware.
That seems unlikely within a "couple years". Especially since the newly-released consoles provide lower RT performance than these cards. So, we're likely to see most games continue to be designed first and foremost for rasterized rendering with some RT effects sprinkled on top throughout this console generation. And for most effects, implementing both versions should not be much harder, as commonly used game engines take much of that work out of the developer's hands.

The results from AMD's DXR offerings suggest they'll be behind by a decent margin overall, but I'm also curious, because the titles tested are mostly older ones that have already had Nvidia driver optimization. The newer titles that just came out with AMD optimization fare better, but some screenshots also show a lack of the same quality. Time will tell whether this is just early support/drivers with issues, or bugs in the adoption on the AMD side of things.
Yeah, it will be interesting to see if these cards gain any ground with raytracing as new games are optimized for them. Current RT games were optimized exclusively for Nvidia's raytracing hardware, since that's all that was available, and as such might not play to the specific strengths and weaknesses of AMD's implementation. AMD may improve performance through driver updates as well, since there's a lot more to realtime raytracing than just casting rays, and many parts of the process like denoising can probably be optimized further on their end.

Nvidia wins this round, but not performance-wise. No, performance-wise they are close; Nvidia wins on extras:

1- Way better cooler design
2- Better image quality (not performance)
3- Way better-looking card
4- More ports
  1. I see nothing indicative of Nvidia's cooler design being "better". It's possible it might be more expensive to make, but it's also a card that puts out more heat.
  2. You're basing this "better image quality" on the one newly-released game where RT reflections seemed to not be rendered right? >_>
  3. Many people are not all that fond of how the 30-series cards look. Looks are subjective, so what might look good to one person, might not to another.
  4. They have the exact same number of ports. Both have four. The USB-C port can be used as a DisplayPort output using a simple adapter cable, and also allows for even more connectivity options for USB-C displays, adding USB and 27 watts of power delivery, making it arguably better than a regular DisplayPort connection.

But if AMD were really smart they would make mining versions of the cards and sell them at 4x the price. They would raise margins to greater than Zen 3, limit the number of miners, and get some cards into gamers' hands. All they would have to do is hobble the consumer cards so they can't mine, via burnable fuses.
Something tells me hobbling consumer cards to impede mining would also impede their gaming and application performance to at least some extent, so that's probably impractical. And miners are not going to buy a card that costs 4x the price and can't be easily resold, since they wouldn't likely be able to get a worthwhile return on investment out of it, and when the mining market inevitably crashes they would be stuck with a piece of hardware that they can't get rid of to recoup some of their costs.
 
  • Like
Reactions: drivinfast247
Look, I enjoyed playing the game called Control a lot... But I'm probably never going to play it again. I'm especially not going to spend $700 to make it look a little better.

So raytracing and DLSS are not useful theoretical features to me. Maybe in another 2 years if another couple games support it, but look at what just happened to all the 2080ti owners who "just bought it" for ray tracing.
The card went obsolete before games actually started to support it.
 
After watching Digital Foundry's video comparing Watch Dogs Legion's ray tracing on the Xbox Series X to the PC version's, I already suspected what was found in this review. Digital Foundry found that an RTX 2060 Super at console ray tracing settings offered the same (and sometimes better) performance as the console version... and that's a "mere" 7.2 TFLOPS GPU, so rasterization performance surely was not the culprit; ray tracing performance had to be, and this review just confirmed it. And that was without DLSS! Enable DLSS and the 2060 Super beats the console every time.

Nvidia is DF's biggest sponsor, and DF isn't the best at disclosing who paid for which video... so take their results with a grain of salt. At this point they are YouTube affiliate personalities, not really an independent review outlet.
 
  • Like
Reactions: dmoros78v
That's the thing, that's your opinion, which is mostly shaped by what you value more (fps), but not all people think the same. For CoD or any other first-person shooter I agree, but for other games like Control, Watch Dogs, Tomb Raider, or mostly "cinematic" third-person games, I will prefer eye candy over fps every time. I'm more than happy with 30 fps in those types of games with maximum eye candy, also because 30 fps looks more "cinematic." (Do you remember all the criticism Peter Jackson got for shooting The Hobbit at 48 fps, and how it looked like a "soap opera" rather than a big-budget movie because of it? Well, I think most people agree, as the experiment has not been repeated and we are still watching movies at 24 fps.)

Not gonna knock you for your opinion but I genuinely don't understand it. Even windows feels awful on 30hz monitors, games are even worse and I don't even play first person shooters. But that's the great thing about playing on PC, you can tailor make the experience to suit your needs/preferences. Consoles have been getting better in this regard with "high" fps mode vs high quality modes but it's still lacking.
 
  • Like
Reactions: dmoros78v
Great review as always Jarred.
Just one little thing: did I miss the data about smart access memory and rage mode performance or will it be updated in another article?

Thanks!
 
  • Like
Reactions: digitalgriffin
Look, I enjoyed playing the game called Control a lot... But I'm probably never going to play it again. I'm especially not going to spend $700 to make it look a little better.

So raytracing and DLSS are not useful theoretical features to me. Maybe in another 2 years if another couple games support it, but look at what just happened to all the 2080ti owners who "just bought it" for ray tracing.
The card went obsolete before games actually started to support it.

2080 Tis didn't magically lose performance; people can still game at a very high level with those.
 
  • Like
Reactions: artk2219

You're showing your bias

Either game complexity will affect memory for both, or it won't. But first you say increased complexity won't affect the cache just because it holds frame buffer data, and then you turn around and say standard memory is more likely to be affected. Hogwash.

That cache is more than just a frame buffer/Z-buffer/effects buffer. It handles all the triangle data and textures for each draw call. It's not a simple buffer as you claim. The render chunks (tiles) must handle more complexity, but they don't change their tile size, and early discards become less effective.

As I told someone else, game complexity is exploding with engines like UE5's Lumen. They are putting in millions of triangles. It's insane. What do you think that cache will do when it's faced with millions of triangles in the future? The answer is simple: start to struggle at 4K.

I'm not interested in the fastest HD/2K card at this price point. I'm interested in the fastest card with great RT to match.

And AMD's upscaling tech isn't here, so you can't sell me on what isn't here (remember Vega's double-int promise). It's not AI-based either. The rumors are that it's fast but inferior quality-wise.

The heat difference is also trivial; in Rage Mode you are looking at <10 watts. Also, Nvidia's cooler allows some blow-out and pass-through, which I can tell you as an engineer is superior: it vents some heat and has less restrictive airflow.

Also, Nvidia doesn't require you to buy the latest everything to get the speed promised.

Let's add it up:
Raster 1080p: AMD
Raster 1440p: AMD
Raster 4K: NVIDIA
RT: NVIDIA by a large margin
Upscaling tech: NVIDIA
Cooling solution: NVIDIA
Memory: AMD
Requires a 5000-series CPU: NVIDIA (doesn't require it)

3:5. I'll take my extra $50 and put it down on an Nvidia 3080. And AMD couldn't care less; even if they sell far fewer 6000-series cards, they have much bigger margins on Zen 3, and they are at capacity.

As to mining, there are a ton of hardware-level tricks that can be enforced to detect when a card is being used for mining. I can think of 3 or 4. One is to require the card to run on an x8 or wider PCIe link; miners love using x1 links because they allow more cards per rig, meaning greater efficiency.

Another is to have the card enumerate the bus and send a hello call to other AMD cards; if it detects more than two, mining gets disabled.

The third involves topics that I won't openly talk about because they deal with security. But it's easy to tell when a mining program is running.

Mining can easily be stopped without affecting games. They just chose not to.
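To illustrate the first two ideas (just a sketch using standard Linux sysfs entries; real enforcement would have to live in the firmware/driver, not a user-space tool like this):

```cpp
#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>

// Sketch of the kind of check described above, using standard Linux sysfs
// entries: count AMD display devices (vendor 0x1002, class 0x03xxxx) and read
// each card's negotiated PCIe link width. A mining rig on x1 risers would show
// many AMD GPUs reporting current_link_width == 1.
namespace fs = std::filesystem;

static std::string readFile(const fs::path& p) {
    std::ifstream in(p);
    std::string s;
    std::getline(in, s);
    return s;
}

int main() {
    int amdGpus = 0;
    for (const auto& dev : fs::directory_iterator("/sys/bus/pci/devices")) {
        std::string vendor   = readFile(dev.path() / "vendor");  // e.g. "0x1002"
        std::string devClass = readFile(dev.path() / "class");   // e.g. "0x030000"
        if (vendor != "0x1002" || devClass.rfind("0x03", 0) != 0) continue;

        ++amdGpus;
        std::string width = readFile(dev.path() / "current_link_width");
        std::cout << dev.path().filename().string()
                  << " link width x" << width << "\n";
    }
    std::cout << amdGpus << " AMD display device(s) found\n";
    return 0;
}
```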
 
Well, I'm gonna need to upgrade my GPU sooner than I thought.

It seems this weird little game I'm playing and enjoying every second of (Snowrunner) can really push my current RTX 2070 to the limit and beyond; with everything maxed out I'm getting around 45~50 FPS at 1440p.

That's if any of these cards (Nvidia or AMD) are ever available in my country, and if the price is somehow "decent," because everything is going crazy right now.
 
So raytracing and DLSS are not useful theoretical features to me. Maybe in another 2 years if another couple games support it, but look at what just happened to all the 2080ti owners who "just bought it" for ray tracing.
The card went obsolete before games actually started to support it.
I wouldn't say the 2080 Ti is obsolete yet. It's pretty much like a 3070 with more VRAM. It might not be as high-end as it was before, but it's still arguably pretty capable, with RT performance that should be better than the new consoles.

So for AMD to convince me to buy their cards, they should:

1- Reduce their prices by $50
2- Work at the software level to improve image quality
Since you pointed out that I disagreed with your entire post in another thread about these cards, and I did pretty much the same thing here, I will agree with this. : P

Or at least, I think the cards would be more compelling over the competition if they were to target a somewhat lower price point. Of course, that extra 6-8GB of VRAM doesn't come free. When Nvidia releases higher-VRAM versions of their cards, they will most likely be priced notably higher. A 3080 Ti with 20GB of GDDR6X would likely get an MSRP closer to $1000, while only being perhaps 5-10% faster than a 3080 in today's titles, and a 3070 Ti with additional VRAM would probably get an MSRP of around $600 or so.

Not everyone feels the need for 16GB of VRAM though, and AMD uses the term "up to 16GB" when describing the 6800 in their marketing materials, so I suspect an 8GB variant is coming, with a price probably much closer to that of the 3070. An RX 6800 8GB for around $500 would still likely perform about the same in today's games, and probably for a number of years to come at 1440p and below, but there would be a better tradeoff between the somewhat lower RT performance and the higher rasterized performance relative to the 3070 when the two are positioned at about the same price. AMD likely wants to focus on the higher-margin parts initially though, while supply is limited.

And as for software support to improve image quality, there may be some room for improvement there at least on the RT side of things, and maybe with their new upscaling routine that isn't available yet. The only game where the article mentioned RT looking noticeably different was with Watchdogs Legion though, and that may be something on the game-developer's end.

You're showing your bias : D

Your suggestion of infinity cache limitations is based largely on guesswork about what data is in the cache at any given time. I brought up frame buffer data because it's likely a part of what's stored in there, and something that could potentially be reduced in size if upscaling becomes more common for higher resolutions.

As for engines like Unreal5 working with millions of triangles, the Lumen demo was running on the PS5 hardware, with half the CUs and probably half the cache of a 6800 XT, along with lower native memory bandwidth than these cards. Sure, it wasn't running at 4K60, but there's no guarantee that even the high-end 30-series cards will be able to do that. And the 3070 actually has lower native memory bandwidth than an RX 6800. I guess I'm not seeing game developers really crippling performance on console hardware while designing their games around the capabilities of $700+ graphics cards any time soon.

It's possible that the 30-series architecture may show additional performance benefits in the future, but the same could be said for the 6000-series. At this point, it's anyone's guess.
 
Not gonna knock you for your opinion but I genuinely don't understand it. Even windows feels awful on 30hz monitors, games are even worse and I don't even play first person shooters. But that's the great thing about playing on PC, you can tailor make the experience to suit your needs/preferences. Consoles have been getting better in this regard with "high" fps mode vs high quality modes but it's still lacking.

Thanks for not bashing my opinion or taste, as it simply is that! As to why, I guess it comes down to me being a cinephile. Movie watching has been my second priority for entertainment, and I have quite a long collection of DVDs and Blu-rays. So games that are like movies (The Last of Us, Uncharted, God of War on consoles) look and play great, and I think most importantly have gotten to a point where it is almost as if you are playing an interactive movie; 30 fps contributes to that "movie feeling," just like a proper 24p shooting option on professional-grade cameras is coveted by filmmakers.

So it's just a "looks like a movie" thing, I guess. But only that: for desktop usage I of course go for 60 Hz minimum, and for games where precision is needed, like first-person shooters, racing simulators, etc., 60 fps is of course my choice. There is also the exception of From Software games like Dark Souls, where 60 fps really does help in the fights; but on the opposite side of the spectrum you have Red Dead Redemption 2, which has quite easy combat with aim assists inherited from the console version and looks way, way better at 4K @ 30 fps than at 1440p @ 60 fps.

But that's the great thing about PC (and lately the newest consoles): we have the choice of playing how we like, eye candy or speed. It's our choice, and that's great.
 
Nvidia is DF's biggest sponsor, and DF isn't the best at disclosing who paid for which video... so take their results with a grain of salt. At this point they are YouTube affiliate personalities, not really an independent review outlet.

Ok, point taken, but nonetheless their findings match Tom's findings, and those of every other review site I have read so far on the new AMD cards: they excel at rasterization but fall short on RT performance.
 
Well, AMD from the very beginning has not been that excited to talk about ray tracing, even during their initial presentation. There are several AMD-sponsored titles using RT now, and if you look at them, they are mostly about shadows.

And the 3070 actually has lower native memory bandwidth than an RX 6800

This is one of Nvidia's specialties. If anything, Nvidia handles VRAM and bandwidth much better than AMD; it's something Nvidia has been focusing on since the Kepler era.
 
Films and games at 30fps are not at all comparable, since they work differently. When recording a film, the camera is almost continuously recording motion to each frame before moving onto the next, resulting in natural motion blur that provides a smooth transition between frames. With games, each frame is rendered at one point in time, without those smooth transitions, as accurately simulating them would require rendering many frames and merging them together. As a result, the transition from one frame to the next tends to appear quite choppy at 30fps, unlike films where it can appear smooth.

Some games provide the option to add a motion blur effect, but it's only roughly simulated and doesn't look nearly as good, nor does it completely remove the choppiness. Films will also specifically control the movement of cameras and the objects they are recording to prevent anything important from getting blurred, while that's often not the case in games, where the user has control of the camera. For anything with significant camera movements, like an FPS game, even a perfect motion blur implementation would result in things appearing rather blurry when looking around at 30fps.

Well, I did say that for first-person shooters I agree 60 fps is the minimum I aim for; I could never play Doom Ultimate, Far Cry, etc. at less than 60 fps. Also, I know how cameras work and how exposure time adds natural motion blur to images, but we have had motion blur in games for quite a while now, and in games like Red Dead Redemption 2, the Tomb Raider series, Assassin's Creed, etc., it looks perfectly fine (at least to me 😛) and contributes to that "playing a movie" feeling. Again, it's just my opinion based on my tastes.

Cheers
 
Great review as always Jarred.
Just one little thing: did I miss the data about smart access memory and rage mode performance or will it be updated in another article?

Thanks!

The talk about AMD's Smart Access Memory seems overblown, as an RX 6800 XT combined with a 5900X CPU still underperforms the same RX 6800 XT with a 9900K or 10900K at higher resolutions in gaming. See the 13-game average chart under "Radeon RX 6800 CPU Scaling."
 
  1. I see nothing indicative of Nvidia's cooler design being "better". It's possible it might be more expensive to make, but it's also a card that puts out more heat.
  2. You're basing this "better image quality" on the one newly-released game where RT reflections seemed to not be rendered right? >_>
  3. Many people are not all that fond of how the 30-series cards look. Looks are subjective, so what might look good to one person, might not to another.
  4. They have the exact same number of ports. Both have four. The USB-C port can be used as a DisplayPort output using a simple adapter cable, and also allows for even more connectivity options for USB-C displays, adding USB and 27 watts of power delivery, making it arguably better than a regular DisplayPort connection.

1- It is a better and more innovative design, a lot better than a generic vapor chamber design. The heat pipes with unrestricted airflow are a cool design.

2- You have no proof.

3- Well, I was talking about myself and people who like it ... what is your point? I never said people must like what I like.

4- Fewer DisplayPorts is fewer DisplayPorts. I don't want to use any adapters or hunt for USB-C monitors, and AMD could easily have added USB-C on top of four regular ports, like Nvidia did in the past.
 
Great review as always Jarred.
Just one little thing: did I miss the data about smart access memory and rage mode performance or will it be updated in another article?

Thanks!
SMA was enabled on the 5900X, which didn't seem to make a huge difference. I ran out of time to finish testing the RX 6800 (vanilla) and will be adding those charts today. I also need to run without SMA to see how it actually affects performance. Problem is, SMA is a BIOS feature and tied to the motherboard, and only about six boards support it right now, and I have to wonder if some of the changes to SMA are actually negatively impacting performance in general. Meaning, the firmware and BIOS updates that enabled SMA -- on the MSI X570 Godlike board I used -- may have dropped performance in general, but SMA brings back some of the performance on the RX 6800 series.

Look at my 9900K vs 10900K vs 5900X results for the 3070/3080/3090, where the 9900K comes out ahead of the other CPUs with the 3090 in a lot of cases, including the overall results. I think part of that is simply the 9900K testbed is more mature in firmware -- an older, existing socket and platform. Only the 3070 ended up performing better on the 5900X, which doesn't make much sense. But then, I still need to test Valhalla on a couple of the configs, and Dirt 5 with the 3090 performed unexpectedly poorly on the 10900K and 5900X (I retested it twice just to be sure, and still don't know what's causing the problem). Basically, if you pay close attention to any specific GPU and look at all of the results, there are clearly cases where some games either perform poorly or perform really well on some combinations of hardware but not others.

Drivers, firmware, slight differences in RAM, or specific game optimizations? It's not clear what precisely is the root cause.
 
Common misunderstanding: DXR may be agnostic, but RTX IS NOT. RTX is not just a fun name for their cards; it is an implementation based on DXR, but it is NOT the same thing. Not out of luck at all -- I'm not a consumer zombie, I can survive a whole generation between upgrades and live a happy life. I'll get ray tracing when it's ready, or when my current card melts; I don't care that much. If it's on a card I get, then yay; if not, then meh.

RTX is a development platform that encapsulates ray tracing via DXR and Vulkan, along with many other tools for developers, like the use of AI cores for DLSS. Nonetheless, the games tested in this review are using DXR; if they weren't, it would be impossible to enable ray tracing on these new AMD cards.

So maybe NVIDIA has some proprietary stuff (as usual), but today ray tracing itself is platform agnostic: it is available on Nvidia and AMD through DX12's DXR, and on the latest next-gen consoles. That was my point -- you incorrectly stated that ray tracing is not platform agnostic, and it is, at least today.
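To make that concrete, here's a minimal sketch (my own, not from the review) that queries plain D3D12 for DXR support; the same vendor-agnostic call works on AMD and Nvidia cards with DXR-capable drivers:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

// Ask plain Direct3D 12 whether the default adapter supports DXR, and at what
// tier. Nothing here is vendor-specific.
int main() {
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("Could not create a D3D12 device.\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5)))) {
        switch (opts5.RaytracingTier) {
        case D3D12_RAYTRACING_TIER_NOT_SUPPORTED:
            std::printf("DXR: not supported\n"); break;
        case D3D12_RAYTRACING_TIER_1_0:
            std::printf("DXR: Tier 1.0\n"); break;
        default:
            std::printf("DXR: Tier 1.1 or higher\n"); break;
        }
    }
    return 0;
}
```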

Now, when you decide to change your card, if you are able to find one without DXR support by then... well, that's your choice, and yours alone.

I'm not a consumer zombie either... lol. I still game on an X58 motherboard bought 11 years ago; it first had a Core i7 920, which I then upgraded to a Core i7 980X I found used for 100 bucks on eBay. xD At first the board had 6GB of triple-channel RAM, and now it has a 24GB triple-channel kit. And only now that Zen 3 has landed (and a few games have started showing up that demand AVX extensions, which my CPU doesn't support) have I started considering building a new PC from scratch.

Cheers!
 
  • Like
Reactions: digitalgriffin
Ray Tracing PSA:

DXR and RTX are not the same thing!
One is a proprietary technology.
If your game supports DirectX 12 ray tracing, it will trace rays on AMD or Nvidia.
If your game was developed for RTX alone, it will ONLY trace rays on RTX cards.

RTX is NVIDIA's own implementation, and RTX cards SUPPORT DXR as well.


Not quite. RTX is a development platform, pretty much like Visual Studio, so it is a collection of tools, one of which is ray tracing, and it can compile ray tracing code using DXR, Vulkan, or OptiX. The last one, OptiX, is the NVIDIA proprietary ray tracing API you may be referring to, but as of today only productivity software has embraced OptiX (mostly Adobe After Effects and Blender), and not a single game has shipped using it.

Best Regards
 
  • Like
Reactions: digitalgriffin
$280 for an RX 5700 was not exactly typical pricing though, and chances are that it was one of the models with questionable cooling, hence why they were trying to get rid of it. Going by more typical RX 5700 pricing, the suggested price of the 6800 is around 60-75% higher than that card, which is right about in line with it from a price to performance standpoint. Considering cards in this higher-end price bracket generally fare significantly worse on a performance-per-dollar basis, that's not too bad considering the 5000-series cards just launched a little over a year ago.

Well, the release price of the 5700 was $349 -- that was mid-2019. I did pick up an MSI 5700 Evoke OC for $273, after a $30 MIR and a $30 instant rebate. This was pre-pandemic... or at least prior to people generally realizing how screwed up things would be. Admittedly, this particular discount brought it very close to the price of the cheapest 5600 XT. Definitely a good sale deal, but I don't think at the level of "we have to dump this crap."

I can say that my son's card has never had cooling issues. Whether that's because the card's cooling is reasonable on its own, or because my son's PC case is particularly good at cooling, I couldn't really say.

I know some early MSI 5700 XT cards had cooling issues, as well as the XFX THICC variety, but that was mitigated months earlier if I recall correctly. I remember digging back then, though I haven't had to think about it since the February/March timeframe.
 

I really appreciate the fact that you tested on multiple platforms. There are many of us with older chipsets who just can't afford the whole ball of wax yet want to know what performance uplift we might get. Intel was top of the line until just recently, so it made sense to test it for that customer base.