AMD Radeon RX 6800 XT and RX 6800 Review

Overall, this is a very well-done review. There is one inaccuracy in it but it doesn't affect the overall outcome:

"Nvidia has had a virtual stranglehold on the GPU market for cards priced $500 or more, going back to at least the GTX 700-series in 2013."
- Jarred Walton, 2020/11/23

This is not entirely correct, because 2½ years after 2013 came the R9 Fury. The R9 Fury X and R9 Fury held positions #2 and #3 in 2015:

"Fury represents AMD’s return to high-end gaming. But it's far better suited for 2560x1440 than 4K. In almost every test, Sapphire’s R9 Fury Tri-X outperformed Nvidia’s GeForce GTX 980. Even at reference clock rates it's able to keep up. Fury fits nicely between the GTX 980 and 980 Ti in both power and cost."
- Kevin Carbotte & Igor Wallossek, 2015/07/10

https://www.tomshardware.com/reviews/sapphire-amd-radeon-r9-fury-tri-x-overclocked,4216.html

So Nvidia's stranglehold on the GPU market for cards priced at $500 or more actually started with the GTX 10 series, meaning its grip on high-end cards has only been in place since 2016.
 

CerianK

News flash: Optimistic AMD fanboy inadvertently knocks RX 6800 down one notch in PassMark GPU charts: Link
I apologize for getting a laugh out of those system specs. Hint: the GPU is not the bottleneck.

Edit: Fixed, with this note: "Baseline has been excluded from average results due to anomalies in the submitted results."
 
So Nvidia's stranglehold on the GPU market for cards priced at $500 or more actually started with the GTX 10 series, meaning its grip on high-end cards has only been in place since 2016.
The problem is that being in second place with a GPU that was 10% slower and cost the same as the leader (GTX 980 Ti) meant the 980 Ti won by a big margin. R9 Fury X was designed to beat the GTX 980, and Nvidia pre-empted the launch by releasing a faster card and changing prices. Suddenly AMD's $649 card didn't look as good. Unless you focused on games where AMD tended to have a performance advantage, which some places certainly did. Regardless, I said Nvidia had a "virtual" stranglehold because it wasn't complete and total domination.

R9 290/290X and the basically clock bumped R9 390/390X were competitive -- lost some games, won some others, used a lot more power, but they were still fast. R9 Fury X was a weird compromise because HBM made it very expensive and also limited it to 4GB, so the 6GB 980 Ti was clearly superior. R9 Fury basically tied the GTX 980, only it came out nearly a full year late. An overclocked R9 Fury card against a standard GTX 980 isn't really a fair match either, as lots of overclocked GTX 980 cards existed -- which still used a lot less power. But after that, RX 400 and 500 series totally abandoned the high-end, and Vega was pretty much a repeat of Fiji: not as fast, used more power, late to the party.

Anyway, there are always differences of opinion. I wasn't at Tom's Hardware during the Fiji and Vega launches. I was at PC Gamer, and I still scored the AMD cards probably higher than they warranted. Partly because I was tired of seeing Nvidia win so badly, partly because I was also writing for Maximum PC and the reviews there tended to err on the side of higher scores for fast hardware, never mind the price. I do remember wondering about some of the TH results back then. I mean, Far Cry 4 had an obvious problem on the Fury X ... but it wasn't corrected / retested / whatever.

Today, I still have the Fury X and 980 Ti fully tested in our GPU hierarchy. (Sorry, no vanilla Fury.) Interestingly, the 980 Ti and Fury X are basically tied. Probably because today we're using a lot more DX12 / Vulkan games where AMD's architecture benefits. But back in 2015-2016, my recollection is that far more people were using 980 Ti than R9 Fury X. And today, I'd still much rather have a 980 Ti than a Fury X -- and if I had bought either one, I'd probably have sold it and upgraded two or three times since they launched. :)

TL;DR: Fury X was fine, Vega was fine, HD 7970 was good, R9 390X was decent. You could have made an argument for any of them at the time they launched. I wouldn't say any of those were superior to their Nvidia alternatives back in the day, and the gap with Pascal and Turing seemed to get wider. Ampere vs. Navi 2x though is pretty dang close overall (if you discount DLSS and RT performance).
 
Yes, but look at what you're saying. Even the majority of indie games are transitioning to DX11, which came out in, what, 2009? 11 years ago. And, as Jarred stated, it's a superset, not an entirely new, separate thing.

I mean, I was installing DX 9 on a Windows 98 PC... and had to do a bit of hacking of the CAB file to get DX 9.0c to work in Windows 98 (9.0b would work fine, though).

So, by the time Ray Tracing becomes non-optional, if ever, who's still going to be using Ampere or Big Navi cards?

Ray Tracing was brand-new with the release of Turing, in late 2018. So, it's 2 years old now. When DX 11 was 2 years old, we had the top dogs of, what? The GTX 590 and Radeon HD 6990? Sliding in under them were the GTX 580 and HD 6970.

I'd estimate that if RT ever becomes REQUIRED, we're looking at about another 5 years minimum before that happens. And remember, DX11 wasn't a new feature; it was an improvement/expansion of DX10.

Read the post I'm replying to again. I'm not talking about how long it will take for the new stuff to become the norm; he said it will never happen. One of the primary goals of using RT is to completely replace the baked/faked effects that are commonly used right now, so as hardware becomes more powerful we will eventually use RT effects with no fallback to the older methods we're using today.
 
Raja Koduri left AMD in September 2017, just two months after the release of Vega. With the RDNA chips, are we finally getting past the designs he was in charge of?
We're definitely to the point where anything Raja may have done is having far less of an impact. RDNA1 still builds from Vega/GCN in some respects, and RDNA2 is obviously heavily based off RDNA1. But considering Raja has been gone for basically three years now, yeah, his influence on the latest chips is pretty much gone. One interesting bit I have to wonder about is the "High Bandwidth Cache Controller" that was a big hype thing for Vega, and the new Infinity Cache. HBCC mostly seemed to be marketing for a memory controller with support for HBM2, but a 128MB L3 is certainly something different.
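As a rough illustration of why a big on-die cache is more than a rebadged memory controller, here's a back-of-the-envelope sketch in Python. The hit rates and cache bandwidth below are placeholder assumptions, not measured or AMD-published figures; only the 512 GB/s number matches the RX 6800 XT's 256-bit, 16 Gbps GDDR6 bus.

```python
# Back-of-the-envelope sketch: how a large on-die cache can raise "effective"
# bandwidth compared to a plain GDDR6 bus. Hit rates and cache bandwidth are
# assumed placeholders for illustration only.

def effective_bandwidth(hit_rate, cache_bw, vram_bw):
    """Hits are served from the on-die cache; misses go out to VRAM."""
    return hit_rate * cache_bw + (1.0 - hit_rate) * vram_bw

VRAM_BW = 512.0    # GB/s: 256-bit GDDR6 at 16 Gbps (RX 6800 XT class)
CACHE_BW = 1600.0  # GB/s: assumed on-die cache bandwidth (placeholder)

for hit_rate in (0.40, 0.55, 0.70):  # assumed hit rates (e.g., 4K vs. 1440p vs. 1080p)
    eff = effective_bandwidth(hit_rate, CACHE_BW, VRAM_BW)
    print(f"hit rate {hit_rate:.0%}: ~{eff:.0f} GB/s effective")
```

Even with modest hit rates, the weighted average lands well above what the 256-bit bus alone delivers, which is presumably part of how AMD avoided needing GDDR6X or a wider bus.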
 
One of the primary goals of using RT is to completely replace the baked/faked effects that are commonly used right now, so as hardware becomes more powerful we will eventually use RT effects with no fallback to the older methods.
Until we get to the point that all GPUs have RT capabilities, and even low-end, GTX 1650-class cards are able to do 1080p RT at 30-60 fps, RT will be nothing more than an add-on. We are at least 5, and probably 10, years away from that.
 
We're definitely to the point where anything Raja may have done is having far less of an impact.
I think losing Raja was possibly the best thing to happen to RTG. While he was at AMD during the era that gave us the HD 4000 series and the beginning of the HD 5000 series, his GCN designs didn't perform nearly as well. I think his wheelhouse was the VLIW architecture found in TeraScale rather than the SIMD approach found in GCN. The change to RDNA1 shows this pretty well: with far lower power and fewer resources, it outperforms Vega.
 
I think losing Raja was possibly the best thing to happen to RTG.
I have heard (in other words, BIG RUMOR!) that Raja was pushed out and some even said he basically "conned" Intel into buying into his vision of a high performance GPU design. Obviously Intel has money and R&D to do more than AMD if it wants to, but so far we're still waiting to see if Xe Graphics can actually compete. Not just on laptops, but in higher performance segments. I am very skeptical that the first gen Xe stuff will be good enough to take away from AMD and Nvidia dedicated GPU purchases, but gen 2 or 3 might get there.

Assuming Intel can figure out its manufacturing woes. Maybe it shouldn't have forced all of its most senior white males out of the company five years ago? According to at least one of those guys (Francois Piednoel), they ended up at Apple making the new A14 and M1. See: https://twitter.com/FPiednoel/status/1326546819443171329
Take that with a grain of salt, obviously, as he's one of the senior guys that got the boot. Still, there's a continuing lawsuit against Intel over the mass exodus that happened prior to the 10nm problems.
 

mihen

I would still take the Fury X over the 980 Ti. The huge thing the Fury X had going for it was the cooler. It was very quiet and the card length was short.
Identity-based hiring practices have always failed, no matter which side they were on. Hire the best and you get the best. When you're hiring on something other than performance, your product suffers. The same can also be applied to the good ol' boys club, where a business owner puts all their friends in senior positions. This happens a lot with entertainment companies.
 
I would still take the Fury X over the 980 Ti. The huge thing the Fury X had going for it was the cooler. It was very quiet and the card length was short.
AIO coolers on GPUs are one more point of failure. My Fury X sample still technically works, but the pump is very much audible over the rest of the system. Plus, while the card is shorter, the total volume of the card + tubes + radiator + fan is most certainly not smaller and easier to deal with. I get why some people like AIO coolers for GPUs, but as someone who has to swap GPUs in and out of testbeds, they suck. I usually just put the radiator and fan on the floor. Thankfully the card is old enough now that I don't have to test it very often. I still need to do a full refresh of the GPU hierarchy benchmarks at some point, though, which is several weeks of testing when I get around to it. Probably after the New Year, when things slow down a bit, I'll tackle that problem.
 

therealtweeter1

Anyone who's used to a really good display (like NEC's PA series, or even the EA models) would never buy anything else for anything other than gaming. I'm not a fan of multi-display setups, and I'm entirely happy with 60 fps and all that eye candy.
This should be up to the game dev, though, not the GPU drivers or whatever. DXR generally says, "trace these rays" and expects an answer. Not "trace some of these rays and give me your best guess."
What is DXR?
 

blacknemesist

Until we get to the point that all GPUs have RT capabilities, and even low-end, GTX 1650-class cards are able to do 1080p RT at 30-60 fps, RT will be nothing more than an add-on. We are at least 5, and probably 10, years away from that.
Just because the low end can't do RT doesn't mean it won't be an option in most games. If you buy a 1650, what are you going to trade for the cost of RT? We need plain rasterization cards because RT is optional, but rendering the game at high fps is not.