Newly Uncovered AMD Radeon RX 6000 Specs Imply Significant Performance Uplift

If the 6900XT trails the 3070, AMD's driver team screwed up again. That said, people expecting it to compete with the 3090 are equally out of their minds. The 3080 is legitimately twice as fast as the 5700XT at 4K on average. So there is no way the 6900XT would even equal the 3080 if all AMD did was double the CUs to 80, since we know performance won't scale linearly, and not all specs have doubled with the CU count, most importantly memory speed. We know there is an IPC improvement with RDNA2, and it will have to be pretty significant just to bring the card in line with a 3080 on average.

People think that because the 3090 is only 10-15% faster than the 3080, AMD might even be able to compete with the 3090. The problem with that is they are using the wrong math. AMD isn't working from a 3080 base; they're working from a 5700XT base. AMD has to double the performance of the 5700XT just to equal the 3080, and because AMD is starting from half the performance, that extra 10-15% over the 3080 works out to another 20-30% of 5700XT performance on top of the doubling. There is no way RDNA2 has the IPC improvement to overcome the scaling deficiencies to match the 3080, plus another 20-30% to match the 3090. This isn't a case of "wait for the reviews before making assumptions"; that IPC improvement just isn't going to happen.
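To put rough numbers on that argument, here's a quick back-of-the-envelope sketch; the 80% scaling-efficiency figure is purely an assumption for illustration, not a known spec:

```python
# Back-of-the-envelope math for the post above. Every figure here is
# an illustrative assumption, not a measured number.

base = 1.0            # 5700XT performance, normalized
cu_multiplier = 2.0   # 40 CUs -> 80 CUs
scaling_eff = 0.80    # assume CU scaling is only ~80% efficient

perf_from_cus = base * (1 + (cu_multiplier - 1) * scaling_eff)  # ~1.8x

target_3080 = 2.0                  # 3080 ~= 2x a 5700XT at 4K (per the post)
target_3090 = target_3080 * 1.125  # 3090 ~= 10-15% faster than a 3080

ipc_needed_3080 = target_3080 / perf_from_cus - 1
ipc_needed_3090 = target_3090 / perf_from_cus - 1

print(f"IPC gain needed to match a 3080: {ipc_needed_3080:.0%}")  # ~11%
print(f"IPC gain needed to match a 3090: {ipc_needed_3090:.0%}")  # ~25%
```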

You're forgetting that Navi2 will also bring with it higher clock speeds over Navi1: about 200MHz more with the full 80 CU part, and about 500MHz more with a comparable 40 CU part.

At 200MHz more clock speed, we're looking at a 10-15% advantage depending on which 5700XT card you're comparing against. Add another 10-15% IPC gain from a larger cache and architectural changes, and it may very well be on par with the 3090, except with much lower power requirements and size. Which would make it the winner.
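Multiplying those claims out (again, every input here is an estimate from this discussion, not a confirmed spec):

```python
# Multiplying out the estimates from these posts (none are confirmed specs).

cu_scaling = 1.8    # 2x the CUs at an assumed ~80% scaling efficiency
clock_gain = 1.10   # ~200MHz on top of a ~1900MHz 5700XT boost clock
ipc_gain = 1.125    # midpoint of the claimed 10-15% IPC improvement

total = cu_scaling * clock_gain * ipc_gain
print(f"Estimated uplift over a 5700XT: {total:.2f}x")  # ~2.23x
# For reference, the earlier estimates put a 3090 around 2.2-2.3x a
# 5700XT (2x for the 3080, plus 10-15%), so the claims roughly line up.
```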

If these power numbers are true, and it can match the 3080 or even come in a little below it in performance, I'm buying Navi all the way, so I don't have to run a separate air conditioner just to use the computer.
 
So, it's a pretty decent performance bump over the 20-series offerings, bringing performance more in line with where it should be at this point relative to the 10-series.
To be honest, the 30-series perf/$ is still underwhelming compared to what the 10-series offered at release relative to the previous generation, when you consider the length of time between releases (ignoring the 20-series, which generally offered little to no perf/$ improvement at launch).

A 1080 Ti offered ~63% higher perf/$ than a 980 Ti (75% higher perf, $700 vs $650 MSRP) 21 months (1.75 years) later. Or about 36% perf/$ increase per year.

A 3080 offers ~70% higher performance than a 1080 Ti at the same MSRP, 42 months (3.5 years) later. Or 20% perf/$ increase per year.

So the perf/$ increase on an annualized basis for the 3080 is only a little over half what it was for the 1080 Ti.

For an additional comparison, the 980 Ti offered ~52% better perf/$ than a 780 Ti (41% higher perf, $650 vs $700 MSRP), 19 months (~1.6 years) later. Or about 33% perf/$ increase per year, still far better than the 3080 managed.
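For anyone who wants to check the math, here's a quick sketch of the calculation, using only the MSRPs and performance deltas quoted above:

```python
# Reproducing the perf-per-dollar arithmetic above, using only the
# MSRPs and performance deltas quoted in the post.

def perf_per_dollar_gain(perf_ratio, new_price, old_price, months):
    """Return (total perf/$ gain, simple annualized gain)."""
    gain = perf_ratio * (old_price / new_price) - 1
    return gain, gain / (months / 12)

comparisons = {
    "1080 Ti vs 980 Ti": (1.75, 700, 650, 21),
    "3080 vs 1080 Ti": (1.70, 700, 700, 42),
    "980 Ti vs 780 Ti": (1.41, 650, 700, 19),
}

for label, args in comparisons.items():
    total, annual = perf_per_dollar_gain(*args)
    print(f"{label}: {total:.0%} total, {annual:.0%} per year")
# 1080 Ti vs 980 Ti: 63% total, 36% per year
# 3080 vs 1080 Ti: 70% total, 20% per year
# 980 Ti vs 780 Ti: 52% total, 33% per year
```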
 
Yes, plenty of people do. The size of the gap between the 3080 and 3090 defies logic even when adjusting for the halo tax. A 20GB 3080 isn't the answer; paying more for no extra performance is not attractive. If AMD could release a 16GB card that performs within single digits of the 3090 and costs $850-900, there would definitely be a market for it, one that would basically kill off the 3090 as a gamer's option. The problem with anything AMD releases is that it is likely going to be way slower in ray tracing, which actually matters now, and they don't have any known competitor to DLSS, which looks like a killer feature going into the next generation.

I keep reading comments about AMD needing to come up with a DLSS alternative, but don't they have one already in RIS? Both DLSS and RIS allow you to render at a lower resolution, upscale to a higher resolution, and both improve the image quality so that the upscaled version is hopefully close enough to native resolution to be acceptable to view. They use different means to do it, but the effect is very similar. The reviews I've read indicate that RIS was better than DLSS 1.0, although it's possible that DLSS 2.0 may now win the comparison. I hope when @JarredWaltonGPU (or another Tom's reviewer) reviews the new AMD GPUs they look at RIS as an alternative to DLSS.

Here are a few of the comparisons I've found between DLSS and RIS:
"It’s also clear that Radeon Image Sharpening is a superior equivalent to Nvidia’s DLSS, often by a considerable margin. With both techniques performing at the same frame rate, RIS delivered a clearer, sharper image with fewer artifacts."

"However, (DLSS) still needs to be refined a little in order to really compete with AMD's RIS. The latter is simply better at the moment."
 
I keep reading comments about AMD needing to come up with a DLSS alternative, but don't they have one already in RIS? Both DLSS and RIS allow you to render at a lower resolution, upscale to a higher resolution, and both improve the image quality so that the upscaled version is hopefully close enough to native resolution to be acceptable to view.
RIS is nothing more than taking a screenshot of the image, opening Photoshop, and applying a sharpening filter to it. In fact, NVIDIA has image sharpening in the driver settings for Turing cards (I'm not sure if they backported it to Pascal). RIS does not upscale anything; any talk of upscaling with RIS is when the game has an adjustable internal resolution that's then upscaled in-engine to the output one.

What DLSS is doing is attempting to create details from lower-resolution images. The best way to put it is that it's the CSI "Enhance" button. As an example (12:24 if the timestamp bookmark doesn't work):

Sharpening won't produce an image with the same amount of detail that DLSS reconstructs, and it can't add much without the sharpening itself becoming too obvious.

EDIT: To put it another way: sharpening cleans up an image; DLSS reconstructs details that rendering at a lower resolution missed.
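To illustrate the difference, here's a toy unsharp-mask sharpener, a simplified stand-in for what a spatial sharpen does (AMD's actual RIS/CAS algorithm is more sophisticated than this). Note that it can only amplify edges already present in the frame:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=1.0, amount=0.6):
    """Toy spatial sharpen: boost the difference between the image and
    a blurred copy. It can only amplify edges that are already in the
    frame; it cannot reconstruct detail the renderer never produced."""
    blurred = gaussian_filter(image, sigma=(radius, radius, 0))
    sharpened = image + amount * (image - blurred)
    return np.clip(sharpened, 0.0, 1.0)

# Example: sharpen a made-up 1080p "frame" (H x W x RGB in [0, 1]).
frame = np.random.rand(1080, 1920, 3).astype(np.float32)
out = unsharp_mask(frame)
```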
 
The reviews that I linked to show that RIS actually performs as well as or better than DLSS. My interest in DLSS is so that I can run games at acceptable framerates when my GPU struggles given the settings and resolution I'm playing at. That's the only reason to turn on DLSS, correct? If you get the framerates you want at the native resolution of your monitor with everything turned on, then you don't want to use DLSS. The claim consistently being made is that AMD doesn't have a DLSS competitor.

With current drivers on a Navi card, you don't have to rely on a game's own adjustable resolution scaling - it's in the Radeon Settings section next to RIS. So there's the scaling piece, and then the image is cleaned up (it's more than just a Photoshop sharpening filter), and you end up with an effect that's similar to DLSS even if the way you get there is different. You get a higher framerate without all of the softening you'd otherwise get from running at a lower resolution and upscaling. Yes, DLSS does it differently, but as the links I posted show, AMD unequivocally does have an alternative - it's already available on the existing 5xxx series cards.
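As a rough illustration of why the scaling piece buys framerate (the 77% render scale here is just an example value, not a recommended setting):

```python
# Why rendering below native resolution buys framerate: fill-rate-bound
# work scales roughly with pixel count. The 77% render scale below is
# just an example value.

output_w, output_h = 2560, 1440
scale = 0.77  # per-axis render scale, e.g. set in Radeon Settings

render_w, render_h = round(output_w * scale), round(output_h * scale)
pixel_ratio = (render_w * render_h) / (output_w * output_h)

print(f"Render resolution: {render_w}x{render_h}")    # 1971x1109
print(f"Pixels shaded: {pixel_ratio:.0%} of native")  # ~59%
# Shading ~59% of the pixels is where the extra framerate comes from;
# the result is then upscaled to 1440p and RIS sharpens it.
```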

And the bottom line is that the reviews I've read are all consistent: RIS looks at least as good as DLSS in most cases, and it's available in every game because it's all done in Radeon Settings.

It's certainly reasonable for someone to not like the approach, but it's not reasonable to say that AMD doesn't have an alternative. It clearly does. I hope Tom's Hardware at least mentions it in the reviews even if they don't have time initially to review it.
 
To be honest, the 30-series perf/$ is still underwhelming compared to what the 10-series offered at release relative to the previous generation, when you consider the length of time between releases (ignoring the 20-series, which generally offered little to no perf/$ improvement at launch).

A 1080 Ti offered ~63% higher perf/$ than a 980 Ti (75% higher perf, $700 vs $650 MSRP) 21 months (1.75 years) later. Or about 36% perf/$ increase per year.

A 3080 offers ~70% higher performance than a 1080 Ti at the same MSRP, 42 months (3.5 years) later. Or 20% perf/$ increase per year.

So the perf/$ increase on an annualized basis for the 3080 is only a little over half what it was for the 1080 Ti.

For an additional comparison, the 980 Ti offered ~52% better perf/$ than a 780 Ti (41% higher perf, $650 vs $700 MSRP), 19 months (~1.6 years) later. Or about 33% perf/$ increase per year, still far better than the 3080 managed.
Well, I did say "more in line". : P It's at least not hugely underwhelming like the 20-series was at launch. And the performance jump of the 10-series was atypically large, so it's probably not fair to treat that as the norm. It's also going to get harder to increase performance over time as process node shrinks become more difficult. At least the yearly performance gains in the graphics card space are still significantly larger than what we see for CPUs. Sure, we've at least been getting more cores the last few years, but lots of software doesn't really scale to additional CPU cores, unlike what we see with graphics rendering hardware.

Another thing to consider is that we're only looking at rasterized performance here, while these newer cards have added hardware for raytracing. A lot of these raytraced lighting effects still don't really seem worth the performance hit, but others arguably do if there's performance headroom to spare, and it's certainly true that the gain over the 10-series is much larger when such effects are enabled. And I fully expect such effects to become the norm for "ultra" graphics settings in the coming years.

Yes, plenty of people do. The size of the gap between the 3080 and 3090 defies logic even when adjusting for the halo tax. A 20GB 3080 isn't the answer; paying more for no extra performance is not attractive. If AMD could release a 16GB card that performs within single digits of the 3090 and costs $850-900, there would definitely be a market for it, one that would basically kill off the 3090 as a gamer's option. The problem with anything AMD releases is that it is likely going to be way slower in ray tracing, which actually matters now, and they don't have any known competitor to DLSS, which looks like a killer feature going into the next generation.
I guess my main point there was that the higher performance of the 3090 is small enough to be barely noticeable over the 3080, at least in today's games where the difference in VRAM doesn't really matter. Sure, if the 6900XT managed to perform close to the 3090 at a significantly lower price, there would be a market for that, but ultimately it would still be competing more with the 3080 than anything.

As for raytracing performance on RDNA2 hardware, not much is known about it yet, so I would hardly say that it will be "way slower" than Nvidia's solution. It could be faster, for all we know, or perhaps faster for some effects and slower for others. The 30-series didn't actually improve raytracing efficiency much over the 20-series: if certain RT effects cut performance by 50% on a 2080 Ti, they seem to cut performance by around 45% or so on a 3080. There's more performance to start with, but the impact of enabling the effects is still nearly as large. As such, if a 3070 performs similar to a 2080 Ti without raytracing, then it should perform fairly similarly with it enabled as well, likely not gaining more than 5-10% in a heavy raytracing load. So if AMD were aiming to be competitive with the performance hit of RT on the cards Nvidia released two years ago, they would be competitive with the new cards as well, since not much seems to have changed as far as the overall performance impact of RT is concerned.
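To put numbers on that reasoning (the rasterization gap and hit percentages here are ballpark figures from this discussion, used purely for illustration):

```python
# Putting numbers on the RT-hit argument. The raster gap and hit
# percentages are ballpark figures, for illustration only.

raster_2080ti = 100.0  # normalized rasterized fps
raster_3080 = 130.0    # assume the 3080 is ~30% faster in raster

rt_hit_2080ti = 0.50   # RT effects cut performance ~50% on a 2080 Ti
rt_hit_3080 = 0.45     # ...and ~45% on a 3080

print(f"2080 Ti with RT: {raster_2080ti * (1 - rt_hit_2080ti):.0f} fps")  # 50
print(f"3080 with RT: {raster_3080 * (1 - rt_hit_3080):.0f} fps")         # ~72
# The 3080 is faster with RT mostly because it starts out faster, not
# because the relative cost of enabling RT dropped much.
```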

And as someone pointed out, Radeon Image Sharpening is arguably competitive with DLSS, even if it might not be quite on the same level as DLSS 2.0. When it comes down to it, these are all just upscaling and sharpening techniques. DLSS uses AI to guess at details to fill in and recover some sharpness, while RIS uses a non-AI method, albeit one that's definitely more sophisticated than a simple sharpening algorithm. And again, there's the possibility AMD has made improvements there as well, since not a whole lot is known about the specifics of their new cards yet.
 
The reviews that I linked to show that RIS actually performs as well as or better than DLSS. My interest in DLSS is so that I can run games at acceptable framerates when my GPU struggles given the settings and resolution I'm playing at. That's the only reason to turn on DLSS, correct? If you get the framerates you want at the native resolution of your monitor with everything turned on, then you don't want to use DLSS. The claim consistently being made is that AMD doesn't have a DLSS competitor.

With current drivers on a Navi card, you don't have to rely on a game's own adjustable resolution scaling - it's in the Radeon Settings section next to RIS. So there's the scaling piece, and then the image is cleaned up (it's more than just a Photoshop sharpening filter), and you end up with an effect that's similar to DLSS even if the way you get there is different. You get a higher framerate without all of the softening you'd otherwise get from running at a lower resolution and upscaling. Yes, DLSS does it differently, but as the links I posted show, AMD unequivocally does have an alternative - it's already available on the existing 5xxx series cards.

And the bottom line is that the reviews I've read are all consistent: RIS looks at least as good as DLSS in most cases, and it's available in every game because it's all done in Radeon Settings.

It's certainly reasonable for someone to not like the approach, but it's not reasonable to say that AMD doesn't have an alternative. It clearly does. I hope Tom's Hardware at least mentions it in the reviews even if they don't have time initially to review it.
The tests you linked were using DLSS 1.0, not 2.0. The results from DLSS 1.0 tended to be mixed. So far I haven't seen any decent comparison between DLSS 2.0 and RIS.

Another point is that I haven't found anyone testing RIS below 75% scaling, whereas the video I linked showed DLSS 2.0 working at 50% scaling. You're welcome to show me comparisons of RIS at 50% resolution against the native resolution.
 
I agree completely; I think I also mentioned that the comparisons I've seen were with DLSS 1.0, and DLSS 2.0 appears to be much better. And I haven't seen anything showing RIS below 75% either. There are definitely pluses and minuses to each approach. I suspect that DLSS either already has exceeded or soon will exceed RIS's capabilities, not only in the range of scaling it can cover but also in the resulting image quality. On the other hand, RIS works with everything, while DLSS has had limited adoption so far. DLSS 2.0 is supposed to be vastly easier for game developers to implement, so adoption may increase dramatically. I hope so - it's a really intriguing capability that shows a lot of promise. I'm just tired of people saying that AMD has nothing that compares to DLSS. I'm looking to upgrade from my current 1070, and I want some kind of DLSS-like capability to help extend the life of whatever I purchase.