News Fake Rumors Suck: Will the Real RX 6000 Please Stand Up?

But what you're saying just keeps AMD as the slower card compared to Nvidia, albeit the better bang-for-the-buck option, which is the reputation they've held in the GPU market for quite some time.
Slower compared to the 3090? Who cares? That's a card most won't buy because of the price, which starts at $2,000 CAD. But if they're faster, on par, or trade blows with the 3080, and at a lower price, that's what AMD needs to aim for. Not to mention, power usage could be better for RDNA 2 than it is for Ampere.
 
Yeah, that's why the last AMD CPU I bought was a Phenom II X4, I think; after that it was all Intel. Up until April this year, all six computers I have here were Intel-based; three of them are now AMD. I wouldn't consider that pro-Intel or pro-AMD. I bought what I did because the performance and price were there.
Who cares? That means nothing. I own two AMD RX 5500 XTs that I use in secondary systems. Is that going to change your opinion of me? Of course not.
DLSS, IF you're running a resolution that can take advantage of it, sure, but if not? Is there any benefit to running it vs. native res? I know no one with a 4K monitor, a few with 2K and a few more with 1440p, but most still use 1080p. Which resolution would benefit most from DLSS?
If all you care about is 1080p, why are you in a thread about the 6000 series, when even the lowest card that's going to be announced in a few days is going to be a waste of money for 1080p gaming? There are plenty of cheap 1080p options available currently. Now, it just looks like you're trolling.
 
Who cares? That means nothing. I own two AMD RX 5500 XTs that I use in secondary systems. Is that going to change your opinion of me? Of course not.
Nope, because most of your previous posts show you are biased against AMD. That is a fact.

If all you care about is 1080p, why are you in a thread about the 6000 series, when even the lowest card that's going to be announced in a few days is going to be a waste of money for 1080p gaming? There are plenty of cheap 1080p options available currently. Now, it just looks like you're trolling.
Why am I in this thread? Simple: I may game at 1080p, but I'm considering upgrading my monitor to something else, and my current Strix 1060 probably won't handle that upgrade. Either way, I'm still looking at upgrading my 1060, because I feel it's time. At least I don't constantly troll other threads crying about 8K testing, which, BTW, YOU are the only one who seems to care about.
 
RT is important, and AMD will have that with Big Navi. DLSS, though, may be just as important and it's a serious problem for AMD. It's proprietary Nvidia tech, and it's becoming much easier for games to implement. Unreal Engine supports it, and I think Unity does as well -- you basically just have to flip a switch in the code and add a UI to enable/disable DLSS. Even worse (for AMD) is that DLSS 2.x actually works. There were plenty of issues with DLSS 1.0, but DLSS 2.x is pretty dang awesome.

So, we can run benchmarks comparing AMD vs. Nvidia running the same settings without DLSS, but as soon as you enable DLSS, Nvidia is going to win in probably every game that supports it. Unless AMD can come up with a compelling alternative that works just as well (hint: FidelityFX isn't it, at least not in the current implementation), DLSS is potentially more of a game changer than RT.
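To put rough numbers on why that is: the DLSS performance win comes almost entirely from how few pixels actually get rendered before the upscale. A quick Python sketch using the commonly cited DLSS 2.x per-axis scale factors (treat them as approximations, not an official per-title spec):

```python
# Internal render resolutions behind a 4K DLSS output.
# Scale factors are the commonly cited per-axis values for DLSS 2.x modes.
output_w, output_h = 3840, 2160
modes = {
    "Quality": 1 / 1.5,            # ~67% per axis
    "Balanced": 1 / 1.724,         # ~58% per axis
    "Performance": 1 / 2.0,        # 50% per axis
    "Ultra Performance": 1 / 3.0,  # ~33% per axis
}

for name, scale in modes.items():
    w, h = int(output_w * scale), int(output_h * scale)
    share = (w * h) / (output_w * output_h)
    print(f"{name:>17}: renders ~{w}x{h} ({share:.0%} of native 4K pixels)")
```

Rendering somewhere between a tenth and half of the pixels and reconstructing the rest is why DLSS-on numbers look so lopsided in anything GPU-bound.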
The thing with DLSS though, is that with the original implementation, other, more traditional forms of upscaling combined with sharpening were proven to actually look and/or perform notably better, with the feature's only advantage being that it was an easy 1-click toggle to boost performance at the expense of quality in the few games that supported it.

DLSS 2.0 may have improved, and might even be a little better than those other forms of upscaling with sharpening, but that doesn't mean those other methods are any less relevant. Especially at higher resolutions, where these upscaling techniques make the most sense, people will likely be hard-pressed to notice any pixel-level differences between them while gaming, as long as they don't effectively put a blur-filter over everything like some early DLSS implementations did.

So all AMD really needs to have is a more convenient way to enable upscaling with their advanced sharpening filter. Deciding on a resolution for upscaling and where to set a slider for sharpening might be more flexible, but it is less convenient than Nvidia's 1-click solution, where the feature can either be flipped on and off, or set to a few different pre-defined quality levels in a supported game. And really, there's no reason that such an in-game "upscaling" toggle needs to be a proprietary option, supporting only one line of graphics cards. Developers are likely getting paid by Nvidia to implement DLSS, but there's no reason they couldn't get roughly similar results using other upscaling techniques that can work across all hardware. AMD just really needs to push their own DLSS-alternative toggle, and encourage developers to implement it in games. Or alternately, add a driver-level feature with a few simple quality presets that performs the upscaling and sharpening on a per-game basis in one simple step.

It looks like AMD's FidelityFX CAS feature serves to do that, though it's apparently not implemented in many games yet. And while quality might not always be quite as good as DLSS 2.0, in at least some cases it can apparently be better, as suggested in the comparison in this review for Death Stranding...
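For anyone curious what "contrast adaptive sharpening" actually means in practice, here's a simplified CPU-side sketch of the general idea in Python/NumPy. To be clear, this is not the actual FidelityFX CAS shader (which is an open-source compute pass with a more careful kernel); it just illustrates the core trick of sharpening less where local contrast is already high, so you don't reintroduce halos and shimmer:

```python
import numpy as np

def contrast_adaptive_sharpen(img, sharpness=0.5):
    """Simplified illustration of contrast-adaptive sharpening on a float32
    (H, W, 3) image with values in [0, 1]. Not the FidelityFX CAS shader,
    just the core idea: sharpen against the 4-neighborhood, but back off
    where local contrast is already high to avoid halos and shimmer."""
    p = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    up, down = p[:-2, 1:-1], p[2:, 1:-1]
    left, right = p[1:-1, :-2], p[1:-1, 2:]

    # Per-pixel local contrast: 0 in flat regions, up to 1 across hard edges.
    local_min = np.minimum.reduce([img, up, down, left, right])
    local_max = np.maximum.reduce([img, up, down, left, right])
    contrast = (local_max - local_min).max(axis=2, keepdims=True)

    # Sharpen strength fades out as contrast rises.
    amount = sharpness * (1.0 - contrast)

    # Unsharp-mask style kernel: push the pixel away from its neighborhood mean.
    neighborhood_mean = (up + down + left + right) / 4.0
    return np.clip(img + amount * (img - neighborhood_mean), 0.0, 1.0)

# Toy usage on a stand-in for an upscaled frame.
frame = np.random.rand(1440, 2560, 3).astype(np.float32)
sharpened = contrast_adaptive_sharpen(frame, sharpness=0.6)
```

The real thing runs as a very cheap GPU pass, which is why pairing it with plain upscaling costs so little compared to DLSS.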

As for benchmarking games with DLSS, it seems a bit questionable unless the competition's upscaling and sharpening alternatives are also tested, even if that requires going into a graphics card profile and fiddling with sliders to enable. Realistically, I suspect the quality and performance of AMD's solution should be reasonably close, based on prior comparisons. That also introduces some vagueness as to just what constitutes an equivalent level of quality and performance in a particular game though.
 
So all AMD really needs to have is a more convenient way to enable upscaling with their advanced sharpening filter. Deciding on a resolution for upscaling and where to set a slider for sharpening might be more flexible, but it is less convenient than Nvidia's 1-click solution, where the feature can either be flipped on and off, or set to a few different pre-defined quality levels in a supported game. And really, there's no reason that such an in-game "upscaling" toggle needs to be a proprietary option, supporting only one line of graphics cards. Developers are likely getting paid by Nvidia to implement DLSS, but there's no reason they couldn't get roughly similar results using other upscaling techniques that can work across all hardware. AMD just really needs to push their own DLSS-alternative toggle, and encourage developers to implement it in games. Or alternately, add a driver-level feature with a few simple quality presets that performs the upscaling and sharpening on a per-game basis in one simple step.
It's unlikely Nvidia is giving money to developers to use DLSS. What they typically do is lend support from their software engineering team to help implement their features. This is one area where Nvidia and its money have a significant advantage over AMD in getting feature support added. Nvidia doesn't have to make its features as simple as possible to implement, because it has a team available to help game developers do it. There's no way AMD could have even attempted something like DLSS 1.0, where the network had to be trained for every game at each resolution on the developers' behalf. If AMD's method is less convenient than Nvidia's, developers aren't going to use it unless it somehow saves them money, because they aren't going to spend money implementing a feature for a GPU maker that has only 20% of the market.
 
The thing with DLSS though, is that with the original implementation, other, more traditional forms of upscaling combined with sharpening were proven to actually look and/or perform notably better, with the feature's only advantage being that it was an easy 1-click toggle to boost performance at the expense of quality in the few games that supported it.

DLSS 2.0 may have improved, and might even be a little better than those other forms of upscaling with sharpening, but that doesn't mean those other methods are any less relevant. Especially at higher resolutions, where these upscaling techniques make the most sense, people will likely be hard-pressed to notice any pixel-level differences between them while gaming, as long as they don't effectively put a blur-filter over everything like some early DLSS implementations did.

So all AMD really needs to have is a more convenient way to enable upscaling with their advanced sharpening filter. Deciding on a resolution for upscaling and where to set a slider for sharpening might be more flexible, but it is less convenient than Nvidia's 1-click solution, where the feature can either be flipped on and off, or set to a few different pre-defined quality levels in a supported game. And really, there's no reason that such an in-game "upscaling" toggle needs to be a proprietary option, supporting only one line of graphics cards. Developers are likely getting paid by Nvidia to implement DLSS, but there's no reason they couldn't get roughly similar results using other upscaling techniques that can work across all hardware. AMD just really needs to push their own DLSS-alternative toggle, and encourage developers to implement it in games. Or alternately, add a driver-level feature with a few simple quality presets that performs the upscaling and sharpening on a per-game basis in one simple step.

It looks like AMD's FidelityFX CAS feature serves to do that, though it's apparently not implemented in many games yet. And while quality might not always be quite as good as DLSS 2.0, in at least some cases it can apparently be better, as suggested in the comparison in this review for Death Stranding...

As for benchmarking games with DLSS, it seems a bit questionable unless the competition's upscaling and sharpening alternatives are also tested, even if that requires going into a graphics card profile and fiddling with sliders to enable. Realistically, I suspect the quality and performance of AMD's solution should be reasonably close, based on prior comparisons. That also introduces some vagueness as to just what constitutes an equivalent level of quality and performance in a particular game though.
The difficulty with fixed scaling and sharpening filters like CAS is that they can look good in some scenes, and bad in others. It's sort of funny how things have progressed.

  1. Render an image with the highest possible quality
  2. Oh no, jaggies! And MSAA doesn't work well with deferred rendering, so let's apply a temporal blur filter (TAA in a nutshell; there's a minimal sketch of this right after the list)
  3. Crap, that's really blurry and loses a lot of detail. Maybe we can do some contrast aware sharpening?
  4. Uh oh, we've got some shimmer and have reintroduced some jaggies because of the sharpening filter...
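Here's the minimal version of what step 2 boils down to, as a sketch. Real TAA also reprojects the history buffer with motion vectors and clamps it against the current frame's neighborhood to limit ghosting; this just shows where the trade of aliasing for blur comes from:

```python
import numpy as np

def taa_accumulate(history, current, alpha=0.1):
    """Blend the newly rendered frame into an exponential moving average of
    previous frames. With sub-pixel camera jitter each frame, the average
    integrates extra samples over time (the anti-aliasing part), but anything
    in motion gets smeared (the blur that step 3 then tries to undo)."""
    return alpha * current + (1.0 - alpha) * history

# Toy usage: accumulate 8 jittered stand-in frames of the same scene.
frame_shape = (1080, 1920, 3)
history = np.zeros(frame_shape, dtype=np.float32)
for _ in range(8):
    current = np.random.rand(*frame_shape).astype(np.float32)  # stand-in for a rendered frame
    history = taa_accumulate(history, current)
```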

The idea behind DLSS is to create a trained algorithm using deep learning to find something closer to an ideal rendering mode -- the quality of standard rendering, minus the aliasing artifacts, but not over-blurred or over-sharpened. Theoretically, having all of that processing run on the Tensor cores allows for more real-time adaptability and fewer edge cases. And as Nvidia is fond of saying (because it's basically true), deep learning algorithms can improve over time just by letting the training process run on more data.
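To make the "trained algorithm" part concrete, here's a deliberately tiny, hypothetical super-resolution net in PyTorch. It is nothing like Nvidia's actual DLSS network (which is a temporal reconstruction model fed motion vectors, jittered samples, and frame history, running on Tensor cores), but it shows the shape of the idea: learn a low-res-to-high-res mapping against high-quality targets, and keep improving it by training on more data:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUpscaler(nn.Module):
    """Deliberately tiny single-frame super-resolution net, purely illustrative."""
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearrange channels into a scale-x larger image
        )

    def forward(self, low_res):
        return self.body(low_res)

# Hypothetical training step; a real dataset of (rendered low-res, high-quality
# reference) pairs would replace the random tensors here.
model = ToyUpscaler(scale=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
low_res = torch.rand(4, 3, 270, 480)   # stand-ins for 480x270 renders
target = torch.rand(4, 3, 540, 960)    # stand-ins for 960x540 references
optimizer.zero_grad()
loss = F.l1_loss(model(low_res), target)
loss.backward()
optimizer.step()
```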

If I were to rate overall anti-aliasing techniques, it would probably be something like:

  1. DLSS 2.x -- looks good, has some areas where it introduces some blur due to upscaling, but also improves performance vs. native -- something no other AA method can claim
  2. TAA + CAS -- this is the #4 option from above, where you unblur the TAA blur but sometimes get other artifacts in the process, but it doesn't boost performance (CAS upscaling does not look as good IMO)
  3. SMAA -- relatively low impact on performance, and done well it tends to be less blurry than TAA
  4. TAA -- often has too much blur, but I still prefer it to severe aliasing jaggies
  5. FXAA -- hey, it at least does something, I think? (Seriously, I've looked at a lot of games over the years where FXAA misses a ton of obvious aliasing, so there's a good reason it has almost no impact on performance)
  6. No AA -- run at 4K native and aliasing isn't as much of a problem
  7. SSAA -- the best looking solution, but way too demanding to be practical in most games. Plus, I'd just want to run native 4K rather than 4K supersampling on a 1080p display
 
@Kamen Rider Blade , I think you need to sign up for Venmo just to claim your win.
Yup. But he doesn't use Venmo, so he has to wait until I'm in CA or something. LOL

To be fair, at the time I wasn't really expecting a 72 CU model -- though in retrospect it's pretty obvious. A fully enabled Navi 21 is going to have terrible yields, relatively speaking -- perfect chips of that size just aren't as common. So a small step down as the 'mainstream' high-end part makes sense. The RX 6800 stuff only started showing up in the last couple of weeks as well. RX 6700 XT and RX 6500 XT remain MIA, naturally -- coming in 2021. Anyway, a 72 CU chip potentially keeping up with RTX 3080 is believable; the 60 CU chip doing so wasn't.
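To put a hypothetical number on the yield argument (using a simple Poisson defect model and made-up inputs, since the real figures are TSMC's and AMD's to know):

```python
import math

defect_density = 0.1   # defects per cm^2 -- an assumption, not a TSMC figure
die_area_cm2 = 5.2     # a Navi 21-class die in the ~500+ mm^2 range (rumored)

lam = defect_density * die_area_cm2                      # expected defects per die
p_perfect = math.exp(-lam)                               # zero defects: needed for a fully enabled 80 CU part
p_at_most_one = math.exp(-lam) * (1 + lam)               # <=1 defect: likely salvageable as a cut-down SKU
print(f"Defect-free dies:     ~{p_perfect:.0%}")         # ~59% with these assumptions
print(f"Dies with <=1 defect: ~{p_at_most_one:.0%}")     # ~90%, ignoring where the defect lands
```

Even with generous assumptions, a big chunk of dies have at least one flaw somewhere, which is exactly why a 72 CU salvage part as the volume high-end SKU makes sense.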

Really interested in getting these in for testing, though! That Infinity Cache seems to be doing wonders, which makes me wonder why Nvidia hasn't done something like that. Heck, even Intel had a massive 128MB eDRAM on Iris Pro back with Broadwell that gave a serious boost to gaming performance without killing power. How much of the die does the 128MB take up, I wonder? Seems like it might be close to a third of the chip!
 
Yup. But he doesn't use Venmo, so he has to wait until I'm in CA or something. LOL

To be fair, at the time I wasn't really expecting a 72 CU model -- though in retrospect it's pretty obvious. A fully enabled Navi 21 is going to have terrible yields, relatively speaking -- perfect chips of that size just aren't as common. So a small step down as the 'mainstream' high-end part makes sense. The RX 6800 stuff only started showing up in the last couple of weeks as well. RX 6700 XT and RX 6500 XT remain MIA, naturally -- coming in 2021. Anyway, a 72 CU chip potentially keeping up with RTX 3080 is believable; the 60 CU chip doing so wasn't.

Really interested in getting these in for testing, though! That Infinity Cache seems to be doing wonders, which makes me wonder why Nvidia hasn't done something like that. Heck, even Intel had a massive 128MB eDRAM on Iris Pro back with Broadwell that gave a serious boost to gaming performance without killing power. How much of the die does the 128MB take up, I wonder? Seems like it might be close to a third of the chip!
I'm in no rush. I'd rather spend quality time with the infamous Jarred Walton chatting about PC hardware and tech over some root beers than just claim the money.
The time hanging out is more important to me anyways. So I'm more than happy to wait until Jarred Walton shows up in LA.

I looked at AMD's history of binning chips across all their GPUs over the past few generations, and here are my new predictions:
[Image: predicted RX 6000 series lineup (Hkza7h5.png)]

So Jarred, what do you think of my new predictions for AMD's 6000 series lineup, based on all of today's info and the other rumors / leaks that I can validate?
 
I'm in no rush. I'd rather spend quality time with the infamous Jarred Walton chatting about PC hardware and tech over some root beers than just claim the money.
The time hanging out is more important to me anyways. So I'm more than happy to wait until Jarred Walton shows up in LA.

I looked at AMD's history of binning chips across all their GPUs over the past few generations, and here are my new predictions:
[Image: predicted RX 6000 series lineup (Hkza7h5.png)]

So Jarred, what do you think of my new predictions for AMD's 6000 series lineup, based on all of today's info and the other rumors / leaks that I can validate?
They're possible, but lots of things are possible. I'm not expecting an RX 6600 or RX 6400, for example. Also, I don't think AMD will do Navi 22, 23, and 24 with maximum CU counts of 40, 32, and 26 -- the latter two are just weird values. Navi 22 with something like up to 48 CUs is possible, but rumors currently peg it at 40 CUs. From there, the next step down would probably be more like 20-24 CUs, using harvested Navi 22 for the entire 26-40 CU range (just like Navi 21 will cover at least 60-80 CUs, and probably a 50-ish CU variant at some point as well).

My guesses? RX 6700 XT might still be Navi 21, or Navi 22 will have 48 CUs. Then RX 6700 can do 40 CUs. RX 6500 XT will probably be 32 CUs (still potentially Navi 22), and if AMD does an RX 6500 vanilla chip it would be 28 CUs. Depending on yields, AMD would either have Navi 22 harvested chips or maybe a dedicated Navi 24, but we don't know yields for sure. AMD might do one more chip, Navi 24, for budget GPUs with a maximum of maybe 20 CUs. But it could just as easily be Navi 23 still (or maybe Navi 23 doesn't exist and it's Navi 24 for the budget to midrange chips).

Basically, AMD had two Navi 1x chips -- Navi 12 doesn't really count as it's just Navi 10 + HBM2. RX 6000 adds a new performance tier, so AMD could get by with three chips. Anything below RX 6500 though should just end up as a next-gen APU, maybe with DDR5?

The other interesting bit is going to be this whole Infinity Cache. It's 128MB on Navi 21, but I can't imagine AMD keeping the same size on the lower tier GPUs, because 128MB is still a pretty massive chunk of die space. So Navi 22 might end up with 64MB of Infinity Cache, and Navi 23/24 could either do 32MB or perhaps just ditch it completely. Probably keep it and do 32MB, but then use a 128-bit bus.
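A rough way to see the cache-vs-bus-width trade-off, with the hit rates below being pure guesses rather than AMD figures: every request the Infinity Cache absorbs is a DRAM access the narrower bus never has to serve.

```python
def effective_bandwidth(gbps_per_pin, bus_bits, cache_hit_rate):
    """Crude model: DRAM only sees the misses, so sustained request bandwidth
    can exceed raw DRAM bandwidth by 1 / (1 - hit_rate). Assumes the cache
    itself is never the bottleneck, which is an idealization."""
    raw_gbs = gbps_per_pin * bus_bits / 8
    return raw_gbs / (1.0 - cache_hit_rate)

# 16 Gbps GDDR6; the hit rates are illustrative guesses.
print(effective_bandwidth(16, 256, 0.50))  # 256-bit + big cache   -> ~1024 GB/s effective
print(effective_bandwidth(16, 128, 0.35))  # 128-bit + small cache -> ~394 GB/s effective
```

Under those (guessed) numbers, a smaller cache plus a 128-bit bus still lands in a sensible bandwidth tier for a midrange part, which is the trade-off described above.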