News AMD Unveils Big Navi: RX 6900 XT, RX 6800 XT and RX 6800 Take On Ampere


Jim90

Distinguished
Another trash product from AMD in the GPU field... Remember the Radeon VII, dead on arrival thanks to non-working drivers... and other problems with drivers, firmware, and compatibility with existing hardware. As the owner of an RTX 2080 (happy for 2 years now), I will pay an extra 120 bucks for an RTX 3080 FE when I upgrade sometime next year, after everybody in a hurry to be first has happily gotten their cards. The reason: I prefer Nvidia's driver stability, support, and general platform quality, plus very good performance and a focus on important, usable things... For example, AMD gives you 16GB of trashy, slower memory that is useful in very few cases (ML training, etc.), while Nvidia gives a little less but much better-performing memory that is usable in all cases...

Nvidia isn't even trying to disguise these posts!!!
 

Jim90

Distinguished
I would not buy potentially dangerous and faulty equipment... in any field... my personal safety, time, and nerves are more valuable... especially when it's a 120-buck question on a card that costs in the range of 600-700 anyway (depending on vendor/model, etc.)

I would fully expect tomshardware to remove this damaging post. These forums must NOT be used to spread this trash.
 

Rogue Leader

It's a trap!
Moderator
I would not buy potentially dangerous and faulty equipment... in any field... my personal safety, time, and nerves are more valuable... especially when it's a 120-buck question on a card that costs in the range of 600-700 anyway (depending on vendor/model, etc.)

Hello Mr w_o_t_q, this is your last post in this thread. We (the moderation team) are watching this thread and do not tolerate the spread of egregious misinformation. Take this as your hint to move on to another topic and a warning to not spread misinformation or you will be sanctioned.
 
The performance/power ratio is puzzling here for AMD: it beats the RTX 3080 by 20 watts (300 vs 320)... and it loses against the RTX 3070 by 30 watts (250 vs 220) while using the same chip.
While listed TBP is 30W higher, AMD is showing an average 18% higher performance than the 2080Ti, so about 20% higher than the 3070. That would mean it still has a better performance/watt than the 3070.
 
While listed TBP is 30W higher, AMD is showing an average 18% higher performance than the 2080Ti, so about 20% higher than the 3070. That would mean it still has a better performance/watt than the 3070*.
* In AMD provided benchmarks, which almost certainly weren't picked to skew things in AMD's favor and the lack of ray tracing test results is purely coincidental!

This is why we want independent testing to see how the cards truly stack up in a wider selection of games. And also to see what the true power use is in some games -- hopefully it's TBP or less, but we don't know until we test. Right now, though:
  1. $80 more expensive is a 16% price increase
  2. 250W is 14% more power
  3. 18% higher performance in AMD tests means there's a very real chance it's less than that in a wider set of games
  4. No details on ray tracing performance, but it would be very surprising if AMD can beat Nvidia, and at least a few supposed leaks show worse RT performance than Ampere
It's not a bad deal, on paper, but neither is it an amazing deal. It's more money for potentially more performance while using more power. I don't care too much about the power, but additional testing results are going to be important.
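For anyone who wants to double-check the arithmetic behind those numbers, here is a minimal sketch in Python using the figures quoted above. The ~20% performance delta comes from AMD-provided benchmarks, so the perf/watt result inherits all the usual caveats until independent reviews land:

```python
# Back-of-the-envelope RX 6800 vs RTX 3070 comparison using the numbers from
# this thread. The ~20% performance figure comes from AMD-provided benchmarks,
# so treat the perf/watt result as provisional until independent testing.
rtx_3070 = {"price": 499, "tbp_watts": 220, "relative_perf": 1.00}  # baseline
rx_6800  = {"price": 579, "tbp_watts": 250, "relative_perf": 1.20}  # ~20% faster per AMD's slides

price_increase = rx_6800["price"] / rtx_3070["price"] - 1          # ~0.16 -> 16% more expensive
power_increase = rx_6800["tbp_watts"] / rtx_3070["tbp_watts"] - 1  # ~0.14 -> 14% more power
perf_per_watt_ratio = (rx_6800["relative_perf"] / rx_6800["tbp_watts"]) / (
    rtx_3070["relative_perf"] / rtx_3070["tbp_watts"]
)  # ~1.06 -> roughly 6% better perf/W, if AMD's numbers hold

print(f"Price: +{price_increase:.0%}, Power: +{power_increase:.0%}, "
      f"Perf/W: {perf_per_watt_ratio:.2f}x the 3070")
```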
 

MasterMadBones

Distinguished
I would not buy potentially dangerous and faulty equipment... in any field... my personal safety, time, and nerves are more valuable... especially when it's a 120-buck question on a card that costs in the range of 600-700 anyway (depending on vendor/model, etc.)
In that case I don't understand your point because then you wouldn't buy an Nvidia card at launch either. Or any GPU ever, for that matter.

Nvidia lists ten known issues for the 457.09 release, one of which is a BSOD. AMD lists three, one of which is a black screen. The remaining issues for both are all less severe, though potentially frustrating for a very small group of users. The majority of them are easily circumvented.

It's impossible to find a driver that is completely free of bugs, which is why it's always wise to view the release notes before installation, so you can decide for yourself whether any of the bugs affect you.
 

MasterMadBones

Distinguished
Hello Mr w_o_t_q, this is your last post in this thread. We (the moderation team) are watching this thread and do not tolerate the spread of egregious misinformation. Take this as your hint to move on to another topic and a warning to not spread misinformation or you will be sanctioned.
I hadn't noticed this when I made my previous comment. Frankly I'm surprised this was deemed necessary at all, especially since the quoted message only contained a comment about personal considerations. The word "faulty" extends to the driver package in my interpretation. Between personal experience and opinion I don't see any real misinformation from this user.

It has been obvious that I don't agree with this person, but I'm disappointed that I won't be able to continue the discussion with them, while others have behaved like trolls in this thread without consequence.

I know I'm not supposed to talk about the moderation team's actions, but this just doesn't feel right.
 

Zeifle

BANNED
2020 is the year of RIP for both Nvidia and Intel.
The only thing that stopped me from buying a 3090 was the scalpers, but I'm happy today that I didn't waste my money on it.

For the first time, this Nvidia/Intel fan will swap to a full AMD rig :p
I really hope you don't plan to play games like Watch Dogs: Legion or Cyberpunk 2077, especially with ray tracing enabled. Early benchmarks show ultra settings proving difficult for even the RTX 3080 and RTX 3090 without ray tracing. With ray tracing, DLSS is mandatory even at lower resolutions to get acceptable framerates; otherwise it is unplayable. As AMD has no DLSS-type offering, you simply will be 100% incapable of playing Watch Dogs: Legion at 4K with ray tracing, and possibly even at 1440p. As for Cyberpunk 2077, it dials up the ray-tracing demands far higher with more advanced ray-tracing effects, so even though it hasn't been tested we can already make preliminary assumptions with fair accuracy.

Don't get me wrong, the game WILL look amazing even without ray tracing, and DLSS still causes some detail loss on complex textures, as seen in early Watch Dogs: Legion DLSS on/off comparisons, though it is minor and won't be noticed in motion. But if you are after the best graphics, and even more so with a mid-range GPU, the relevance of ray tracing and DLSS (which, even without ray tracing, lets lower-tier cards compete with top-end AMD offerings) is only going to grow in the demanding titles and AAA releases that matter going forward. Most of the major titles are adding one or both of these within the next couple of months, and it doesn't make much sense to pay above $400 for something that can't really perform with all the bells and whistles. THAT SAID, you could simply drop to a lower resolution in the games where ray tracing is a struggle, cut shadows from ultra to medium/high along with a few other settings, and still get pretty good visuals while having your cake-tracing too, if need really be. I don't think AMD is competitive this generation, but it's not like the games will be horrible looking or unplayable for those who do go that route.
 

Awev

Reputable
@Zeifle Do you have some insider knowledge that we don't have, and you are under a non-disclosure agreement?

In a prior post in this thread I linked info about ray tracing (RT) that suggests AMD is 10x (I think 14x) faster than software rendering of RT. In the story this thread is attached to, AMD has said that FidelityFX will support something similar to nVidia's DLSS, and they do currently offer upscaling. In a post related to FidelityFX and DLSS, you can see that AMD is making progress - look here. And you have spoken about Micro$oft's direct memory API, ignoring the fact that not only do AMD's new cards support it right out of the box, but they also helped Sony with the PS5 (or is it Micro$oft with the X whatever whatever X|S?) do the decompression for an effective bandwidth increase.

AMD GPUs have been known for their integer (INT) math prowess, while nVidia was known for their floating point (FP) math. nVidia has made the effort to do better with INT math, so now their cards might be considered for mining cryptocurrency (not likely, because the return on investment (RoI) is not good). Who is to say that AMD has not made similar strides in FP?

Please stop with all of your doom and gloom until you have the numbers, and are willing to post the sources, to back it up. It is obvious that you do not like a prior version of AMD's FidelityFX; have you seen the newest version? We are discussing a card that has not gone on sale yet, and anyone who might have it for testing and publishing results is under an embargo, as AMD likes to say (also known as a non-disclosure agreement (NDA)). And very few people own an RTX 30#0 yet, so while market availability is virtually nil, we really can only go on last year's products as to what to buy now, today, at this moment.

And before you accuse me of being a fanboy of one company or another, know me first. I gave my girlfriend a WinTel box so she could check emails and get recipes (I had it lying around collecting dust), inherited a Dell All-in-One WinTel box on which I have replaced Win 7 with Linux Mint 20 (better hardware management and hence more responsive), and built an AMD box with Windoze 7 for a friend (only for checking emails and Craigslist), while I have used and worked on a number of other computers over the years (anyone remember M$-DOS 2.12 or a Power Mac?). And my own personal machine that I use every day, even now to write this? Why, I built my own system: a Ryzen 5 chip, an nVidia GTX 1070 to run my 32" curved 1440p 144Hz monitor, and an AMD Radeon R5 to power a 1080p monitor, with a race car/flight simulator setup next to my desk, spread over two desks, dual-booting Mint 20 and Win 10 Pro, all in one box. I pick and choose based on what is needed, and in my case my budget as well; I am not loyal to just one brand or manufacturer.

@Everyone I would love to see a flyoff between the best nVidia and AMD have to offer come Jan 1, using M$ Flight Simulator 2020. We know it should have been developed on DirectX 12 (but it uses DirectX 11 so Windoze 7 and 8 machines could play it as well) and better optimized. Because of this it brings any system to its knees at ultra settings. This would be the "will it run Crysis Remastered" test to start the new year with.
 
Another trash product from AMD in the GPU field... Remember the Radeon VII, dead on arrival thanks to non-working drivers... and other problems with drivers, firmware, and compatibility with existing hardware. As the owner of an RTX 2080 (happy for 2 years now), I will pay an extra 120 bucks for an RTX 3080 FE when I upgrade sometime next year, after everybody in a hurry to be first has happily gotten their cards. The reason: I prefer Nvidia's driver stability, support, and general platform quality, plus very good performance and a focus on important, usable things... For example, AMD gives you 16GB of trashy, slower memory that is useful in very few cases (ML training, etc.), while Nvidia gives a little less but much better-performing memory that is usable in all cases...
Look at the fiasco with the 2080ti cards failing almost instantly. Look at the issues with the 3080 cards shutting off. Or did you forget about those?
 

King_V

Illustrious
Ambassador
I don't think AMD is competitive this generation, but it's not like the games will be horrible looking or unplayable for those who do go that route.

Ah, got it. So, basically, you come up with your own, carefully-tailored definition of "competitive" so that you can say that AMD is not competitive.

Fine. I say Nvidia is not competitive. AMD is clearly superior, because it offers more Frames-per-letters-in-their-product-name.

Navi2 has only 5 letters. Ampere has 6 letters. In areas where the cards have approximately the same number of frames/second, Navi2 is better because fps/5 is greater than fps/6.
 
The performance/power ratio is puzzling here for AMD: it beats the RTX 3080 by 20 watts (300 vs 320)... and it loses against the RTX 3070 by 30 watts (250 vs 220) while using the same chip.
While listed TBP is 30W higher, AMD is showing an average 18% higher performance than the 2080Ti, so about 20% higher than the 3070. That would mean it still has a better performance/watt than the 3070.
In addition to the point Jeremy made about the 6800 being a notably faster card (at least in the examples they provided) another thing to consider is that the card has double the VRAM of the 3070, and even 60% more VRAM than the 3080. So while its graphics chip might be more efficient (at least at traditional rasterized graphics), the large amount of VRAM is probably drawing more power. The 3070 is also running GDDR6 memory that's slower than the GDDR6X found in the 3080 and 3090, whereas the 6800 is using the same memory system as the other cards in its family.

The 6800 is also using the same graphics chip as the 6800 XT and 6900 XT, just with 25% of its cores disabled, so that could potentially reduce efficiency a little compared to what would be possible with a smaller chip, particularly if they bin the more efficient silicon for its higher-end counterparts. The 3070, on the other hand, uses its own chip that's less than two-thirds the size of the one in the 3080 and 3090.
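As a quick sanity check on those two ratios, here is a small sketch using publicly reported CU counts and die sizes (the figures below are my own assumptions from public spec listings, not from the post above):

```python
# Sanity check of the "25% of cores disabled" and "less than two-thirds the
# size" statements, using publicly reported specs (assumed figures, not taken
# from the article above): Navi 21 has 80 CUs with the RX 6800 enabling 60,
# while GA102 (3080/3090) is ~628 mm^2 and GA104 (3070) is ~392 mm^2.
navi21_cus, rx6800_cus = 80, 60
ga102_mm2, ga104_mm2 = 628, 392

disabled_fraction = 1 - rx6800_cus / navi21_cus  # 0.25 -> 25% of the CUs fused off
die_size_ratio = ga104_mm2 / ga102_mm2           # ~0.62 -> under two-thirds of GA102

print(f"RX 6800 disables {disabled_fraction:.0%} of Navi 21's CUs")
print(f"GA104 is {die_size_ratio:.0%} the size of GA102")
```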
 

Zeifle

BANNED
@Zeifle Do you have some insider knowledge that we don't have, and you are under a non-disclosure agreement?

In a prior post in this thread I linked info about ray tracing (RT) that suggests AMD is 10x (I think 14x) faster than software rendering of RT. In the story this thread is attached to, AMD has said that FidelityFX will support something similar to nVidia's DLSS, and they do currently offer upscaling. In a post related to FidelityFX and DLSS, you can see that AMD is making progress - look here. And you have spoken about Micro$oft's direct memory API, ignoring the fact that not only do AMD's new cards support it right out of the box, but they also helped Sony with the PS5 (or is it Micro$oft with the X whatever whatever X|S?) do the decompression for an effective bandwidth increase.

AMD GPUs have been known for their integer (INT) math prowess, while nVidia was known for their floating point (FP) math. nVidia has made the effort to do better with INT math, so now their cards might be considered for mining cryptocurrency (not likely, because the return on investment (RoI) is not good). Who is to say that AMD has not made similar strides in FP?

Please stop with all of your doom and gloom until you have the numbers, and are willing to post the sources, to back it up. It is obvious that you do not like a prior version of AMD's FidelityFX; have you seen the newest version? We are discussing a card that has not gone on sale yet, and anyone who might have it for testing and publishing results is under an embargo, as AMD likes to say (also known as a non-disclosure agreement (NDA)). And very few people own an RTX 30#0 yet, so while market availability is virtually nil, we really can only go on last year's products as to what to buy now, today, at this moment.

And before you accuse me of being a fanboy of one company or another, know me first. I gave my girlfriend a WinTel box so she could check emails and get recipes (I had it lying around collecting dust), inherited a Dell All-in-One WinTel box on which I have replaced Win 7 with Linux Mint 20 (better hardware management and hence more responsive), and built an AMD box with Windoze 7 for a friend (only for checking emails and Craigslist), while I have used and worked on a number of other computers over the years (anyone remember M$-DOS 2.12 or a Power Mac?). And my own personal machine that I use every day, even now to write this? Why, I built my own system: a Ryzen 5 chip, an nVidia GTX 1070 to run my 32" curved 1440p 144Hz monitor, and an AMD Radeon R5 to power a 1080p monitor, with a race car/flight simulator setup next to my desk, spread over two desks, dual-booting Mint 20 and Win 10 Pro, all in one box. I pick and choose based on what is needed, and in my case my budget as well; I am not loyal to just one brand or manufacturer.

@Everyone I would love to see a flyoff between the best nVidia and AMD have to offer come Jan 1, using M$ Flight Simulator 2020. We know it should have been developed on DirectX 12 (but it uses DirectX 11 so Windoze 7 and 8 machines could play it as well) and better optimized. Because of this it brings any system to its knees at ultra settings. This would be the "will it run Crysis Remastered" test to start the new year with.
Nah, I'm no insider. 13x faster than software ray tracing, so you are very close. Yes, they intend to add something similar to Nvidia's DLSS to FidelityFX, though right now they are merely at the looking-for-partners phase and haven't actually begun anything on it. This makes sense, as AMD has hardly any history of in-house R&D for innovative technologies; they tend to fund partnerships or buy technologies instead, which is honestly totally fine and shouldn't necessarily be seen as a negative. They do offer upscaling; unfortunately, DLSS also adds seriously substantial performance gains while maintaining those higher resolutions, and that is the big reason AMD won't be as capable this generation.

Regarding the supersampling technology, it would only work during ray-traced scenes, yet it won't work during normal scenes. This should already strike you as odd. It is likely nothing even remotely close to what DLSS attempts, and they are just being wordy, which means its actual benefits are incredibly questionable. However, we won't know until we get to see it in action, which doesn't look to be soon based on how they worded things. Honestly, as long as it can show up within the next year I don't think it will be too much of an issue, but it could very negatively impact next year's sales if they can't deliver a solution fairly soon.

A100 brings a new precision, TF32, which works just like FP32 while delivering speedups of up to 20X for AI—without requiring any code change.
Source: https://www.nvidia.com/en-us/data-center/tensor-cores/
Now, with intentional optimizations it can go higher, but I don't recall how high (I think it was 50x, but I wouldn't give that too much credit). Note: the above is for Ampere-generation tensor cores; prior generations will have lower performance levels.

Now, AMD's solution wants to try to perform AI-based supersampling that is cross-platform and does not use dedicated hardware to accelerate it, which means it's sharing that performance hit with the rest of the GPU processing. Actually, sharing is the lesser issue, despite being a critical one. The real issue is that without dedicated hardware it may simply not be fast enough to provide reasonable performance gains at higher resolutions. I think this may relate to some of their poor, if not possibly shady (but honestly probably just poor), wording on the subject.

IF they do come up with a solution that is open source, cross-platform, and able to provide viable performance gains, even if it only applied to ray tracing (?), that would actually be amazing for the industry. It would benefit Nvidia more, but Nvidia could hopefully use its R&D leverage to mold and refine it. Typically, technologies like this that are initiated by a third party end up remaining open source even once Nvidia gets involved, and one would hope that trend continues for the greater ecosystem. That said, it would, for obvious reasons, perform best on Nvidia GPUs because of the tensor cores, so AMD would still need to make room for dedicated hardware next generation. I'd like to be hopeful they come up with something, because it is better for us, but there are too many issues with their statements so far; coming from a more informed background on the topic, I'd outright call them liars if I didn't know better, but I will reserve such a statement until further PR from them on the subject, because I'm hoping it was just very poor wording.

The way Nvidia has improved its integer and floating point performance is... interesting. Their cores can perform either task. So far nothing from AMD suggests this capability yet, and AMD may not even care, as typically integer is by far the more important of the two in video games. Floating-point arithmetic is a lot more expensive, though it can't be completely avoided, so it is still relevant. Nvidia quotes some crazy TFLOP values at times, but as many outlets correctly point out, the value of TFLOPs as a measure of performance is becoming far less meaningful nowadays. Even though Ampere may be noticeably better than Turing in terms of raw crypto mining performance, its power consumption, along with Big Navi's, is up there, making them less desirable.

You didn't read my post properly if you thought we didn't have the numbers. Simply Google Watch Dogs: Legion benchmarks and you will see multiple outlets confirming the key points: 4K ultra without DLSS is a problem even for top-end Ampere GPUs. As for 4K ultra with ray tracing, the game quickly becomes unplayable at even much lower resolutions without DLSS. Even with DLSS, they can't use the highest-quality DLSS setting, as the performance hit from the game and ray tracing is already too immense for Ampere. Even if AMD had equal ray-tracing performance, which they don't, as AMD has already revealed, the current lack of a DLSS alternative means they would be seeing the same single-digit to low-20s framerates at 1440p-4K that the RTX 3080 and RTX 3090 were seeing, since their performance is basically even with Ampere before ray tracing and DLSS. Idk about you, but I don't think people are gunning to buy the slideshow edition of Watch Dogs or any other game.

Ah, got it. So, basically, you come up with your own, carefully-tailored definition of "competitive" so that you can say that AMD is not competitive.

Fine. I say Nvidia is not competitive. AMD is clearly superior, because it offers more Frames-per-letters-in-their-product-name.

Navi2 has only 5 letters. Ampere has 6 letters. In areas where the cards have approximately the same number of frames/second, Navi2 is better because fps/5 is greater than fps/6.

I gave an extremely valid example from the early Watch Dogs: Legion benchmarks. Why don't you find a way to refute my points? Good luck.

Eh, not really. I feel like 77 mutated into our decade's Spore long ago, but that's neither here nor there.
I feel ya. I'd imagine it's rough for those actually working on the blasted product. Here's to hoping it turns out well when it finally releases.
 

King_V

Illustrious
Ambassador
I gave an extremely valid example from the early Watch Dogs: Legions benchmarks. Why don't you find a way to refute my points? Good luck.
That's not valid at all. You can't prove that AMD falls short in performance, so you redefine "competitive" as "must match Nvidia's performance at 4K with ray tracing (or equivalent) and DLSS (or equivalent) enabled." And for whoever is playing to ALSO have good enough eyesight to be able to see the difference in the latter while things are moving at speed in 4K. And yes, I saw your token nod where you barely even mention "and maybe even 1440p" and then completely disregard it afterward.

In ONE game. Though, you also SPECULATE with regard to 2077.

And, you therefore define "competitive" based on that very narrow corner case, which covers a very small number of gamers.

No. You're not making any valid points. You have a conclusion already defined that you want ("AMD is not competitive") and you cherry-pick your arguments entirely around that.

Funny how the huge price difference at that level (RTX 3090 vs RX 6900 XT) didn't seem to factor into your "competitive" equation at all.
 
Mar 11, 2020
Waiting for lower-cost RDNA2 cards to get myself full DirectX 12 Ultimate (FL 12_2) support.

Maybe an RX 6500 XT?