News AMD Unveils Big Navi: RX 6900 XT, RX 6800 XT and RX 6800 Take On Ampere


Zeifle

BANNED
Oct 28, 2020
6
1
15
I've highlighted in bold a few of the MANY areas of 'concern' with your unbelievably visible lies (extreme tactics: enabled lol) ....
  1. The VAST MAJORITY of gamers are console/mobile gamers and thus have never experienced any form of RT. That the next gen consoles now have a playable level of RT is HUGELY significant for console gaming!!!
  2. RT is still in its infancy; developers are STILL having to significantly compromise on the complexity of effect implementation.
  3. In paragraph 3 above you state that the power consumption of the 6800XT is 250w and that they must have been using an overclocked card for their comparisons. That is blatantly incorrect. THE 6800XT DRAWS 300w - STILL 20w LESS THAN THE NVIDIA 3080. The depth of knowledge - and level of detail - you are clearly trying so very hard to display is also reflected in just how far you are prepared to go on this one clearly deliberately incorrect point. This ENTIRE paragraph is WRONG.
In case anyone was in any doubt: AMD is now LEADING in BOTH CPU AND GPU, from top to bottom.

The smell of burning Nvidia/Intel-PAID-SHILLS is incredibly strong in these forums, and across the web. You, Sir, have just attained a particularly high and extremely visible level of paid-shilling. As your paymasters seek to recover costs as a result of AMD's demolition, I'd like to imagine a certain paid army is going to be rapidly de-mobbed.
However, in the current climate, please continue to make a fool of yourself - we need a laugh.
Visible lies? You may want to try re-reading my post because, despite going out of your way to highlight points, you clearly completely misunderstood what was said. In fact, you took my statements as indicating something else entirely, which is, frankly, baffling.

1. Reflections-only ray tracing is NOT going to be a big deal, and that is what most ray tracing will entail. It may look a little prettier on night streets after rain leaves puddles on the ground in specific screenshots/videos, but in reality it's a low-impact improvement in games like Watch Dogs Legion, and even more so in most other games that don't frequently feature lots of reflective material, i.e. nearly every game out there. There is a reason people have been disappointed with ray tracing so far: the majority of implementations are just reflections and the occasional underutilized shadows. Going beyond that to more advanced techniques is going to be very difficult. Are there going to be edge-case games that utilize it in a meaningfully impactful way, probably at the cost of other visuals going down but still worth it? Yes, but that will be a limited experience simply because the consoles have less-than-ideal ray-tracing capability. As ray-tracing APIs and techniques mature and new ones are developed during the generation, this may change, seemingly out of nowhere, depending on how the research develops, but you would be a fool to count on that. For those who want an actual proper and meaningful ray-tracing experience, PC and Nvidia are the way to go for the time being, especially for high-impact games like Cyberpunk 2077. If you're going to make statements like this, at least provide a proper basis and know what you are talking about, alright?

2. I stated this myself and made that VERY clear. I talked about its lack of maturity and adoption, its complexity, which virtually mandates DLSS at higher resolutions like 4K and often even lower, and the fact that most games can't afford more than fairly basic ray-traced reflections, with even that causing a significant hit in some titles. I went into further detail, but I really should not need to regurgitate what you failed to read and comprehend.

3. I went and double-checked: it was the 6800 that ran 250W while the 6800 XT ran 300W, so I got those two mixed up, my bad. However, this means the situation is actually worse. First, don't bother arguing over a 20W difference, because that is frankly irrelevant in almost any gamer's case; you are counting pennies at that point. More importantly, it means the boosted results likely put the card at the same power draw or higher (probably higher, since oddly they didn't show power for that comparison even though they did for prior benchmarks), just to barely edge out a meager gain of typically around 4% when NOT factoring in ray tracing or DLSS. Thank you for correcting me on this point; granted, you wanted to make Big Navi look better, but the correction actually shows it is in an even worse position.

AMD certainly appears to be leading in CPU, but absolutely not in GPU. You are a shill, as you still haven't contested the points made. Even if we hedge them as generously as possible, the moment DLSS, ray tracing, and next-gen IO come into the equation, any one of those points on its own paints a total loss for AMD as these technologies become more relevant. DLSS can potentially allow a card at a fraction of the price to outperform AMD's top offerings with superior quality. Ray-tracing support on AMD is so poor that games will either run limited ray-tracing options with some turned off entirely (the more noticeable ones at that), or run common ones at much lower settings in worst-case scenarios. Oh, and it also won't have DLSS to help, so ray tracing at 4K is pretty much a pipe dream. A next-gen IO solution is currently something that could single-handedly prevent a game from even being playable at any setting if it is highly IO-dependent, but this is also the one area AMD can actually FIX mid-life, albeit late to the game.

How exactly do you propose AMD is winning here? I'm not even bringing up the various GameWorks enhancements (FidelityFX is laughable compared to Nvidia's offerings, which include all the same technologies, often superior, and more). AMD lost this generation. That is the reality. However, it doesn't mean they will lose next generation, as long as they can come up with an IO solution and a DLSS contender. Even Nvidia's first outing with DLSS and ray tracing was bad, if not arguably outright horrible, and these two areas aren't widely supported just yet, though support is certainly growing sizeably with upcoming titles. No one said AMD is out of the game, but they are out of this generation if one makes an educated purchase. They will also have to sacrifice the performance lead they barely got in order to make space for improved hardware-accelerated ray tracing and AI technologies, and, depending on their IO solution, hardware for that as well if they can't develop good GPU-based software decompression.

Keep shilling though.

I couldn't care less about RT, but DLSS 2.x has merit and AMD sorely needs an alternative.
This is my personal perspective as well. RT will look nice in the very rare title like Cyberpunk 2077, but even then it still has too much room for growth for me to honestly care. However, a DLSS alternative is so important to a healthy, competitive market at this point. It's going to be a rough stretch, since it took Nvidia time to develop and they also have the edge as a world leader in artificial intelligence technologies, particularly on the subject of neural networks. That said, AMD does have a history of poaching great third-party technologies, so fingers crossed they find a robust solution that can keep up, or maybe even a diamond in the rough.

Actually the Total Board Power (TBP) is 300W for the 6800XT. We don't know if it will be pulling 300W or not until there are reviews of the card. I have a feeling that it will use less, probably in the 275W range, since the 6900XT with more CUs and higher clocks has the same TBP. We also know that while the TDP of the 3080 is 320W, it actually pulls more than that during gaming. From Jarred's RTX 3080 review, the 3080 was pulling 332W @ 1440p in Metro Exodus and 334W during FurMark.

Zeifle is selectively choosing the second 6800 XT performance comparison slide (slide 20), which shows the performance boost available from enabling Rage Mode & Smart Access Memory (SAM), to support what s/he is saying. However, s/he fails to realize that slides 16 & 17 already cover the stock-vs-stock comparison of the RTX 3080 & 6800 XT. Nothing in the end notes for slides 16 & 17 states that Rage Mode or SAM is enabled, which the end notes do say for the 6800 vs 2080 Ti test. Slide 20, though, says that enabling these features can get you even more performance, an average 6% increase over the 7 titles at 4K resolution, all while staying within the 300W TBP. And unlike Nvidia, which says you will want a 750W PSU or larger, AMD says you will only need a 650W PSU for their full AMD setup.
The 6800 XT point was corrected, though it actually made the situation worse for AMD as noted earlier in this same post.

Regarding the 650W PSU, tests have shown that the RTX 3080 FE can get by just fine on a 650W PSU as long as it is decently efficient (I wouldn't recommend a bronze-efficiency PSU, for instance). A 750W PSU is more advisable, and the same applies to Big Navi, as it is only a 20W difference. Claiming that difference means one card reliably handles a 100W-smaller PSU so much better than the other is laughable; you'd have to be willfully blind to claim one is better than the other on this point. In addition, I did point out a 320-330W power draw in my posts, so why are you attempting to correct me with information I already shared? Not that 30W is particularly significant at the 300W mark. Also, FurMark is very well known to produce unrealistic strain on GPUs, so I recommend against using it as evidence in the future; it really just shows the absolute peak end of the spectrum and doesn't translate into a meaningful comparison between two GPUs. Gamers Nexus measured around 323W in the same FurMark test, by the way, so Jarred's testing may have been a bit off, something he has fairly recently admitted regarding power consumption and thermal accuracy compared to Gamers Nexus. Why? Because he also recognizes that 330 vs 320 isn't a meaningful difference.
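To put some rough numbers on the PSU question, here's a minimal sketch of the usual headroom math. The 300W TBP and ~330W gaming peak come from this thread; the CPU and rest-of-system wattages are hypothetical placeholders, and the ~80%-of-rating guideline is just the common rule of thumb, not a spec from either vendor.

```python
# Simple PSU headroom estimate (sketch only, not a measurement).
# Assumed placeholder values: ~150W CPU under gaming load, ~75W for the
# rest of the system. Rule of thumb: keep sustained load near 80% of rating.

def system_load(gpu_watts, cpu_watts=150, rest_watts=75):
    return gpu_watts + cpu_watts + rest_watts

gpus = {"RX 6800 XT (300W TBP)": 300, "RTX 3080 (~330W peak)": 330}

for psu in (650, 750):
    for name, gpu_w in gpus.items():
        load = system_load(gpu_w)
        print(f"{psu}W PSU + {name}: ~{load}W total, {load / psu:.0%} of rating")
```

By that math either card squeaks by on a good 650W unit and sits comfortably on a 750W one, which is why the 20W difference between them doesn't really change the PSU recommendation.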

Do you know what a 6% difference actually entails with these GPUs? We aren't scraping 30 FPS, where each individual frame matters a lot. These are performance levels of typically 60-120 FPS, meaning around a 3-4 FPS difference on average at 60 FPS, and while the raw gap is larger at higher FPS, by the nature of frame rates it becomes less meaningful. Throw in adaptive sync technologies and its value drops further. In addition, this is a difference the RTX 3080 can hit with an overclock, though OC power consumption between testers has varied significantly, from a non-existent change at 317W to some hitting 338W from what I've seen. I'm not sure if a quality issue is at play here or simply testing tools and OC configurations.
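For reference, here's the arithmetic behind that point as a quick sketch, using only the ~6% average uplift and the 60/120 fps baselines mentioned above:

```python
# What a ~6% average uplift means in fps and frame-time terms (sketch only).

uplift = 0.06
for base_fps in (60, 120):
    boosted_fps = base_fps * (1 + uplift)
    base_ms, boosted_ms = 1000 / base_fps, 1000 / boosted_fps
    print(f"{base_fps} fps -> {boosted_fps:.1f} fps | "
          f"frame time {base_ms:.2f} ms -> {boosted_ms:.2f} ms "
          f"(saves {base_ms - boosted_ms:.2f} ms per frame)")
```

At 60 fps that's under a millisecond per frame; at 120 fps it's about half a millisecond, which is why the gap matters less the higher the frame rate goes.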

Just out of curiosity for those of you buying these next-gen parts, what are your screen resolution and refresh rates? I have an ultrawide 1440p at only 75Hz, so all these GPUs are overkill for me. I am hoping we get refreshes on the lower-end cards soon.
Mine is 4K 60Hz G-Sync. Check out Steam's hardware survey results for resolution.
65.49% are still at 1080p.
6.89% are at 1440p.
2.27% are at 4K.

You are certainly not alone in finding these GPUs to be largely overkill.

Two things. First, AMD may only be doing 1 Ray Accelerator (RA) per CU, and a Turing-equivalent RA at that. However, AMD has 60, 72, and 80 RAs in the revealed GPUs. Nvidia's Ampere RT cores are supposed to be ~70% faster than Turing's, but SM counts relative to shader core counts have shifted (because of the doubling of FP32 cores per SM). So the 3070 has 46 RT cores, the 3080 has 68, and the 3090 has 82. Obviously, that's still more RT performance than AMD on the last two, but AMD may not do too badly when comparing the 3070 and 6800 in RT performance. A lot of it will come down to how the RAs vs. RT cores stack up in actual use, which we don't really know. (There are leaks, but I'm not going to worry too much about those for now, as it's possible to fake things -- grains of salt and all that.)
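As a very rough illustration of how those counts might stack up, here's a minimal sketch that just multiplies unit counts by an assumed per-unit factor (1.0 for a Turing-equivalent RA, ~1.7 for an Ampere RT core). It ignores clock speeds and real workloads entirely, so treat it as an ordering at best, not a benchmark:

```python
# Back-of-the-envelope RT throughput in "Turing RT-core equivalents".
# Assumptions (not confirmed anywhere): 1 RA ~= 1 Turing RT core,
# and each Ampere RT core is ~1.7x a Turing RT core.

ray_units = {
    "RX 6800":    (60, 1.0),   # Ray Accelerators, assumed Turing-equivalent
    "RX 6800 XT": (72, 1.0),
    "RX 6900 XT": (80, 1.0),
    "RTX 3070":   (46, 1.7),   # Ampere RT cores, ~70% faster than Turing
    "RTX 3080":   (68, 1.7),
    "RTX 3090":   (82, 1.7),
}

for card, (count, per_unit) in ray_units.items():
    print(f"{card:11s} ~ {count * per_unit:6.1f} Turing-RT-core equivalents")
```

Even by that crude measure, the 6800 vs. 3070 gap is the smallest of the three matchups, though clocks and how games actually use the hardware could move things in either direction.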

As for DLSS and RTX IO, AMD did mention alternatives on both of those. FidelityFX Super Resolution is basically going to be a DLSS alternative. Will it be as good as DLSS 2.x? We don't know -- it's not out yet -- but Microsoft and Sony both have a vested interest in helping AMD create better image upscaling technologies. RTX IO meanwhile leverages Microsoft's DirectStorage API, which AMD also discussed. What's important to note is that no games have implemented RTX IO yet, so we don't know how much it will help in actual use. Sony and MS have both talked about optimizing game load times, however, which is basically the same thing RTX IO is supposed to do. In other words, don't count AMD out on the IO front.

Looks like Nvidia will still lead in RT and DLSS for the time being, though AMD might catch up with the latter. As for RT, Nvidia has worked with a lot of devs to make it easier to implement, and I don't really see the consoles as being a major factor here other than encouraging devs to find optimizations that make RT look better than rasterization without totally killing fps. Word is some of the initial PS5 / XBSX games will have two modes: one targeting 60 fps at 4K without ray tracing, and one using upscaling to 4K with RT but only targeting 30 fps.
Interestingly, it was discovered that AMD leaked ray-tracing performance for one of its GPUs (though it is unknown which). It was shown to noticeably lag behind Ampere, but it was good enough that, assuming they are quick enough with a DLSS-competitor solution, it should probably suffice, as even Gamers Nexus found Ampere's current ray-tracing performance to be overkill due to other bottlenecks in hybrid rendering.

That said, AMD's "DLSS alternative" isn't looking too good. AMD's initial statements suggest, with few qualifiers, that the solution is likely to apply only during ray-traced scenes (which is odd and makes little sense for a lot of reasons, so despite being phrased explicitly, it was probably just worded poorly and I'd take that stance with some serious salt). They have no solution at the moment whatsoever, and indicated they are merely at the stage of looking into an open, platform-agnostic solution. The fact that they are looking for a non-accelerated solution is concerning: it very likely won't work well without taking significant GPU performance away from other processing. GPUs are quite capable at some types of AI, but it also means there is no bottom-line expected performance of the kind that discrete hardware acceleration affords. This comes with too many caveats and an uncertain, arguably distant, future. It was a necessary but brutal PR move for AMD's own standing on this front, as the topic has become too relevant to ignore.

Regarding the IO front, yes, I agree, as I'm sure you saw. There simply aren't any games utilizing it, much less in a meaningful way that mandates extreme IO performance. First, RTX IO doesn't even become available for devs to start utilizing until 2021. Second, even on the consoles they likely won't be making heavily IO-dependent games in the first year or two. Even as heavily IO-dependent games become more common, it will typically be oriented toward what we already see, resulting in shorter loading screens and transitions rather than games designed in a way where it really matters. Once we start seeing open-world games designed with high IO dependency for streaming, or unique designs involving indoor/outdoor transitions, teleportation/warping between locations/realities/times, or high-speed open-world traversal (the Flash, faster flying vehicles, or even slower ones given the amount of IO involved in next-gen games), then it will be mandatory to have such a solution. AMD definitely still has time here, and it's something that can be brought forth mid-life for the Big Navi GPUs.

In addition, it's not like Nvidia's solution is some technology only they built. It uses Microsoft's DirectStorage in cooperation with Nvidia's RTX IO API, so a good chunk of the work was already completed by Microsoft. It's silly that AMD has basically been radio silent on this, and I'm hoping they don't lack the vision to understand its importance on the PC front until they end up too late (not that it would be the first time). It's not something I'm particularly concerned about so much as a bit disappointed in their handling of. This is the area I'm least concerned about for Big Navi, despite being the most critical area they could screw up.
 

w_o_t_q

Commendable
Jul 24, 2019
44
2
1,545
Another trash product from AMD in the GPU field ... Remember the Radeon VII, dead on arrival thanks to non-working drivers ... and other problems with drivers ... firmware ... compatibility with existing hardware. As an owner of an RTX 2080 (happy for 2 years now), I will pay the extra 120 bucks for an RTX 3080 FE when upgrading sometime next year, after everybody in a hurry to be first has happily gotten their cards. The reason is that I prefer Nvidia's driver stability and support and the general quality of the platform, and yes, very good performance and a focus on important, usable things ... For example, AMD gives you 16GB of slower memory, which is useful in very few cases (machine learning, etc.); Nvidia gives a little less but much better performing memory, which is usable in all cases ...
 
  • Like
Reactions: Gurg
I wonder what kind of performance the 6700 XT with 40 CUs would provide. Will AMD be able to give, say, 2080 Super performance for sub-200W?
My guess would be somewhere around there, though a lot depends on how much they push clock rates. Marketing images of the 80-CU chip show a lot of symmetry, so the 40-CU version might look a lot like that cut in half. That also means it would likely have half the Infinity Cache as well, which could impact high-resolution performance relative to the 6800 and up. If RDNA2 is getting a lot of its performance gains from keeping the frame buffer cached, then we might see 4K performance drop off further on the 6700 XT and below.

Another thought is that we might even see another card based off this higher-end chip. Even the 6800 has the full 128MB of cache enabled, but if there are a sufficient number of chips with some amount of defective cache, AMD could probably disable half of it and market it as a somewhat lower-end part. So perhaps a 6700 XT could be a 6800 with half the cache, half the VRAM, and maybe some more cores disabled, but not necessarily down to 40.

It's also possible that even lower-end parts could just be rebadged 5000-series GPUs based on RDNA1, though I would expect those to be priced a fair amount lower than the current cards to be competitive.

I honestly wouldn't be surprised if that left the card only 5-10% or so ahead of the RTX 3080 in the real world in an older or Intel CPU system, which puts it in a bad spot value-wise for many.
Perhaps, but the 3090 itself is only 10-15% faster than a 3080 in games, making it an even worse value for gaming. The 3090 does have significantly more VRAM than the 3080 to differentiate it, which can be useful to certain professionals. The 6900 XT's markup over the 6800 XT is much lower, and all of the 6000-series cards announced so far offer 16GB. Top-end cards like the 3090 and 6900 XT are not really meant to be a good value, though.
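To put the value point in concrete terms, here's a small sketch using launch MSRPs and the ~10-15% figure above (midpoint taken as 12.5%). The AMD cards are only compared on price markup, since their relative performance isn't being benchmarked here:

```python
# Rough perf-per-dollar and markup comparison (sketch, launch MSRPs).

msrp = {"RTX 3080": 699, "RTX 3090": 1499, "RX 6800 XT": 649, "RX 6900 XT": 999}
perf = {"RTX 3080": 1.0, "RTX 3090": 1.125}   # 3090 ~10-15% faster; midpoint used

for card in ("RTX 3080", "RTX 3090"):
    print(f"{card}: {perf[card] / msrp[card] * 1000:.2f} perf units per $1000")

print(f"3090 markup over 3080: {msrp['RTX 3090'] / msrp['RTX 3080']:.2f}x")
print(f"6900 XT markup over 6800 XT: {msrp['RX 6900 XT'] / msrp['RX 6800 XT']:.2f}x")
```

The output (roughly 1.43 vs 0.75 perf units per $1000, and a 2.14x vs 1.54x markup) is just the same point in numbers: the halo cards exist for bragging rights and niche workloads, not value.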

I'd not call $579 (for the 6800) any earth-shattering 'must buy now' bargain....

They should have undercut the 3070's $499 pricing by $50 or so...
I agree that $580 doesn't really appear to shift the value of cards in this price segment, but I suspect that Nvidia found out about AMD's plans for performance and pricing months before the 30-series announcement, and targeted lower-than-planned prices for their cards as a result. It may not be practical for AMD to undercut the 3070 with a graphics chip this large (and double the VRAM of a 3070). In any case, based on AMD's numbers at least, performance of the 6800 seems to be well ahead of the 3070 in most of today's games, and nearly in-between that card and the 3080. So, they don't need to price it lower than a 3070. Even matching the 3070's $500 MSRP would have likely made it a clear winner over that card, but if availability is limited and demand is expected to be high, they probably didn't see much to gain from doing that.

Yes, they did for the 6800 XT. If you look at the chart, they are comparing an overclocked 300W 6800 XT (its default power consumption is approximately 250W, as stated by AMD) to a stock RTX 3080, which typically draws around 320W, so we know it isn't overclocked like the 6800 XT is.
The 250 watts was stated for the graphics chip alone; 300 watts is for the total card, including VRAM and other hardware, not for overclocked performance. Either way, it looks to be more efficient than Nvidia's offerings this generation, at least as far as reference models go. I must say, though, you seem to be making some rather long posts for someone who only joined the site today for the sole purpose of ranting about AMD's Radeon RX 6000 announcement. Nvidia is certainly getting their money's worth. Much better than the tugboat guy. : D
 

King_V

Illustrious
Ambassador
AMD will always be second best in the video card department. They can't dominate both Intel and Nvidia; pick and choose one. Because of this, we still have dual-socket Xeon processors as a better solution for servers than a HEDT Threadripper with a lot of slow cores, but who needs them now that video rendering is done on the GPU in Premiere and what have you.
Sloppy speculation at best.
 

nofanneeded

Respectable
Sep 29, 2019
1,541
251
2,090
Disappointed the price of the AMD Radeon RX 6800 started with a 5 rather than a 4. Given they are comparing it with a 3070, it would have been nice for it to have been lower in price.

That extra $80 is actually less than the price (~$96) of the extra 8GB of GDDR6 you get over the RTX 3070.

So actually AMD is selling this card for effectively $16 less than the RTX 3070, given it has 16GB of VRAM.
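For anyone who wants to check the math, here it is spelled out; the $96 figure (roughly $12/GB of GDDR6) is the estimate above, not an official BOM number:

```python
# The arithmetic behind the "$16 less" claim (sketch; $12/GB is an estimate).

gddr6_per_gb = 96 / 8        # ~$12/GB, as implied above
extra_vram_gb = 16 - 8       # RX 6800 (16GB) vs RTX 3070 (8GB)
price_gap = 579 - 499        # RX 6800 MSRP minus RTX 3070 MSRP

extra_vram_value = extra_vram_gb * gddr6_per_gb
print(f"Extra VRAM value: ${extra_vram_value:.0f}, price gap: ${price_gap}")
print(f"Net difference: ${price_gap - extra_vram_value:.0f}")   # prints -16
```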
 

killingfield

Commendable
Oct 29, 2020
2
0
1,510
Who wants to bet that Nvidia RTX 3070/3080 cards with 16/20GB are coming very soon?

AMD giving us 16GB on all three cards, at the same performance level and for a lower price, is a serious blow to Nvidia.
The 3080 20GB GDDR6X may not be a good idea; the GDDR6X will make the 20GB edition far more expensive than the 10GB edition.
 
The 3080 20GB GDDR6X may not be a good idea; the GDDR6X will make the 20GB edition far more expensive than the 10GB edition.
If by "far more expensive" you mean $100 to maybe $150 more, yes. But for a top-tier GPU just below the 3090, that's not out of the question. AMD is doing 16GB of GDDR6 for $580, after all, so it's not like the RAM costs $300 for 10GB more. Then again, it's Nvidia, and AMD is already asking $999 for the RX 6900 XT. $999 for an RTX 3080 Ti 20GB wouldn't be shocking at all.
 

Zeifle

BANNED
Oct 28, 2020
6
1
15
I'm really curious about its ray tracing support and performance. Though I'm already hearing doubts that somehow RDNA 2 won't support existing games even though most of them are using DXR.
It will be passable at lower resolutions, most likely. Based on some info AMD let slip, it's estimated to be about 13x faster than a software solution; Turing, the RTX 2000 series, was roughly 10x or more. This puts it a bit ahead of Turing but noticeably behind Ampere, which is probably why Cyberpunk's ray tracing is explicitly stated to run only on Nvidia GPUs. Microsoft's Phil Spencer has made some statements indicating we shouldn't count on console ray tracing, tbh, but for PC we can be more hopeful from AMD, at least as a "possibility" in some cases. If you want reliably smooth 60+ fps, you're probably going to need to drop settings down a good deal, or keep settings up and lower the resolution (honestly, probably to 1080p in many cases). This is simply the result of not having a DLSS alternative. That said, while 4K looks great, it's not going to wow you as much as some nice ray tracing. Jumping from 720p to 1080p was a big deal; going to 4K, not quite as much. Sure, it is noticeable, but it's not particularly a game changer. If you're willing to go with, say, 30 fps, you might even be able to pull it off at 1440p in some cases with decently high settings. However, this all assumes only a handful of the more basic effects like shadows and reflections. Something like Cyberpunk may be entirely off the table, possibly even at 1080p (but until that is proven to be the case, take it with a dose of salt, even though there is considerable evidence supporting it). All of this is okay, except that when people buy a $500+ GPU intending to play games at 4K, or even 1440p with high settings or ultrawide, only to have to drop settings or resolution significantly, and possibly even sacrifice higher framerates, it's a problem. Ray tracing will be there, to an extent, for those who do go with Big Navi; it will just have limited support this generation, and there is virtually no way of getting around that, as they haven't even begun to actually develop a DLSS alternative, based on AMD's statement.

To put it in perspective, Big Navi seems solid in a vacuum, but when factoring in where it's competing, its price, and its lack of proper support for this new generation of games, I simply cannot say it is competitive this generation. I'd even go as far as saying this might be the worst AMD GPU generation in the past decade or longer. Not right out of the gate, mind you, but in terms of how these cards will mature: like ice cream on a summer day. In contrast, Ampere already has a foundation of numerous cutting-edge technologies that are only going to mature extremely well.
 

MasterMadBones

Distinguished
For those saying that drivers are or will be bad, there is no indication of that at this point. As far as I understand, the current 20.10.1 drivers already support RDNA2, and it's the most stable driver package we've seen from AMD in years. That doesn't necessarily predict RDNA2 stability, but it certainly looks promising.
 
  • Like
Reactions: King_V

w_o_t_q

Commendable
Jul 24, 2019
44
2
1,545
The most stable drivers yet doesn't mean they are stable enough to just use without problems. It's like a bad-quality product: the producer can say it is better than the previous one, but that doesn't mean it is actually good to use without defects ...
 
  • Like
Reactions: Gurg

King_V

Illustrious
Ambassador
The most stable drivers yet doesn't mean they are stable enough to just use without problems. It's like a bad-quality product: the producer can say it is better than the previous one, but that doesn't mean it is actually good to use without defects ...
And you (among others) are engaging in nothing more than speculation by insisting they'll be bad.
 

King_V

Illustrious
Ambassador
It will be passable at lower resolutions, most likely. Based on some info AMD let slip, it's estimated to be about 13x faster than a software solution; Turing, the RTX 2000 series, was roughly 10x or more. This puts it a bit ahead of Turing but noticeably behind Ampere, which is probably why Cyberpunk's ray tracing is explicitly stated to run only on Nvidia GPUs. Microsoft's Phil Spencer has made some statements indicating we shouldn't count on console ray tracing, tbh, but for PC we can be more hopeful from AMD, at least as a "possibility" in some cases. If you want reliably smooth 60+ fps, you're probably going to need to drop settings down a good deal, or keep settings up and lower the resolution (honestly, probably to 1080p in many cases). This is simply the result of not having a DLSS alternative. That said, while 4K looks great, it's not going to wow you as much as some nice ray tracing. Jumping from 720p to 1080p was a big deal; going to 4K, not quite as much. Sure, it is noticeable, but it's not particularly a game changer. If you're willing to go with, say, 30 fps, you might even be able to pull it off at 1440p in some cases with decently high settings. However, this all assumes only a handful of the more basic effects like shadows and reflections. Something like Cyberpunk may be entirely off the table, possibly even at 1080p (but until that is proven to be the case, take it with a dose of salt, even though there is considerable evidence supporting it). All of this is okay, except that when people buy a $500+ GPU intending to play games at 4K, or even 1440p with high settings or ultrawide, only to have to drop settings or resolution significantly, and possibly even sacrifice higher framerates, it's a problem. Ray tracing will be there, to an extent, for those who do go with Big Navi; it will just have limited support this generation, and there is virtually no way of getting around that, as they haven't even begun to actually develop a DLSS alternative, based on AMD's statement.

To put it in perspective, Big Navi seems solid in a vacuum, but when factoring in where it's competing, its price, and its lack of proper support for this new generation of games, I simply cannot say it is competitive this generation. I'd even go as far as saying this might be the worst AMD GPU generation in the past decade or longer. Not right out of the gate, mind you, but in terms of how these cards will mature: like ice cream on a summer day. In contrast, Ampere already has a foundation of numerous cutting-edge technologies that are only going to mature extremely well.

See, there you go - you were actually making reasonable points in your first paragraph, and then you had to throw logic out the window in your 2nd paragraph due to your obsessive need to engage in AMD-bashing, taking it to the point of hyperbole.

When it comes to having credibility, this post of yours just managed an epic level of snatching defeat from the jaws of victory.
 
  • Like
Reactions: Jim90

nofanneeded

Respectable
Sep 29, 2019
1,541
251
2,090
The most stable drivers yet doesn't mean they are stable enough to just use without problems. It's like a bad-quality product: the producer can say it is better than the previous one, but that doesn't mean it is actually good to use without defects ...

I doubt the new cards will have bad drivers. Xbox and PS5 are both using Ryzen + RDNA2 chips, and if AMD can make drivers for the consoles without any problems, they can easily make them for Windows.
 

w_o_t_q

Commendable
Jul 24, 2019
44
2
1,545
I doubt the new cards will have bad drivers. Xbox and PS5 are both using Ryzen + RDNA2 chips, and if AMD can make drivers for the consoles without any problems, they can easily make them for Windows.
This year AMD failed to make X570 drivers for Windows 10 ... the computer had regular problems in YouTube videos and games ... After 2 BIOS upgrades it started working (MSI X570 A-Pro, everything on automatic) ... (Luckily it was a gaming rig and waiting 2 months was not a big deal ...) Considering that I heard AMD beefed up their software development team ... it means GPU drivers will most likely have more issues; remember, chipset drivers are simpler ...
 

King_V

Illustrious
Ambassador
This year AMD failed to make X570 drivers for Windows 10 ... the computer had regular problems in YouTube videos and games ... After 2 BIOS upgrades it started working (MSI X570 A-Pro, everything on automatic) ... (Luckily it was a gaming rig and waiting 2 months was not a big deal ...) Considering that I heard AMD beefed up their software development team ... it means GPU drivers will most likely have more issues; remember, chipset drivers are simpler ...

Oh? They failed to make x570 drivers for Windows 10?

Funny, because I just went here and found x570 chipset drivers dated 10/19 for Windows 10.

Then when I clicked on Previous Drivers, I found earlier releases of X570 chipset drivers for Windows 10 dated:
  • 7/21/2020
  • 6/3/2020
  • 3/19/2020
  • 1/16/2020
So, your claim that "This year AMD failed to make X570 drivers for Windows 10" is an outright falsehood.

I don't know why you, as a sample size of 1 person, are having regular problems.

As to how on earth this correlates to your expectation of driver issues, you're not making any valid points. "Oh, people complained about Navi drivers when Navi first came out, therefore the drivers WILL suck when Navi 2 comes out" is a lousy train of logic.
 
  • Like
Reactions: Soaptrail and Jim90

MasterMadBones

Distinguished
The most stable drivers yet doesn't mean they are stable enough to just use without problems. It's like a bad-quality product: the producer can say it is better than the previous one, but that doesn't mean it is actually good to use without defects ...
As an RX 5700 XT owner, personally I haven't had any issues with the drivers for several months and we've heard similar stories from others. On top of that, the list of known issues with the 20.10.1 release has just 3 entries, which is acceptable. Only one of the problems is somewhat critical, but it seems to happen under very, very specific circumstances.

It's not like Nvidia hasn't had its own share of stability issues at recent launches, so I don't expect flawless drivers here either. Where AMD has gone wrong in the past has been the sheer number of problems and the time required to fix all of them. That is just something I don't really see happening again, considering where the currently available driver package is. The fact that performance has improved between October 8th and 28th indicates to me that most of the stability testing is finished and the driver team was able to start working on performance.
 

w_o_t_q

Commendable
Jul 24, 2019
44
2
1,545
Oh? They failed to make x570 drivers for Windows 10?

Funny, because I just went here and found x570 chipset drivers dated 10/19 for Windows 10.

Then when I clicked on Previous Drivers, I found earlier releases of X570 chipset drivers for Windows 10 dated:
  • 7/21/2020
  • 6/3/2020
  • 3/19/2020
  • 1/16/2020
So, your claim that "This year AMD failed to make X570 drivers for Windows 10" is an outright falsehood.

I don't know why you, as a sample size of 1 person, are having regular problems.

As to how on earth this correlates to your expectation of driver issues, you're not making any valid points. "Oh, people complained about Navi drivers when Navi first came out, therefore the drivers WILL suck when Navi 2 comes out" is a lousy train of logic.
I had to flash this motherboard for a friend who assembled his gaming PC ... His previous gaming system was Intel ... For him this will be the last AMD system, at least DIY ... Unstable, and waiting 2 months for issues to be fixed ...
 

nofanneeded

Respectable
Sep 29, 2019
1,541
251
2,090
The performance/power ratio is puzzling here for AMD: it beats the RTX 3080 by 20 watts (300 vs 320), yet it loses against the RTX 3070 by 30 watts (250 vs 220) while using the same chip.
 

w_o_t_q

Commendable
Jul 24, 2019
44
2
1,545
As an RX 5700 XT owner, personally I haven't had any issues with the drivers for several months and we've heard similar stories from others. On top of that, the list of known issues with the 20.10.1 release has just 3 entries, which is acceptable. Only one of the problems is somewhat critical, but it seems to happen under very, very specific circumstances.

It's not like Nvidia hasn't had its own share of stability issues at recent launches, so I don't expect flawless drivers here either. Where AMD has gone wrong in the past has been the sheer number of problems and the time required to fix all of them. That is just something I don't really see happening again, considering where the currently available driver package is. The fact that performance has improved between October 8th and 28th indicates to me that most of the stability testing is finished and the driver team was able to start working on performance.
I would not buy potentially faulty or problematic equipment ... in any field ... my personal safety, time, and nerves are more valuable ... especially when it is a 120-buck question on a card that costs in the range of 600-700 anyway (depending on vendor/model, etc.).