I've highlighted in bold a few of the MANY areas of 'concern' with your unbelievably visible lies (extreme tactics: enabled lol) ....
- The VAST MAJORITY of gamers are console/mobile gamers and thus have never experienced any form of RT. That the next gen consoles now have a playable level of RT is HUGELY significant for console gaming!!!
- RT is still in its infancy; developers are STILL having to significantly compromise on the complexity of effect implementation.
- In paragraph 3 above you state that the power consumption of the 6800XT is 250W and that they must have been using an overclocked card for their comparisons. That is blatantly incorrect. THE 6800XT DRAWS 300W - STILL 20W LESS THAN THE NVIDIA 3080. The depth of knowledge - and level of detail - you are clearly trying so very hard to display is also reflected in just how far you are prepared to go on this one clearly deliberately incorrect point. This ENTIRE paragraph is WRONG.
In case anyone was in any doubt: AMD is now LEADING in BOTH CPU AND GPU, from top to bottom.
The smell of burning Nvidia/Intel-PAID-SHILLS is incredibly strong in these forums, and across the web. You, Sir, have just attained a particularly high and extremely visible level of paid-shilling. As your paymasters seek to recover costs as a result of AMD's demolition, I'd like to imagine a certain paid army is going to be rapidly de-mobbed.
However, in the current climate, please continue to make a fool of yourself - we need a laugh.
Visible lies? You may want to re-read my post as, despite going out of your way to highlight points, you clearly misunderstood what was said. In fact, you took my statements as indicating something else entirely, which is, frankly, baffling.
1. Reflections-only is NOT going to be a big deal, and that is what most ray-tracing will entail. It may look a little prettier on night streets after rain leaves puddles on the ground in specific scenario screenshots/videos, but in reality it's a low-impact improvement in games like Watch Dogs Legion, and even more so in most other games that don't frequently feature lots of reflective material, aka nearly every game out there. There is a reason people have been disappointed with ray-tracing so far: the majority of implementations are just reflections and the occasional underutilized shadows. More advanced techniques are going to be very difficult. Are there going to be edge-case games that utilize it in a meaningfully impactful way, probably at the cost of other visuals going down but still worth it? Yes, but this will be a limited experience simply because the consoles have less than ideal ray-tracing capability. As ray-tracing optimizations in terms of APIs and techniques mature, and new ones are developed during the generation, this may change nearly out of nowhere depending on how that research develops, but you would be a fool to count on it. For those who want an actual, proper, meaningful ray-tracing experience, PC and Nvidia are the way to go for the time being, especially for high-impact games like Cyberpunk 2077. If you're going to make statements like this, at least provide a proper basis and know what you are talking about, alright?
2. I stated this myself and made that VERY clear. I talked about its lack of maturity and adoption, its complexity, which virtually mandates DLSS at higher resolutions like 4K and often even lower, and the fact that most games can't afford more than fairly basic ray-traced reflections, with even that causing a significant hit in some titles. I went into further detail, but I really should not need to regurgitate what you failed to read and comprehend.
3. I went and double-checked, and it was the 6800 that runs at 250W while the 6800 XT runs at 300W, so I got those two mixed up, my bad. However, this means the situation is actually worse. First, don't bother arguing over a 20W difference, because that is frankly irrelevant in almost any gamer's case; you are counting pennies at that point. More importantly, it means those boosted results likely came at the same power draw or higher (probably higher, as they oddly didn't show it here even though they did for prior benchmarks), just to barely edge out a typically meager ~4% gain when NOT factoring in ray-tracing or DLSS. Thank you for correcting me on this point; granted, you wanted to make Big Navi look better, but the correction actually shows it is in an even worse position.
AMD certainly appears to be leading in CPU but absolutely not in GPU. You are a shill, as you still haven't contended with the points made. Even hedging them as best we can, the moment DLSS, ray-tracing, and next-gen IO come into the equation, any one of those single points paints a total loss for AMD as these technologies become more relevant. DLSS can potentially allow a card at a fraction of the price to outperform AMD's top offerings with superior quality. Ray-tracing support from AMD is so poor that games will either run limited ray-tracing options with some turned off entirely (the more noticeable ones at that) or run common ones at much lower settings in worst-case scenarios. Oh, and it also won't have DLSS to help, so ray-tracing at 4K is pretty much a pipe dream. A next-gen IO solution is currently something that could single-handedly prevent a game from even being playable at any setting if it is highly IO-performance dependent, but this is also the one area AMD can actually FIX mid-life, albeit late to the game.

How exactly do you propose AMD is winning here? I'm not even bringing up the various GameWorks enhancements (FidelityFX is laughable compared to Nvidia's offerings, which include all the same technologies, often superior, and more). AMD lost this generation. That is the reality. However, it doesn't mean they will lose next generation, as long as they can come up with an IO solution and a DLSS contender. Even Nvidia's first outing with DLSS and ray-tracing performance was bad, if not arguably outright horrible, and these two areas aren't widely supported just yet, though they are certainly growing sizably with upcoming titles. No one said AMD is out of the game, but they are out of this generation if one makes an educated purchase. They will also have to sacrifice the performance lead they barely got to make space for improved hardware-accelerated ray-tracing and AI technologies, and, depending on their IO solution, hardware decompression as well if they can't develop good software decompression on the GPU.
Keep shilling though.
I couldn't care less about RT, but DLSS 2.x has merit and AMD sorely needs an alternative.
This is my personal perspective as well. RT will look nice in the very rare title like Cyberpunk 2077, but even then it still has too much room for growth for me to honestly care. However, a DLSS alternative is critically important to a healthy competitive market at this point. It's going to be a rough road, since it took Nvidia time to develop and they also have the edge as a world leader in artificial intelligence technologies, particularly on the subject of neural networks. That said, AMD does have a history of poaching great 3rd-party technologies, so fingers crossed they find a robust solution that can keep up, or maybe even a diamond in the rough.
Actually, the Total Board Power (TBP) is 300W for the 6800XT. We don't know if it will be pulling 300W or not until there are reviews of the card. I have a feeling that it will use less, probably in the 275W range, since the 6900XT with more CUs and higher clocks has the same TBP. We also know that while the TDP of the 3080 is 320W, it actually pulls more than that during gaming. From Jarred's RTX 3080 review, the 3080 was pulling 332W @ 1440p in Metro Exodus and 334W during FurMark.
Zeifle is selectively basing what s/he is saying on the 2nd 6800XT performance comparison slide (slide 20), which shows the performance boost available from enabling Rage Mode & Smart Access Memory (SAM). However, s/he fails to realize that slides 16 & 17 already cover a stock vs. stock comparison of the RTX 3080 & 6800XT. Nothing in the end notes for slides 16 & 17 states that Rage Mode or SAM is enabled, which the end notes do say for the 6800 vs. 2080Ti test. Slide 20, though, says that enabling these features can get you even more performance, an average 6% increase over the 7 titles at 4K resolution, all while staying within the 300W TBP. However, unlike Nvidia, which says you will want a 750W PSU or larger, AMD says you will only need a 650W PSU for their full AMD setup.
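As a rough sanity check on those PSU recommendations, here is a back-of-envelope sketch; the 300W figure is AMD's stated TBP, while the CPU and rest-of-system numbers are assumptions rather than measurements:

```python
# Hypothetical whole-system estimate; CPU and platform figures are assumptions.
GPU_TBP_W = 300        # AMD's stated TBP for the 6800 XT
CPU_W = 105            # assumed gaming-load draw for a Ryzen-class CPU
REST_OF_SYSTEM_W = 75  # assumed: motherboard, RAM, drives, fans

total_w = GPU_TBP_W + CPU_W + REST_OF_SYSTEM_W
for psu_w in (650, 750):
    print(f"~{total_w}W system load is {total_w / psu_w:.0%} of a {psu_w}W PSU")
```

Under those assumed numbers the system sits around 480W, roughly 74% of a 650W unit, so a quality 650W PSU still has headroom, which is presumably the reasoning behind AMD's recommendation.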
The 6800 XT point was corrected, though it actually made the situation worse for AMD as noted earlier in this same post.
Regarding the 650W PSU: tests have shown the RTX 3080 FE can do just fine on a 650W PSU as long as it is decently efficient (I wouldn't recommend a bronze-efficiency PSU, for instance). A 750W PSU is more advisable, and the same applies to Big Navi, as we are only talking about a 20W difference. Claiming that 20W means one card handles a 100W smaller PSU so much more reliably than the other is laughable; you'd have to be willfully blind to claim one is better than the other on this point. In addition, I did point out a 320-330W power draw in my posts, so why are you attempting to correct me with information I already shared, not that 30W is particularly significant at the 300W mark? Also, FurMark is very well known to produce unrealistic strain on GPUs, so I recommend against using it as evidence in the future. It really just shows the absolute peak end of the spectrum and doesn't translate into a meaningful comparison between two GPUs in this kind of discussion. GamerNexus measured around 323W in the same FurMark test, by the way, so Jarred's testing may have been a bit off, something he has admitted relatively recently with regard to power-consumption and thermal accuracy compared to GamerNexus. Why? Because he also recognizes that 330W vs. 320W isn't a significantly meaningful difference.
Do you know what a 6% difference actually entails with these GPUs? We aren't scraping by at 30 FPS, where each individual frame matters a lot. These are performance levels of typically 60-120 FPS, meaning around a 3-4 FPS difference on average at 60 FPS, and while that is more frames at higher FPS, by the nature of frame rates it becomes less meaningful. Throw in adaptive v-sync technologies and its value drops further. In addition, this is a difference the RTX 3080 can also hit by overclocking, though the power consumption of the OC has varied significantly between testers, from a non-existent change at 317W to some hitting 338W from what I've seen. I'm not sure if a quality issue is at play here or simply differences in testing tools and OC configurations.
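To put rough numbers on that, here is a back-of-envelope sketch assuming a flat 6% uplift (real per-title results vary):

```python
# Back-of-envelope: what a ~6% average uplift means in frames and frame time.
# Assumes a flat 6% gain across the board; actual per-title results vary.
for base_fps in (60, 90, 120):
    boosted_fps = base_fps * 1.06
    extra_frames = boosted_fps - base_fps
    # The frame-time saving shrinks as FPS rises, which is why the same
    # percentage gain matters less at high frame rates (and with adaptive sync).
    saving_ms = 1000 / base_fps - 1000 / boosted_fps
    print(f"{base_fps} -> {boosted_fps:.1f} FPS "
          f"(+{extra_frames:.1f} FPS, -{saving_ms:.2f} ms per frame)")
```

At 60 FPS that works out to roughly 3-4 extra frames and a saving of well under a millisecond per frame, and the per-frame saving only gets smaller as the frame rate climbs.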
Just out of curiosity, for those of you buying these next-gen parts, what are your screen resolutions and refresh rates? I have an ultrawide 1440p at only 75Hz, so all these GPUs are overkill for me. I am hoping we get refreshes on the lower-end cards soon.
Mine is 4K 60Hz G-Sync. Check out Steam's hardware survey results for resolution.
65.49% are still at 1080p.
6.89% are at 1440p.
2.27% are at 4K.
You are certainly not alone in finding these GPUs to be largely overkill.
Two things. First, AMD may only be doing 1 Ray Accelerator (RA) per CU, and a Turing-equivalent RA at that. However, AMD has 60, 72, and 80 RAs in the revealed GPUs. Nvidia's Ampere RT cores are supposed to be ~70% faster than Turing's, but SM counts relative to shader core counts have shifted (because of the doubling of FP32 cores per SM). So the 3070 has 46 RT cores, the 3080 has 68, and the 3090 has 82. Obviously, that's still more RT performance than AMD on the last two, but AMD may not do too badly when comparing the 3070 and 6800 in RT performance. A lot of it will come down to how the RAs vs. RT cores stack up in actual use, which we don't really know. (There are leaks, but I'm not going to worry too much about those for now, as it's possible to fake things -- grains of salt and all that.)
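As a rough illustration of how those unit counts might compare, here's a sketch that assumes each RA is exactly Turing-equivalent and takes the ~70% per-core Ampere claim at face value, ignoring clocks and real-world scaling:

```python
# Rough Turing-equivalent RT throughput under the assumptions above.
AMPERE_SPEEDUP_VS_TURING = 1.7  # claimed ~70% faster per RT core (unverified)

nvidia_rt_cores = {"RTX 3070": 46, "RTX 3080": 68, "RTX 3090": 82}
amd_ray_accels = {"RX 6800": 60, "RX 6800 XT": 72, "RX 6900 XT": 80}  # 1 RA per CU

for (ncard, cores), (acard, ras) in zip(nvidia_rt_cores.items(),
                                        amd_ray_accels.items()):
    nvidia_equiv = cores * AMPERE_SPEEDUP_VS_TURING
    print(f"{ncard}: ~{nvidia_equiv:.0f} Turing-equivalent units "
          f"vs {acard}: {ras} ({nvidia_equiv / ras:.2f}x)")
```

Under those assumptions the gap is smallest in the 3070 vs. 6800 pairing, which is why that matchup is the most interesting one to watch.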
As for DLSS and RTX IO, AMD did mention alternatives on both of those. FidelityFX Super Resolution is basically going to be a DLSS alternative. Will it be as good as DLSS 2.x? We don't know -- it's not out yet -- but Microsoft and Sony both have a vested interest in helping AMD create better image upscaling technologies. RTX IO meanwhile leverages Microsoft's DirectStorage API, which AMD also discussed. What's important to note is that no games have implemented RTX IO yet, so we don't know how much it will help in actual use. Sony and MS have both talked about optimizing game load times, however, which is basically the same thing RTX IO is supposed to do. In other words, don't count AMD out on the IO front.
Looks like Nvidia will still lead in RT and DLSS for the time being, though AMD might catch up with the latter. As for RT, Nvidia has worked with a lot of devs to make it easier to implement, and I don't really see the consoles as being a major factor here other than encouraging devs to find optimizations that make RT look better than rasterization without totally killing fps. Word is some of the initial PS5 / XBSX games will have two modes: one targeting 60 fps at 4K without ray tracing, and one using upscaling to 4K with RT but only targeting 30 fps.
Interestingly, it was discovered that AMD leaked ray-tracing performance numbers for one of its GPUs, though it is unknown which. They noticeably lag behind Ampere, but they were good enough that, assuming AMD is quick with a DLSS competitor, it should probably suffice; even GamerNexus found Ampere's current ray-tracing performance to be overkill due to other bottlenecks in the hybrid rendering approach.
That said, AMD's "DLSS alternative" isn't looking too good. Initial statements from AMD indicate, with little ambiguity, that the solution is likely to apply only to ray-traced scenes (which is odd and makes little sense for a lot of reasons; even though it was phrased explicitly, it was likely just worded poorly, so I'd take that stance with some serious salt). They have no solution at the moment whatsoever and indicated they are merely at the stage of looking into an open, cross-platform-agnostic solution. The fact that they are looking for a non-accelerated solution is concerning, as it very likely won't work well without taking significant GPU performance away from other processing. GPUs are quite capable at some types of AI, but this also means there is no baseline of expected performance of the sort discrete hardware acceleration affords. This comes with too many caveats and an uncertain, arguably distant, future. It was a necessary but brutal PR move for AMD's own standing on this front, as the topic has become too relevant to ignore.
Regarding the IO front, yes, I agree, as I'm sure you saw. There simply aren't any games utilizing it, much less in a meaningful way that mandates extreme IO performance. First, RTX IO doesn't even become available for devs to start utilizing until 2021. Second, even on the consoles they won't likely be making heavily IO-dependent games in the first year or two. Even as heavily IO-dependent games become more common, the result will typically be what we already see: shorter loading screens and transitions, not games designed in a way where IO truly matters. Once we start seeing open-world games designed with high IO dependency for streaming, or unique game designs involving indoor/outdoor transitions, teleportation/warping between locations/realities/times, or high-speed open-world traversal (the Flash, fast flying vehicles, or even slower ones given the amount of IO involved in next-gen games), then it will be mandatory to have such a solution. AMD definitely still has time here, and it's something that can be brought forth mid-life for the Big Navi GPUs. In addition, it's not as if Nvidia's solution is some technology only they built: it uses Microsoft's DirectStorage in cooperation with Nvidia's RTX IO API, so a good chunk of the work was already done by Microsoft. It's silly that AMD has basically been radio silent on this, and I'm hoping they don't lack the vision to understand its importance on the PC front until it results in them being too late, not that it would be the first time. It's not something I'm particularly concerned about so much as just a bit disappointed by in terms of their handling. This is the area I'm least concerned about for Big Navi, despite being the most critical area they could screw up.