source ?
Common sense! That's my source. Bots are still firing on all cylinders. But whatever you say or think buddy.
Quote: Not a single mention of "hardware level ray tracing". DX12 ray tracing works on any GPU. Much like FreeSync vs GSync though, an open API (DX12) will always overtake a proprietary one given enough time. Especially when AMD is in both consoles.

Huh? "Hardware level ray tracing" is DX12 RT. Or rather, DX12 RT takes advantage of hardware-level RT to accelerate ray tracing (although it does have a fallback layer that doesn't require dedicated RT hardware, presumably with lower fidelity and/or worse performance). The same goes for Vulkan RT (I don't know whether Vulkan has a comparable fallback layer, though). Both AMD's and Nvidia's RT hardware implementations are proprietary RT accelerators, just like AMD and Nvidia GPUs as a whole are proprietary rendering (and GPGPU) accelerators.
The 6900 XT looks pretty attractive compared to the 3090 for gaming. For $500 less, you can forgive the shortcomings in the 6900 XT's feature set. The 6800 XT, on the other hand, does not look particularly appealing compared to the 3080: $50 cheaper isn't enough for basically the same performance with inferior ray tracing and no DLSS counter.
Quote: Disappointing to see the lack of talk on Ray Tracing. Seems like Navi2 doesn't have the same ray tracing capability/performance as Nvidia. I'm going to be watching for ray tracing performance testing VERY closely.

They did mention that they fully support DX12, including ray tracing.
Also, Radeon Boost is fine, but DLSS/DLSS2.0 is far superior.
One wonders what performance the 6700 XT with 40 CUs would provide. Will AMD be able to deliver, say, 2080 Super performance at under 200 W?
Really? I'm definitely AMD-leaning and I currently own an AMD GPU....
I use Radeon Boost. I understand the differences between it and DLSS 2.0
Not a single mention of "hardware level ray tracing". DX12 ray tracing works on any GPU. Much like FreeSync vs GSync though, an open API (DX12) will always overtake a proprietary one given enough time. Especially when AMD is in both consoles.
The Radeon RX 6000 series is much more than a value alternative; AMD plans to go against Nvidia's best and may come out ahead.
AMD Unveils Big Navi: RX 6900 XT, RX 6800 XT and RX 6800 Take On Ampere
Quote: RIP Nvidia ...

I'm actually thinking RIP AMD...
Quote: Looks like a powerhog that's going to run at 100 C

Given that the 6800 XT consumes less power than the RTX 3080, and the 6900 XT consumes less power than the RTX 3090, you appear to be making this up out of whole cloth.
I'm actually thinking RIP AMD...
Let's look at what AMD actually showed us:
Power consumption: AMD quoted 250 W for the RX 6800 XT, but its actual benchmark shows 300 W in order to pull a little ahead of the RTX 3080, which draws 320-330 W. The difference is fairly insignificant, though not irrelevant. Right now they could be rated as nearly identical, but that may not stay irrelevant in the long run depending on overclocks, driver improvements, etc., which can have a huge impact (as seen with Ampere, where the initial crash-bug driver fix actually improved performance). Big Navi's example is overclocked while Ampere's is not; otherwise there would be a bigger power-consumption difference, but also a greater performance increase for Ampere.
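For scale, here is a quick back-of-the-envelope comparison using only the wattages quoted above (250 W and 300 W for the 6800 XT, and 325 W as the midpoint of the 320-330 W range for the 3080); a rough sketch, not measured data:

```python
# Rough comparison of the board-power figures quoted above; these are the
# numbers cited in this thread, not measured values.
rx_6800_xt_stock = 250   # W, AMD's quoted figure
rx_6800_xt_oc    = 300   # W, the overclocked figure cited for AMD's benchmark run
rtx_3080         = 325   # W, midpoint of the 320-330 W range cited here

def pct_diff(a, b):
    """Percent difference of a relative to b."""
    return (a - b) / b * 100

print(f"OC 6800 XT vs stock 3080:  {pct_diff(rx_6800_xt_oc, rtx_3080):+.1f}% power")
print(f"OC 6800 XT vs its rating:  {pct_diff(rx_6800_xt_oc, rx_6800_xt_stock):+.1f}% power")
```

On those quoted numbers, the overclocked 6800 XT draws roughly 8% less than a stock 3080, while sitting about 20% above its own rated figure, which is why the comparison feels close but not entirely apples-to-apples.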
Infinity Cache could be a really smart move for the company's future GPUs, cutting corners on hardware while keeping them competitive, but what we are seeing in cost and specs doesn't show that costs have come down much or that the savings were reinvested elsewhere in the GPU's specs. It could be a good move going forward, but it looks questionable right now. Unfortunately, its actual impact (or lack thereof) is unknown, because we don't know anywhere near enough about how it is being utilized.
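To illustrate why the utilization question matters, here is a minimal sketch of the usual effective-bandwidth argument for a large on-die cache. The hit rate and the cache bandwidth below are made-up placeholders, and the VRAM figure assumes a 256-bit GDDR6 bus at 16 Gbps; AMD has not published data that would let us compute this precisely:

```python
# Illustrative only: how a large on-die cache can raise "effective" bandwidth.
# The hit rate and cache bandwidth are placeholders, not AMD figures.
vram_bw_gbs  = 512.0    # ~256-bit GDDR6 @ 16 Gbps (assumed)
cache_bw_gbs = 1500.0   # placeholder for on-die Infinity Cache bandwidth

def effective_bandwidth(hit_rate):
    """Blend of cache and VRAM bandwidth, weighted by how often requests hit the cache."""
    return hit_rate * cache_bw_gbs + (1.0 - hit_rate) * vram_bw_gbs

for hr in (0.3, 0.5, 0.7):
    print(f"hit rate {hr:.0%}: ~{effective_bandwidth(hr):.0f} GB/s effective")
```

The point is simply that the benefit scales with the hit rate, which is exactly the number we don't have.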
VRAM: AMD has noticeably more memory at 16GB of GDDR6, while Nvidia has 10GB of the faster GDDR6X. AMD also has Infinity Cache, which should help, though its exact usage is unknown, so this is still very much up in the air. Most of the time 10GB should easily be enough, so the difference will be nearly non-existent outside of still-uncommon 4K texture packs. In those cases, or in the rare games where streaming assets genuinely benefits from more than 10GB, AMD's lack of DLSS means it may not actually get to use that extra VRAM in its best-case scenario against Ampere. AMD's own benchmarks (to be taken with a grain of salt until we get third-party testing from Digital Foundry and others) show it keeping up with the RTX 3080 when DLSS, ray tracing, and 4K texture packs are excluded. Include ray tracing and it simply won't keep up with Ampere, which I will get into below. Include 4K texture packs and DLSS will likely still pull significantly ahead, especially with ray tracing, which effectively mandates DLSS in games to date. You could argue DLSS isn't common, but the RTX 3080's 10GB is enough for any current game and DLSS support is finally starting to grow quickly. Last, a very important topic: next-gen IO. AMD has no announced solution, while Nvidia has RTX IO plus Microsoft's DirectStorage, which would largely negate the benefit of Big Navi's VRAM lead and might even render Infinity Cache moot (take that with a grain of salt, since we don't know its exact usage; even as an L4-style cache it would be faster than typical memory fetches, which can matter, so its value may not be negated). For next-gen, IO-intensive games, the VRAM difference and Infinity Cache will be largely irrelevant; only a next-gen IO solution will help (see below).
Next-Gen IO Solutions: AMD's Big Navi has no answer for this at all. There could be something in the works that just hasn't been disclosed or leaked (and I seriously hope there is, or they will be in for an uphill battle), but IO is an incredibly relevant factor for next gen. It fundamentally alters how games can be designed, and a system without sufficient IO may simply be incapable of running such a game at all unless the game ships with a fallback path, so this matters... a lot. Nvidia offers RTX IO plus DirectStorage (Microsoft's API, which is also part of the Xbox Series X's IO solution) to handle decompression. Without RTX IO, the cost of decompression on PC, which lacks the specialized hardware the PS5 and Xbox Series X have, could bring even a Threadripper CPU to its knees in the worst case and prevent it from doing much else; even in the best case, the performance hit in IO-intensive games would be immense. RTX IO solves this for Nvidia GPUs by doing the decompression on the GPU, letting the system keep pace with such games. This isn't just about waiting two extra seconds on a loading screen: some games will be built so that streaming must be instant, and without a next-gen IO solution you're going to face unplayable stuttering, excessive load screens, disconnects, and possibly processing issues or crashes, because of how next-gen IO performance shapes game design. Nvidia's own solution isn't really available until 2021, and it won't matter until developers and their accompanying games truly start utilizing it, but it will eventually divide what IS playable from what simply CANNOT be played on a platform without appropriate IO performance.
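A rough sketch of why GPU-side decompression matters here; every number below is an assumption chosen for illustration (drive speed, compression ratio, per-core decompression throughput), not a vendor spec:

```python
# Back-of-the-envelope for the decompression argument above. Every input is an
# assumption for illustration, not a vendor figure.
ssd_read_gbs        = 7.0   # assumed raw NVMe Gen4 read speed
compression_ratio   = 2.0   # assumed average compression ratio of game assets
cpu_core_decomp_gbs = 1.0   # assumed decompressed output per CPU core

# To keep pace with the drive, the CPU must produce decompressed data this fast:
output_gbs   = ssd_read_gbs * compression_ratio
cores_needed = output_gbs / cpu_core_decomp_gbs
print(f"~{output_gbs:.0f} GB/s of decompressed output -> ~{cores_needed:.0f} CPU cores saturated")
# Offloading that work to the GPU (the idea behind RTX IO + DirectStorage) is
# what keeps the CPU free for the rest of the game.
```

Under these assumptions, keeping up with a fully saturated drive would tie up on the order of a dozen cores, which is the gist of the "bring a Threadripper to its knees" worry.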
Ray-tracing: AMD showed everything else, but not this. Don't you find that odd? They showed a host of benchmarks but no ray-tracing benchmarks or examples. They state roughly 10x faster than a software shader implementation. What does that mean? It means it isn't good enough: Nvidia quoted the same ballpark (10x or more over a software shader solution) as the baseline for the RT cores in Turing (the RTX 20-series). That puts Big Navi's ray tracing at last-gen levels, and we know Ampere's is a massive step up. In fact, early benchmarks and tests by Gamers Nexus and others (see their Quake II RTX testing) indicate we haven't even tapped the full extent of the RT cores, meaning that as ray tracing matures it will only widen the gap between Big Navi and Ampere significantly. Ampere achieves its current performance while also maintaining dedicated RT and Tensor cores, whereas AMD will have to keep improving its specialized hardware and give it more die space going forward, which suggests it will likely not mature as gracefully as Nvidia's GPUs. Fortunately for AMD and Big Navi owners, ray tracing is still immature, to put it mildly; even the more ray-tracing-oriented marketing is for games that may just have basic reflections or the occasional diffuse effect (cough, Watch Dogs Legion; Cyberpunk 2077 does much better but is still relatively ray-tracing-lite).
DLSS: AMD still has no publicly disclosed answer to DLSS, a technique that will only continue to mature. DLSS 1.0 was awful no matter how you look at it, but DLSS 2.0 is an immense improvement: it often provides outright incredible performance gains, especially with ray tracing (which is widely considered to require DLSS at higher settings to be reasonably playable), and it can even deliver better-than-native results in many cases. It still isn't perfect; it produces the occasional artifact, though Gamers Nexus found that unless someone is intently hunting for them while standing still (i.e., almost no one), the downsides are typically nearly impossible to notice. Nonetheless it has room to improve, and improving over time is exactly what this kind of technology, built on neural networks trained for the task, is designed to do. We've even seen it let the RTX 3090 achieve incredible results at 8K in demanding modern titles that would otherwise be unplayable. That said, DLSS has to be implemented per game, so support needs to become more widespread to truly matter; most of the notable major titles coming later this year and next will feature it, though, so it is finally gaining traction in exactly the games demanding enough to need it. Since AMD showed no comparisons with DLSS, or with ray tracing (where it is weaker AND which basically requires DLSS even with Ampere's stronger RT hardware), it's fair to say that once ray tracing or DLSS is in play, AMD can't honestly be said to be competing at all. With DLSS plus Ampere's superior ray tracing, even a weaker GPU like the RTX 3070 may well outperform AMD's higher offerings in games that support both, and possibly not by a small margin. This also means that in terms of long-term value, Nvidia's Ampere cards are going to mature more gracefully than AMD's Big Navi.
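For a sense of where DLSS's performance headroom comes from: the GPU shades far fewer pixels internally and the network reconstructs the output resolution. The per-axis scales below are approximations of DLSS 2.0's modes, used here only to show the pixel-count math:

```python
# Where DLSS's headroom comes from: the GPU shades far fewer pixels internally
# and the network reconstructs the output resolution. The per-axis scales below
# are approximations, not official figures.
output = (3840, 2160)   # 4K output

modes = {
    "quality":           2 / 3,   # ~67% of output resolution per axis (approx.)
    "performance":       1 / 2,   # ~50% per axis (approx.)
    "ultra performance": 1 / 3,   # ~33% per axis, the 8K-oriented mode (approx.)
}

out_pixels = output[0] * output[1]
for name, scale in modes.items():
    internal = (round(output[0] * scale), round(output[1] * scale))
    ratio = out_pixels / (internal[0] * internal[1])
    print(f"{name:>17}: renders {internal[0]}x{internal[1]}, ~{ratio:.2f}x fewer pixels shaded")
```

Shading 2-9x fewer pixels is why the gains are so large, and why ray-traced settings in particular lean on it so heavily.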
Performance and Cost: The reality is that AMD's GPUs don't really compete in a meaningful way at the high end, be it the RX 6800 XT, RTX 3080, or above. At that tier AMD may be a tad cheaper for virtually identical performance in default scenarios, but odds are that if you're spending that kind of money you want ray tracing as well. And if a game supports DLSS, it's a one-sided stomp in Nvidia's favor, especially once ray tracing is factored in, and increasingly so as DLSS matures and starts producing better-than-native results on details like billboard text, emblems, background wiring and cables, etc. The RTX 3070 tier and below, for more budget-oriented builds, is where AMD will fight its real fight.
Quote: Also, AMD is definitely skewing charts in their favor on the RX 6900 XT by enabling the one-touch Rage Mode overclocking, plus Smart Access Memory. I don't mind the latter so much -- it's the benefit of running AMD on AMD, basically -- but I do wonder if there are ways other platforms could enable larger apertures, and why it hasn't been done up until now.

Really? Wouldn't it be the opposite? Nvidia's GPU Boost 3.0 is an auto-overclocking feature that's enabled by DEFAULT.
What were the specs of the PC they were testing with? With the new Ryzen 5000 CPUs you can OC the RAM to 6536 MHz. Paired with Smart Access Memory, it would be interesting to see some benchmarks.
lol they had to use their cute OC button to get the charts to work right x3
Ngreedia shill detected.
Quote: overclocks, driver improvements, etc., which can have a huge impact (as seen with Ampere, where the initial crash-bug driver fix actually improved performance)

The new Nvidia drivers actually improved the performance of my 2080 Super. I like to run MSI Afterburner in the background and monitor CPU and GPU performance and temps after gaming. After the last driver update, the core clock started registering at 2100-2115 MHz and above, whereas before it would max out at 2070-2085. It appears the power limit was raised and is now running up to 118%.
Quote: Ray tracing and DLSS are still going to be important battlegrounds, especially with the next-gen consoles supporting RT. AMD may have an advantage, since it provides the console GPUs. Then again, 52 CUs is a decent step down from RX 6800, and the 36 CU PS5 and 20 CU Xbox Series S won't be anywhere close to Navi 21 performance. So, console games probably won't push RT as much as PC games, since we have PC GPUs that are already potentially 50% faster than the best console hardware. Anyway, exciting times!

Sadly, we already know ray-tracing performance will be a weaker showing for Big Navi, which makes console ray-tracing performance even more concerning, since the consoles will perform at an even lower level. AMD stated it will be approximately 10x a software shader approach, which is also what Nvidia cited as the bottom line for Turing's ray tracing (Turing's RT cores in the RTX 2080 were actually 10x+). I also recall a comment from Phil Spencer about his own Xbox Series X (with the assumption it applies to the PS5 as well) trying to temper expectations, suggesting devs will typically want to shy away from ray tracing or apply it in limited ways, which hints at underwhelming performance. Unfortunately, I can't find the precise statement right now; maybe someone else has come across it and can share it.
Quote: Ray Tracing is going to be my purchasing point. If AMD does it well, it's lights out for Nvidia.

It will do roughly as well as Turing, based on the information above. You will want to go with Nvidia this generation if you want ray tracing. In addition, ray tracing virtually mandates DLSS to get acceptable framerates at higher resolutions and settings, and AMD currently has no such solution. In short, AMD's ray-tracing offering will unquestionably be a failure at anything but lower resolutions this generation. Then again, most gamers are still sub-4K, so with enough tweaking of settings it might prove a little passable in less intense cases. Just don't expect good results in something like Cyberpunk 2077; that seems unlikely. The proof is in the fact that they will not offer ray tracing on consoles for this game and have even explicitly stated it will only be offered on Nvidia GPUs, i.e. they are likely aware AMD won't be able to provide acceptable performance with ray tracing enabled in their game even on the XTX.
Quote: Er .... not for the 6800xt they didn't. And for $500 savings, I would definitely click the rage button and use the SAM option to be level with a 3090 ... wouldn't you? Or are you too much of an Nvidia fanb ... I mean "enthusiast", to want to save $500? Asking for a friend.

Yes, they did for the 6800 XT. If you look at the chart, they are comparing an overclocked 300 W 6800 XT (its default power consumption is approximately 250 W, per AMD) to a stock RTX 3080, which typically draws around 320 W, so we know it isn't overclocked like the 6800 XT is. They pulled the same shady stunts by using games known not only to favor AMD GPUs but also to be unusually sensitive to CPU performance, shown on the Ryzen 5000 series. If that doesn't raise flags, then idk what to tell you.