Like in Call of Duty: Modern Warfare (2019), if you buy a 6000-series card, can you unlock the ray tracing option, since it mentions the name Nvidia?
> The AMD Radeon RX 6000 series has support for hardware-accelerated ray tracing; however, where Nvidia uses dedicated ray tracing cores, AMD has made some tweaks to some of the existing GPU blocks to accelerate the instruction and data patterns normally associated with ray tracing.
> AMD's ray tracing is a step behind Nvidia's performance in most current games, though. Part of that deficit may be architectural, but some games have also shown that it may just be a matter of optimization, which could be resolved over time.

Is that why AMD's GPUs take a harder hit with RT, then? Because they have to allocate stream cores to ray trace?
I'm interested too. I tried to use RT in Cyberpunk but it was greyed out.
I think Crimson Adrenaline looks both cooler and more helpful than Nvidia's control panel, though I have to admit I might never have the money to buy something like a 3090.
New to AMD GPU cards, and I just recently got this card. The software isn't as easy (simple) as Nvidia's. I'm waiting on a 4.0 riser cable to hopefully get my card running again; then I can sort through the software some more.
But from what I understand, AMD doesn't have basic ray tracing? (It's software powered? I don't know what that means.) I only want to know if it can unlock the ray tracing option in Call of Duty: Modern Warfare.
If it was made with DirectX 12's DXR, then yes, it should work on the Radeon RX 6000 series. DXR is not specific to NVIDIA.
If it was made with Vulkan, chances are no. Vulkan only recently got GPU-agnostic ray tracing extensions, and games that shipped Vulkan ray tracing before then rely on NVIDIA's vendor-specific extensions.
I'm not sure if it matters any more given how old that update is, but just as a sanity check: you do need to be on Windows 10 version 1809 or later.
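To make the DXR-versus-Vulkan point above concrete, here is a minimal sketch of the decision a game engine effectively makes. The extension names are the real Vulkan identifiers, but the function itself and the way the names are passed in are hypothetical, just for illustration:

```python
# Illustrative sketch: deciding which Vulkan ray tracing path a device supports,
# given the list of extension name strings reported by the driver.
# The extension names are real Vulkan identifiers; the function is hypothetical.

# Cross-vendor ray tracing extensions (supported by both RX 6000 and RTX cards
# with recent drivers)
KHR_RT_EXTENSIONS = {
    "VK_KHR_acceleration_structure",
    "VK_KHR_ray_tracing_pipeline",
}

# NVIDIA's original vendor-specific extension (RTX only)
NV_RT_EXTENSION = "VK_NV_ray_tracing"

def ray_tracing_path(device_extensions):
    """Return which RT code path is usable ("KHR", "NV"), or None if neither."""
    exts = set(device_extensions)
    if KHR_RT_EXTENSIONS <= exts:
        return "KHR"  # vendor-agnostic path: works on AMD and NVIDIA
    if NV_RT_EXTENSION in exts:
        return "NV"   # NVIDIA-only path: option stays greyed out on AMD
    return None
```

A game that only ever checks for `VK_NV_ray_tracing` would report no ray tracing support on an RX 6000 card even though the hardware is capable, which matches the greyed-out option described above.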
> My Mortal Kombat 11 sometimes runs with DX12, sometimes not. Why is that?

What do you mean, "sometimes runs with DX12, sometimes not"? Does it randomly switch which API it uses?
And are the rumors true that with the new Windows 10 update and DX12 you can add DPU VRAM in CrossfireX?
> Sorry, my typo: a GPU, not a DPU.

Mixing GPUs has been around since the beginning: https://www.gamersnexus.net/game-bench/2326-amd-nvidia-sli-directx-12-benchmark-explicit-multi-gpu . The problem is that it requires explicit game support, just like SLI and CrossFire.
Mortal Kombat 11 sometimes launches on DirectX 12 and sometimes it simply won't launch! I'm using an AMD 5700 XT.
> Is that why AMD's GPUs take a harder hit with RT, then? Because they have to allocate stream cores to ray trace?

To reply to this question still: yes and no. It depends on the implementation. As an example, Spider-Man: Miles Morales on PS5 is considered a very good implementation of AMD's technology, and it's definitely up there with Nvidia's RT.
I have read that AMD and Nvidia use opposite approaches to their GPU design. One is SW and the other HW priority, or something. I don't understand it, really.
The major difference appears to be in how AMD handles memory accesses, which is one of the major bottlenecks for real-time ray tracing currently. There is some dedicated hardware for handling bounding volume and triangle intersections, but AMD has noted in patents that RT memory access patterns are similar to those involved in texture mapping. Basically they made some extensions to the TMUs to deliver the data to the ray accelerators, whereas Nvidia's RT cores handle the data themselves.
AMD does use the SPs for denoising, but not for the ray tracing itself. That shouldn't be a huge problem, because compute utilization on modern GPUs is generally quite low. It's one of the reasons why Vega and Ampere don't perform as well in games as the raw TFLOPS and memory bandwidth would suggest.
AMD's implementation requires far less die space on its own, but in theory it should cause higher access latency for the ray accelerators, in addition to potential resource contention issues. Part of AMD's solution is to use a very large L3 cache. Obviously that adds to die space again, but it's a tradeoff that AMD thought was worth it, as it also helps performance elsewhere and reduces board complexity by not needing a very wide memory bus.
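The cache tradeoff described above can be illustrated with the standard average-memory-access-time formula. The latency and hit-rate numbers below are invented placeholders, not measured RDNA 2 figures; the point is only that a large on-die cache with a high hit rate can more than offset a slower path to VRAM:

```python
# Illustrative average memory access time (AMAT) model.
# All latency and hit-rate numbers are invented for illustration,
# NOT measured RDNA 2 values.

def amat(hit_rate, cache_latency_ns, memory_latency_ns):
    """Average access latency with a single cache level in front of VRAM."""
    return hit_rate * cache_latency_ns + (1.0 - hit_rate) * memory_latency_ns

# Hypothetical: no large cache, so every RT memory access goes to VRAM
no_cache = amat(0.0, 0.0, 300.0)      # 300.0 ns on average

# Hypothetical: a large on-die cache catches most BVH traversal accesses
with_cache = amat(0.75, 40.0, 300.0)  # 0.75*40 + 0.25*300 = 105.0 ns
```

Under these made-up numbers, the cache cuts the average access latency by almost two thirds, which is the kind of effect that lets AMD get away with a narrower memory bus.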
It has been said that, although in terms of overall quality and complexity it probably won't be able to match Ampere, with the right optimization it should be able to work with next to no performance hit; however, that is something we still need to see. For now, developers have had the most time to optimize for Nvidia's hardware. Only time will tell what the balance really is, but don't expect existing games to start performing much better than they already do.
> I have read that AMD and Nvidia use opposite approaches to their GPU design. One is SW and the other HW priority, or something.

The "NVIDIA does things in software / AMD does things in hardware" thing is probably the task sorting. AMD GPUs sort tasks in hardware, whereas NVIDIA GPUs get presorted tasks from the drivers. However, NVIDIA's rationale for doing this was that the tasks were predictable enough that there wasn't really a need for a hardware sorter. While this incurs a minor hit on the CPU side, it allowed them to remove something from the hardware.
I just assumed they were different just to be different, so they wouldn't have to compete on an even field.
What NVIDIA and AMD do is not necessarily a case of "one is objectively better than the other." It's just that they have different ideas on how to achieve the same thing.
I'll add to this that these decisions were made years ago, when games were DX9- or DX11-based and didn't really utilize more than 2 or 4 threads. That left some CPU performance on the table, and Nvidia decided they could use it to do the scheduling. AMD, on the other hand, was developing the console APUs, which had very weak CPUs; for them it was a better idea to move the scheduling into hardware, and that trickled down into their consumer GPUs. Both options are totally understandable (and Nvidia generally had less driver overhead at the time), but now that games use DX12, use more cores, and make more draw calls, the overall CPU load is higher. That means "slower" CPUs end up bottlenecking the GPU a little sooner when paired with Nvidia hardware.
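The CPU-bottleneck argument above can be sketched with a toy frame-time model: the frame rate is limited by whichever side, CPU or GPU, takes longer per frame. All the millisecond figures below are invented for illustration, not benchmarks:

```python
# Toy frame-time model. The frame rate is capped by whichever side
# (CPU or GPU) takes longer per frame. All numbers are hypothetical.

def fps(cpu_ms, gpu_ms):
    """Frames per second when CPU and GPU work overlap each frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

GPU_MS = 8.0              # hypothetical GPU render time per frame
DRIVER_OVERHEAD_MS = 2.0  # hypothetical extra CPU cost of driver-side scheduling

# Fast CPU: the extra 2 ms of driver work still hides behind the GPU time
fast_plain  = fps(6.0, GPU_MS)                       # 125.0 fps
fast_driver = fps(6.0 + DRIVER_OVERHEAD_MS, GPU_MS)  # still 125.0 fps

# Slow CPU: the same 2 ms pushes the CPU past the GPU and caps the frame rate
slow_plain  = fps(9.0, GPU_MS)                       # ~111 fps
slow_driver = fps(9.0 + DRIVER_OVERHEAD_MS, GPU_MS)  # ~91 fps
```

The same fixed driver overhead is invisible on the fast CPU but costs about 20 fps on the slow one, which is the effect described above.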