[SOLVED] Do AMD RX 6000-series GPUs have ray tracing?

Status
Not open for further replies.
Solution
The AMD Radeon RX 6000 series has support for hardware-accelerated ray tracing. However, where Nvidia uses dedicated ray tracing cores, AMD has made some tweaks to existing GPU blocks to accelerate the instruction and data patterns normally associated with ray tracing.

AMD's ray tracing is a step behind Nvidia's performance in most current games, though. Part of that deficit may be architectural, but some games have also shown that it may just be a matter of optimization, which could be resolved over time.
Is that why, then, AMD's GPUs take a harder hit with RT, if they have to allocate stream cores to ray trace?

bniknafs9

Honorable
Jan 21, 2019
646
65
10,190
I'm interested also. I tried to use RT in Cyberpunk but it was greyed out.
I'm new to AMD GPUs and just recently got this card. The SW isn't as easy (simple) as Nvidia's.
I think Crimson Adrenalin looks both cooler and more helpful than Nvidia's control panel, though I have to admit I might never have the money to buy something like a 3090.
But from what I understand, AMD doesn't have basic ray tracing? (It's software-powered? I don't know what that means.) I only want to know if it can unlock the ray tracing option in Call of Duty: Modern Warfare.
 
  • Like
Reactions: Bassman999

Bassman999

Prominent
Feb 27, 2021
481
99
440
I think Crimson Adrenalin looks both cooler and more helpful than Nvidia's control panel, though I have to admit I might never have the money to buy something like a 3090.
But from what I understand, AMD doesn't have basic ray tracing? (It's software-powered? I don't know what that means.) I only want to know if it can unlock the ray tracing option in Call of Duty: Modern Warfare.
I'm waiting on a PCIe 4.0 riser cable to hopefully get my card running again. Then I can sort through the SW some more.
 
If it was made with DirectX 12's DXR, then yes, it should work on the Radeon RX 6000 series. DXR is not specific to NVIDIA.
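To make that concrete, here's a minimal sketch (my own illustration, not something from any particular game) of the standard, vendor-neutral D3D12 feature query a program can use to find out whether DXR is available at all:

```cpp
// Sketch: the standard D3D12 feature query for DXR support. It's vendor-
// neutral, so the same check passes on GeForce RTX and Radeon RX 6000 cards.
#include <windows.h>
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    ID3D12Device* device = nullptr;
    // Default adapter; feature level 12_0 is enough to create the device.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("no D3D12 device available");
        return 1;
    }
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts, sizeof(opts))) &&
        opts.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        std::puts("DXR supported");  // tier 1.0 or higher
    } else {
        // Old GPU, old driver, or Windows 10 before version 1809.
        std::puts("DXR not supported");
    }
    device->Release();
    return 0;
}
```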

If it was made with Vulkan, chances are no. Vulkan only recently got GPU-agnostic ray tracing extensions, and games that shipped Vulkan ray tracing before then use NVIDIA's vendor-specific extensions.
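Again just as an illustration (mine, not from any shipping game): you can see which path a Vulkan title has available by enumerating device extensions and looking for the vendor-neutral VK_KHR_ray_tracing_pipeline versus NVIDIA's older VK_NV_ray_tracing:

```cpp
// Sketch: list which ray tracing path each Vulkan GPU advertises.
// VK_KHR_ray_tracing_pipeline is the vendor-neutral extension; VK_NV_ray_tracing
// is NVIDIA-only and predates it.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        uint32_t extCount = 0;
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &extCount, nullptr);
        std::vector<VkExtensionProperties> exts(extCount);
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &extCount, exts.data());

        bool khr = false, nv = false;
        for (const VkExtensionProperties& e : exts) {
            if (!std::strcmp(e.extensionName, "VK_KHR_ray_tracing_pipeline")) khr = true;
            if (!std::strcmp(e.extensionName, "VK_NV_ray_tracing")) nv = true;
        }
        std::printf("cross-vendor RT: %s, NVIDIA-only RT: %s\n",
                    khr ? "yes" : "no", nv ? "yes" : "no");
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```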

I'm not sure if it matters any more given how old it is, but just as a sanity check, you do need to be on Windows 10 version 1809 or later.
 

bniknafs9

Honorable
Jan 21, 2019
646
65
10,190
If it was made with DirectX 12's DXR, then yes, it should work on the Radeon RX 6000 series. DXR is not specific to NVIDIA.

If it was made with Vulkan, chances are no. Vulkan only recently got GPU-agnostic ray tracing extensions, and games that shipped Vulkan ray tracing before then use NVIDIA's vendor-specific extensions.

I'm not sure if it matters any more given how old it is, but just as a sanity check, you do need to be on Windows 10 version 1809 or later.

My Mortal Kombat 11 sometimes runs with DX12 and sometimes not. Why is that?
And are the rumors true that with the new Windows 10 update and DX12 you can add DPU VRAM in CrossFireX?
 
  • Like
Reactions: Bassman999
My Mortal Kombat 11 sometimes runs with DX12 and sometimes not. Why is that?
What do you mean "sometimes run with DX12, sometimes not"? Does it randomly switch which API it uses?

And are the rumors true that with the new Windows 10 update and DX12 you can add DPU VRAM in CrossFireX?
If by DPU you mean a "Data processing unit", no game is going to support it. Otherwise what do you mean by "DPU VRAM"?
 

MasterMadBones

Distinguished
The AMD Radeon RX 6000 series has support for hardware-accelerated ray tracing. However, where Nvidia uses dedicated ray tracing cores, AMD has made some tweaks to existing GPU blocks to accelerate the instruction and data patterns normally associated with ray tracing.

AMD's ray tracing is a step behind Nvidia's performance in most current games, though. Part of that deficit may be architectural, but some games have also shown that it may just be a matter of optimization, which could be resolved over time.
 
  • Like
Reactions: Bassman999

bniknafs9

Honorable
Jan 21, 2019
646
65
10,190
What do you mean "sometimes run with DX12, sometimes not"? Does it randomly switch which API it uses?
And are the rumors true that with the new Windows 10 update and DX12 you can add DPU VRAM in CrossFireX?
If by DPU you mean a "Data processing unit", no game is going to support it. Otherwise what do you mean by "DPU VRAM"?
Sorry, my typo: a GPU, not a DPU.

Mortal Kombat 11 sometimes launches on DirectX 12 and sometimes it simply won't launch! I'm using an AMD 5700 XT.
 

Bassman999

Prominent
Feb 27, 2021
481
99
440
The AMD Radeon RX 6000 series has support for hardware-accelerated ray tracing. However, where Nvidia uses dedicated ray tracing cores, AMD has made some tweaks to existing GPU blocks to accelerate the instruction and data patterns normally associated with ray tracing.

AMD's ray tracing is a step behind Nvidia's performance in most current games, though. Part of that deficit may be architectural, but some games have also shown that it may just be a matter of optimization, which could be resolved over time.
Is that why, then, AMD's GPUs take a harder hit with RT, if they have to allocate stream cores to ray trace?
 
Solution
Like in Call of Duty: Modern Warfare (2019), if you buy a 6000-series card, can you unlock the ray tracing option, since it mentions the name Nvidia?

The 6800 XT is about as fast as a 2080 Ti without DLSS. The 2080 Ti WITH DLSS blows it away.

The 3060 is faster than the 2080 Ti in ray tracing. That's how large the gap is.

AMD isn't really offering much value or incentive this generation. For $50 more (MSRP) you can get a 3080, which will beat the 6800 XT in ray tracing quite handily, and blow it out of the water when DLSS is engaged and at 4K. The 6800 XT is faster in non-DLSS 2K and 1080p traditional rasterization (non-ray-traced).

But it's kind of a moot point, as everyone who has either one in hand wants to F you over on price. Because greed.
 

MasterMadBones

Distinguished
Is that why, then, AMD's GPUs take a harder hit with RT, if they have to allocate stream cores to ray trace?
To get back to this question: yes and no. It depends on the implementation. As an example, Spider-Man: Miles Morales on PS5 is considered a very good implementation of AMD's technology, and it's definitely up there with Nvidia's RT.

The major difference appears to be in how AMD handles memory accesses, which is one of the biggest bottlenecks for real-time ray tracing at the moment. There is some dedicated hardware for handling bounding volume and triangle intersections, but AMD has noted in patents that RT memory access patterns are similar to those involved in texture mapping. Basically, they made some extensions to the TMUs to deliver the data to the ray accelerators, whereas Nvidia's RT cores handle the data themselves.
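To illustrate why those access patterns are the bottleneck, here's a toy BVH traversal sketch (my own simplification, nothing like the real hardware logic): every step loads a node whose address depends on the previous load, and neighbouring rays quickly diverge to different nodes, which is exactly the kind of dependent, scattered read a texture unit is already built to hide:

```cpp
// Toy BVH traversal (my own illustration, not AMD's or Nvidia's actual logic).
// The point is the memory pattern, not the math: latency hiding and caches
// matter more than raw FLOPS here.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct BvhNode {
    float bounds[6];                // AABB: min x/y/z, then max x/y/z
    int32_t left = -1, right = -1;  // child indices; -1 means leaf
};

// Slab test: does the ray (origin o, inverse direction invD) hit the box?
static bool rayHitsBox(const float o[3], const float invD[3], const float* b) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float t0 = (b[a]     - o[a]) * invD[a];
        float t1 = (b[a + 3] - o[a]) * invD[a];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Returns true if the ray reaches any leaf box (triangle test omitted here).
bool traverse(const std::vector<BvhNode>& nodes,
              const float o[3], const float invD[3]) {
    int stack[64];
    int top = 0;
    stack[top++] = 0;  // start at the root node
    while (top > 0) {
        const BvhNode& n = nodes[stack[--top]];  // dependent, scattered read
        if (!rayHitsBox(o, invD, n.bounds)) continue;
        if (n.left < 0) return true;  // leaf reached
        stack[top++] = n.left;        // interior node: visit both children
        stack[top++] = n.right;
    }
    return false;
}

int main() {
    BvhNode root;
    float box[6] = {-1, -1, -1, 1, 1, 1};
    std::copy(box, box + 6, root.bounds);
    std::vector<BvhNode> bvh{root};
    float o[3] = {0, 0, -5}, invD[3] = {1e30f, 1e30f, 1.0f};  // ray along +z
    std::printf("hit: %d\n", traverse(bvh, o, invD));
    return 0;
}
```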

AMD does use the SPs for denoising, but not for the ray tracing itself. That shouldn't be a huge problem, because compute utilization on modern GPUs is generally quite low. It's one of the reasons why Vega and Ampere don't perform as well in games as the raw TFLOPS and memory bandwidth would suggest.

AMD's implementation requires far less die space on its own, but in theory it should cause higher access latency for the ray accelerators, in addition to potential resource contention issues. Part of AMD's solution is to use a very large L3 cache. Obviously that adds to die space again, but it's a tradeoff that AMD thought was worth it, as it also helps performance elsewhere and reduces board complexity by not needing a very wide memory bus.
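As a back-of-envelope illustration of that tradeoff (all numbers below are mine and purely illustrative, not AMD's official figures), a large on-die cache raises effective bandwidth without widening the memory bus:

```cpp
// Back-of-envelope sketch: effective bandwidth as a weighted average of the
// cache path and the DRAM path. All figures are illustrative assumptions.
#include <cstdio>

int main() {
    double dramBw  = 512.0;   // GB/s: 256-bit GDDR6 @ 16 Gbps (6800 XT class)
    double cacheBw = 1900.0;  // GB/s: rough order of magnitude for a large L3
    double hitRate = 0.6;     // assumed fraction of accesses served by the cache

    double effective = hitRate * cacheBw + (1.0 - hitRate) * dramBw;
    std::printf("effective: ~%.0f GB/s vs %.0f GB/s from DRAM alone\n",
                effective, dramBw);  // ~1345 vs 512 with these assumptions
    return 0;
}
```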

It has been said that although it probably won't be able to match Ampere in terms of overall quality and complexity, with the right optimization it should be able to work with next to no performance hit; however, that is something we still need to see. For now, developers have had the most time to optimize for Nvidia's hardware. Only time will tell what the balance really is, but don't count on existing games starting to perform much better than they already do.
 
  • Like
Reactions: Bassman999

Bassman999

Prominent
Feb 27, 2021
481
99
440
To get back to this question: yes and no. It depends on the implementation. As an example, Spider-Man: Miles Morales on PS5 is considered a very good implementation of AMD's technology, and it's definitely up there with Nvidia's RT.

The major difference appears to be in how AMD handles memory accesses, which is one of the biggest bottlenecks for real-time ray tracing at the moment. There is some dedicated hardware for handling bounding volume and triangle intersections, but AMD has noted in patents that RT memory access patterns are similar to those involved in texture mapping. Basically, they made some extensions to the TMUs to deliver the data to the ray accelerators, whereas Nvidia's RT cores handle the data themselves.

AMD does use the SPs for denoising, but not for the ray tracing itself. That shouldn't be a huge problem, because compute utilization on modern GPUs is generally quite low. It's one of the reasons why Vega and Ampere don't perform as well in games as the raw TFLOPS and memory bandwidth would suggest.

AMD's implementation requires far less die space on its own, but in theory it should cause higher access latency for the ray accelerators, in addition to potential resource contention issues. Part of AMD's solution is to use a very large L3 cache. Obviously that adds to die space again, but it's a tradeoff that AMD thought was worth it, as it also helps performance elsewhere and reduces board complexity by not needing a very wide memory bus.

It has been said that although it probably won't be able to match Ampere in terms of overall quality and complexity, with the right optimization it should be able to work with next to no performance hit; however, that is something we still need to see. For now, developers have had the most time to optimize for Nvidia's hardware. Only time will tell what the balance really is, but don't count on existing games starting to perform much better than they already do.
I have read that AMD and Nvidia use opposite approaches in their GPU designs. One prioritizes SW and the other HW, or something. I don't really understand it.
 
I have read that AMD and Nvidia use opposite approaches in their GPU designs. One prioritizes SW and the other HW, or something. I don't really understand it.
The "NVIDIA does things in software/AMD does thing in hardware" thing is probably the task sorting. AMD GPUs sort tasks in the hardware, whereas NVIDIA GPUs get presorted tasks from the drivers. However, NVIDIA's rationale for doing this was because the tasks were predictable enough that there wasn't really a need for a hardware sorter. While this incurs a minor hit on the CPU side, this allowed them to get rid of something in hardware.

What NVIDIA and AMD do is not necessarily a case of "one is objectively better than the other." It's just that they have different ideas on how to achieve the same thing.
 

Bassman999

Prominent
Feb 27, 2021
481
99
440
The "NVIDIA does things in software/AMD does thing in hardware" thing is probably the task sorting. AMD GPUs sort tasks in the hardware, whereas NVIDIA GPUs get presorted tasks from the drivers. However, NVIDIA's rationale for doing this was because the tasks were predictable enough that there wasn't really a need for a hardware sorter. While this incurs a minor hit on the CPU side, this allowed them to get rid of something in hardware.

What NVIDIA and AMD do is not necessarily a case of "one is objectively better than the other." It's just that they have different ideas on how to achieve the same thing.
I just assumed they were different just to be different, so they don't have to compete on an even field. Like price matching with another store, but each store sells different brands, so you can't compare directly.
 

MasterMadBones

Distinguished
The "NVIDIA does things in software/AMD does thing in hardware" thing is probably the task sorting. AMD GPUs sort tasks in the hardware, whereas NVIDIA GPUs get presorted tasks from the drivers. However, NVIDIA's rationale for doing this was because the tasks were predictable enough that there wasn't really a need for a hardware sorter. While this incurs a minor hit on the CPU side, this allowed them to get rid of something in hardware.

What NVIDIA and AMD do is not necessarily a case of "one is objectively better than the other." It's just that they have different ideas on how to achieve the same thing.
I'll add to this that these decisions were made years ago, when games were DX9- or DX11-based and didn't really utilize more than 2 or 4 threads. That left some CPU performance on the table, and Nvidia decided they could use it to do the scheduling. AMD, on the other hand, was developing the console APUs, which had very weak CPUs. For them it was a better idea to move the scheduling into hardware, and that design trickled down into their consumer GPUs. Both options are totally understandable (and Nvidia generally had less driver overhead at the time), but now that games use DX12, scale to more cores, and make more draw calls, the overall CPU load is higher. That means "slower" CPUs end up bottlenecking the GPU a little sooner when paired with Nvidia hardware.
 
On a side note, I've always wondered whether the impact of NVIDIA's system was really minor, so I decided to do a quick Google search.

It turns out it can be bad on lower-end or quad-core CPUs.


Though note this is for modern GPUs. It may not have been this bad for earlier GPUs.
 
Status
Not open for further replies.