[SOLVED] What are the AMD GPUs that are going to support PTGI/Ray tracing after Driver Update?

Page 2 - Seeking answers? Join the Tom's Hardware community: where nearly two million members share solutions and discuss the latest tech.

lux1109

Proper
BANNED
Apr 30, 2019
With the upcoming PCIe 4.0, I've been wondering if the extra bandwidth is going to be enough to let the CPU carry some of the real-time ray-tracing load. I mean, AMD has been hinting at 12-core "gaming CPUs". We don't exactly need 12 cores (24 threads) for gaming, unless... hmmm....

Yeah, I know. I'm allowed to dream though. ;)
IMO, only the upcoming AMD motherboards support PCIe Gen 4, and I doubt this is going to go mainstream soon. The ray-tracing load is always going to be on the video card in most cases. I was reading an article just now, and PCIe 4.0 might help with bandwidth, but can all systems benefit from this?
 

knickle

Distinguished
Jan 25, 2008
IMO, only the upcoming AMD motherboards support PCIe Gen 4, and I doubt this is going to go mainstream soon. The ray-tracing load is always going to be on the video card in most cases. I was reading an article just now, and PCIe 4.0 might help with bandwidth, but can all systems benefit from this?
My understanding is that storage devices designed for PCIe 4.0 will benefit greatly.

In regards to ray tracing, it has always been primarily a CPU task. And while GPU-based real-time ray tracing seems like a good idea, it has to share video memory with the rest of the scene. Ray tracing is a memory-hungry thing, and gaming GPUs are somewhat limited in that department.
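To put a rough number on the "memory hungry" part: a ray tracer needs an acceleration structure (typically a BVH) over every triangle in the scene, on top of the textures and buffers the rasterizer already uses. A back-of-envelope sketch (the node size and triangle count here are illustrative assumptions, not vendor figures):

```python
# Illustrative back-of-envelope: VRAM needed just for a BVH acceleration
# structure over a scene's triangles. Node size and triangle count are
# assumptions for the sake of the estimate, not figures from any vendor.
def bvh_size_mb(triangles: int, bytes_per_node: int = 32) -> float:
    """A binary BVH with one triangle per leaf has ~2N-1 nodes total."""
    nodes = 2 * triangles - 1
    return nodes * bytes_per_node / 1e6

# A hypothetical 10-million-triangle game scene:
print(f"~{bvh_size_mb(10_000_000):.0f} MB for the BVH alone")  # → ~640 MB
```

That's hundreds of megabytes before any geometry, textures, or framebuffers, all competing for the same GDDR6.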
 
Reactions: lux1109

lux1109

Proper
BANNED
Apr 30, 2019
My understanding is that storage devices designed for PCIe 4.0 will benefit greatly.

In regards to ray tracing, it has always been primarily a CPU task. And while GPU-based real-time ray tracing seems like a good idea, it has to share video memory with the rest of the scene. Ray tracing is a memory-hungry thing, and gaming GPUs are somewhat limited in that department.
Exactly..
 
Speaking of PCIe generations, PCIe 6.0 has already been announced and drafted by PCI-SIG.

IMO, PCIe 6.0 is the new hotness, packing a 2x increase in transfer speeds over PCIe 5.0, a 4x boost in raw bit rate over PCIe 4.0, and a whopping 8x improvement over the 32 GB/s of bandwidth that an x16 PCIe 3.0 configuration can offer.

PCIe 6.0 is a standard that's currently within its planning stages, with PCI-SIG planning to ratify and release their full standard in 2021, two years after the release of PCIe 5.0. In the PC space, it typically takes two years for PCIe standards to make it into the PC market, making it probable that we will start seeing PCIe 6.0 compliant PCs in 2023 or 2024.

To put the performance of PCIe 6.0 into context, it could enable M.2 devices to deliver speeds that are up to eight times faster than today's Samsung 970 Pro series SSD, which is widely regarded as one of the best performing M.2 SSDs on the market.

PCIe 6.0 should enable higher levels of bandwidth for those who need it while granting PC users access to today's bandwidth levels over fewer PCIe lanes. The only problem with this rapid evolution is that new PCIe 4.0 devices will soon find themselves replaced with PCIe 5.0 and then PCIe 6.0 over the next 5 or so years.

https://www.businesswire.com/news/home/20190618005945/en/PCI-SIG®-Announces-Upcoming-PCI-Express®-6.0-Specification
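The doubling per generation in those numbers is easy to sanity-check. A quick sketch, taking the ~32 GB/s figure the article quotes for an x16 PCIe 3.0 link as the baseline and ignoring encoding overhead:

```python
# Rough x16 link bandwidth per PCIe generation. Baseline is the ~32 GB/s
# the quoted article gives for PCIe 3.0 x16; each generation doubles
# the raw bit rate. Encoding overhead is ignored for simplicity.
BASE_GEN = 3
BASE_BW_GBPS = 32  # GB/s for a full x16 PCIe 3.0 link (as quoted above)

def x16_bandwidth(gen: int) -> int:
    """Approximate x16 bandwidth in GB/s for a given PCIe generation."""
    return BASE_BW_GBPS * 2 ** (gen - BASE_GEN)

for gen in range(3, 7):
    print(f"PCIe {gen}.0 x16: ~{x16_bandwidth(gen)} GB/s")
# PCIe 6.0 works out to ~256 GB/s: 2x over 5.0, 4x over 4.0, 8x over 3.0.
```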
 
In regards to ray tracing, it has always been primarily a CPU task. And while GPU-based real-time ray tracing seems like a good idea, it has to share video memory with the rest of the scene. Ray tracing is a memory-hungry thing, and gaming GPUs are somewhat limited in that department.
It's not really the GPU doing the task, but the specialized RT cores. About memory: this is where stuff like NVLink will be very useful, because NVLink lets you combine the amount of VRAM available on each card.
 

lux1109

Proper
BANNED
Apr 30, 2019
Speaking of PCIe generations, PCIe 6.0 has already been announced and drafted by PCI-SIG. [...] PCIe 6.0 should enable higher levels of bandwidth for those who need it while granting PC users access to today's bandwidth levels over fewer PCIe lanes.
6.0 has already been finalized? Why? PCIe 4.0 hasn't even hit the market yet, and we are still a few years away from revision 4.0 going mainstream.
 
6.0 has already been finalized? Why? PCIe 4.0 hasn't even hit the market yet, and we are still a few years away from revision 4.0 going mainstream.
I think you misunderstood my comment. That's the way it actually works: PCI-SIG always plans for the future. But note that revision 6.0 has only just been drafted; it's still a long way off when it comes to the advantages and final specs of this PCIe revision.

Obviously, PCIe 4.0 will hit the market soon. We will only start seeing PCIe 6.0-compliant PCs in 2023 or 2024, and that's a far cry.
 

knickle

Distinguished
Jan 25, 2008
It's not really the GPU doing the task, but the specialized RT cores. About memory: this is where stuff like NVLink will be very useful, because NVLink lets you combine the amount of VRAM available on each card.
It's just a new pipeline inside the GPU. And ray tracing is all that pipe is good for. If your game doesn't have ray tracing features, that part of the GPU goes to sleep. And when the game does support RT, that part of the GPU has to share the same GDDR6 with the rest of the GPU.

Right from NVidia's developer website: "RTX-capable GPUs include dedicated ray tracing acceleration hardware, use an advanced acceleration structure and implement an entirely new GPU rendering pipeline to enable real-time ray tracing in games and other graphics applications." https://developer.nvidia.com/rtx

I'd rather spend money on a 12 or 16 core CPU to enable ray tracing in games than spend a boat load of money on an over engineered GPU that may be obsolete in 2 years. Maybe game engine developers are already working on this... who knows.
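For what it's worth, ray tracing is embarrassingly parallel per pixel, which is why more CPU cores do help almost linearly in principle. A toy sketch of spreading primary rays across cores (the `trace_pixel` function is a hypothetical stand-in for real shading work, not any engine's API):

```python
# Toy sketch: primary rays are independent per pixel, so they spread
# trivially across CPU cores. trace_pixel() is a hypothetical stand-in
# for real ray generation + BVH traversal + shading.
from multiprocessing import Pool

WIDTH, HEIGHT = 64, 48  # tiny "image" to keep the demo quick

def trace_pixel(xy):
    x, y = xy
    # Stand-in for the per-pixel work a real tracer would do:
    return (x * 31 + y * 17) % 256  # fake luminance value

if __name__ == "__main__":
    pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
    with Pool() as pool:  # one worker per core by default
        image = pool.map(trace_pixel, pixels)
    print(len(image))  # 3072 samples, one per pixel
```

The catch, as noted below in the thread, is that even with perfect scaling a 16-core CPU is far slower at this than dedicated RT hardware.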
 
It's just a new pipeline inside the GPU. And ray tracing is all that pipe is good for. If your game doesn't have ray tracing features, that part of the GPU goes to sleep. And when the game does support RT, that part of the GPU has to share the same GDDR6 with the rest of the GPU.
It is no different than tessellation.

I'd rather spend money on a 12 or 16 core CPU to enable ray tracing in games than spend a boat load of money on an over engineered GPU that may be obsolete in 2 years. Maybe game engine developers are already working on this... who knows.
The problem is that performance will be very slow if all the calculations are pushed to the CPU, even a multi-core one. Just look at how ray tracing was done in the past for movies: it was done on server farms consisting of thousands of CPUs. Then GPGPU came along, and now those server farms are equipped with GPUs to accelerate the job by offloading the majority of the workload. RT cores will do an even better job of it (though the RT cores are part of the GPU).

Another example of this is mining. In the very early days, mining was done purely on the CPU. Then the algorithms were made to take advantage of the GPU to speed up mining performance. Then, finally, ASICs were made that are even more effective than GPUs.
 

knickle

Distinguished
Jan 25, 2008
It is no different than tessellation.

The problem is that performance will be very slow if all the calculations are pushed to the CPU, even a multi-core one. Just look at how ray tracing was done in the past for movies: it was done on server farms consisting of thousands of CPUs. Then GPGPU came along, and now those server farms are equipped with GPUs to accelerate the job by offloading the majority of the workload. RT cores will do an even better job of it (though the RT cores are part of the GPU).

Another example of this is mining. In the very early days, mining was done purely on the CPU. Then the algorithms were made to take advantage of the GPU to speed up mining performance. Then, finally, ASICs were made that are even more effective than GPUs.
As far as I know, the CPU is still the dominant force behind render farms in the movie industry. I've done research on this, and I haven't seen any indication that this has changed or is going to anytime soon. If you have evidence to the contrary, please post the links, as I am interested in reading about this.
 
As far as I know, the CPU is still the dominant force behind render farms in the movie industry. I've done research on this, and I haven't seen any indication that this has changed or is going to anytime soon. If you have evidence to the contrary, please post the links, as I am interested in reading about this.
https://nvidianews.nvidia.com/news/pixar-animation-studios-licenses-nvidia-technology-for-accelerating-feature-film-production

As part of the agreement, NVIDIA will also contribute ray-tracing technology to Pixar's OpenSubdiv Project, an open-source initiative to promote high-performance subdivision surface evaluation on massively parallel CPU and GPU architectures. This will enable rendering of complex Catmull-Clark subdivision surfaces in animation with unprecedented precision.
This is from 2015. The push to move ray tracing from the CPU to the GPU is an effort that GPU makers have been making for a long time. Back in 2010, NVIDIA tried to make GPUs more capable in ray-tracing computation:

https://www.nvidia.com/object/io_1269840921303.html

The GTX 480 is also the world’s first consumer GPU to enable interactive ray tracing for ultra photo-realistic scenes.
 

knickle

Distinguished
Jan 25, 2008
https://nvidianews.nvidia.com/news/pixar-animation-studios-licenses-nvidia-technology-for-accelerating-feature-film-production

This is from 2015. The push to move ray tracing from the CPU to the GPU is an effort that GPU makers have been making for a long time. Back in 2010, NVIDIA tried to make GPUs more capable in ray-tracing computation:

https://www.nvidia.com/object/io_1269840921303.html
Unfortunately, I have to run to work and didn't dig too deep into the article, but they appear to be referencing the creation phase, not the final render. Final renders aren't performed on the fly; they take insanely long times to complete and are done at the end, as they're very expensive.
 

lux1109

Proper
BANNED
Apr 30, 2019
I think you misunderstood my comment. That's the way it actually works: PCI-SIG always plans for the future. But note that revision 6.0 has only just been drafted; it's still a long way off when it comes to the advantages and final specs of this PCIe revision.

Obviously, PCIe 4.0 will hit the market soon. We will only start seeing PCIe 6.0-compliant PCs in 2023 or 2024, and that's a far cry.
Okay, thanks... that makes sense now. I was a bit confused before.
 

david_the_guy

BANNED
May 11, 2019
In regards to ray tracing, it has always been primarily a CPU task. And while GPU-based real-time ray tracing seems like a good idea, it has to share video memory with the rest of the scene. Ray tracing is a memory-hungry thing, and gaming GPUs are somewhat limited in that department.
RT these days is basically done on the GPU. And I don't think gaming GPUs are limited in terms of memory, as I was getting similar answers from some other forums as well. So I wanted to update.
 

knickle

Distinguished
Jan 25, 2008
RT these days is basically done on the GPU. And I don't think gaming GPUs are limited in terms of memory, as I was getting similar answers from some other forums as well. So I wanted to update.
RT has been introduced on a very limited scale for games, where the memory requirements are lower. The exact opposite is true for professional ray tracing. That was my point.

None of the current GPU manufacturers invented ray tracing. It's very old tech. We haven't seen it in games until now because it's hard to do in real time. The reason we're starting to see it now is that GPUs have gotten to a point where even entry-level cards are "good enough" for gaming in high definition. Manufacturers need to stay relevant so that gamers continue to find reasons to upgrade. RT is the latest attempt.
 
Reactions: renz496
