
News AMD Files Patent for Hybrid Ray Tracing Solution

Can't wait to see it implemented and benchmarked. So far, Nvidia has had to work directly with developers to optimize their ray tracing to any satisfactory level. It would be fantastic for everyone if AMD found a workaround that gives developers more flexibility.
 
Seems like dedicated hardware makes "balancing" the cards more difficult. Performance in games could swing widely depending on whether the game hits the right ratio of shader versus ray-tracing workloads. If you have more general-purpose hardware, it might not be quite as efficient, but results may be more consistent.
 
Seems like dedicated hardware makes "balancing" the cards more difficult. Performance in games could swing widely depending on whether the game hits the right ratio of shader versus ray-tracing workloads. If you have more general-purpose hardware, it might not be quite as efficient, but results may be more consistent.

Not shader vs ray tracing, rasterization vs ray tracing.

And while it is hard with new hardware, it will still be the superior option, as ray tracing has always been a performance killer. Even when Intel was working on Larrabee and showing off ray tracing with it, doing it better than AMD or nVidia at the time, they used a very old game and a lower resolution than we were used to.

However, eventually we will either find the balance, where RT gets used in some areas with rasterization everywhere else, or the hardware will evolve to the point where it is much easier to perform RT, and nVidia/AMD/Intel and software devs will find a balance in working with it.

This is how it starts. Tessellation was something AMD pushed harder than nVidia, and at first very few devs wanted to use it because it killed performance in games. However, as time went on the hardware got better, nVidia and AMD both worked on it, and now it's used in almost every game.
 
Not shader vs ray tracing, rasterization vs ray tracing.
Actually, I think @spiketheaardvark meant what he said. The point being that if most of the time is spent in shading, then Nvidia's RT cores could sit idle, whereas if the scene is bottlenecked by ray-intersection tests, then shader cores could mostly sit idle.

So, I guess the idea is that you involve the shaders in more of the ray-tracing pipeline and then focus on adding more shaders with less area devoted to specialized ray-intersection hardware. This way, the shader cores are busy no matter whether the scene is shading-intensive or ray-intersection intensive.
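The trade-off being described can be sketched with a back-of-the-envelope model (my own illustration, not anything from AMD's patent; the unit counts and the 1.5x efficiency penalty are made-up assumptions): a "dedicated" GPU splits its area between shader and fixed-function RT units and is bound by whichever pool finishes last, while a "unified" GPU spends all its area on shaders that can do either kind of work, just less efficiently for ray intersection.

```python
def frame_time_dedicated(shade_work, rt_work, shader_units=80, rt_units=20):
    # Shading and ray intersection run on separate hardware pools; the frame
    # is bound by whichever pool finishes last, so a workload mix that doesn't
    # match the 80:20 area split leaves one pool idle.
    return max(shade_work / shader_units, rt_work / rt_units)

def frame_time_unified(shade_work, rt_work, units=100, rt_penalty=1.5):
    # All units handle both workloads, but general-purpose hardware is assumed
    # less efficient at ray intersection (rt_penalty > 1).
    return (shade_work + rt_work * rt_penalty) / units

# A scene matching the dedicated split (80:20) favors dedicated hardware...
print(frame_time_dedicated(800, 200), frame_time_unified(800, 200))  # 10.0 11.0
# ...while a ray-heavy scene swings the other way, stalling on the RT units.
print(frame_time_dedicated(500, 500), frame_time_unified(500, 500))  # 25.0 12.5
```

The dedicated design wins when the workload mix matches its hardware ratio and loses badly when it doesn't, which is exactly the "performance could swing widely" point above; the unified design is never the fastest at its best case but degrades smoothly.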
 
Actually, I think @spiketheaardvark meant what he said.
bit_user is correct. I was not referring to rasterization versus ray tracing as rendering methods. A huge chunk of a GPU is its shader cores. With dedicated hardware, if a game has a very heavy ray-tracing workload and insufficient resources to process it, those shaders (and a big part of your GPU) spend a lot of time idle. Kinda like dropping a 2080 into an old Core 2 Duo machine, or a 750 into a machine with an OC 9900K.

So is the future of the gaming GPU a bunch of ASICs put together, or something more general (and thus more like a CPU)? AMD has been pushing toward more general compute for a long time. Nvidia has gone the opposite way with its ray-tracing cores.
 