Explicit Multi-Adapter Ray Tracing

Thanks, I tried searching Google, but nothing came up with regards to RT. Is it possible, though? How would someone even go about doing it?
What would be the point of having the iGPU render anything? UI, I guess, but that isn't exactly taxing for a dedicated GPU anyway.

Unlikely that would ever go well. Transferring the data from card to card would be way too slow, and you aren't going to see anything like an NVLink between an Nvidia and an AMD GPU. AMD stopped requiring a bridge a long time ago, and that had consequences for frame time pacing with CrossFire, which made SLI the superior option for many years.
Explicit Multi-Adapter was only ever implemented in basically one game, Ashes of the Singularity, and even there it was essentially a tech demo (honestly, I think they did it for marketing purposes). It allowed the system to render using ANY GPU in the system, including iGPUs. It did cause frame pacing issues, though.
Basically, the CPU tells GPU0 to make a frame, GPU1 to make a frame, and so on, then has to re-order the frames depending on when they come back and present them properly through the primary display memory buffer. That means constantly sending whole completed frames over the PCIe bus, plus the additional work of re-ordering them correctly.
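A minimal sketch of that dispatch-and-reorder loop, in plain Python. The GPU names, render times, and jitter are made-up illustration values, and the PCIe copies are left out entirely:

```python
import random

# Hypothetical per-frame render costs in ms; illustration values only,
# not measurements of any real hardware.
GPUS = [("dGPU", 13.0), ("iGPU", 50.0)]

def simulate_afr(num_frames):
    """Alternate-frame rendering sketch: the CPU hands frame N to the GPUs
    in round-robin order, then must present results strictly in frame order."""
    free_at = [0.0] * len(GPUS)   # when each GPU's queue drains
    done = []                     # completion time of frame i
    for frame in range(num_frames):
        g = frame % len(GPUS)
        cost = GPUS[g][1] * random.uniform(0.9, 1.3)  # render time plus jitter
        free_at[g] += cost
        done.append(free_at[g])

    # Present in order: frame N cannot be shown before frames 0..N-1, so one
    # slow GPU stalls the chain even when later frames are already finished.
    shown = prev = 0.0
    for frame, t in enumerate(done):
        shown = max(shown, t)
        print(f"frame {frame:2d}  frame time {shown - prev:6.1f} ms")
        prev = shown

simulate_afr(12)
```

Even with only two GPUs and zero transfer cost modeled, the in-order present step turns every slow or late frame into a stall for everything queued behind it.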
Doing that with ray tracing would be a whole other level of complicated. You would have to calculate the ray tracing first, then send that data over to the primary GPU doing the rasterization, so that the modifications to the pixel colors can be applied.
I can imagine that would only make things even slower than just keeping it all inside a single GPU's pipeline.
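Some rough arithmetic on why the transfer alone hurts. Everything below is an assumption for illustration: a 1080p buffer of per-pixel ray tracing results at 16 bytes per pixel, and about 13 GB/s of usable PCIe 3.0 x16 bandwidth:

```python
# Back-of-envelope cost of shipping per-pixel ray tracing results to the
# primary GPU every frame. All numbers here are assumptions.
width, height = 1920, 1080
bytes_per_pixel = 16        # e.g. an RGBA16F lighting buffer plus extra payload
pcie_bytes_per_s = 13e9     # usable PCIe 3.0 x16, below the 15.75 GB/s peak

buffer_bytes = width * height * bytes_per_pixel
transfer_ms = buffer_bytes / pcie_bytes_per_s * 1000
print(f"{buffer_bytes / 1e6:.1f} MB per frame -> {transfer_ms:.2f} ms just to copy")
# Roughly 33 MB and 2.5 ms per frame: a fifth of a 13 ms frame budget spent
# before either GPU does any rendering work with that data.
```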
How did it work with PhysX, if that is the case? I remember hearing a ton about it, where people would have a dedicated low-end Nvidia GPU just for it.
PhysX wasn't as major a part of the graphics pipeline as ray tracing would be. That tight coupling is also the reason why dual-GPU cards and SLI/CrossFire setups had so many troubles with latency: the misses from dual-GPU or SLI/CF rendering were catastrophic for frame times.
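A rough comparison of the per-frame payloads shows why PhysX offload was workable while frame sharing was not. All sizes below are assumptions for illustration (10,000 rigid bodies at one 4x4 float matrix each, versus one finished 1080p RGBA8 frame):

```python
# Rough comparison of what crosses the PCIe bus per frame in each scheme.
# All sizes are assumptions for illustration.
objects = 10_000
bytes_per_transform = 64                  # one 4x4 float matrix per rigid body
physics_mb = objects * bytes_per_transform / 1e6

frame_mb = 1920 * 1080 * 4 / 1e6          # one finished 1080p RGBA8 frame

print(f"physics results: {physics_mb:.2f} MB per frame")
print(f"finished frame:  {frame_mb:.2f} MB per frame")
# About 0.64 MB of transforms versus about 8.3 MB of pixels, and the physics
# results feed into rendering as inputs instead of gating the final image.
```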
As for the iGPU only rendering the UI: it doesn't work that way. You would have the iGPU render whole frames, not just the UI, all of it.
Say your iGPU could manage 20 FPS and your discrete GPU could do 60 FPS. In a perfect world you would then get 80 FPS total.
In practice you don't achieve that, and the different render rates mean that any miss causes huge frame pacing issues. Say you get frames 1, 2, and 3 from the primary GPU and frame 4 from the iGPU. What happens when frame 8, or 12, or 16 doesn't arrive in pace with the other GPU? Either the previous frame from the primary is displayed again, or you wait for the late one. Either way, your frame times go from 13 ms, 13 ms, 13 ms to a 110 ms spike, then back to 13 ms, and so on.
Frame rate is not really a good indicator of performance; consistent frame production is.
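That difference is easy to put in numbers. Both made-up frame-time traces below average roughly 60 FPS, but only one of them would feel smooth:

```python
# Two made-up frame-time traces in ms. Both average about 60 FPS, but one
# has a hitch every sixth frame, mirroring a slow secondary GPU missing.
smooth = [16.7] * 48
spiky = ([8.0] * 5 + [60.0]) * 8

for name, times in (("smooth", smooth), ("spiky ", spiky)):
    avg_fps = 1000 * len(times) / sum(times)
    worst = max(times)
    print(f"{name}: {avg_fps:5.1f} avg FPS, worst frame {worst:5.1f} ms")
# Identical averages, very different experiences: the 60 ms hitches are what
# you feel, and frame-time numbers, not FPS counters, are what reveal them.
```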
Is there any situation where using the iGPU to render anything makes sense? Like you said, the misses mean huge stutters, which makes it pointless, not to mention that it only ever worked in one game.
I suppose driving a separate monitor would be a great use for an iGPU, considering it has been shown that using multiple monitors does impact game performance.

No, not really.
AMD had Hybrid CrossFire around that time as well, which let you pair the iGPU with a matching low-end dGPU. It had the same issues, though, and because the iGPU used system memory it was always inherently slower. I did give it a try once with an HD 6310 and an HD 6670, and it did not go great; performance was smoother with the 6670 on its own.
iGPUs can be used for other tasks, though. Video encoding, for one, or running a separate display.
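For example, you can hand a recording or transcode to an Intel iGPU's Quick Sync encoder while the discrete GPU keeps its whole budget for the game. A sketch driving ffmpeg's h264_qsv encoder from Python; the file names and bitrate are placeholders, and it assumes an ffmpeg build with QSV support:

```python
import subprocess

# Offload a transcode to the iGPU's fixed-function encoder (Intel Quick Sync
# via ffmpeg's h264_qsv encoder). Paths and bitrate are placeholders.
subprocess.run([
    "ffmpeg",
    "-i", "gameplay_capture.mkv",   # hypothetical input recording
    "-c:v", "h264_qsv",             # encode on the iGPU, not in software
    "-b:v", "8M",
    "encoded.mp4",
], check=True)
```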
And the way AI is going, if some of that workload gets baked into the iGPU, then that as well.