Discussion: Is it possible to use an Nvidia card for RT acceleration while using an AMD card to do everything else?

Unlikely that would ever go well. Transferring the data from card to card would be way too slow, and you aren't going to see anything like NVLink between an Nvidia and an AMD GPU. AMD stopped requiring a bridge a long time ago, and that had consequences for frame time pacing with CrossFire, which made SLI the superior option for many years.

Explicit Multi-Adapter was only really implemented in one game, Ashes of the Singularity, and even there it was essentially a tech demo (honestly, I think they did it for marketing purposes). It allowed the system to render using ANY GPU in the system, including iGPUs, but it did cause frame pacing issues.

Basically the CPU tells GPU0 to make a frame, GPU1 to make a frame, and so on, then has to re-order the results depending on when they come back and display them through the primary card's display buffer. That means constantly sending whole completed frames over the PCIe bus, plus the extra work of re-ordering them properly.
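To make the re-ordering part concrete, here is a toy sketch in Python (my own illustration of the idea, not any real driver or API; the function and variable names are made up):

```python
import heapq

# Toy model: frames finish whenever their GPU finishes, but they must be
# presented in the order the CPU issued them, so a late frame holds back
# every frame queued behind it.
completed = []        # min-heap of (frame_index, frame_data)
next_to_present = 0

def on_frame_complete(frame_index, frame_data):
    # assume frame_data has already been copied over PCIe into the
    # primary card's memory before this is called
    global next_to_present
    heapq.heappush(completed, (frame_index, frame_data))
    # present every frame that is now in order
    while completed and completed[0][0] == next_to_present:
        heapq.heappop(completed)
        print(f"presenting frame {next_to_present}")
        next_to_present += 1

# GPU0 is fast, GPU1 is slow: frame 3 (GPU1's) shows up after frames 4 and 5,
# so frames 4 and 5 sit in the buffer until it arrives
for idx in [0, 1, 2, 4, 5, 3, 6]:
    on_frame_complete(idx, f"pixels for frame {idx}")
```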

Doing that with ray tracing would be a whole other level of complicated. You would have to calculate the ray tracing first, then send that data over to the primary GPU so the rasterization pass can apply the resulting changes to the pixel colors.
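Roughly this kind of round trip every single frame, sketched here with stand-in stub objects (purely hypothetical names; nothing in this maps to a real API):

```python
# Conceptual sketch only: two cards, one cross-card round trip per frame.
class StubGPU:
    def __init__(self, name):
        self.name = name

    def trace_rays(self, scene):
        return f"lighting data for {scene} from {self.name}"

    def rasterize_and_shade(self, scene, rt_data):
        return f"final frame of {scene} shaded with [{rt_data}]"

def copy_over_pcie(payload):
    # every hop like this is a full PCIe copy plus a synchronization point
    return payload

rt_card = StubGPU("Nvidia RT card")
primary = StubGPU("AMD primary card")

rt_data = rt_card.trace_rays("scene")                 # step 1: ray tracing on one card
rt_data = copy_over_pcie(rt_data)                     # step 2: ship the results across
print(primary.rasterize_and_shade("scene", rt_data))  # step 3: rasterize and shade
```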

I can only imagine that would make things slower compared to keeping it all inside a single GPU's pipeline.
 
I mean, my best friend and I tried an experiment where we took two GPUs that performed nearly identically (one AMD, one Nvidia), put them in the same rig, booted something like, I don't know, Cold War, and tried to see if we could actually do that. We kinda did get it to work; I just don't remember how we had the cards arranged.
 
What would be the point of having the IGPU render anything? UI, I guess, but that isn't exactly taxing for a dedicated GPU anyway.
 
How did it work with PhysX, if that's the case? I remember hearing a ton about people running a dedicated low-end Nvidia GPU just for it.
 
What would be the point of having the IGPU render anything? UI, I guess, but that isn't exactly taxing for a dedicated GPU anyway.
It doesn't work that way. You would have the iGPU render whole frames, not just the UI but all of it.

Say your iGPU could manage 20 FPS and your discrete GPU could do 60 FPS. In a perfect world you would then get 80 FPS total.

In practice you don't achieve that, and the different render rates mean any miss causes huge frame pacing issues. So you get frames 1, 2, 3 from the primary and frame 4 from the iGPU. What happens when frame 8 or 12 or 16 doesn't arrive on time, in pace with the other GPU? Either the previous frame from the primary is displayed again, or you wait for it. Either way you go from 13 ms - 13 ms - 13 ms to a 110 ms spike, then back to 13 ms, and so on.
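A toy simulation of that 20 + 60 FPS case makes the pacing problem visible (made-up scheduling model and numbers, purely for illustration):

```python
# Toy model of a 60 FPS dGPU paired with a 20 FPS iGPU under alternate-frame
# rendering. Frames are handed out 3-to-1 so both GPUs stay busy (80 FPS on
# average), but frames must be shown in the order they were issued.
DGPU_MS, IGPU_MS = 1000 / 60, 1000 / 20   # ~16.7 ms and 50 ms per frame

def present_intervals(pattern="DDDI", frames=16, hiccup_frame=None):
    free_at = {"D": 0.0, "I": 0.0}        # when each GPU finishes its last frame
    cost = {"D": DGPU_MS, "I": IGPU_MS}
    shown_at, last = [], 0.0
    for i in range(frames):
        gpu = pattern[i % len(pattern)]
        render_ms = cost[gpu] + (60 if i == hiccup_frame else 0)  # optional late frame
        free_at[gpu] += render_ms
        last = max(last, free_at[gpu])     # in-order presentation: a late frame stalls
        shown_at.append(last)
    return [round(b - a, 1) for a, b in zip(shown_at, shown_at[1:])]

print(present_intervals())                 # intervals are uneven even when nothing misses
print(present_intervals(hiccup_frame=7))   # one late iGPU frame -> a big spike, then a pile-up
```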

Frame rate is not really a good indicator of performance, but consistent frame production is.
 

How did it work with PhysX, if that's the case? I remember hearing a ton about people running a dedicated low-end Nvidia GPU just for it.
PhysX wasn't as major a part of the graphics pipeline as ray tracing would be.

If my understanding is correct, it was mostly math for collisions, occlusion, and so on. The raw data about WHAT to render was sent to the primary GPU; no pre-rendering took place.
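A rough back-of-the-envelope comparison (my own illustrative numbers, not anything from the thread) of what a physics co-processor hands back each frame versus what whole-frame multi-adapter has to copy:

```python
# Physics results are small structured data; a finished frame is a full image.
num_objects = 10_000
transform_bytes = 16 * 4                 # one 4x4 float matrix per simulated object
physics_payload = num_objects * transform_bytes

frame_payload = 3840 * 2160 * 4          # one finished 4K frame at 8-bit RGBA

print(f"physics results: {physics_payload / 1e6:.2f} MB per frame")   # ~0.64 MB
print(f"finished frame:  {frame_payload / 1e6:.1f} MB per frame")     # ~33.2 MB
```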

It wasn't long before latency started getting in the way and it made more sense to use the fastest GPU you had to perform PhysX or fall back on the CPU to do it.
 
That's also the reason dual-GPU cards and SLI/CrossFire setups had so many troubles with latency: misses like that in dual-GPU or SLI/CF rendering were catastrophic for frame times.
 
Is there any situation where using the iGPU to render anything makes sense? Like you said, the misses mean huge stutters, which makes it pointless, not to mention that it only ever worked in one game.
 
Yes, though how much that mattered depended a lot on how many FPS you were pumping out.

That's one of the reasons I have an early G-Sync monitor; it prevented a lot of weird issues with GTX 980 SLI.

When I ran GTX 580 SLI I would still often run V-Sync at 60 Hz and basically just crank the graphics settings up to the extremes at that point. Though if I'm honest, I didn't utilize that setup to its fullest; a lot of my gaming at the time was older titles. I got the second card on eBay around the time the 700 series was launching, I think.
 
Is there any situation where using the iGPU to render anything makes sense? Like you said, the misses mean huge stutters, which makes it pointless, not to mention that it only ever worked in one game.

No, not really.

AMD had Hybrid CrossFire around that time as well, which let you pair up GCN GPUs between the iGPU and a dGPU, but it had the same issues. Because the iGPU used system memory, it was always inherently slower. I gave it a try once with an HD 6310 and an HD 6670, and it did not go great; performance was smoother with the 6670 on its own.

iGPUs can be used for other tasks, though: video encoding, for one, or running a separate display.

The way AI is going, if some of that gets baked into the iGPU, then that as well.
 
I suppose a separate monitor would be a great use for an iGPU, considering it has been shown that driving multiple monitors does impact game performance.