Question: Is SLI worth it?

ch33r

I figured out that I can get two RTX 2080s for almost the price of one RTX 2080 Ti. If I could SLI two 2080s, I would get roughly 80% more performance than one 2080 Ti (I think). But I have also been told that many games have stopped supporting SLI, and that DX12 has very little, if any, support for it. Can some highly experienced souls clarify the facts?
 
I haven't ordered parts for it yet. I just want to know that I can buy any game and use SLI. What I'm also trying to figure out is whether games are dropping SLI altogether, and whether what I said about DX12 is true. PSU and all that I can get; that's not the issue. I'm just trying to make sure I don't go SLI only to find out I can't use it. I've been told DX11 games work with SLI no problem, but a lot of people are saying that SLI has issues with new games and with DX12 itself.
 
NOPE....Going SLI is NOT worth the hassle these days. Not many games scale well on such a setup. You would be better off buying a SINGLE powerful GPU instead.
 
Most games do not have proper SLI support; typically only AAA titles do, and even then the results vary.
If you are hoping to get SLI performance in all (or even more than a handful) of games, it's just not going to happen anymore.

So does this mean that Nvidia and AMD are just going to do away with CrossFire/SLI altogether? And why are games no longer supporting it?
 
Implementing SLI/CrossFire requires a lot of coding, as well as resources/time. The game developers need to make sure that the game's engine is going to scale well. Apart from this, NVLink might take the place of SLI in the near future, mostly under the DX12 API.

I think the main advantage of NVLink is that it might help with peer-to-peer access and VRAM stacking, because essentially the GPUs are much more tightly connected now, which also brings the latency of a GPU-to-GPU transfer way down.

So unlike SLI, where a transfer had to go through PCIe as well as system memory, NVLink behaves in a different manner. We can think of it as an app that works on one GPU while it looks at another GPU and does something else at the same time. So it seems NVLink will be the future when it comes to multi-GPU setups, but sadly ONLY in the high-end market segment, as the other Turing cards will lack NVLink support.
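
For what it's worth, the peer-to-peer access described above is something you can poke at directly with the standard CUDA runtime API. Below is a minimal sketch, assuming a machine where two GPUs show up as devices 0 and 1; the calls (cudaDeviceCanAccessPeer, cudaDeviceEnablePeerAccess, cudaMemcpyPeer) are standard CUDA, and whether the copy actually rides NVLink rather than PCIe depends on your hardware and driver, not on the code.

```
// Sketch only: query and enable peer-to-peer access between two GPUs,
// then do a direct GPU-to-GPU copy. Device IDs 0 and 1 are assumed.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) {
        std::printf("Need at least two GPUs for peer-to-peer.\n");
        return 0;
    }

    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);   // can GPU 0 touch GPU 1's memory?
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    std::printf("P2P 0->1: %d, 1->0: %d\n", can01, can10);

    if (can01 && can10) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);    // enable both directions
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);

        const size_t bytes = 64 << 20;       // 64 MiB test buffer
        void *buf0 = nullptr, *buf1 = nullptr;
        cudaSetDevice(0);
        cudaMalloc(&buf0, bytes);
        cudaSetDevice(1);
        cudaMalloc(&buf1, bytes);

        // With P2P enabled, this copy goes GPU-to-GPU (over NVLink if the
        // cards are bridged) instead of staging through host memory.
        cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
        cudaDeviceSynchronize();

        cudaFree(buf1);
        cudaSetDevice(0);
        cudaFree(buf0);
    }
    return 0;
}
```

None of that makes a game scale, of course; it just shows that the raw GPU-to-GPU path exists whether or not a game engine ever uses it.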
 

So does RTX 2080 have NVLink support?
 

YES, this card supports NVLink, but to take proper advantage of this feature, game developers, and even NVIDIA, need to do the coding/programming so that this multi-GPU support finds its way into gaming, similar to SLI.
 

OK, so again I ask: why did major devs stop supporting SLI/NVLink, and does that mean Nvidia/AMD will slowly phase out CrossFire/SLI/NVLink because devs no longer support it?
 
It's not worth it to them to invest the time, money, and manpower to code it into their games.
Why would they bother when only a small market uses it?
PhysX shares the same fate.

"Why would they both when only a small market uses it?" Maybe only a small market uses it because only a small market plays the games that support it. If more games supported it, more people would use it....??? Ya know what though. I think I know the reason SLI isnt supported anymore. When it first came out like 10 years ago.... its all like cool... this is the coolest idea, imagine how much better our games could run, then all of a sudden, the consumer realized.... wait... hang on a sec, why would I spend $1500 on a 2080TI when I can spend the same amount, buy two 2080s, SLI them and basically get 1.6x the power of a 2080Ti for the same price as one 2080Ti? All of a sudden, consumers stop buying the very top level card at these stupidly high prices, and just get two of the next one a step or two down with SLI because that SLI setup will easily do what they want. So NVidia and AMD realized this was happening and now there is no insentive to buy a 2080Ti for 1500 instead of 2080 for 750. SLI has stopped support for economic reasons. I know exactly whats happening. I bet NVidia and AMD are paying these big game developers to stop supporting SLI, so that the consumer now has to spend the big $1500 on the 2080Ti to get the best performance, and I bet a small cut of each 2080Ti/AMD equivilant sold are going to game developers to stop support. This is what we call BUSINESS
 
No, because that insinuates that Nvidia/AMD would have some control over what the devs wanted to do with their games.
Hardware manufacturers have no control over what devs support in their games. Period. If they did, they would make games impossible to run on anything other than top level flagship hardware.

SLI/Xfire is riddled with other issues beyond simple game support as well.
Increased latency, micro stuttering, heat, power consumption and so on. It was never a viable option.

While it's a fun hypothesis, the truth is SLI was never all that great.
 
No. It doesn't insinuate that Nvidia/AMD have control over what devs want to do. Devs have full control. It's just that Nvidia/AMD pay them not to support SLI, for the reasons I mentioned. My next question, though: will CrossFire and SLI be phased out, as in stopped altogether? I mean, I don't see why it's such a bad idea with all the performance gains from it. OK, so it consumes more power: get a better PSU. Too much heat: get better cooling. It was only for enthusiasts anyway, no?
 
You seem fairly set on getting an SLI setup.
Go ahead and try it out then, enjoy the ~20-60% increase in performance in the games that support it, and a 0% increase in the ones that don't.
Also enjoy the latency and frame stutter that comes along with it.

What about dual-core GPUs, or dual-GPU cards? Why don't I see any of those???
 
I just can't figure out why we can jam 8 cores into a CPU and 2 CPUs onto a motherboard, but can't get SLI/CrossFire to work properly.
Dual CPU is not much different.
The software has to support it. Large business-level (read: $$$$$) software does, like heavily task-intensive database functions supporting thousands of interactions per second.
The development cost in building that is justified, because the customers will pay for it and need it.

Very, very few consumer-level applications will talk to two CPUs.
 
OK, so the next question is: forget 2 CPUs, why do we jam 8 cores into one CPU, but apparently a dual-core GPU is useless?
Because it does not seem to be needed.

Acceptable, or even outstanding, price/performance can be had with current single-core GPUs and the games that run on them.
Why would a company spend the extra $$ on design/code/test for something whose benefit cannot be seen or tested?

Let's say you have a quad- or octa-core CPU. It does x calculations per microsecond.
It does exactly what you need it to do, with the current software you use.

If someone were to automagically swap your CPU for a dual- or single-core one that did exactly the same thing... would you notice?
If/when found out, would you be pissed off?
Why? It's still doing the same thing.
 
Multi-core inside the CPU is a way to process multiple instructions at the same time. Physics has a way of intruding.
Functionality can be made smaller, which means more "cores" can be put in the same-size package.

If you look, the physical size of a typical CPU has not changed in 20 years.
 
So what I'm getting is... the work that a GPU does can't be split into multiple threads to make use of a dual-GPU setup the way typical software can for a CPU, true?
If it were cost effective, performance effective, and physically possible... they would.

You might as well ask... why don't we have 30TB optical/molecular SSDs, or 32-core/64-thread CPUs?
We're not there yet.

Going back not too many years, a 1TB HDD was thought to be the absolute max.
My current GPU has more vRAM than my first 15 PCs had RAM + drive space + video RAM combined.

When/if multi-core GPUs or multiple GPUs in a single package become necessary and cost effective, you'll see them.

If one were available today, but it cost $4,500, and gave zero performance gain in current games...would you buy it?
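
To put the earlier question about splitting GPU work into concrete terms: there is nothing automatic about it. Here is a rough, purely illustrative sketch (assuming two CUDA-capable GPUs visible as devices 0 and 1, and nothing like a real game workload) of the explicit partitioning, launching, and gathering the application itself has to do, which is exactly the per-game effort developers have stopped paying for.

```
// Illustrative only: split a simple array job across two GPUs by hand.
// Each device gets half the data, runs the same kernel, and the host
// stitches the halves back together afterwards.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void scaleKernel(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;          // total elements
    const int half = n / 2;         // half per GPU
    std::vector<float> host(n, 1.0f);
    float* dev[2] = {nullptr, nullptr};

    // Partition: each GPU gets its own half of the input.
    for (int g = 0; g < 2; ++g) {
        cudaSetDevice(g);
        cudaMalloc(&dev[g], half * sizeof(float));
        cudaMemcpy(dev[g], host.data() + g * half,
                   half * sizeof(float), cudaMemcpyHostToDevice);
        scaleKernel<<<(half + 255) / 256, 256>>>(dev[g], half, 2.0f);
    }

    // Synchronize both devices and gather the results.
    for (int g = 0; g < 2; ++g) {
        cudaSetDevice(g);
        cudaDeviceSynchronize();
        cudaMemcpy(host.data() + g * half, dev[g],
                   half * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev[g]);
    }

    std::printf("host[0] = %f, host[n-1] = %f\n", host[0], host[n - 1]);
    return 0;
}
```

An embarrassingly parallel array job like this splits cleanly; a game frame, with dependencies between rendering passes and a strict latency budget, does not, which is why multi-GPU never became a free lunch the driver could hand out on its own.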