News An unholy SLI emerges — Intel's Arc A770 with Nvidia's Titan Xp pair up to provide 70% boost in FluidX3D

With a 70% performance boost on the table, it might seem hard to believe the gaming industry gave up on multi-GPU technology, a sentiment echoed by many commentators on the FluidX3D demo. After all, DX12 and Vulkan have solid support for it, GPU-to-GPU links are more robust than ever, and the latest versions of PCIe are very fast.

SLI scaling in games was rarely ever that good, and some games would see little to no increase in frame rates from adding a second card. And the value isn't likely to be there either. Two 4070 Tis would cost just as much as a 4090 (going by launch MSRPs), a card that offers more graphics cores than the two combined, along with double the usable VRAM, since VRAM in games generally needed to be mirrored between cards. The performance of the single-card solution would undoubtedly be more consistent as well, and not prone to issues like uneven frame pacing or other bugs or performance anomalies that sometimes affected multi-card setups. Putting a pair of enthusiast-level cards like 4090s or even 4080s into a single system would be even less practical. Top-tier enthusiast cards like the 4090 already basically fulfill the role of what SLI used to offer.
 
Eh, part of SLI's theoretical appeal was being able to buy a weaker GPU to start with, then add a second of the same model later on to improve performance. It was a great way to save costs on paper (all the fantasies of getting top-tier performance for less), but in reality it just never worked out.

It also didn't help that asymmetrical pairing took forever to arrive, and really only on AMD's side, where CrossFire could mix different card models. On paper that meant extra performance and less e-waste (the old card could keep propping up frame rates a bit longer), but it came too late, and the benefits never materialized because AMD could never quite solve the load balancing.

Considering that some of us still care about e-waste, it would be neat to reuse old GPUs by offloading tasks to them, the way PhysX once ran on a dedicated card for physics calculations, or the way some people now experiment with a second GPU for AI processing. The catch is that most consumer-grade motherboards no longer have the wired slots needed to run a pair at x16. Ironically, that's the main reason I miss the SLI era: not the SLI/CrossFire capability itself, but boards that had enough PCIe wiring to run at least two cards at x16 and split off a third slot for x8 or x4 duties, because SLI and add-in cards were popular back then. Now that's basically prosumer territory.
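For what it's worth, the compute side of that already works today: OpenCL, which FluidX3D itself runs on, will expose every GPU in the box regardless of vendor, generation, or slot width. A minimal sketch (not FluidX3D's actual code) of finding a mismatched pair and treating each card as an independent worker:

```cpp
// Minimal OpenCL device enumeration: lists every GPU from every installed
// runtime (Intel, Nvidia and AMD side by side). Each device can then get its
// own context/queue and crunch an independent slice of work.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[16];
    cl_uint platform_count = 0;
    clGetPlatformIDs(16, platforms, &platform_count);

    for (cl_uint p = 0; p < platform_count; ++p) {
        cl_device_id devices[16];
        cl_uint device_count = 0;
        // Skip platforms that expose no GPUs (e.g. a CPU-only runtime).
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 16, devices, &device_count) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < device_count; ++d) {
            char name[256] = {0};
            cl_ulong vram = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            clGetDeviceInfo(devices[d], CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(vram), &vram, nullptr);
            printf("GPU: %s (%llu MB VRAM)\n", name, (unsigned long long)(vram >> 20));
            // From here: clCreateContext / clCreateCommandQueue per device,
            // and assign each card its own chunk of the simulation domain.
        }
    }
    return 0;
}
```

As I understand it, FluidX3D's multi-GPU mode works along these lines: the domain is split across the cards and only thin boundary layers cross PCIe each step, which is why mismatched cards and modest slot bandwidth hurt far less than they did for SLI rendering.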
 
Device                   FP32 [TFlops/s]   Mem [GB]   BW [GB/s]   FP32/FP32 [MLUPs/s]   FP32/FP16S [MLUPs/s]   FP32/FP16C [MLUPs/s]
Radeon RX 6900 XT        23.04             16         512         1968                  4227                   4207
1x A770 + 1x Titan Xp    24.30             24         1095        4717                  8380                   8026

Like the page says, it's a memory-bandwidth-bound program, so it can perform far better than the 6900 XT even though the two setups have about the same FP32 compute: twice the bandwidth means twice the performance.
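For anyone who wants to sanity-check that, here's a back-of-the-envelope roofline using the table numbers. The only outside assumption is the memory traffic per lattice cell, roughly 153 bytes per cell per step for D3Q19 in FP32/FP32 according to the FluidX3D docs (quoted from memory, so treat it as approximate):

```cpp
// Bandwidth-bound ceiling for one LBM step: cells/s = (bytes/s) / (bytes per cell).
// The 153 B/cell figure is an assumption taken from the FluidX3D documentation
// (19 DDFs * 4 bytes, read + write, plus one flag byte); bandwidths come from
// the table above.
#include <cstdio>

static double mlups_ceiling(double bandwidth_gb_s, double bytes_per_cell) {
    return bandwidth_gb_s * 1e3 / bytes_per_cell;   // (GB/s * 1e9 B/s) / (B/cell) / 1e6
}

int main() {
    printf("RX 6900 XT      (512 GB/s):  ~%.0f MLUPs/s ceiling, 1968 measured\n",
           mlups_ceiling(512.0, 153.0));
    printf("A770 + Titan Xp (1095 GB/s): ~%.0f MLUPs/s ceiling, 4717 measured\n",
           mlups_ceiling(1095.0, 153.0));
    // Both land in the same ballpark fraction of their ceiling (~60-66%),
    // which is why the bandwidth ratio, not the FP32 ratio, predicts the result.
    return 0;
}
```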
 

As a former user of SLI and CrossFire setups, I don't miss them at all. For a long time I couldn't quite explain why they never felt as good as a single powerful card, until frame-pacing analysis started getting more popular. My experience with multi-GPU setups is that however much they increased the FPS, the result was never fluid.

The case for multi-GPU was always built on the false premise that it nearly doubled your experience, and it never did that. It improved average FPS at the cost of horrible pacing. When one of the cards alone could achieve low frame times, you didn't feel the bad pacing as much. When it couldn't, it was a mess and felt considerably worse.
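To put some purely hypothetical numbers on that (illustrative, not measurements): an AFR pair that alternates fast and slow frames can report a healthy average FPS while the motion still hitches at the pace of the slow frames.

```cpp
// Illustrative only, the kind of thing frame-time analysis made visible:
// alternating 8 ms and 25 ms frames averages out to roughly 60 FPS on paper,
// but the animation cadence is gated by the 25 ms frames, i.e. about 40 FPS.
#include <cstdio>

int main() {
    const double frame_ms[] = {8.0, 25.0, 8.0, 25.0, 8.0, 25.0};  // hypothetical AFR frame times
    double total_ms = 0.0, worst_ms = 0.0;
    for (double t : frame_ms) {
        total_ms += t;
        if (t > worst_ms) worst_ms = t;
    }
    const int frames = (int)(sizeof(frame_ms) / sizeof(frame_ms[0]));
    printf("reported average: %.1f FPS\n", 1000.0 * frames / total_ms);  // ~60.6 FPS
    printf("pacing floor:     %.1f FPS\n", 1000.0 / worst_ms);           // 40.0 FPS
    return 0;
}
```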

For those who may be curious, check the 30 FPS comparison below.
 
Can we PLEASE stop chanting the same old "multiGPU is dead" nonsense?

You obviously haven't got a clue what you're talking about!

Multi-GPU rendering is still perfectly possible; the problem is that no developer has cared to implement it.

You can do quite a lot of different things at the same time with two GPUs, and Vulkan is by far the most flexible at setting up two GPUs, yet here we are... news outlets crying that it's dead because Nvidia stopped active driver support, which has absolutely nothing to do with DX12 and Vulkan support!

So please, stop spreading fake news, and to all you self-proclaimed "gamers" and developers... stop repeating nonsense spread by smollbrain news outlets and read the frilling documentation!

Edit: apparently nobody understands.

Falling back to comparing numbers, slinging percentages... anyone dropping a percentage doesn't know what he or she is on about at all.

I have 200 kg of tomatoes, and they are 99% water. After two days I'm left with just 100 kg. What percentage of the water was lost?
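Worked out, assuming only water evaporates: 200 kg at 99% water is 198 kg of water plus 2 kg of solids. At 100 kg total the solids are still 2 kg, so 98 kg of water remains; the load went from 99% to 98% water even though 100 kg, slightly over half of the original water, evaporated. That gap between "one percentage point" and "half the mass" is exactly the point about bare percentages.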
 
There is a thriving marketplace for old GPUs on eBay (or whatever second-hand market you want). Even really old parts that lost driver support long ago still have an audience.

Anyone that throws away a GPU is literally throwing away money.

I agree on wanting to reduce e-waste, but any card that would be of remote interest in a potential SLI setup can easily find a new home on an auction site or secondary market.
 
It seriously is a sorely missed opportunity, tossed out just because "it was hard back then."

All this other stuff works with 2-4+ GPUs no problem, then and now. And I thought the whole point of mGPU being added to the APIs was literally to take all that "hard work" off the devs (sheesh, as I read it back then, it even sounded like somewhere between "check a box and do a little fiddling" and "the API just handles it" in DX12 and Vulkan).
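For reference, the "explicit" plumbing really is there on the Vulkan side. Here's a minimal sketch (hypothetical, not from any shipping engine) that just asks the driver which device groups exist; in practice the application still has to divide the rendering work, the API only exposes the devices:

```cpp
// Minimal Vulkan 1.1 sketch: list the physical-device groups the driver reports.
// Linked cards (old-style SLI/CrossFire) show up as one group with multiple
// devices; unrelated cards (say, an Arc next to a GeForce) show up as separate
// single-device groups that the app can drive independently.
#include <vulkan/vulkan.h>
#include <cstdio>

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_1;          // device groups are core in 1.1

    VkInstanceCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &count, nullptr);

    VkPhysicalDeviceGroupProperties groups[16]{};
    for (uint32_t i = 0; i < count && i < 16; ++i)
        groups[i].sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    if (count > 16) count = 16;
    vkEnumeratePhysicalDeviceGroups(instance, &count, groups);

    for (uint32_t i = 0; i < count; ++i)
        printf("device group %u: %u physical device(s)\n", i, groups[i].physicalDeviceCount);

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```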

I want to see it happen and won't stop wanting to see it. It feels like we could be due a new "Crysis" moment, so here's hoping someone takes the plunge.
 

If a piece of hardware has had no real developer support in years, it can effectively be considered "dead", at least as far as being something worth buying is concerned. Today's demanding games do not support multiple GPU rendering, so no one is buying multiple GPUs for playing modern games. And since no one has been buying multiple GPUs to run modern games, there is no reason for developers to put resources toward supporting the feature. The already-small multi-GPU ecosystem completely collapsed, and would likely need a total reboot to ever be considered viable again. There's always the possibility of that happening down the line, but with videocard manufacturers pushing high-powered single cards to fulfil that role, it doesn't seem like that will be happening any time soon.
 