Pros and Cons of SLI


EpIckFa1LJoN

Admirable
So I am thinking about upgrading from my single Zotac Amp! Extreme 1080 to either a single-card or 2-way 1080 Ti configuration, but I have no idea what comes with using SLI. Cost is not that much of an issue for me, so don't worry about the relatively small price difference between the 1080 and 1080 Ti. I don't have so much money that I'm willing to buy two Titan XPs, though.

I don't know much about SLI, but here are the pros and cons as I currently understand them.

Pros:
Significantly increased performance
With a 1080 Ti I should be able to run very high if not ultra settings at 4K.
Will be able to run ultra-wides if I ever want to do that.
Less stress on a single card

Cons:
Weight of potentially 2 Zotac Amp Extreme cards (I'd go with the Asus ROG Strix if I had to); I use a brace with the single card I have now or it sags very badly
Not all games support SLI, or scale well with it
More heat in the case
Possible space restrictions
Might have to disable SLI (or pull a card) for some games
Restricted space around the M.2 slot

For one, I plan on running two separate screens (not synced):
1.) The upcoming Asus ROG Swift 144Hz 4K G-Sync 27" monitor
2.) A similar Asus ROG Swift 144Hz 1440p G-Sync 27" (PG279Q)

The second screen would be used instead of windowing out, so I can multitask while playing games, or as a gaming screen if a game isn't well optimized at 4K or something like that.


So, just in case anyone is wondering, here are the rest of my system specs:

System:
OS: Windows 10 64-bit
CPU: Intel i7 6700K
Cooler: Corsair Hydro H115i
MB: Asus ROG Maximus VIII Formula
RAM: Corsair Dominator Platinum 16GB (2 X 8GB) 3200MHz
GPU: Zotac GTX 1080 Amp! Extreme (For sure upgrading to the 1080 Ti)
Storage: Samsung 950 Pro 512GB (soon)
Samsung 850 Pro 512GB (main drive)
PSU: Corsair AX750 80+ Gold (about to be upgraded to a Seasonic Prime 850W 80+ Titanium)
Case: Corsair 750D Airflow ed.

So would I be better off doing SLI or sticking with a single card?
 
Solution


Honestly, you could probably stick with what you have now; a single 1080 should hold ~60 fps for that. It might even push 80-90 in some games, but that's probably the limit. You saw my numbers above: two 1080s easily crush 1440p/144Hz and only occasionally drop to...


Yeah, I'm thinking the same thing. 4K is nice, but it requires a lot of power that doesn't really exist in a single card right now, even though I realize 4K is becoming more popular.

And yeah, I'll keep you posted. I'm really hoping to get the monitor as soon as I can; the only problem is that I've been buying everything so far on credit and I owe about $1200. At the rate I'm able to pay it off right now, it will be about 6 months before I should buy anything, lol. But hopefully my Christmas bonus and all the OT I'll be putting in over the next few months will speed that up so I can get it by Christmas.

Will let you know about the performance once I actually do get it.

Feel free to add me on Steam if you are on; EpickFa1LJoN
 


Yeah, I realized that shortly after looking into it, lol. I based the decision on the fact that I already have an Acer monitor and I think it is really great, especially for a $200 monitor. Plus it has a 4ms response time and the Asus has 5ms. Slight difference, but meh.
 


Those are just different ways of measuring. Response time is notorious for being unreliably advertised.
Those screens use the same panel so they literally have the exact same characteristics.
 


Well, either way, Acer it is.
 


Thought I'd update. I did upgrade my display first, to the X34. It's really awesome and I'm loving it. However, it is still hard to run some games at max with the 1080 (the AMP Extreme, which performs considerably better than most other 1080s). Battlefield 1 ALMOST runs at 100 fps on ultra with 125% resolution scale (I saw a video of a guy with the same CPU and an FE 1080 running at 50% scale and struggling to stay between 70-80 fps). I scaled it down to 75%, still at ultra, and it runs mostly at 99.5-100 fps with occasional drops into the 80s; G-Sync helps with that. Tomb Raider (2013) at ultra gets 88-100 fps (about 97 avg). Fallout 4 (uncapped), with the mods to let it run at 21:9, works very well.

WoW still has its issues, but it's not too bad. I can hover between 80-100 fps in most Legion areas; in the cities it's still terrible (between 60-90 in Dalaran, never over 60 and as low as 40 in Stormwind). Also, no matter how low I set it, the framerate more or less stays the same; I was only able to get up to 120 in Dalaran on LOW. So it's a mix of the game's poor optimization and its really old engine.

In short, the GPU is still being run HARD. It's not uncommon for me to see it sitting at 100% utilization, and without the custom fan profile it will get up to 70C. Normal gaming temps at the moment are around 55-60C because it is being pushed so hard.

So while I love it and I think it is sufficient, I don't like how hard the GPU has to work, and I think I will still be upgrading to the Ti when an AMP Extreme version becomes available (or we'll see which one is best at that point).
 


What do you mean by that? If you upgrade the GPU so it doesn't work as hard, that means the CPU is now the weak link, and it is working super hard. If you're happy with the performance, don't fix it. Most of those games you listed are likely CPU bound anyway and won't let you get higher FPS.
 


Not necessarily. At most I only use 60% of the CPU, and that's pretty much limited to BF1. I almost never use more than 30% or so of the CPU, and I have a very low OC on it (a whopping 100MHz), so if I run into any CPU problems I can always really OC it. And again, those cases are limited to extremely CPU-heavy games like DOOM and Battlefield. If the GPU is running at 100% and the CPU is at 50%, getting a better GPU will most certainly give me higher performance and FPS; that's basic chemistry: increasing the limiting factor gives more product. And if the CPU ends up bottlenecking me, I'll just upgrade that too, lol.

I didn't say I was "happy" with the performance of the GPU at this point, just very happy that the monitor looks fantastic and the GPU is giving me what I consider playable framerates at ultra quality. But I want the highest available quality and I can't get that with my GPU, so I'm upgrading. (BTW, scaled down to 75% resolution means I am technically playing at 1080p, and I don't like that.)
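
To put that limiting-factor point in concrete terms, here's a toy sketch (the frame times are made up, purely to show the max(CPU, GPU) relationship):

```python
# Toy model: each frame costs the slower of CPU prep time and GPU
# render time, so FPS is capped by whichever side is the bottleneck.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

print(fps(cpu_ms=8.0, gpu_ms=14.0))  # ~71 fps: GPU-bound (GPU at 100%)
print(fps(cpu_ms=8.0, gpu_ms=10.0))  # 100 fps: a faster GPU still helps
print(fps(cpu_ms=8.0, gpu_ms=6.0))   # 125 fps: now the CPU sets the cap
```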
 
A 6700K should be good for 100-120+ FPS in nearly every game, so I don't think that's really the limiting factor here, bystander. There's a reason i7s are recommended for 144Hz screen setups. So even if the X34 is overclocked to 100Hz+, the i7 will not limit the setup.

But I will say that just because your CPU usage is below 100% doesn't mean your CPU isn't bottlenecking (in this case it is not; you've noted the 100% usage on the GPU core). The CPU could be limiting things simply because the game isn't able to run across all 8 threads; that would leave your CPU well short of 100% overall while still bottlenecking your GPU.

Again, that's not happening in this case; just food for thought.
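
If you want to check this yourself, watch the per-core numbers instead of the aggregate. A rough sketch using the third-party psutil package (pip install psutil):

```python
# Aggregate CPU % can hide a single-thread bottleneck: on a 4c/8t chip,
# one pegged thread reads as only ~12% overall yet can still cap the GPU.
import psutil

per_core = psutil.cpu_percent(interval=1.0, percpu=True)
aggregate = sum(per_core) / len(per_core)
print(f"aggregate: {aggregate:.0f}%")
print(f"per-core:  {per_core}")
```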
 


It's not as simple as you seem to think. CPUs will not hit 100% in more than maybe 2 games in existence, even when they are the bottleneck. The complexities that go into that are hard to describe, but know that CPU bottlenecks on an i7 may show up with as little as 30% usage in extreme cases. If the GPU is hitting 100%, then it's not being bottlenecked, but you know as well as I do that WoW, when raiding at least, does not allow 100% GPU load. BF1, in many spots, doesn't either.

And there isn't an upgrade to be made for your CPU, and an OC will make very small gains. Right now, things are pretty good. If you go SLI, your GPU usage will almost never hit 100% again.
 
Well, again, what I know now vs. what I knew when I made this post is that SLI won't give me that much more power either, and it doesn't work in all games. I would get maybe a 30% increase in power, which is roughly equivalent to the Titan XP (a little more). But at the price the 1080 Ti is suggested to hit, and by almost every rumor, the 1080 Ti will surpass the performance of the Titan; at that point I could roll the money from the 1080 into a new Ti. As far as the game "not allowing" 100% usage, I wouldn't say it's not allowing it so much as it isn't requiring anywhere near that.

Aside from that, a trademark sign of the CPU bottlenecking the GPU is high CPU usage and LOW GPU usage, which is not the case here. And what the heck would the difference be between simply adding another 1080 and upgrading to the 1080 Ti? Almost none, even for the CPU. But again, I highly doubt it will bottleneck; I've NEVER heard of a 6700K bottlenecking anything.

In this case the GPU is the bottleneck. If somehow, and I don't see how, the CPU started to bottleneck, I would simply upgrade that as well. I was going to upgrade to Cannonlake when it comes out anyway, assuming the ROG Maximus X boards are good and don't look stupid like I think the IXs do.
 
i7 6700Ks bottleneck all the time: almost every WoW raid, Neverwinter Nights, most any MMO, GTA V, Assassin's Creed (one of the newest versions), Arma games online, and just about any case where someone buys a high-end GPU setup in an attempt to get 144 FPS at all times. That isn't to say it isn't good enough for 60 FPS, but it will bottleneck a Titan X(P) at 1080p in many games.

Many games are not really designed to allow more than 60 FPS at all times. That's basically what holds things back. You are good atm. I was just saying that you don't need to upgrade so your GPU has an easier time; you'll just shift the work to your CPU.
 
Well, really, that's a problem with the games themselves, not with the hardware I have or will have. Games that are optimized well will perform great.

And again, it really doesn't matter, since like I said I am planning on upgrading to Cannonlake (Z370?) when it is available as well.

I will definitely be looking into individual cores/threads now for sure, thanks for the heads up on that, but I still fully plan on upgrading xD lol.
 
So WoW is DEFINITELY bottlenecking, lol. But BF1 is pretty good, so I think a GPU upgrade would serve well at least in that game and others that utilize multiple cores better than WoW. Anyway, I saw one core being utilized at like 80% while another was at 50%, and all the other cores and threads were at like 15%. Definitely a bottleneck. Thanks for the heads up.
 
Just saw the newest posts on this thread today.

With my SLI setup I usually average 90-120 fps at 1440p/G-Sync, and 55-60 fps (V-sync on) or 60-85 fps (V-sync off) at 4K, for the 2015-2017 games I have. Older games (2010-2015) will usually hit closer to 144 fps on the G-Sync screen and ~100 at 4K. The GPUs are usually pinned at 90-100% for 4K and 75-90% at 1440p.

I did some experiments over the last few months with trying to keep the card temps at or below 60C, because I noticed how NV really starts to lower the clocks just past that point. Above 60C they usually drop to 1.8-1.9 GHz, and below 60C they will go as high as 2.2 GHz. At around 2.1-2.2 GHz I noticed about a 5-8 fps increase. This suggests to me that if a GP104 or GP100 GPU could safely run in the 2.4-2.6 GHz range, it would likely deliver the fps we all want for 1440p/4K.
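
If anyone wants to watch that temp/clock behavior themselves, something like this logs it once a second (a rough sketch; it assumes nvidia-smi is on your PATH):

```python
# Log GPU temperature, graphics clock, and load once a second, so you
# can watch GPU Boost step the clocks down as the core warms past ~60C.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=temperature.gpu,clocks.gr,utilization.gpu",
         "--format=csv,noheader"]

while True:  # Ctrl+C to stop
    print(subprocess.check_output(QUERY, text=True).strip())
    time.sleep(1)
```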

I also experimented with the different bridges (the NV HB bridge and standard ribbons). As others have reported, the HB bridge seemed to hinder lower resolutions while helping at 4K, but overall it gave a more consistently stable fps. I also started closely monitoring PCIe usage: at 4K it peaked around 25%, and closer to 15-20% at 1440p.
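
If you'd rather script that kind of bus monitoring than sit in a GPU tool, nvidia-smi's device-monitor mode can stream PCIe throughput directly, though it reports MB/s rather than a percentage. A minimal sketch, same nvidia-smi assumption as above:

```python
# Stream per-GPU PCIe Rx/Tx throughput (MB/s); "-s t" selects the
# PCIe throughput columns in nvidia-smi's dmon mode. Ctrl+C to stop.
import subprocess

subprocess.run(["nvidia-smi", "dmon", "-s", "t"])
```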

Like you, I hardly ever see my CPU go above 30-50%. I slightly agree with bystander about the complexities of identifying a CPU bottleneck, but I put more emphasis on how sloppy most engines have gotten with thread optimization. I know my 4930K isn't technically a 'gaming' CPU, but it's still ridiculous to think that an OC'd hex-core with hyperthreading could even be a bottleneck source. My CPU temps usually hang in the 55-65C range.

I have my doubts about the accuracy of Task Manager, but it will often show only minimal usage on all physical cores and almost nil on the virtual ones. A few physical cores might ping around 60% while the rest are barely used. It also shows the clocks going up to 4.68 GHz in those moments. Ultimately I believe the term "CPU bottlenecking" needs to be abandoned and a more technical description conceived. When everything from 4c/8t chips clocking 5-6 GHz to 6c/12t and 8c/16t chips around 4-5 GHz mostly lands within 10-15 fps of each other, I'd say it's pretty obvious there's more than just a CPU issue occurring in these games; yet if you can add 200-300 MHz to one of these GPUs, you see nearly identical performance gains.

Like many, I'm waiting to see what the Ti actually ends up being. We've only got a couple of weeks until the next rumored announcement. I might have a little extra money towards the middle of summer and am contemplating putting one in my 2600K system. The more complex part of comparing the 1080, Ti, Titan XP, and yes, even Quadros, is the core differences: architecture and memory bus width. I've recently been reading some of the Pascal Quadro reviews and they have some very interesting gaming benches, but yes, insane prices (I know, I know, they're not gaming GPUs, but still...).
 
Well, I will probably be upgrading anyway unless it turns out to be completely underwhelming. I'd say as long as it gets close to Titan XP performance, I will get it.

Just an update on how the single 1080 (albeit a heavily OC'd 1080) performs with the 21:9 1440p.
 


Why is it that if a game pushes your hardware, it's considered sloppy coding? Why do people think the CPU should never be pushed, but it's OK for the GPU? Shouldn't the devs utilize both the CPU and GPU as much as they can, if the resource is available?

As for 6-core/12-thread CPUs not being fully utilized, that may start to change, but ultimately the reason behind it is that the job that holds things back, the draw-call thread feeding the GPU its instructions, is a linear task and does not like to be multithreaded. DX12 and Vulkan do have methods to help split it up and will start being used more, but DX11 was very limited in its ability to do so.
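
A back-of-the-envelope sketch of why that one submission thread caps FPS (the numbers are invented purely for illustration):

```python
# DX11-era: one render thread validates and submits every draw call,
# so its per-frame cost sets a hard FPS ceiling no matter the GPU.
draw_calls = 5000      # draws per frame (made-up)
us_per_call = 4        # CPU microseconds per submitted call (made-up)
record_threads = 8     # threads recording command lists (DX12/Vulkan style)

serial_ms = draw_calls * us_per_call / 1000
parallel_ms = serial_ms / record_threads  # idealized perfect scaling

print(f"serial submit:   {serial_ms:.1f} ms/frame -> caps ~{1000 / serial_ms:.0f} fps")
print(f"parallel record: {parallel_ms:.1f} ms/frame -> caps ~{1000 / parallel_ms:.0f} fps")
```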

Until DX12 and Vulkan are widely adopted and games are built for them from the ground up, just realize that any game which is well threaded is one that is exceptionally coded, and those which aren't are just doing what is basically possible with DX11 and earlier APIs. And devs do not design their games for those who think 144 FPS is needed. I don't mean that as a slight against you, as I too use a 120Hz monitor and would like high FPS, but the games are not meant to be played that way.

It's easy to buy a GPU to improve performance, as GPU makers just have to add more cores; the work a GPU does is parallel. CPUs are tough, as they perform linear tasks, which require a faster core, not more cores. That is not due to sloppiness, just a limitation of developing a game.
 


I voted up your answer because I appreciate its clarity. I've read other posts attempting to explain those details, but they're often more confusing to read.

My comment about sloppy coding didn't relate to pushing the hardware, but quite the opposite. Again, though, I think your answer helps explain it. What I'm talking about is more of this common throwing around of the "CPU bottleneck" thing. I think only a small handful of people really understand it (I know I don't), while many around the world keep reaching for a bigger hammer when it's not necessarily needed. Whether using 2nd through 7th gen CPUs with the best-rated mobo/cooling/PSU/RAM for each generation, I find it odd that for the effort and money, those CPU differences only show minimal FPS changes (at most maybe 10-20 fps, but more like 5-10) using the same GPUs. I feel sorry for the people building 6th and 7th gen K systems pushing 5 GHz with equally fast DDR4, only to be told they have a CPU bottleneck. I just don't buy it. I agree that the GPU is the easier solution; I've yet to experience it not being.

Even though I've never done modern game programming, I do understand linear instruction streams and why they wouldn't necessarily like HT or virtual cores. It's just that we've had this kind of CPU tech for around 20 years (I still remember how excited I was 12 years ago when I did my last P4 upgrade, to a 4.3 GHz HT model) and those cores are still basically just sitting there. It's even more depressing when I see half the physical cores (4 or 6) close to idle while the other half are hammered. That's what I mean by sloppy. If PC game devs don't use engines that can optimally use even 4 fast cores, it seems wasteful to increase anything but speed. There are plenty of i5 users out there bragging about their achievements, and I recently read at HardOCP how it was easier for them to OC a 7600K.

I occasionally do video conversions, 3D rendering, and an assortment of media tasks. Around once a year I'll try different programs and test them for both multithreaded use and hardware acceleration. When everything is being used, it's beautiful to see, and so is the final product. From 1080p to 4K, things take minutes. I happily give my money to the programmers who take that extra care, since it saves me time and money. When a program doesn't, it's like a P2 rendering a VCD: come back tomorrow, maybe.

I also agree with you that 120/144 fps isn't really a target for the devs. I get it. It's just a shame that even 60 seems to be an effort for them. I still remember reading about 120Hz ten years ago and thinking back to school, when some scientists believed humans couldn't see more than 60 fps. I remember getting my first 120Hz display and thinking, wow, that's smooth; now I understand. I admit, though, I can't notice much difference past 120-130 with a 144Hz panel.

Twenty years ago a friend said something to me while I was troubleshooting: "You ride the edge of the wave of technology; the problem is that it's very lonely at the edge." That's exactly how many of the engines in use leave us enthusiasts feeling.

 


Thanks for the update, especially how it relates to 21:9 1440p. There aren't a whole lot of reviews on that. A few, but not many.

FYI, don't be alarmed if the Ti doesn't have HBM2 as rumored; many are still saying GDDR5X is plenty fast for these GPUs. That's why I'm considering matching one with my 2600K.

 
ledhead11, I find the biggest improvement from higher FPS comes on the latency side, but for me, I'd be happy with 90Hz. It seems the VR devs are targeting 90Hz/fps for the same reason I do in normal games: I get nauseated with less in first-person games. My mind must be easier to trick into feeling realism than most people's, who need 3D tech to get there (I have the same issue with 3D Vision as I do with any non-3D game).
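
The latency side is easy to put numbers on, since each frame at a given refresh rate takes 1000/Hz milliseconds:

```python
# Time per frame at common refresh rates: the jump from 60 to 90 Hz
# saves far more milliseconds than the jump from 120 to 144 Hz.
for hz in (60, 90, 120, 144):
    print(f"{hz:>3} Hz -> {1000 / hz:.1f} ms/frame")
# 60 -> 16.7, 90 -> 11.1, 120 -> 8.3, 144 -> 6.9
```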

Anyway, it's exciting to see DX12 and Vulkan starting to show up. That should start to allow for better use of multi-core CPUs. I went with a hex-core this time around because I knew they were coming and I planned to hold onto my CPU for 8-ish years, like I did my last one. Hopefully I made the right choice.
 


I wholly agree on all these points. I only very selectively use my 3D Vision anymore, for similar reasons. I will say the 1440p/144Hz renders insanely clearly, though, and I suspect the VR pipeline features they put into Pascal might somehow be helping, since I didn't see the performance drop I did with the Maxwells. The potential of DX12 and Vulkan is too cool to ignore: they can make all our lives easier, with better gaming and even more affordable solutions. The shoehorned patches so far have been unimpressive (except Doom, which I consider a coding masterpiece).

I did the same with the build in my signature, although a lot of it had to do with an unusual Black Friday deal (~$400 for both) that let me get the 4930K and that motherboard (which at the time allowed me to run PCIe 3.0 x16/x16 SLI with OC'd 970s and an x8 SC 780 for PhysX). It was obnoxious to look at, but even then the Metro games and Batman: Arkham City ran at 60 fps in 4K with everything maxed, V-sync, etc. It's another reason I'm bummed that NV abandoned hardware PhysX, but that's another story/thread. At least I now know I'm good for a few more years and only need to upgrade GPUs as needed.