Dual GPU cards 16x vs 8x

Timbotron

Distinguished
Oct 18, 2011
Do dual GPU cards like the 590, Mars II, 6990, or the 6870 X2 benefit more from full x16 lanes than from just x8?

I've read the three-article series on SLI and CFX scaling that's up on this site, but it only uses single GPU cards when testing x16 vs x8 and the effects of the NF200.
 
Solution
Ahhh, dual GPUs on one PCB... OK, it's the same idea even if you run the cards at x16 or x8: no significant drop in bandwidth, and that was the video I was going to provide.
But it's still not quad CFX; I can't find any benches showing two GTX 590s in x16 vs x8 mode, but as I told you, it's the same idea: no significant drop in bandwidth.
Welcome to the Forums.
Technically there's no real difference between x16/x16 and x8/x8; there's a slight difference in performance that isn't noticeable in games, and you'll only really see it in a benchmark program like 3DMark.
For example, x16/x16 might score 150 and x8/x8 might score 155; it's basically within the margin of error.

You are good to go with either configuration, no worries at all.
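For a rough sense of the raw numbers behind that, here is a minimal sketch of the theoretical PCIe 2.0 bandwidth per link width; the 500 MB/s-per-lane figure is the standard PCIe 2.0 rate after 8b/10b encoding, and the closing comment reflects the thread's consensus rather than a measurement:

```python
# Theoretical PCIe 2.0 bandwidth per link width (a sketch, not a benchmark).
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, which works out
# to roughly 500 MB/s per lane in each direction.

MB_PER_LANE = 500  # PCIe 2.0, per direction, after encoding overhead

def link_bandwidth_gb(lanes):
    """Theoretical one-way bandwidth in GB/s for a given link width."""
    return lanes * MB_PER_LANE / 1000

for lanes in (16, 8, 4):
    print(f"x{lanes}: ~{link_bandwidth_gb(lanes):.1f} GB/s per direction")

# x16: ~8.0 GB/s, x8: ~4.0 GB/s, x4: ~2.0 GB/s. A single 2011-era GPU
# rarely sustains anywhere near 4 GB/s of PCIe traffic in games, which is
# why x8 vs x16 barely shows up in benchmarks.
```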
 
Thanks for the welcome :)

I have looked around the net quite a bit for stuff concerning x16 vs x8, and there is a pretty good consensus that the difference is negligible, but pretty much everything is with regard to single GPU cards, which is why I decided to ask. Has anyone actually done benches specifically evaluating dual GPU card performance at x16 vs x8?

The microstuttering article here does have plenty of dual GPU card data, but it was done on the Gigabyte Z68X-UD7 mobo, which has an NF200 chip and dual x16, so I'm assuming all those fantastic results are for that configuration.

Also, given that there's little to no difference in performance between x16 and x8, are current GPUs simply not powerful enough to saturate even x8 yet?
 
Navigate to the four-minute mark and listen to what he says from 4:03 to 4:56:
http://www.youtube.com/watch?v=XS5rBS8n-LM
And here is a real-world benchmark comparing the two; the video is one minute long:
http://www.youtube.com/watch?v=NFMzRZqFh-w

Microstuttering was explained recently by Tom's Hardware; the issue is noticeable with low-end GPU configurations and barely noticeable with strong GPUs such as the HD 6950 / GTX 560 Ti and above.
The issue never appears with 3-way CF/SLI.
 
I'm looking for benches which specifically compare performance of dual GPU cards in x8 vs x16.

All the benchmarks out there that use dual GPU cards and multi-card configurations for either tri or quad SLI/CFX in only two slots always use x16, either on a 1366 platform with native x16 per slot or on 1155 platforms with NF200 chips to get x16 each.

Yes, the other benches show that a single 580 or 6970 GPU isn't enough to saturate x8 to the point where x16 is actually needed, but does doubling up the GPUs per slot finally need that bandwidth?
 

 
http://www.tomshardware.com/review [...] 761-4.html
A little old review.
None of those are dual GPU cards.


Linus showed two GTX 580s on a 1366 platform, and the difference was zero to one percent, which means x8 = x16; so technically he showed x16 :)
I understand that, but you're missing the point again. I don't care about a 580 at x8 vs x16.

I care about the 590 or 6990: two GPUs on one PCB plugged into one slot. If that is plugged into x16, does it make it equivalent to x8 per GPU? Or if it's plugged into x8, does it make it x4 per GPU and thus an actually significant performance drop?
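For what it's worth, here is a back-of-the-envelope sketch of that per-GPU split, under the assumption that both GPUs behind the card's onboard bridge (NF200 on the 590, a PLX switch on the 6990) are moving sustained data at the same time; bursty traffic can let each GPU see the full host link, so this is a worst case, not a measurement:

```python
# Worst-case per-GPU share of the host link behind a dual-GPU card's
# onboard PCIe bridge, assuming both GPUs stream simultaneously
# (an illustration of the arithmetic, not benchmark data).

PCIE2_MB_PER_LANE = 500  # per direction, after 8b/10b encoding

def per_gpu_share(slot_lanes, gpus=2):
    total_gb = slot_lanes * PCIE2_MB_PER_LANE / 1000
    return total_gb, total_gb / gpus

for lanes in (16, 8):
    total, share = per_gpu_share(lanes)
    print(f"x{lanes} slot: ~{total:.0f} GB/s total, ~{share:.0f} GB/s per GPU "
          f"if both GPUs transfer at once")

# x16 slot: ~4 GB/s per GPU worst case (roughly a single card at x8).
# x8 slot:  ~2 GB/s per GPU worst case (roughly a single card at x4).
```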


EDIT:
nm, found what I was looking for

http://www.youtube.com/watch?v=rSfifE2Domo

Anyway thanks for your time.
 
Solution
The question to my mind is moot in the sense that twin 590s or 6990s is a terrible purchase. I can't imagine spending an extra $700 for an 11% increase in frame rates.

Guru3D uses the following games in their test suite: COD-MW, Bad Company 2, Dirt 2, Far Cry 2, Metro 2033, Dawn of Discovery, and Crysis Warhead. Total fps (summing fps in each game @ 1920 x 1200) for the various options is shown in parentheses (single card / SLI or CF) below, along with their cost in dollars per frame, single card - CF or SLI:

$ 725.00 6990 (762/903) $ 0.95 - $ 1.61
$ 750.00 590 (881/982) $ 0.85 - $ 1.53

An extra $750 to go from 881 to 982 ????? Not me.

If you're anxious to spend the same $1,500 anyway, you could have three 580s, and they get 1030 fps...
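As a quick check on the dollars-per-frame figures above, here is the arithmetic in a small sketch; the prices and summed fps come straight from the table, and the only assumption is that the CF/SLI column simply doubles the card price:

```python
# Dollars-per-frame arithmetic behind the table above.
# Assumes the dual-card (CF/SLI) cost is simply twice the card price.

cards = {
    # name: (price_usd, single_card_fps, dual_card_fps)
    "HD 6990": (725.00, 762, 903),
    "GTX 590": (750.00, 881, 982),
}

for name, (price, single_fps, dual_fps) in cards.items():
    single_cost = price / single_fps        # $/frame with one card
    dual_cost = (2 * price) / dual_fps      # $/frame with two cards
    gain = dual_fps - single_fps
    print(f"{name}: ${single_cost:.2f}/frame single, ${dual_cost:.2f}/frame "
          f"dual, +{gain} summed fps for another ${price:.0f}")

# HD 6990: $0.95/frame single, $1.61/frame dual
# GTX 590: $0.85/frame single, $1.53/frame dual
```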

As for the "no difference" claim, this article says there is a difference, at least w/ some games (STALKER) and at the higher resolutions:

http://www.tomshardware.com/reviews/p67-gaming-3-way-sli-three-card-crossfire,2910-7.html

My beef w/ this article is twofold:

1. Where's the 580, 590 and 6990? Ya ain't gonna start to push bandwidth w/ just the low to mid range cards. Kinda like testing whether 200 mph tires have an effect on high-speed handling on a car that tops out at 100 mph.

2. I am less interested in average fps than I am in minimum fps. As with memory speed and CAS timings, we don't see a real impact on average fps, but we do see an impact on minimum fps. An article like this w/o the top cards and w/o looking at minimum fps misses the obvious questions.
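To make that average-versus-minimum distinction concrete, here is a tiny sketch that computes both from a list of per-frame times; the frame-time numbers are made up purely for illustration:

```python
# Average fps vs minimum fps from a frame-time trace (made-up numbers).
# Two setups can post the same average while one has much worse dips,
# which is exactly what the minimum (or low-percentile) figure captures.

frame_times_ms = [16, 17, 16, 18, 16, 45, 17, 16, 44, 17]  # hypothetical

avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
min_fps = 1000 / max(frame_times_ms)  # instantaneous fps of the slowest frame

print(f"average: {avg_fps:.0f} fps, minimum: {min_fps:.0f} fps")
# average: ~45 fps, minimum: ~22 fps. A smooth-looking average can hide
# the stutter that only the minimum reveals.
```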