jitpublisher
Splendid
You guys have been putting up some winners lately, and this one is a fine read. Keep up the great work. These are the kind of articles that brought me to Tom's years ago. Quite interesting, to say the least!
The reason your original post was deleted is that you sound like an expert, but everything you're saying is wrong. That would be extremely confusing to a newbie who doesn't realize he actually knows more than you do. I have to repeat it once more.
1. No GPU card may use more than 1.25 GB/s of bandwidth. This is because they use internal PCIe-to-PCI-X bridging (the same is true for all SAS 2G RAID cards).
2. Pre-Fermi nVidia cards used a non-standard "Graphics PCIe" (16-lane PCIe without parity data), introduced in early nVidia chipsets. That is why you may still find recommendations not to put RAID cards in the graphics slots. Later nVidia chipsets do support standard PCIe, though not at full v2.0 speed; that inability to support full PCIe speed was the main reason Intel banned nVidia from further chipset production.
3. Your tests just show that Fermi-based nVidia GPUs support standard PCIe (at v1.1 speed) as well as "Graphics PCIe"; pre-Fermi boards lacked this support (AMD has always used standard v1.1 PCIe).
The performance difference between the x16, x8, and x4 slots correlates exactly with the bandwidth difference between the no-parity PCI-X bridging of "Graphics PCIe", standard PCIe (for the x8 slot tests), and x4. So this just demonstrates that nVidia finally supports standard PCIe as well.
4. The only reason graphics cards need PCIe 2.0 is power (with sufficient external power they work nicely in PCIe v1.1 slots).
5. You can easily try installing the "slave" card in a PCIe x1 slot (with a riser, or after removing the inner "wall") and, with enough power, you will not see any real difference from x8/x16 slots. (BTW, AMD openly states that the second and third cards in bridged CrossFireX need a PCIe x4 slot, but even x1 will do, as only a little CPU-GPU traffic goes over it.) That is because GPU-to-GPU traffic goes over the SLI/CrossFire bridge. Unlike nVidia, though, AMD also supports unbridged configurations; in that case at least PCIe x4 (preferably x8) is needed for the slave card.
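To put rough numbers on the slot comparison above, here is a minimal sketch of theoretical one-way PCIe bandwidth by lane count. The per-lane rates are the standard PCIe figures (8b/10b encoding: 250 MB/s per lane at v1.1, 500 MB/s at v2.0); the 1.25 GB/s bridge ceiling is the comment's own claim, not a spec value.

```python
# Theoretical one-way PCIe bandwidth for the lane counts discussed above.
# Per-lane rates are the standard PCIe 1.1/2.0 figures after 8b/10b encoding.
PER_LANE_GBPS = {"1.1": 0.25, "2.0": 0.5}  # GB/s per lane, one direction

def pcie_bandwidth(gen: str, lanes: int) -> float:
    """Theoretical one-way bandwidth in GB/s for a PCIe link."""
    return PER_LANE_GBPS[gen] * lanes

for lanes in (16, 8, 4, 1):
    print(f"x{lanes}: PCIe 1.1 = {pcie_bandwidth('1.1', lanes):.2f} GB/s, "
          f"PCIe 2.0 = {pcie_bandwidth('2.0', lanes):.2f} GB/s")

# The comment's claimed 1.25 GB/s bridging cap would fall between
# PCIe 1.1 x4 (1.00 GB/s) and PCIe 1.1 x8 (2.00 GB/s).
```

This makes it easy to see why an x1 slot (0.25 GB/s at v1.1) would only suffice if, as claimed, almost all GPU-to-GPU traffic stays on the SLI/CrossFire bridge.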
And finally an off-topic addition: there is no reason at all to test a motherboard for SLI compatibility. This is just a way for nVidia to take money from the pockets of those who don't need SLI. SLI is a pure software technology and, as such, could work nicely on any motherboard, but nVidia blocks its use on "non-certified" boards and steals our money.
I'm disappointed to see that they compared against x4 everywhere except in the SLI tests. I want to see what the loss is with x16/x4 on a PCIe 2.0 slot with SLI or CrossFire. That's the burning question nobody can seem to answer.