PCI Express And SLI Scaling: How Many Lanes Do You Need?


neiroatopelcc

Distinguished
Oct 3, 2006
Not surprisingly, another good article.
But there are some things I find missing.

For one, why are there no x4 SLI numbers? It would be possible to populate some of the PCIe slots with WLAN adapters and other crap, forcing lower-than-x8 speeds onto a pair or trio of graphics cards.

I also find it a bit disappointing that you haven't tested any X2 cards in either article. I think quite a few of us are still using a 4870 X2, and probably someone bought a 5970 or two somewhere.
What happens to PCIe requirements when you add another GPU to the same slot? Is it like splitting the lanes with an Nvidia bridge, or does performance drop according to the lost lanes?

Anyway, on top of all this, we're left wondering whether AMD chipsets show the same disregard for PCIe lane needs, and whether the PCIe requirements for new cards have changed compared to the last generation.
 
Guest
I have an EVGA P55 FTW, and I knew from previous articles that x8 PCIe 2.0 slots don't suffer a terrible loss of performance compared to an x16 PCIe slot. However, I did have an urge to upgrade all the same. The summary really drives home why I should be satisfied with my current P55 motherboard. I only have one monitor at 1680x1050, and I am very happy with it, since my i7-860 overclocked to 4.1 GHz should be more than enough to run any Nvidia card in SLI. Thank you, Tom's Hardware, for finally putting my mind at ease (and my pocketbook).


 
neiroatopelcc, Crashman addressed the lack of x2 cards in these reviews a little further up.
Basically, they will show less of a performance drop than a GTX 480 when PCIe bandwidth is restricted.


 

hixbot

Distinguished
Oct 29, 2007
[citation][nom]ares1214[/nom]Of course these are averages but he stated "Theres a noticable difference above 60 fps..." If that is one number, then no, there isnt. Of course that number changes from area to area, scene to scene, and so on, but at or around 60 fps as a minimum there is no noticable gain.[/citation]

Actually, even if your display is limited to 60Hz, there is still a benefit to having the game rendered above 60 fps.
The physics, the AI, and the graphics are all calculated at a faster rate, which often allows crucial calculations like hit detection and model movement to be done at a finer time step.
All of this can improve gameplay and fluidity, even if your display or eyes can only handle 60Hz.
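A minimal sketch of that idea, assuming a generic fixed-timestep loop (the ToyGame class and the 120 Hz tick rate are illustrative, not taken from any particular engine): the simulation advances in small fixed steps while frames are only presented at the display's slower rate.

[code]
import time

SIM_HZ = 120           # simulation tick rate (illustrative: twice a 60 Hz display)
DT = 1.0 / SIM_HZ      # fixed physics/AI time step
DISPLAY_HZ = 60        # rate at which frames are actually presented

class ToyGame:
    """Stand-in for a real engine: just counts ticks and frames."""
    def __init__(self):
        self.ticks = 0
        self.frames = 0
    def update(self, dt):
        self.ticks += 1    # physics, AI, hit detection would run here
    def render(self, alpha):
        self.frames += 1   # draw the interpolated state to the display

def run(game, duration=2.0):
    accumulator = 0.0
    previous = time.perf_counter()
    next_frame = previous
    end = previous + duration
    while time.perf_counter() < end:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        # The simulation advances in fixed DT steps, finer than the display refresh.
        while accumulator >= DT:
            game.update(DT)
            accumulator -= DT
        # A frame is presented only when the slower display is due for one.
        if now >= next_frame:
            game.render(alpha=accumulator / DT)
            next_frame += 1.0 / DISPLAY_HZ

if __name__ == "__main__":
    g = ToyGame()
    run(g)
    print(f"{g.ticks} simulation ticks, {g.frames} frames presented")
[/code]

Run for two seconds, it reports roughly twice as many simulation ticks as presented frames - that extra temporal resolution is the benefit described above.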
 

mapesdhs

Distinguished
I studied the field of frame rate perception, stereo, etc. for my dissertation many moons ago, and received lots of advice from a psychologist at UFL who was covering similar ground for her PhD work involving VR (last I heard, she became a professor in the Dept. of Behavioral Sciences and Leadership at the US Military Academy at West Point).

The truthful answer is simple: there is no hard, fixed frame rate at which perception of movement becomes fluid. It varies from one person to another and depends upon personal experience, exposure to display technologies, various biological factors, and other things. I spent ten years working with high-res, high-quality displays, adminning VR systems, CAVE setups, a RealityCentre, headsets, etc. Low frame rates really bug me, which is a pain when I have to run Flame at 50Hz. :}

hixbot is right: more than 60 does help, especially if, in the context of PC gaming, it means the minimum rate is pushed much higher.

Someone who's spent a lot of time working with high-quality displays will find lower frame rates annoying. Even 60Hz will be noticeable. The choice of 60 is arbitrary: a mishmash of display-technology legacy and convenience. A balance had to be struck between what was technologically possible and the systems already in use (i.e. viable hardware to support quad-buffered HD VR in 1993 was not exactly cheap, and matching the 60Hz field rate of NTSC was handy). For the vast majority of cases, with non-interlaced displays using proper double-buffering, 60Hz is about right and has satisfied vis-sim requirements for more than 20 years, but the mantra of these systems is critical: it must not drop below 60.

By contrast, PC users tend to refer to average frame rates rather than minimums. Also, modern PC gaming runs mostly single-buffered, using technologies that do not support guaranteed frame rates or other features to ensure consistent update rates (things that have been part of high-end vis-sim hardware since the mid-1990s, e.g. DVR in IR graphics). This confuses the issue somewhat.

Either way, the upshot is that some people (me included) would rather have their displays running a fair bit higher than 60 if possible (I run my monitors at 72, 76 or 85), while others will be perfectly happy with rates as low as 50, or even 40 - levels I could never tolerate. I remember one person on a Nintendo forum saying they were happy with rates in the high 20s.

SGI had a saying: "60Hz, 30 hurts". It's still true today. Be thankful you're not a bird, though; their vision is more like 200Hz. :D To a bird, we move like glue...
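A rough sketch of the "must not drop below 60" point, assuming classic double buffering with vsync and nothing cleverer (no triple buffering): a frame that misses one refresh interval has to wait for the next vblank, so the presented rate snaps down in integer divisions of the refresh rate.

[code]
import math

def presented_fps(render_ms, refresh_hz=60):
    """Effective frame rate under vsync with double buffering: a frame that
    misses a refresh waits for the next vblank, so the presented rate is
    refresh_hz / n for some integer n."""
    interval_ms = 1000.0 / refresh_hz
    intervals_needed = max(1, math.ceil(render_ms / interval_ms))
    return refresh_hz / intervals_needed

for ms in (12, 16, 17, 25, 34):
    print(f"{ms} ms per frame -> {presented_fps(ms):.1f} fps presented")

# Barely exceeding 16.7 ms already halves the rate to 30 fps - the
# "60Hz, 30 hurts" effect, and the reason vis-sim systems insist on
# never dropping below 60.
[/code]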

Ian.

---
Refs: http://www.hitl.washington.edu/scivw/kolasinski/
http://www.gamers.org/dhs/diss/

 

tophor

Distinguished
Jun 1, 2010
Tomshardware.com and Thomas Soderstrom: the printable version of this article is not linked or present! I really, really, really like this feature of your articles. It is so handy to print out an article and read it at leisure. Could you add it?
 

tophor

Distinguished
Jun 1, 2010
[citation][nom]amk09[/nom]I love how people always bash on x8 x8 and how it sucks, when in reality x16 x16 is only 4% better.You spend unnecessary $$$ on a x58 platform while I save money that I can put towards a GPU upgrade with my p55 platform[/citation]

Me too! Probably even more so, as the farther you get from extreme GPUs, the less the x8/x8 vs. x16/x16 debate matters.
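For rough context (per-lane figures are the usual 250 MB/s for PCIe 1.x and 500 MB/s for 2.0 per direction after 8b/10b encoding, before protocol overhead), the arithmetic looks like this - x8 at 2.0 already matches x16 at 1.x, which is why the measured x16-vs-x8 gap stays so small:

[code]
# Approximate per-lane, per-direction data rates after 8b/10b encoding (MB/s).
PER_LANE_MBPS = {"1.x": 250, "2.0": 500}

def link_bandwidth_gbps(gen, lanes):
    """One-direction bandwidth of a PCIe link in GB/s (protocol overhead ignored)."""
    return PER_LANE_MBPS[gen] * lanes / 1000

for gen in ("1.x", "2.0"):
    for lanes in (4, 8, 16):
        print(f"PCIe {gen} x{lanes}: ~{link_bandwidth_gbps(gen, lanes):.1f} GB/s per direction")
[/code]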
 

getritch

Distinguished
Apr 26, 2010
[citation][nom]barmaley[/nom]This review tells me that if you already have an i7 and at least 2xPCIe 16x lanes on your motherboard then in order to play modern games, all you are going to be upgrading for the next several years is your graphics.[/citation]

Actually, what I take from it is that if you have a 4 GHz processor, two GTX 480s/HD 5970s or anything similar, and two x8 slots, the only thing you'd need to upgrade for several years is your *CPU*...
and even THAT's not completely necessary unless you're using Eyefinity or something...

Kinda makes you wonder how much further behind the eight ball game devs are going to get in terms of utilizing this hardware. Crytek may be going in the right direction with their uber-intense graphics engine...
 

Crashman

Polypheme
Former Staff
[citation][nom]stasdm[/nom]I have to repeat it once more.

1. No GPU card can use more than 1.25 GB/s of bandwidth - this is because they use internal PCIe-to-PCI-X bridging (the same is true for all SAS 2G RAID cards).

2. Pre-Fermi Nvidia cards used a non-standard "Graphics PCIe" (16-lane PCIe without parity data), introduced in early Nvidia chipsets - that is why you may still find recommendations not to use RAID cards in the graphics slots. Later Nvidia chipsets do support standard PCIe (not at full v2.0 speed, though - and the inability to support full PCIe speed was the main reason Intel banned Nvidia from further chipset production).

3. Your tests just show that Fermi-based Nvidia GPUs support standard PCIe (at v1.1 speed) as well as "Graphics PCIe" - pre-Fermi boards lacked this support (AMD has always used standard v1.1 PCIe).

The performance differences between the x16, x8 and x4 slots correlate exactly with the bandwidth differences for PCIe - no-parity PCI-X bridging of "Graphics PCIe", standard PCIe (for the x8 slot tests), and x4. So this just demonstrates that Nvidia at last supports standard PCIe as well.

4. The only need for PCIe 2.0 on graphics cards is power (though with sufficient external power they work nicely in PCIe v1.1 slots).

5. You may easily try installing the "slave" card in a PCIe x1 slot (with a riser, or after removing the slot's inner "wall") and, with enough power, you will not see any real difference from x8/x16 slots (BTW, AMD openly states that the second and third cards in bridged CrossFireX need only a PCIe x4 slot, but even x1 will do, as only a little CPU-GPU traffic goes over it). That is because GPU-to-GPU traffic goes over the SLI/CrossFire bridge; unlike Nvidia, AMD supports unbridged configurations too - then at least PCIe x4 (preferably x8) is needed for the slave card.

And finally, an off-topic addition: there is no reason to test motherboards for SLI compatibility - this is a method Nvidia uses to steal money from the pockets of those who do not need SLI at all. SLI is a purely software technology and, as such, could work nicely on any motherboard, but Nvidia blocks its use on "non-certified" boards and steals our money.[/citation]
The reason your original post was deleted is that you sound like an expert, but everything you're saying is wrong. That would be extremely confusing to a newbie who doesn't realize he actually knows more than you.

Nvidia didn't use PCI-X; it simply branded PCIe as "PCX" before the PCIe abbreviation was standardized.
We've already seen tests that show how bad PCIe 1.1 is for performance; the old article is linked in the first paragraph of this one. Compare the enormous x4 v1.1 performance losses there to the x4 v2.0 performance losses here.

The same article shows huge performance losses for mixing cards in x4 v1.1 and x16 slots, in that case for CrossFire.

Finally, the bridge, be it SLI or CrossFire, does not have enough bandwidth to carry 100% of the data required by the GPU. It's only used for synchronization between two cards, not for the graphics data you see displayed on the screen.

No matter how good you are at BS, it's still BS, so please go away.
 

stasdm

Distinguished
Feb 10, 2009

My boy, I have been in this business since tube computers ...


Wrong. They introduced "Graphics PCIe" AFTER PCIe 1.0 was issued, and they never openly say that they use PCI-X bridging (while AMD did, some years back).



Yes, using the overcrowded southbridge x4 without a bridge (which ought to have been installed in that case).
Your test only shows that Nvidia's Fermi really uses true PCIe v1.1 and so may also use the PCIe bus for (some - inadequate) GPU-to-GPU bridging.



Wrong. With bridges, PCIe is used only for occasional CPU-GPU traffic - just do the testing properly, and not only the vendor-recommended way.
 

stasdm

Distinguished
Feb 10, 2009
Some additions on the PCIe x16 myth.

When memory prices were high enough to prohibit putting much memory on consumer GPUs, the need for high-speed memory-to-GPU throughput gave rise to the AGP bus - in principle the same as PCI-X, but faster and wider. At that time the ultra-fast memory-CPU-GPU bus was really critical. But with memory prices coming down, it became more practical to install all the memory the GPU needs on the card itself, and the ultra-fast CPU-GPU connection became unnecessary.

With the emergence of PCIe, AGP became obsolete. But Nvidia at that time could not provide decent bridging speed from PCIe to internal PCI-X, so they "invented" Graphics PCIe - incompatible with the standard PCIe bus: it used serial signalling, but without parity.

This "invention" is still often recalled in some RAID card vendors' recommendation not to install RAID cards into x16 slots.

Later Nvidia chipsets became compatible with both standard and Graphics PCIe, and Intel chipsets started supporting "Graphics PCIe" as well (have a look at any Intel chipset documentation - such a feature is specifically noted).

With the arrival of PCIe v2.0, Nvidia was the first to jump on the bandwagon - they badly needed more power. But Nvidia chipsets were never able to support true PCIe v2.0 - just try to install two fully-packed LSI 2008-based RAID cards into NF200-provided slots and try to make them run at full speed (and they need only 3 GB/s each).

The same is true for Nvidia's participation in PCIe v3.0, where their main idea was to abandon parity control (and there was even some talk that PCIe v3.0 would not have it) - but such a bus is useless for those who really need it: RAID and IB vendors. All Nvidia was able to do was keep backward compatibility not only with PCIe v2.0/1.1, but with "Graphics PCIe" too.

And of course Nvidia will never openly admit to the use of an internal PCI-X bus - that would ruin all their lies about the need for an ultra-high-speed bus.
 

Crashman

Polypheme
Former Staff
[citation][nom]stasdm[/nom]Some addition on PCIe x16 myth.When memory prices were high enough to prohibit using of memory on consumer GPUs the need for high-speed memory-to-GPU throughput rased the AGP bus - in principle, same PCI-X but faster and wider. at that times the ultra-high memory-CPU-GPU bus was really critical. But whith the memory pricing down, it became more useful to install all memory, needed for GPU work, at the GPU card and the ultra-fast CPU-GPU connection became unnecessary.With emerging the PCIe AGP became obsolete. But nVidia at that time could not provide desent bridging speed from PCIe to internal PCI-X, so they "invented" Graphics PCIe - incompatible with standard PCIe bus that used serial sygnalling, but w/o parity.this "invention" is still often reminded by some RAID cards vendors recommendation not to install RAID cagrds into x16 slots.Later nVidia chipsets became compatible both with standard and Graphics PCIe, as well as Intel chipsets start supporting "Graphics PCIe" (have a look at any Intel chipset documentation - such a feature is specially noted).With arravale of PCIe v.2.0 nVidia was the first to take a bandwagon - thay badly needed more power. But nVidia chipsets were newer able to support true PCIe v.2.0 - just try to install two fully-packed LSI 2008-based RAID cards into NF200-provided slots and try to make tham run full-speed (and they need only 3GB/s each).[/citation]
So what you're saying is, you're going to continue spreading misinformation until someone does a PCIe 1.1 vs. PCIe 2.0 test on the GTX 480, just to prove you wrong? But you're not privileged enough to get an entire week's worth of work devoted to your theories. And even if you did get your dream article, you'd find some other excuse to dismiss the results.
 

stasdm

Distinguished
Feb 10, 2009
I'm disappointed to see that they compared to x4, EXCEPT in the SLI tests. I want to see what the loss is with x16/x4 on a PCIe 2.0 slot with SLI or Xfire. The burning question nobody can seem to answer.

I got an official answer from Sapphire that the "slave" cards need only PCIe v1.1 x4 (when connected by a CrossFireX bridge).

I see no difference with any Nvidia card.
 

stasdm

Distinguished
Feb 10, 2009




Just do the testing! :lol:
BTW, look at your tests of non-Fermi cards on motherboards with the NF100 chip - which IS PCIe v1.1 :bounce:
And have you noticed that the Quadro Plex 7000 uses only an x8 cable for two GPUs?

As for the testing, I got some advice: contact Cyclon Microsystems - maybe they will be interested in supplying you with their 600-2706 (PCIe v1.1) and 600-2707 (PCIe v2.0) expansion systems, so you'd be able to run the tests on (basically) the same platform.
 

Marcus52

Distinguished
Jun 11, 2008
Keep in mind this article focuses on the limitations of the P55 chipset with regard to single and dual graphics only; it does not address its limitations with regard to CPU choice or total PCIe lanes for other uses.

The P55 is more than adequate for most PC users. The X58 was for early adopters and is for users whose requirements go beyond the P55's capabilities (and who should know that before they buy), now and in the near future. Neither choice is good or bad if made consciously and based on the facts. Look at what you want, study up, and buy what fits the bill.

;)
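For reference, the lane budgets behind that trade-off, as I recall them (treat the exact figures as assumptions): Lynnfield CPUs on P55 provide 16 PCIe 2.0 lanes that can split x8/x8 for graphics, plus 8 slower lanes from the PCH, while X58's IOH provides 36 PCIe 2.0 lanes plus the ICH's peripheral lanes.

[code]
# Rough lane budgets (figures assumed from memory of the two platforms).
platforms = {
    "P55 (LGA1156)": {"full_speed_gfx_lanes": 16,  # from the CPU: x16 or x8/x8
                      "chipset_lanes": 8},          # PCH lanes limited to 2.5 GT/s
    "X58 (LGA1366)": {"full_speed_gfx_lanes": 36,   # from the IOH: x16/x16 + x4, etc.
                      "chipset_lanes": 6},          # ICH10 lanes for peripherals
}

for name, lanes in platforms.items():
    total = lanes["full_speed_gfx_lanes"] + lanes["chipset_lanes"]
    print(f"{name}: {lanes['full_speed_gfx_lanes']} full-speed lanes for graphics, "
          f"{lanes['chipset_lanes']} chipset lanes, {total} total")
[/code]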



 