PCIe Bandwidth Question

Sparkofd

Hi, could anyone explain to me what PCIe 2.0 bandwidth actually means?
I've read that PCIe 2.0 max bandwidth is about 8 GB/s, but I can't understand what happens with cards like the GTX Titan that list more than 200 GB/s of bandwidth. It may be a silly question, but it's really confusing to me.
Thanks in advance
 
Solution


The 200 GB/s is the card's bandwidth to its own RAM, built into the card itself. This is obviously not dependent on the PCIe slot.
PCIe 2.0 x16 is about 8 GB/s and 3.0 x16 is about 16 GB/s, so twice as fast. The issue is that no card on the market even comes close to using all the bandwidth available to a 2.0 x16 slot, so doubling it accomplishes nothing, really. I have read tons of benchmarks and they show no change in fps when going from 2.0 to 3.0 on any card. I have even seen benchmarks for dual Titans running at x8/x8 vs. x16/x8, and there was little change.

Here is an example: http://www.pugetsystems.com/labs/articles/Impact-of-PCI-E-Speed-on-Gaming-Performance-518/
 
PCIe 2.0 is 500 MB/s per lane, so as Math Geek said, 16 lanes = 8 GB/s.
PCIe 3.0 is 985 MB/s per lane, not quite double, so 16 lanes is 15.75 GB/s.
PCIe 4.0 is 1969 MB/s per lane, so x16 = 31.5 GB/s.

But the bandwidth they are referring to on the GPU is the card's memory bandwidth, not the PCIe bus.
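If it helps, here is a quick sketch of that per-lane math in Python (the per-lane figures are the usable data rates after encoding overhead: 8b/10b for 2.0, 128b/130b for 3.0 and 4.0):

```python
# Per-direction bandwidth of a PCIe x16 slot from the per-lane rates quoted above.
PER_LANE_MBPS = {"2.0": 500, "3.0": 985, "4.0": 1969}

def slot_bandwidth_gbps(gen: str, lanes: int = 16) -> float:
    """Approximate one-direction slot bandwidth in GB/s."""
    return PER_LANE_MBPS[gen] * lanes / 1000

for gen in PER_LANE_MBPS:
    print(f"PCIe {gen} x16 = {slot_bandwidth_gbps(gen):.2f} GB/s")
# PCIe 2.0 x16 = 8.00 GB/s
# PCIe 3.0 x16 = 15.76 GB/s
# PCIe 4.0 x16 = 31.50 GB/s
```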
 


The 200 GB/s is the card's bandwidth to its own RAM, built into the card itself. This is obviously not dependent on the PCIe slot.
 
Solution
Now I get what you are asking. The memory bandwidth is how fast the GPU can read from and write to the memory on the card itself. Basically, the graphics card is a mini PC with its own memory, controllers, power distribution and such. The 200 GB/s you see is the card's RAM speed.
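To make that concrete, here is a rough sketch of where a number like that comes from, assuming the GTX Titan's published memory specs (a 384-bit bus with GDDR5 at an effective 6008 MT/s); memory bandwidth is just the transfer rate times the bus width:

```python
# Rough memory-bandwidth math, assuming the GTX Titan's published specs
# (384-bit memory bus, GDDR5 at an effective 6008 MT/s).
effective_rate_mtps = 6008                  # million transfers per second
bus_width_bits = 384

bytes_per_transfer = bus_width_bits / 8     # 48 bytes moved per transfer
bandwidth_gbps = effective_rate_mtps * 1e6 * bytes_per_transfer / 1e9
print(f"~{bandwidth_gbps:.1f} GB/s")        # ~288.4 GB/s
```

That 288.4 GB/s is all traffic between the GPU and its onboard memory; none of it crosses the PCIe slot.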

As for how much bandwidth a card actually uses, I don't think anyone has been able to measure it yet the way we would want. This article attempted to do just that: http://www.tomshardware.com/reviews/graphics-performance-myths-debunked,3739.html

If it is right, then it suggests the usage is very minimal; even 1.0 x4 is enough for any card! Folks largely ignored this article and dismissed it because it did not match what people want to believe about PCIe slots. From what I have seen, folks want to believe that 3.0 x16 slots are needed and will argue to the death that they are, despite there being no factual basis for the belief.

I am not sure you will ever know exactly the bandwidth needs of one card vs. another, as it is not in the manufacturers' interest to prove they are overselling things we don't need at prices higher than we should be paying. Personally, I don't think GPUs are what will start using this bandwidth first. SSDs that run on PCIe lanes, in my opinion, will be what actually begins tapping into what is currently available on motherboards. The faster SSDs get, the more bandwidth will be needed to feed data to them. We could easily test PCIe usage with an SSD, but no one really knows how much data a GPU is getting from the system.

DirectX and other such things optimize what has to be sent to a GPU to get it to draw what is wanted, and the better those get, the less info needs to go to the GPU to get it to do what you want. It seems we optimize as quickly as we increase the bandwidth needs, if that makes sense. Remember that all those pixels that come out of the card are not directly related to what goes in. It takes a lot less info to get the GPU to draw the picture than the size of the output picture suggests.

A single 1080p frame is 2,073,600 pixels, and at 60 fps that is 124,416,000 pixels per second! A pixel is typically 32 bits (4 bytes) of color data, so even if the data going in were 1-to-1 with the pixels coming out, that would only be about 498 MB/s, still far short of the roughly 8 GB/s a PCIe 2.0 x16 slot can handle.
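Here is that arithmetic as a quick sketch, assuming 4 bytes (32 bits) of color per pixel:

```python
# Frame-data arithmetic, assuming 32-bit (4-byte) color per pixel.
width, height, fps = 1920, 1080, 60
bytes_per_pixel = 4

pixels_per_second = width * height * fps              # 124,416,000
frame_data_mbps = pixels_per_second * bytes_per_pixel / 1e6
pcie2_x16_mbps = 500 * 16                             # 8,000 MB/s for 2.0 x16

print(f"frame data: ~{frame_data_mbps:.0f} MB/s")     # ~498 MB/s
print(f"share of PCIe 2.0 x16: {frame_data_mbps / pcie2_x16_mbps:.1%}")  # 6.2%
```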

I hope this has shed some light on this a bit. Read that article, as it has a lot of good info that is right in line with what you are asking.