Now I get what you're asking. That memory bandwidth figure is how fast the GPU can talk to its own onboard memory (VRAM), not how fast it talks to the rest of the PC. Basically, a graphics card is a mini PC with its own memory, controllers, power delivery and such. The 200 GB/s you see is the speed between the GPU and the card's own RAM.
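To show where a number like 200 GB/s comes from, here's a rough sketch: memory bandwidth is just the bus width times the effective data rate. The 256-bit bus and 6.25 GT/s figures below are illustrative assumptions, not any specific card's spec.

```python
# Back-of-the-envelope sketch of a "200 GB/s" memory bandwidth figure:
# bus width (bits) x effective data rate (transfers/second).
# These numbers are illustrative, not a real card's spec sheet.

bus_width_bits = 256        # assumed memory bus width
effective_rate = 6.25e9     # assumed effective transfers per second

bandwidth_bytes = (bus_width_bits / 8) * effective_rate
print(f"{bandwidth_bytes / 1e9:.0f} GB/s")  # -> 200 GB/s
```

Real cards vary the bus width and memory clock, but the math works the same way.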
As for how much bus bandwidth a card actually uses, I don't think anyone has measured it the way we'd like to see. This article attempted to do just that:
http://www.tomshardware.com/reviews/graphics-performance-myths-debunked,3739.html
If it's right, it suggests the usage is very minimal; even a PCIe 1.0 x4 link was enough for the cards tested. People largely ignored the article and dismissed it because it didn't match what they wanted to believe about PCIe slots. From what I've seen, folks want to believe 3.0 x16 slots are needed and will argue to the death that they are, despite there being little factual basis for the belief.
I'm not sure we'll ever know exactly the bandwidth needs of one card vs. another, since it's not in manufacturers' interest to prove they're overselling things we don't need at prices we shouldn't be paying. Personally, I don't think GPUs will be the first thing to really use this bandwidth. In my opinion, SSDs that run on PCIe lanes will be what actually starts tapping what's currently available on motherboards: the faster SSDs get, the more bandwidth is needed to feed data to them. We could easily test PCIe usage with an SSD, but nobody really knows how much data a GPU is pulling from the system.
DirectX and similar APIs optimize what has to be sent to the GPU to get it to draw what's wanted, and the better those get, the less data needs to cross the bus. It seems we optimize about as quickly as bandwidth needs grow, if that makes sense. Remember that all those pixels coming out of the card aren't directly related to what goes in: it takes far less data to tell the GPU what to draw than the size of the output image suggests.
A single 1080p frame is 2,073,600 pixels, and at 60 fps that's 124,416,000 pixels per second. A pixel isn't a bit, though: at typical 24-bit color it's 3 bytes, so even a raw, uncompressed 1080p/60 stream is only about 373 MB/s. That's still far short of the roughly 15.75 GB/s a PCIe 3.0 x16 slot can handle, and the data going *in* to the GPU is far smaller than the rendered output anyway.
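The arithmetic above can be checked in a few lines. This is a rough comparison assuming 3 bytes per pixel and the standard ~985 MB/s per lane of usable PCIe 3.0 bandwidth after 128b/130b encoding:

```python
# Compare raw 1080p/60 pixel data against PCIe 3.0 x16 bandwidth.
# Assumes 24-bit color (3 bytes/pixel); figures are illustrative.

width, height = 1920, 1080          # 1080p resolution
fps = 60                            # frames per second
bytes_per_pixel = 3                 # 24-bit color

pixels_per_second = width * height * fps
frame_bandwidth = pixels_per_second * bytes_per_pixel    # bytes/second

# PCIe 3.0: ~985 MB/s usable per lane after 128b/130b encoding, x16 lanes
pcie3_x16 = 985_000_000 * 16

print(f"pixels/s:      {pixels_per_second:,}")           # 124,416,000
print(f"raw frame data: {frame_bandwidth / 1e9:.2f} GB/s")   # ~0.37 GB/s
print(f"PCIe 3.0 x16:   {pcie3_x16 / 1e9:.2f} GB/s")         # ~15.76 GB/s
```

So even a full uncompressed video stream would use only a few percent of a 3.0 x16 slot, which fits the article's finding that cards don't saturate the bus.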
I hope this has shed some light on things. Read that article; it has a lot of good info that's right in line with what you're asking.