That isn't exactly a fair comparison. The information exchanged between your CPU and GPU is not the same as the information transferred between your GPU and monitor.
In a cloud gaming setup, you have several streams of data that need to be transferred. You have the information from the human interface device(s) going to the server, and then the audio and video streams coming back from the server. The HID information is likely going to require negligible bandwidth, and the audio bandwidth is going to be small compared to the video bandwidth.
According to http://www.emsai.net/projects/widescreen/bandwidth/ 2560x1440 @ 60 Hz comes to 7.87 Gbit/s (gigabits per second, not gigabytes). That is still well below the 4 GB/s (32 Gbit/s) PCI-e interconnect you referenced, and it is uncompressed. Compression can reduce it dramatically, but adds additional latency.
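As a rough sanity check, here is a quick Python sketch of where a number in that ballpark comes from. It only counts the active pixel data at 24 bits per pixel; the linked calculator's higher 7.87 Gbit/s figure presumably also accounts for blanking intervals and other link overhead.

    # Back-of-the-envelope uncompressed video bandwidth (active pixels only)
    width, height = 2560, 1440
    refresh_hz = 60
    bits_per_pixel = 24  # 8 bits each for R, G, B

    bits_per_second = width * height * refresh_hz * bits_per_pixel
    print(f"{bits_per_second / 1e9:.2f} Gbit/s")    # ~5.31 Gbit/s
    print(f"{bits_per_second / 8 / 1e9:.2f} GB/s")  # ~0.66 GB/s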
Let's assume they are using H.264 compression for the video. According to http://stackoverflow.com/questions/5024114/suggested-compression-ratio-with-h-264 the formula is [image width] x [image height] x [framerate] x [motion rank] x 0.07 = [desired bitrate in bps], where the image width and height are expressed in pixels, and the motion rank is 1, 2, or 4: 1 being low motion, 2 being medium motion, and 4 being high motion (motion being the amount of image data that changes between frames; see the linked answer for more information).
Video games tend to be very fast paced, so to keep the game playable, let's assign a motion rank of 4. That leaves us with:
2560 x 1440 x 60 x 4 x 0.07 = 6.19 x 10^7 bps ≈ 61.9 Mbps ≈ 7.7 MB/s.
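Here is the same arithmetic as a small Python sketch, in case you want to plug in your own resolution or motion rank (decimal units assumed, i.e. 1 Mbps = 10^6 bps):

    # Rule-of-thumb H.264 bitrate estimate from the linked answer
    width, height = 2560, 1440
    framerate = 60
    motion_rank = 4  # 1 = low motion, 2 = medium, 4 = high

    bitrate_bps = width * height * framerate * motion_rank * 0.07
    print(f"{bitrate_bps / 1e6:.1f} Mbps")      # ~61.9 Mbps
    print(f"{bitrate_bps / 8 / 1e6:.1f} MB/s")  # ~7.7 MB/s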
In other words, to get quality almost as good as what you have now, you would need roughly 60 Mbps of internet bandwidth all to yourself, just for the video stream. You could do this if you lived in Kansas City, and you might even be able to get that kind of bandwidth at your local university. Either way, this is something that could become possible on a wide scale in the future.
And all of that completely ignores the issue of latency.