[citation][nom]wrxchris[/nom]I agree. Rendering 6.9 million pixels really puts quite a bit more strain on the PCIe bus. I would order a 2600k tomorrow if someone showed me some solid proof that x8/x8 doesn't significantly choke framerates @ 5760x1200.[/citation]
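For a sense of scale, here's a quick back-of-the-envelope on where that 6.9 million figure comes from, comparing a single 30" panel to a 3x1 surround setup (just a sketch of the arithmetic, nothing more):

```python
# Rough pixel-count comparison: single 30" panel vs. a 3x1 surround setup
single   = 2560 * 1600   # one 30" display               -> 4,096,000 pixels
surround = 5760 * 1200   # three 1920x1200 side by side  -> 6,912,000 pixels (~6.9M)

print(f"{single:,} vs {surround:,} pixels")
print(f"Surround renders {surround / single:.2f}x as many pixels per frame")  # ~1.69x
```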
I would also like to see these benchmarks, along with an X58 i7 running at the same clock speeds as the Sandy Bridge; you'd probably need a 950 or better to do it. I am going to be getting 3 monitors for surround, but no 3D for me. I get migraines from it (and from fluorescent lighting), so there's no way I'll be able to handle it. I am not saying 3D shouldn't be tested, but you should run the test both with it and without.
This article is a start, or at least a good measuring stick: if you're looking to upgrade from a Core 2 Duo or lower and only run 1 monitor, then sure, Sandy Bridge is a great choice. But if you guys don't do these other tests, it doesn't really tell us whether it will hinder enthusiast or hardcore gamers.
I just remembered an article I read over at HardOCP that showed the difference between x16 and x8 PCIe; if you want to read it, here you go:
http://hardocp.com/article/2010/08/23/gtx_480_sli_pcie_bandwidth_perf_x16x16_vs_x8x8
If you don't want to read the whole thing and just want the Cliff's Notes, here you go:
If you are running on a 30" display at 2560x1600 or below, an x8/x8 SLI or CFX configuration will perform the same as a x16/x8 or x16/x16 configuration. The only time that you should even be slightly concerned about running at x8/x8 is when you move up to a multiple display setup. When we pushed the GTX 480 SLI at 5760x1200 we saw up to a 7% difference in performance between x8/x8 and x16/x16, in favor of x16/x16, but that was in one game only.
It also appears that the type of game will affect the result, and whether there is any difference at all. In texture- and AA-bandwidth-heavy games you will see more of a difference, but in a game that is more pixel-shader heavy there will be little or no difference at all.
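And just for rough context on what x8 vs. x16 means in raw numbers, here's a quick sketch of the theoretical bandwidth, assuming PCIe 2.0 slots (5 GT/s per lane, which works out to about 500 MB/s per lane per direction after 8b/10b encoding):

```python
# Theoretical per-direction PCIe 2.0 bandwidth: ~500 MB/s per lane
MB_PER_LANE = 500

for lanes in (8, 16):
    print(f"x{lanes}: ~{lanes * MB_PER_LANE / 1000:.1f} GB/s per direction")
# x8:  ~4.0 GB/s
# x16: ~8.0 GB/s
```

Whether that halved ceiling actually matters is exactly what the HardOCP numbers above suggest: only at surround resolutions, and only in some games.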