Yes, they do, but they can suffer added latency when traffic has to cross between the CPUs. Dell's R730 with dual processors has three x8 slots that hang off the second CPU, and my last company had performance issues with SAS cards for external storage in those slots. They ended up shuffling things around and putting the FC VTL cards and the 10Gb NIC there instead. (This was a backup appliance my company bought, added cards and software to, and then sold for ten times the price.)
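If you want to see which socket a given card hangs off of on a Linux box, the kernel exposes it in sysfs. Rough sketch below; the paths are the standard sysfs attributes, the formatting is just mine:

```python
from pathlib import Path

# Which CPU socket is each PCIe device attached to?
# numa_node: 0 = first CPU, 1 = second CPU, -1 = no NUMA information.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    node = (dev / "numa_node").read_text().strip()
    print(f"{dev.name}: NUMA node {node}")
```

Anything showing node 1 is sitting behind the second CPU, so traffic to it from a process pinned to the first CPU has to take the inter-socket link.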
The number of PCIe lanes available depends entirely on the board. If the board is wired to use all 80 lanes, then that's what you'll get. From the slots I can see on that board, it looks like you'd have 66 lanes in use if every slot were populated. The downside is that those cards will need cooling: if the case isn't set up to push air through them when they're packed in that densely, you'll end up sacrificing slots (and their lanes) just to leave room for cooling fans next to the hot cards.
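For what it's worth, the lane math is just adding up the electrical width of each slot you populate. The widths below are placeholders I picked to illustrate (they happen to sum to 66), not that board's actual layout, so swap in the real numbers from the manual:

```python
# Hypothetical electrical slot widths -- pull the real ones from the board manual.
cpu_lanes   = 80                          # two 40-lane CPUs
slot_widths = [16, 16, 16, 8, 8, 1, 1]    # placeholder layout, sums to 66

used = sum(slot_widths)
print(f"{used} lanes consumed with every slot populated, {cpu_lanes - used} to spare")
```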
I, too, have been experimenting with this kind of setup. I currently run a P9X79-E WS board for my server with two Dell RAID cards (an internal H710 and an external H830), a QLogic 10Gb NIC, an Intel 10Gb NIC for iSCSI, and a low-end video card. (That's 40 lanes' worth of devices.) Due to cooling issues, it hasn't gone as well as I'd hoped.
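One sanity check worth running when you're that close to the lane budget is whether each card actually trained at its full width. Same caveat as the earlier snippet: these are the standard Linux sysfs attributes, the script itself is just a quick sketch:

```python
from pathlib import Path

# Compare negotiated vs. maximum link width for every PCIe device.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        c = (dev / "current_link_width").read_text().strip()
        m = (dev / "max_link_width").read_text().strip()
    except OSError:
        continue  # legacy PCI device or link info not exposed
    note = "  <-- running below its maximum width" if c not in ("0", m) else ""
    print(f"{dev.name}: x{c} of x{m}{note}")
```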