@palladin9479
FC is a terrible thing to use for backplanes and disks. No new tier 1 enterprise storage arrays use it (I think 3PAR is the only recent product release that still uses that crap). New SAS backplanes are 4x6Gbps, which destroys a single-channel 8Gbps FC arbitrated loop (there is no 10Gig FC; that is FCoE, which is a terrible idea, but that's another matter). SAS supports dual ports and two connections to two HBAs. EMC/HDS/NetApp have all made the switch, for performance, reliability and cost.
FC is way more expensive than Ethernet (have you seen the cost of a director?). FC switching pricing scales exponentially the more ports you add, and FCoE is an even worse excuse.
-Disclaimer: I was fixing a NetApp this morning, so my storage rant mode is on high.
Umm no, seriously no.
http://en.wikipedia.org/wiki/Fibre_Channel
http://en.wikipedia.org/wiki/Serial_attached_SCSI
Right outside my office door I have several 10GFC connectors. We use them as links between our storage processors and our Cisco FC switches.
Internally most systems have 4~8GFC connectors, as that's been determined the safest speed over copper. You can do up to 16 provided it's a really short distance.
SAS is just the SCSI protocol serialized; it's nothing different from U320 or any previous implementation. It's used as an internal storage option because it's cheap and efficient. FC is expensive due to the complex circuitry you need at every point; it's highly redundant and can get ridiculous speed, but that is often overkill for a 6~8 disk back-plane.
400MBps ~ 800MBps dual (so actually 800MBps ~ 1600MBps aggregate) is overkill when your disks won't put out more than 50~60MBps each sustained.
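The bandwidth math above is easy to sanity-check. A quick sketch, using the illustrative figures from the post (8Gbps FC as roughly 800MBps of payload, two channels, 50~60MBps sustained per disk on a 6~8 disk back-plane):

```python
# Rough check: dual 8GFC channel capacity vs. sustained disk throughput.
# All figures are illustrative, taken from the post above.

FC_CHANNEL_MBPS = 800          # ~usable payload of one 8Gbps FC channel
CHANNELS = 2                   # dual-ported disk, channels A and B
DISK_SUSTAINED_MBPS = 60       # optimistic sustained rate per spinning disk
DISKS = 8                      # a typical 6~8 disk back-plane

aggregate_link = FC_CHANNEL_MBPS * CHANNELS      # 1600 MBps of channel capacity
aggregate_disks = DISK_SUSTAINED_MBPS * DISKS    # 480 MBps the disks can deliver

print(aggregate_link, aggregate_disks)
print(aggregate_link / aggregate_disks)          # headroom factor
```

Even with every disk streaming flat out, the dual FC channels have more than 3x headroom, which is the "overkill" point being made.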
FC-SW is just a transport protocol; you can technically put anything on it. I've seen SAS arrays with 4 x 8GFC connectors on their backs that we loop into the SAN. The SPs treat them just like any other disk and export them out to the appropriate hosts / zones. PATA, SATA, U320 SCSI, SAS and FC can all be connected via FC-SW.
You're horribly wrong about SAS and multi-pathing. SAS allows for multi-pathing by providing a mechanism to identify, via WWN, the same device being advertised through multiple initiators. SAS itself is a point-to-point protocol; there is no bus looping or aggregation going on. SAS disks themselves have only one data bus, not two (FC disks have two). What you can have is a non-local initiator connecting to a target disk via multiple SAS channels. As in, one disk to a back-plane, but the back-plane will have two connections to either the same controller or (preferably) two different controllers. Both controllers will advertise the disk to the host OS through different channels, so the OS will see two different disks. Having the same WWN on each instance of the disk is what allows the OS's storage layer to identify that the disk has multi-pathing. This is something that's been around since SCSI, just implemented in different flavors.
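The WWN-matching step described above can be sketched in a few lines. This is a minimal illustration, not any real OS's storage layer; the device names and WWNs are made up:

```python
# Sketch of WWN-based multi-path detection: the same physical disk is
# enumerated once per path (per controller/channel), and the storage
# layer collapses instances sharing a WWN into one multi-path device.
# Device names and WWN values here are hypothetical.

from collections import defaultdict

# (os_device, wwn) pairs as the OS might enumerate them: one entry per path.
enumerated = [
    ("sda", "5000c500a1b2c3d4"),   # disk 1 seen via controller A
    ("sdb", "5000c500a1b2c3d4"),   # the same disk 1 seen via controller B
    ("sdc", "5000c500deadbeef"),   # disk 2, single path only
]

paths_by_wwn = defaultdict(list)
for dev, wwn in enumerated:
    paths_by_wwn[wwn].append(dev)

# Any WWN with more than one device instance is a multi-pathed disk.
multipath = {wwn: devs for wwn, devs in paths_by_wwn.items() if len(devs) > 1}
print(multipath)  # {'5000c500a1b2c3d4': ['sda', 'sdb']}
```

This is essentially what Linux dm-multipath does when it groups paths by WWID: two "different disks" with the same WWN become two paths to one device.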
FC supports the disk being physically linked to two different channels, typically A and B. Either the channels will connect to the same HBA or they'll be on two different HBAs; in either case the result is the same: the OS sees two separate instances of the disk, and the storage subsystem is what links them together. Because of how the FC protocol works, you can send packets to different logical targets and they'll be treated the same when they arrive at the disk. You can send one half of your file down channel A and the other half down channel B at the same time, and the FC circuitry on the disk will know how to work with it. SAS won't let you do that, as it's only got a channel A and must receive all communication from the same initiator.
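The active-active point above boils down to path scheduling. A toy sketch (purely illustrative scheduling, not real driver code) of the difference between having channels A and B versus only A:

```python
# Toy round-robin I/O scheduler: with two active channels the requests
# alternate between A and B; with one channel everything goes down A.
# Request names and channel labels are illustrative.

def schedule(requests, channels):
    """Assign each request to the next channel in round-robin order."""
    return [(req, channels[i % len(channels)]) for i, req in enumerate(requests)]

io = ["blk0", "blk1", "blk2", "blk3"]

fc_plan = schedule(io, ["A", "B"])   # dual-ported FC disk: load split A/B
sas_plan = schedule(io, ["A"])       # single-ported SAS disk: one initiator

print(fc_plan)   # [('blk0', 'A'), ('blk1', 'B'), ('blk2', 'A'), ('blk3', 'B')]
print(sas_plan)  # every request on channel 'A'
```

With two live channels, half the traffic lands on each port; the single-port case can still fail over (via a second expander path), but it cannot split one stream across two ports the way the FC example does.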
There is a reason FC-SW is king in the enterprise storage sector.
-=Edit=-
Also, we have EMC as our storage provider and we've seen their SAS offerings. It's a SAS disk array, that's all. It plugs into the storage processors using ... Fibre Channel. From that point on it's treated like an FC array by the SPs and zoned out appropriately.