MRFS,
Are we talking local storage or enterprise-wide storage? There's a really big difference between the two.
Local is just a system talking to an internal disk, or to a DAS array sitting nearby. It's an extremely simple setup: the host system has direct access to all the disks present without any arbitrators or fabric involved. It's severely limited in expandability, though.
Enterprise fabrics are a bit different. You have anywhere from one to n+1 storage bays full of disks, all looped back to a central Storage Processor (or more than one). The SP has its own set of CPUs and memory measured in gigabytes. The SP carves the disks up into LUNs and manages any or all of RAID 0/1/5/10/50/60/51/61, however you want to slice it up. From the SP you have multiple paths to your switches, preferably at least two fabric switches. From each host system you should have four connections to the switches: port 0 on HBA0 goes to Switch A, port 1 on HBA0 goes to Switch B, port 0 on HBA1 goes to Switch A, and port 1 on HBA1 goes to Switch B. That gives you four lanes of bandwidth and complete redundancy; any switch or controller can be taken offline and you still have access to all your disks. The SPs then map the host LUNs to the HBAs of the systems you want to assign them to, and you zone the switches to segregate your disk traffic.

This setup is incredibly important when you're doing virtualization with VMware (our vendor) or your vendor of choice. Each LUN mapping represents a single virtual machine's file system and can sit on any combination of disks you want. Further, you can duplicate the contents in real time to a secondary set of disks, either at your facility or at a remote facility, using an RPA.
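If it helps to picture it, here's a rough Python sketch of that four-path layout. The host, HBA, and switch names are made up; it's just to show that any single switch or HBA can drop and the host still reaches its LUNs:

# Illustrative only: the four host-to-fabric paths described above.
paths = [
    ("HBA0", 0, "SwitchA"),
    ("HBA0", 1, "SwitchB"),
    ("HBA1", 0, "SwitchA"),
    ("HBA1", 1, "SwitchB"),
]

def surviving_paths(failed_switch=None, failed_hba=None):
    # Paths that remain usable after a single component failure.
    return [p for p in paths if p[2] != failed_switch and p[0] != failed_hba]

for sw in ("SwitchA", "SwitchB"):
    assert surviving_paths(failed_switch=sw), "lost all paths with " + sw + " down"
for hba in ("HBA0", "HBA1"):
    assert surviving_paths(failed_hba=hba), "lost all paths with " + hba + " down"
print("any single switch or HBA can go offline and the host still reaches its disks")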
Now you have multiple physical systems all connected to each other as a single VM cluster. Because the host LUNs are mapped to multiple systems, they can all communicate with the disks. This lets you seamlessly move a virtual machine from system A to system B without shutting the virtual machine down. You can move all the VMs from one physical system to another without shutting them down, which lets you perform maintenance on or expansion of that physical system. All with zero downtime in your enterprise.
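A quick sketch of why the shared mapping is the key part. Hypothetical data, not any VMware API, just the idea that a VM can only move live between hosts that both see its LUN:

# Hypothetical LUN-to-host mapping, purely to illustrate the point.
lun_mapping = {
    "LUN_0": {"HostA", "HostB", "HostC"},  # shared datastore, visible to the whole cluster
    "LUN_1": {"HostA"},                    # mapped to a single host, DAS-style
}

def can_live_migrate(vm_lun, src_host, dst_host):
    # A VM can move without shutting down only if both hosts see its LUN.
    hosts = lun_mapping.get(vm_lun, set())
    return src_host in hosts and dst_host in hosts

print(can_live_migrate("LUN_0", "HostA", "HostB"))  # True: shared fabric storage
print(can_live_migrate("LUN_1", "HostA", "HostB"))  # False: only HostA can run it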
None of that cool magic is possible with DAS; you need a shared storage fabric to do it.
Also remember, disks have a finite speed at which they communicate. It makes no sense to give each disk 1.0 Gbps (approx. 100 MB/s) when a 15K 2.5-inch HDD tops out at 80 MB/s and sustains 40-60 MB/s. So unless we're talking SSD, which is a different beast in and of itself, no amount of bandwidth will make an HDD faster. You could put a single HDD on a 10GFC optic connection and it would still be limited to its 40-60 MB/s. Plus we already have ridiculous bandwidth now: 16GFC x 4, after encoding, gets you about 7 to 7.8 GB/s from SP to fabric switch, and 8GFC x 4 gets you half that from switch to host system. You'd need a mainframe or something like a Sun M9000 to consume that much storage bandwidth.
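Quick back-of-the-envelope in Python using the rough figures above (my approximations, not vendor specs), showing how many spinning disks it would take to actually fill those links:

# Rough numbers quoted above, not exact specs.
hdd_sustained_mb_s = 50        # midpoint of the 40-60 MB/s sustained range
sp_to_switch_gb_s = 7.0        # ~16GFC x 4 after encoding, SP to fabric switch
switch_to_host_gb_s = 3.5      # ~8GFC x 4, about half of that, switch to host

def disks_to_saturate(link_gb_s):
    # How many spinning disks it takes to fill the link at sustained rates.
    return int(link_gb_s * 1000 / hdd_sustained_mb_s)

print(disks_to_saturate(sp_to_switch_gb_s))    # ~140 HDDs to fill SP -> switch
print(disks_to_saturate(switch_to_host_gb_s))  # ~70 HDDs to fill switch -> host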