Fibre Channel-like or iSCSI-like storage using USB 3.0?

Gordon Fecyk

I was looking at Intel NUC boxes as ESXi hosts to replace my Microserver. With vSphere Essentials being relatively cheap, I could get as many as six NUCs and use small USB devices to boot ESXi from, then use external storage of some kind to run my virtual machines from.

The NUCs are dreadfully limited in network connectivity, though, with only a single 1 Gb LAN port each. I'd have to run all of my networking and storage through that one port. To do this properly, I'd want to have at least four NICs per host; two for LAN and two for storage. Then I looked at the number of USB 3.0 ports on these things (four) and wondered if I could use these instead. Yes, I could get 1 Gb USB NICs, but that'd limit the bandwidth from 5 Gb to 1 Gb each. How about shared storage over USB 3.0?
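Just to make the port-budget gap concrete, here's a rough back-of-the-envelope sketch (the 5 Gb figure is the raw USB 3.0 signalling rate, not anything I've measured):

```python
# Back-of-the-envelope port/bandwidth budget for one NUC host.
# Assumptions: 4 x 1 Gb/s NICs is the setup I'd want (2 x LAN, 2 x storage),
# versus the single onboard 1 Gb port, versus one USB 3.0 port at its raw
# 5 Gb/s signalling rate (before any encoding or protocol overhead).

GBIT = 1_000_000_000  # bits per second

desired_nics = 4 * GBIT   # 2 x LAN + 2 x storage at 1 Gb each
onboard_only = 1 * GBIT   # what the NUC actually ships with
usb3_raw     = 5 * GBIT   # raw rate of one USB 3.0 port

print(f"Wanted (4 x 1 GbE): {desired_nics / GBIT:.0f} Gb/s")
print(f"Onboard NIC only:   {onboard_only / GBIT:.0f} Gb/s")
print(f"One USB 3.0 port:   {usb3_raw / GBIT:.0f} Gb/s raw")
print(f"USB 1 GbE adapter:  1 Gb/s, i.e. {GBIT / usb3_raw:.0%} of that port's raw rate")
```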

Ideally, such a shared storage device would have two or more independent USB connections, and my dream device would support a single host connecting to it with two or more cables, each connection looking like a unique storage host adapter.

This can't be done with off-the-shelf parts as far as I can tell from searching, and ESXi does not currently support using USB devices for datastores. But maybe I can inspire some vendor to develop something. Or if not USB, maybe Thunderbolt?

There may be other small PCs that have two or more NICs, or have 10 Gb NICs that I can use iSCSI with; that can work too.

[Attached diagram: hypotheticalusbsharedstorage.png]
 

DataMedic

I guess I'm just not clear on why you'd want to do this. Part of the beauty of VMs is that you get to share higher-end resources across several virtual machines. Why not just get one good server for the cost of the NUCs and run all your VMs from there? Then use the NUCs or other cheap computers as terminals.
 

Gordon Fecyk


I spent the last few years administering a VM environment with multiple hosts and shared storage, and I've come to appreciate having several smaller, disposable PCs as VM hosts. So I wondered what it would take to replicate this environment at a hobbyist level. My own application, a Windows domain and attendant services, does not require a lot of computing power. I also have to watch out for power consumption, and NUC-like PCs are almost perfect for this.

But the question wasn't about why I'd use NUCs as VM hosts; it was whether USB-based shared storage is a possibility. USB 3.0 is supposed to support 5 Gb/sec of bandwidth; compare that with Fibre Channel, which supports 2, 4 or 8 Gb/sec but is way overpriced for any hobbyist application.
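For a rough sense of how those line rates compare once encoding overhead is taken off the top (I'm assuming 8b/10b coding for USB 3.0 and those FC speeds; real throughput will be lower still once the storage protocol is added):

```python
# Rough usable-throughput comparison; my assumptions, not benchmarks.
# USB 3.0 and 2/4/8 Gb Fibre Channel all use 8b/10b line coding, so only
# 80% of the raw signalling rate carries data. Real numbers will be lower
# once the storage protocol itself (iSCSI, FCP, UAS...) is layered on top.

def usable_mb_s(raw_gbaud, coding_efficiency=0.8):
    """Raw line rate in Gbaud -> rough usable payload in MB/s."""
    return raw_gbaud * 1e9 * coding_efficiency / 8 / 1e6

links = {
    "1 GbE (iSCSI)": 1.25,   # 1.25 Gbaud on the wire -> 1 Gb/s of data
    "2 Gb FC":       2.125,
    "4 Gb FC":       4.25,
    "8 Gb FC":       8.5,
    "USB 3.0":       5.0,
}

for name, raw in links.items():
    print(f"{name:14s} ~{usable_mb_s(raw):4.0f} MB/s before protocol overhead")
```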

If USB shared storage isn't possible, then I'd just do this in iSCSI and get USB 1 Gb NICs. But that would limit the USB bus to 1/5th of its capable bandwidth. Even a USB 3 to "10Gb" adapter that maxed out at 5 Gb would be better.
--
 

DataMedic

After reviewing the specs of the newer i3 and i5 NUCs, it looks like they now have a SATA port on the motherboard. It would be simple enough to cut a small hole and run out a SATA-to-eSATA cable so you can connect an external dock or enclosure.

You can see the port on these pictures: http://www.hardwarezone.com.sg/review-intel-nuc-kit-d54250wyk-haswell-comes-nuc

Unless you're suggesting that you want all the NUCs to have access to the same storage, in which case I think iSCSI is really the only option.
 

Gordon Fecyk


Shared storage is a necessity for any cluster functionality in vSphere, even in Essentials. So yes.

Using the SATA port would only work if SATA could be presented as some kind of networked storage, which may be possible (SATA to SAS, maybe), but after a cursory search it seems way too complex for a hobby platform.

It looks like my best bet is to somehow add additional 1 Gb NICs to the VM hosts and go iSCSI. This would apply regardless of what I use for hosts or what my external storage would be. There's a large selection of USB 3.0 NICs out there, including some dual NIC devices, so maybe I can make up lost bandwidth using MPIO or something.
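A rough guess at what multipathing over several USB 1 Gb NICs could claw back (the per-path efficiency is my assumption, and round-robin MPIO rarely sums perfectly for a single LUN):

```python
# Rough estimate of what several USB 1 GbE NICs plus iSCSI multipathing
# might claw back. Purely illustrative: path_efficiency is my guess at the
# combined USB/TCP/iSCSI overhead per path, and round-robin MPIO rarely
# sums paths perfectly for a single LUN anyway.

def mpio_estimate_gbit(paths, per_path_gbit=1.0, path_efficiency=0.85):
    return paths * per_path_gbit * path_efficiency

for paths in (1, 2, 3, 4):
    print(f"{paths} x 1 GbE paths: ~{mpio_estimate_gbit(paths):.1f} Gb/s aggregate")
```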

If someone ever enables Fibre Channel protocol over USB 3 (FCoUSB anyone?), or can emulate a FC HBA over USB 3, or makes a "10Gig" USB 3 NIC, the makers of real FC gear are going to have tough competition... :)
--
 

Gordon Fecyk

OK, let me toss one more wild idea out there for vendors to possibly steal and build. And if you're a vendor who makes this, just give me one or two for free and we'll call it even. ;-)

My worst experiences with iSCSI as shared storage come from the low bandwidth of 1 Gb Ethernet, compared to 3 to 6 Gb/sec on SATA or 8 Gb/sec on Fibre Channel, along with the small frame size of 1500 bytes. I've tried improving on this with multiple paths (MPIO) and jumbo frames (9000 bytes) without success. I can always buy or build a faster NAS box, but I can't improve on the 1 Gb bandwidth. I can't find 10 Gb adapters that will fit in a NUC-like device. And FC over USB doesn't exist yet.
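For what it's worth, the frame-size math suggests jumbo frames were never going to buy much on their own; the real ceiling is the 1 Gb link rate itself. A quick sketch with assumed header sizes:

```python
# Why jumbo frames only help so much: per-frame header overhead on the wire.
# Counting Ethernet preamble/SFD, header, FCS and inter-frame gap plus the
# IP and TCP headers; the iSCSI PDU header adds a little more on top but is
# amortised across frames, so I leave it out. Illustrative only.

ETH_OVERHEAD = 7 + 1 + 14 + 4 + 12   # preamble, SFD, MAC header, FCS, IFG (bytes)
IP_TCP = 20 + 20                     # IPv4 + TCP headers, no options

def payload_efficiency(mtu):
    payload = mtu - IP_TCP
    on_wire = mtu + ETH_OVERHEAD
    return payload / on_wire

for mtu in (1500, 9000):
    eff = payload_efficiency(mtu)
    print(f"MTU {mtu}: ~{eff:.1%} of the link usable, "
          f"~{eff * 125:.0f} MB/s best case on 1 GbE")
```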

How about Ethernet over USB? Yes, such a thing exists, even if it's still in the development stages. To make it work with current NAS devices, though, you'd make a USB "switch" that looks like an Ethernet NIC to each host it connects to and then behaves like an Ethernet switch between them.

[Attached diagram: hypotheticalusbswitch.png]

Ideally I'd like this fantasy switch to support jumbo frames and VLANs, but it'd be enough to start with an unmanaged switch that can handle 5 Gb/second. I'd then use a pair of these to provide redundant connections between my VM hosts and NAS, and use iSCSI.
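Back-of-the-envelope again, just to show what a pair of these would buy over dual 1 GbE today (the 0.8 efficiency factor is a guess covering encoding and protocol overhead):

```python
# Compare two redundant 1 GbE iSCSI paths (buildable today) with two
# hypothetical 5 Gb/s USB "switch" paths. The 0.8 efficiency factor is an
# assumed catch-all for encoding plus protocol overhead, not a measurement.

def usable_gbit(paths, per_path_gbit, efficiency=0.8):
    return paths * per_path_gbit * efficiency

today   = usable_gbit(2, 1.0)   # dual 1 GbE with round-robin MPIO
fantasy = usable_gbit(2, 5.0)   # dual hypothetical USB 3.0 "switches"

print(f"2 x 1 GbE paths:      ~{today:.1f} Gb/s usable")
print(f"2 x 5 Gb/s USB paths: ~{fantasy:.1f} Gb/s usable ({fantasy / today:.0f}x the headroom)")
```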

It's just... we have this bandwidth; it'd be a shame to waste it on 1 Gb NICs.
--
 

DataMedic

It's been my experience that when you begin layering protocols for VM storage, you take a major hit performance-wise due to the overhead. USB 3.0 might be a lot faster than Ethernet for sequential reads, but it will get killed in 4K random access because it's running the ATA command set over the USB command set. If you throw in an Ethernet protocol you're now converting the communication three times, which will result in major latency issues for small reads and writes.
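To put rough numbers on that, here's a toy model of 4K reads at queue depth 1, where each extra protocol translation just adds latency (all figures are made up to show the shape of the problem, not measurements):

```python
# Toy model: 4K random reads, one outstanding command at a time (QD=1).
# Every figure below is an assumed ballpark chosen to show the shape of the
# problem (each translation layer adds latency), not a measurement.

BLOCK_BYTES = 4096

def qd1_throughput_mb_s(total_latency_us):
    ios_per_second = 1_000_000 / total_latency_us
    return ios_per_second * BLOCK_BYTES / 1e6

DEVICE_US = 100     # the drive itself (assumed)
PER_LAYER_US = 50   # each protocol translation, e.g. ATA-over-USB, then Ethernet (assumed)

for layers in (1, 2, 3):
    total = DEVICE_US + layers * PER_LAYER_US
    print(f"{layers} translation layer(s): ~{qd1_throughput_mb_s(total):.0f} MB/s of 4K reads at QD=1")
```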

I'm guessing this is the reason why ESXi doesn't support USB drives in the first place. They would be great for storing large files, but not for the thousands of random reads done by an OS (even if it is a virtual OS).

However, the newest USB revision now includes optimization for SCSI commands over USB (USB Attached SCSI, or UAS), so this might become possible in the future once it's better implemented. Using SCSI commands, it is theoretically possible to connect the same storage to multiple computers over USB, and the speed is vastly improved with the SCSI command set. I think the option you're looking for will ultimately be answered by a RAID enclosure that allows multiple computers to connect to it via USB.
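And a crude illustration of why the SCSI command set helps so much for small I/O: the old bulk-only transport handles one command at a time, while UAS can queue commands and overlap the latency. Assumed numbers again:

```python
# Little's-law sketch: throughput ~= outstanding commands / per-command latency.
# The bulk-only transport (BOT) keeps one command in flight; UAS allows a
# queue. Latency and queue depth here are assumed figures, and in practice
# the result is capped by the link and the drive long before the math says so.

BLOCK_BYTES = 4096
LATENCY_S = 0.00025   # assumed 250 microseconds per 4K command over USB

def throughput_mb_s(queue_depth):
    ios_per_second = queue_depth / LATENCY_S
    return ios_per_second * BLOCK_BYTES / 1e6

for qd, transport in ((1, "bulk-only transport"), (32, "UAS command queueing")):
    print(f"QD={qd:2d} ({transport}): ~{throughput_mb_s(qd):.0f} MB/s of 4K I/O")
```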

This article might be helpful: http://en.wikipedia.org/wiki/USB_Attached_SCSI
 
Solution

Gordon Fecyk


Argh, yes, didn't consider the latency hit each layer adds.


Considering the NUC is an Intel product, I suspect the support is there, waiting to be exploited. And Tom's did a review of UAS a little while ago. I wonder if it'd be good enough for an ESXi storage driver some day.

It looks like this approach with this hardware is not optimal yet. Maybe some other form factor that would let me use 10G PCIe NICs on the hosts would make more sense, like Mini-ITX.