@issuemonkey
Glad to see Netapp paying attention. Your comments need some accuracy, however.
1) The caching: you could have 48GB of RAM or more, but that only lets your server keep up with the load of your I/Os and nothing more. It takes real caching to properly handle repeated requests for the same data, such as specific Word or Excel files, or even a VMDK.
With proper caching, like what we have on our gear, you can cache the couple of VMDKs used to boot-storm a full stack of virtual desktops... read them once from your disks, then serve them from fast SSD cache.
response: Actually you're wrong. The reason you have a separate "cache" product is that your file system doesn't handle it natively, and you charge out the nose for it as a separate product. The RAM, or ARC (tier 1) cache, works for both reads and writes. The next layer is split in two, with the write cache separate from the read cache, both on SSD or DRAM caching devices (tier 2). Between them, these handle the workload I/O for the different application sets you described. This removes the barrier of having to tier your disks to handle load, so your zpools can be sized to handle the working set presented to the ESX cluster or application X. Your solution does this as well, with expensive PAM cards and Flash Cache, both costing more and neither included in the initial price of the array. Even then, PAM isn't as fast as the native RAM cache that Nexenta provides.
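For anyone following along at home, here's roughly what that layering looks like on a ZFS box. Just a sketch; the pool name "tank" and the device names below are made up:

    # ARC (tier 1) lives in RAM and needs no configuration; on an
    # illumos-based system like NexentaStor you can watch its hit/miss
    # counters with kstat:
    kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses

    # Add an SSD as a tier-2 read cache (L2ARC) with one command:
    zpool add tank cache c1t2d0

No separate "cache product" to license; it's all part of the file system.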
2) Looking at the pictures, the RAID adapters are not battery-backed, so does that mean there is no protection for your data if a controller is lost?
response: No. Please read about ZFS.
It is good to have dual "servers" to protect against failure, but if the last writes to your DB are lost... basically, you run into trouble. This is the equivalent of a FAS, not the servers attaching to it.
Enterprise storage uses an interconnect card between the controllers with some cache, which we call NVRAM; if one controller goes down, this battery-backed cache is accessed by the remaining node to de-stage the pending data to disk.
response: Agreed. This is called the ZFS intent log, or ZIL; it sits on the shared trays of storage and can fail over to the other head. We do that too, and we also think it's important. Anyone who doesn't do this shouldn't be in the business of enterprise storage.
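Concretely, you mirror the log device across the shared JBODs so either head can pick it up. A rough sketch (again, the pool and device names are hypothetical):

    # Put the ZIL on a mirrored SSD pair sitting in the shared trays:
    zpool add tank log mirror c3t0d0 c3t1d0

    # Confirm the log vdev is attached and healthy:
    zpool status tank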
3) They do speak about a failover mechanism, which is also scary: is it done automatically, in a way that is transparent to the different protocols?
response: Why is this scary? Does it scare you how Netapp does it? Yes, it's transparent. And yes, you're trollolololing. If you really want to see how enterprise failover works at scale, Nexenta has 45-day free trials; I'd be glad to show you why we outperform Netapp day in and day out.
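There's no magic to demystify, either: with ZFS on shared storage, a head failover boils down to the surviving node importing the pool and resuming service. The HA software automates this (and handles fencing), but the core of it, shown by hand with a hypothetical pool named "tank", is just:

    # On the surviving head; -f is needed because the dead head never
    # got the chance to export the pool cleanly:
    zpool import -f tank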
4) There is no concept of tiering, of performance, or of workload type. This kind of setup will not fit everyone...
response: ZFS doesn't need special tiering software or hardware because tiering is baked natively into the file system. Please read about ZFS before making crazy comments. Netapp doesn't give me software that I can run on commodity hardware. That's not bad in itself, but the fact that Nexenta doesn't need expensive PAM cards or ridiculous "fast cache" products that cost piles extra doesn't mean it isn't solving EXACTLY the same problems. Again, I have to point out: we do it in the file system itself, not with crazy bolt-on stuff. Not every enterprise array works for every scenario, Netapp included. The fact is, though, with a software-oriented Open Storage solution like Nexenta, I can take my software to other hardware if I need to. The same can't be said for Netapp, where I'm stuck with an overpriced solution and overpriced support.
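To make that concrete: one zpool natively carries all the "tiers" (data vdevs, log, cache), and because it's only software, the pool travels with the disks. Another hedged sketch with a hypothetical pool named "tank":

    # Watch data vdevs, log (ZIL), and cache (L2ARC) serving I/O from a
    # single pool, refreshing every 5 seconds; no bolt-on tiering here:
    zpool iostat -v tank 5

    # Moving to new commodity hardware is just an export and an import:
    zpool export tank    # on the old box
    zpool import tank    # on the new box, once the shelves are moved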