QNAP TVS-863+ 8-Bay NAS Review

If it weren't for the price (expensive, though justifiable) I would snap one up. It seems to be a great option for photo/video storage and playback, and if you have a 10GbE network, even photo editing from it is going to feel snappy!
 
Guest
"iSCSI is an amazing technology that allows users to mount a volume to a host computer and have it control the volume as a local drive. You can even set the computer up to boot from the iSCSI share, just like a SAN."

A massive over-simplification which is almost up there with "I want to buy an internet for my PC". It's not a technology, it's a protocol which runs over dead-basic Ethernet connectivity. The technology is "Ethernet", not iSCSI.

You can't boot ALL computers from an iSCSI-mounted volume; booting only works if your NIC supports it, and most integrated NICs don't.
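(The non-boot part of the quoted claim is real enough, though. On a Linux client, "mount a volume and control it as a local drive" boils down to two open-iscsi commands. A minimal sketch driving them from Python; the portal address and IQN are made-up placeholders, though QNAP targets do use the iqn.2004-04.com.qnap prefix.)

```python
import subprocess

# Hypothetical values: substitute your NAS portal and the IQN it
# reports for the LUN you created in the QTS iSCSI wizard.
PORTAL = "192.168.1.50:3260"
IQN = "iqn.2004-04.com.qnap:tvs-863:iscsi.example.lun0"

def run(cmd):
    """Echo a command, run it, and raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Discover the targets the portal advertises.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# Log in. The kernel then exposes the LUN as a plain block device
# (e.g. /dev/sdX) that can be partitioned and formatted like a local
# disk, which is exactly why it behaves differently from an SMB/NFS share.
run(["iscsiadm", "-m", "node", "-T", IQN, "-p", PORTAL, "--login"])
```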

The "Con" of only having a single 10GbE interface isn't really a con for this type of device - if you need dual 10GbE then it's more likely to be for path diversity than performance, in which case you'll be wanting multiple switches and you're then into the realms of enterprise requirements, and if that's the case you wouldn't buy one of these in any case.
 
"It's not a technology, it's a protocol which runs over dead-basic Ethernet connectivity. The technology is "Ethernet", not iSCSI."

iSCSI is a technology, bridging two different protocols (SCSI and TCP/IP), and it doesn't have to run over Ethernet (though it most commonly does). Sure, it's not "network technology" in the sense of low-level protocols and physical devices, but it's just as much a separate technology as TCP/IP, TLS, etc. (i.e. not every technology has to serve the same purpose or be independent of the others).

"You can't boot ALL computers from an iSCSI mounted volume unless you NIC supports it - and most integrated NICs don't."
Pretty sure all newer vPro systems support it, and definitely anything with PRO series NIC from Intel (and of course server grade NICs). Considering this device is 10gigE, I don't think they meant consumer grade computers booting over it!

As for a single 10GbE port not being an issue, the only case where I think people would see it as one is a legacy network still running gigabit, in which case two teamed gigabit adapters would certainly still have a benefit. Other than legacy networks, you're right on the ball there.
 

CRamseyer
The comments really show just how far NAS systems have come. You can do so much with them. I wouldn't go as far as to say one size fits all (not even close), but a small, inexpensive system like this can easily serve 20 office systems over VDI.

Dual 10GbE is nice for redundancy in a large network, but I was referring to the performance increase relative to cost. A dual-port 10GbE NIC carries a very small price premium over a single-port one. QNAP sells both dual- and single-port 10GbE NICs but only offers the TVS-863+ with a single port.
 

nekromobo
Did you test the cache with one or two SSDs? With only one SSD you only get read acceleration; you need two SSDs to get read/write caching benefits.
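For anyone wondering why the second SSD matters: a read cache only holds copies of blocks that also exist on the HDD array, while a write cache holds dirty blocks that exist nowhere else until they're flushed, so it has to be mirrored. A toy model of the distinction (my own sketch, nothing to do with QNAP's actual implementation):

```python
class CachedVolume:
    """Toy model of why a write cache needs redundancy but a read cache doesn't."""

    def __init__(self, write_back=False):
        self.hdd = {}        # authoritative array storage
        self.ssd = {}        # cache contents
        self.dirty = set()   # blocks that exist ONLY on the SSD so far
        self.write_back = write_back

    def write(self, block, data):
        if self.write_back:
            # Acknowledged as soon as it hits the SSD; the HDD copy is
            # stale until a later flush. This is the risky window.
            self.ssd[block] = data
            self.dirty.add(block)
        else:
            # Read-cache-only setup: writes go straight to the array,
            # so the SSD never holds the sole copy of anything.
            self.hdd[block] = data

    def ssd_dies(self):
        lost = {b: self.ssd[b] for b in self.dirty}
        self.ssd.clear()
        self.dirty.clear()
        return lost

vol = CachedVolume(write_back=True)
vol.write(0, b"unsynced data")
print(vol.ssd_dies())  # {0: b'unsynced data'} -- hence the mirrored SSD pair
```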
 

willgart
"With HGST's new He8 drives with 8GB density, users can easily store up to 64GB of data. After RAID 6 overhead, that comes out to about 48GB of usable space with dual disk failure redundancy. "

pretty small ;-)
I prefer the other solutions where we talk about TB not GB... ;-)
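(Units aside, the arithmetic in that quote is easy to sanity-check: RAID 6 spends two members' worth of capacity on parity. A one-liner, with TB where the article said GB:)

```python
def raid6_usable(drives, size_tb):
    """Usable RAID 6 capacity: two drives' worth of space goes to parity."""
    assert drives >= 4, "RAID 6 needs at least four members"
    return (drives - 2) * size_tb

print(raid6_usable(8, 8))  # 48 -- eight 8TB He8 drives, ~48TB usable
print(raid6_usable(8, 2), raid6_usable(8, 4))  # 12 24 -- matches the 8-bay
                                               # RAID 6 arrays mentioned below
```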
 

SirGCal
"The TVS-863+ with eight drive bays is a little too large for most home theater installations"

WHAT? I currently have two 8-drive setups running RAID 6: a 12TB and a 24TB setup (2TB and 4TB drives respectively). And I'm almost full (89%). I have a large movie collection (all legal and no, you can't get any... ;-) and I also use about 8TB of that for (fake) work data storage. So I'm up to 28TB of movie and music storage that is just about full. I'd happily retire them for a single 48TB solution.

Although building them myself is far cheaper, the result isn't as small. This is the first unit I've seen that I'd actually consider buying. I'd definitely be excited to test it and see if it does everything else I need as well (it seems like it should).
 


Prices are live-updated; right now it's showing up at about $1,418, which is about right for an extra 8GB of LPDDR3. Not only does the math work out, it avoids Apple's "add $30 worth of memory and charge $200 for it" math. I guess the MSRP is actually $1,499, but everyone is selling it cheaper.
 
Remember this doesn't include hard disks, just the chassis itself. Mostly this is for people who either don't know how to build their own home storage solution or don't want to bother, because otherwise it's much cheaper to build your own. It's just regular Mini-ITX components in a special case.
 


Don't forget an easy-to-use OS with features not found in out-of-the-box Linux solutions. Back when Microsoft still sold its home server software it was easy to make an amazing home server, but nowadays you need to know Linux to even try, unless you get one of these systems.
 

CRamseyer
You beat me to the reply about the software stack. As for the off-the-shelf Mini-ITX comment, that is also false: the system uses proprietary hardware designed to reduce the footprint and optimize airflow. Building a system like this, with the display, one-touch copy and drive sleds, would be difficult at this price. Tack on the extensive software features, the warranty through one company and the support tying it all together.

I'm also sad to report that Windows is no longer a viable solution for file storage if you want decent bandwidth. We'll have a look at a few Windows-based systems in the coming weeks.

Around six months ago I had a spare dual-Xeon board and purchased a Supermicro 4U case with 36 3.5" drive bays and four internal 2.5" bays. I used 12Gb/s HBAs to build the arrays with Storage Spaces, and the performance was awful outside of cache. It didn't take long to get outside of the cache with sequential transfers, either. The system was connected to the network via 40GbE.
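For anyone who wants to reproduce that kind of falloff, the quick-and-dirty way is to stream a long sequential write and log per-chunk throughput; once the working set outruns the cache, the MB/s figure collapses. A rough Python sketch (the target path and sizes are placeholders for whatever array you're testing):

```python
import os, time

TARGET = "/mnt/array/throughput.bin"   # placeholder: a file on the array under test
CHUNK = 256 * 1024 * 1024              # 256 MiB per sample
TOTAL = 64 * 1024 * 1024 * 1024        # 64 GiB, enough to blow past most caches

buf = os.urandom(CHUNK)  # incompressible data, so compression can't flatter the numbers
written = 0
fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
try:
    while written < TOTAL:
        t0 = time.time()
        os.write(fd, buf)
        os.fsync(fd)  # force it out of the page cache so the timing is honest
        written += CHUNK
        mbps = (CHUNK / (1024 * 1024)) / (time.time() - t0)
        print(f"{written / 2**30:6.1f} GiB written: {mbps:8.1f} MB/s")
finally:
    os.close(fd)
    os.unlink(TARGET)
```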
 


A two-tier solution with SSD cache? It should provide several times more performance than this NAS, though I can see it being less viable than a dedicated Linux file server. Maybe with Server 2016's release and the container OS model it will do better than 2012 R2. Sadly, all I have in my control right now is an old workstation repurposed as a multipurpose server, so I'll just have to drool over your setups for a while to hold me over.
 

CRamseyer
There are other reports online of slow RAID 6 performance in Windows Server 2012 R2. I have another server here under test now that uses four 800GB SSDs for cache, and even that doesn't fix the write performance drop-off.
 


Everything that item does can also be done by a knowledgeable individual. There is absolutely nothing special about it; this is just regular stuff in a custom form factor. All they did was bolt an easy-to-use GUI on top of a BSD-based distro. It's going to be regular BSD ZFS for the underlying file system, with volumes created inside and then exported via NFS, SMB, or as a block device via iSCSI.

iSCSI isn't a very good solution; it's a "poor man's SAN" type of implementation for when the application doesn't warrant FC or a converged fabric. Unless someone is doing labs or absolutely needs block-level access across a network, it's best to stay away from it. Windows is fine at 1GbE, but the overhead from SMB and NTFS causes issues if you're trying to push higher bandwidth than that, and doing either NFS or iSCSI on a Windows platform is just begging for pain.

Seeing as they don't make 40GbE connectors, I assume you're talking about 4 x 10GbE ports? Are they active-active or active-passive? And with 802.3ad, are you using mode 2, 3 or 4 (mode 1 isn't used anymore unless you're doing a back-to-back configuration)? All of this has a pretty significant impact on how packets are handled, especially if you're trying to run a benchmark.
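For reference, on a Linux box the active-active vs. active-passive question is answered by the bonding driver's status file. A small sketch that pulls out the mode and per-slave link state (assumes the standard /proc/net/bonding layout and a bond named bond0):

```python
# Minimal check of a Linux bond's mode and member state; assumes the
# kernel bonding driver's /proc layout and a bond interface named bond0.
BOND = "/proc/net/bonding/bond0"

with open(BOND) as f:
    current_slave = None
    for line in f:
        line = line.strip()
        if line.startswith("Bonding Mode:"):
            # e.g. "IEEE 802.3ad Dynamic link aggregation" for mode 4
            print(line)
        elif line.startswith("Slave Interface:"):
            current_slave = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and current_slave:
            print(f"{current_slave}: {line.split(':', 1)[1].strip()}")
            current_slave = None
```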
 


To be honest, I haven't used RAID 6, only 1 and 5 (but mostly 1), plus the three primary modes of Storage Spaces (mostly mirror). Mirror vs. RAID 1 isn't too big a difference (though mirror is a heck of a lot easier to manage, especially if you can't source identical disks), but I did have issues with parity mode and only keep it for read-intensive file serving that needs the extra protection of a parity disk. I'll definitely be looking forward to that Server 2012 R2 assessment then!

Speaking of managing different-sized disks, how well does the TVS-863+ handle mixed sizes? Does it fall back to JBOD, or can it scale well while keeping RAID (say, two disks each of 1TB through 4TB: does it let you make a RAID 6 pool with all the disks and then have the 2TB left over for a RAID 1/0)? The article did have a tiny section saying a disk could be part of two pools, but never really explored it (then again, most people would just buy the maximum capacity they could afford and never look back).
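To put numbers on that mixed-size scenario: a classic RAID set is limited by its smallest member, so without pool-level tricks a lot of space gets stranded. A quick sketch of the arithmetic (this is the naive math, not how QTS actually carves its pools):

```python
def raid_usable(sizes_tb, parity_disks):
    """Classic RAID capacity: each member contributes only as much as the
    smallest one, and parity_disks-worth of that is overhead."""
    return (len(sizes_tb) - parity_disks) * min(sizes_tb)

disks = [1, 1, 2, 2, 3, 3, 4, 4]                 # two each of 1TB..4TB
usable = raid_usable(disks, parity_disks=2)      # RAID 6 across all eight
stranded = sum(s - min(disks) for s in disks)    # capacity above the 1TB floor
print(usable, stranded)  # 6 12 -- only 6TB usable, 12TB stranded unless the
                         # NAS can layer a second pool on the leftovers
```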
 

CRamseyer


Actually, they do make 40GbE adapters, and they're not new. I have several from Mellanox (ConnectX-3, which can also do 56GbE), Intel (XL710) and Supermicro (XL710-based). I also have some new 100GbE adapters on the way.

The QNAP system is not based on a BSD distro and does not use a ZFS file system.

As for iSCSI being a poor man's SAN: we're talking about an $1,100 storage appliance here, not a configuration where you need an $1,100 switch, a handful of $500 cards, $100 GBICs and a spool of fiber. The TVS-863+ costs just a little more than the tools needed to cut and terminate fiber connections.
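Rough numbers, in case anyone wants to check that comparison (the host count and per-part prices below are ballpark figures from this post, not quotes):

```python
# Back-of-the-envelope FC fabric cost vs. the iSCSI appliance, using the
# rough part prices above; four hosts is a made-up example.
fc_switch, fc_hba, gbic, hosts = 1100, 500, 100, 4

# One HBA per host, plus a GBIC at each end of every host link.
fc_total = fc_switch + hosts * (fc_hba + 2 * gbic)
print(f"FC fabric for {hosts} hosts: ~${fc_total}, before fiber and tooling")
print("vs. the appliance at ~$1,100, speaking iSCSI over existing Ethernet")
```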

 

CRamseyer


The system can use mixed-density drives without issue. QTS supports storage pools, so you can mix and match your volumes as you choose.

I plan to write a detailed article for each of the major NAS players. Each article will show the software interface and features, as well as building arrays, supported RAID types, third-party applications, application installation and so on. I can then link back to them in each review.

If I covered every angle of a NAS appliance, the review would be three times as long as this one... just to cover the software.

 


Just tore through one of these, and they're based on a customized Linux distro using LVM, so it's actually worse than BSD ZFS in terms of capability. It's a Mini-ITX board mounted sideways with a 10GbE PCIe adapter plugged into its slot. There is an active community of people who load Ubuntu, RHEL or even a "NAS" variant of BSD onto these devices.

I looked into the 40GbE and I'll be damned, someone actually made them. The spec allows for them, but since 100GbE was specified at the same time I figured people would just jump from 10 to 100 and not muddle around in between, especially since cheap 40GBASE-T won't be standardized until next year.

I mention iSCSI because, if this is aimed at the "home office" market, what on earth would need remote block-level access to a storage volume? The performance impact of trying to run it over a home gigabit network is so bad, and the price of disks so cheap, that it makes zero sense. That's why it's largely relegated to lab environments where the user is doing some sort of project or module and needs a cheap way to simulate an "enterprise"-like environment.

This device, and others like it, is very expensive for what it offers. It seems more a collection of bullet-point features where the goal is to have more +1's than the competition. There is no denying the market for home automation is expanding as our lives become more and more integrated with technology, and many people either don't have the knowledge, or have the knowledge but not the time, to do it themselves. Or, in the case of branch offices of businesses, they want someone else legally liable for when/if something breaks. That is the market these are aimed at, because otherwise there is absolutely nothing special inside them, which is the point I made earlier. They are just file servers where someone else has built some really nice management software.
 