Can a NAS on a 10Gbps Ethernet connection compete with SATA 3?

Hello. I am reaching the end of my planning phase for my upcoming PC build. My current intention is to have a Samsung 950 Pro M.2 card as my OS and frequently used application storage, and to have all other data stored on a separate machine I'm also planning to build shortly after to act as the household NAS, likely fitted with WD Reds (and/or some SSDs).

My question is: how will read/write speeds compare between a 10Gbps Ethernet NAS (which would use SATA 3 or SAS internally) with HDDs, and the more mainstream setup of internal SATA 3 (6Gbps) HDDs?

Additionally, what about using solid state drives instead of mechanical hard disks; does the answer change?


The drives (hard disk or solid state) in both cases would be equivalent, and on paper the 10Gbps Ethernet link should be at least as fast as 6Gbps SATA 3, since both the NAS and the PC ultimately rely on a SATA connection; the only difference is where in the system it sits. In practice, can someone attest to the truth of this? I'd love to put all of my normal data on a NAS rather than give each PC its own main storage, but I am not particularly patient, especially when it comes to data transfer...
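For reference, here's the back-of-envelope line-rate math I'm working from, as a quick Python sketch (raw interface numbers only; real-world throughput will be lower once protocol overhead is counted):

```python
# Rough line-rate math for the two interfaces being compared (a sketch;
# actual throughput is lower after protocol overhead).

def line_rate_mb_s(gbps, bits_per_byte=8):
    """Raw line rate in MB/s for a link speed given in Gbit/s."""
    return gbps * 1000 / bits_per_byte

# 10GbE: ~1250 MB/s raw; expect roughly 1000-1200 MB/s after TCP/IP overhead.
print(f"10GbE raw:     {line_rate_mb_s(10):.0f} MB/s")

# SATA 3 signals at 6 Gbit/s but uses 8b/10b encoding (10 line bits per
# data byte), so its usable ceiling is ~600 MB/s.
print(f"SATA 3 usable: {line_rate_mb_s(6, bits_per_byte=10):.0f} MB/s")
```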

Thanks. If any other details would be helpful, I'd be happy to provide them.
 
With a 10Gb connection your transfer speeds will be, at best, around 1000MB/s. I run a gigabit NAS in much the same setup you're describing, so I only get ~100MB/s transfers. With a mechanical disk you will have no bandwidth bottleneck, just a slight bit of added latency (1-3ms, depending on the length and quality of the cable; mine is 1ms), unlike my setup, which pinches off some of my HDDs' sequential read/write bandwidth.

If you go SSD it gets more complicated. If you just pool them as a large extended volume, you should get their full speed and I/O capability, again with a small latency hit for being on a remote machine. But if you put the SSDs in RAID 0 (or any RAID level that scales speed with drive count), you could hit a bottleneck: two SATA 6Gb/s SSDs in RAID 0 could, in theory, saturate the network connection.
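If it helps, here's that reasoning as a quick Python sketch; the drive and link figures are typical ballpark numbers, not measurements of any particular hardware:

```python
# Saturation check: does a striped (RAID 0) set of SSDs outrun the network
# link? Figures are typical ballpark numbers, not measurements.

NET_LINK_MB_S = {"1GbE": 110, "10GbE": 1000}  # realistic usable throughput
PER_SSD_MB_S = 540  # best-case sequential speed of a SATA 3 SSD

def striped_throughput(per_drive_mb_s, n_drives):
    """Ideal RAID 0 sequential throughput scales linearly with drive count."""
    return per_drive_mb_s * n_drives

for n in (1, 2, 3):
    ssd = striped_throughput(PER_SSD_MB_S, n)
    for link, cap in NET_LINK_MB_S.items():
        verdict = "network bottleneck" if ssd > cap else "network keeps up"
        print(f"{n}x SSD RAID 0 ({ssd} MB/s) over {link} ({cap} MB/s): {verdict}")
```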
 
The HDD speeds you list are INTERFACE speeds, not actual HDD speeds. 10Gbps will far exceed anything HDDs will ever be capable of, even in RAID 0.
Furthermore, most SSDs aren't getting anywhere near that either; you're talking about ~500MB/s.
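To illustrate the gap, here's a rough sketch with typical ballpark figures (illustrative, not benchmarks of any specific model):

```python
# Interface ceiling vs. what the device can actually sustain (typical
# ballpark figures for illustration, not benchmarks of specific drives).
drives = {
    # name: (interface ceiling MB/s, typical sustained sequential MB/s)
    "7200rpm HDD":  (600, 180),
    "SATA 3 SSD":   (600, 540),
    "NVMe M.2 SSD": (3500, 2500),  # PCIe 3.0 x4 class reads; writes are lower
}
for name, (iface, sustained) in drives.items():
    print(f"{name}: interface {iface} MB/s, sustains ~{sustained} MB/s "
          f"({sustained / iface:.0%} of the interface ceiling)")
```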

Every PC needs its own local storage of course, but bulky data can be stored on a central NAS and accessed by everyone. Depending on the data speed requirements and redundancy needs, there are loads of suitable options.

At work I run two NAS bays: one with an SSD for low-latency PDF files (it's fast enough to preview files as if they were stored locally), and another, HDD-based NAS for backups.

At home I have a media NAS for movies etc. I'm the only one pulling data from it, but depending on the file, multiple users could reasonably access it at the same time. Obviously three people each watching a different 4K movie at the same time will probably exceed the CPU and HDD read speed.
 


If he goes HDD you're right, but if he put SSDs in RAID 0 (say two or more) he could well max out the connection to his NAS. SATA 3 is good for up to 6Gb/s, and two of those exceeds a 10Gb/s network connection, though the best SATA SSDs usually max out around 540MB/s.

Generally speaking, though, he should be fine as long as he is pooling SSDs, not putting them in RAID 0/3/4 etc.
 


Is that small latency actually noticeable? I'd think not, but I wouldn't know.

And I'd probably create a RAID 50 array with either WD Red HDDs or Intel SSDs. So, if using SSDs, I'd agree: if I get roughly 1000MB/s of network bandwidth and have, say, 1.2GB/s of throughput on the SSD RAID, then a fair but not severe bottleneck will exist, which adds to the reasons I'll probably use mostly HDDs, with only a couple of SSDs for high-frequency/high-demand files.
 
So just to clarify, are you talking about just storing files on the NAS? You won't be trying to run games or programs installed on the NAS?

Do you do anything like video editing or high-level photo editing (i.e., you do it regularly and use advanced features)? For those things latency *can* matter, and throughput does matter, so local storage is preferred.

For everything else though (photos, videos, music, documents, etc.), you won't even notice the NAS. Honestly, for all of that there's no way it's even worth going 10Gbps at this stage. 1Gbps gives you >100MB/s of throughput, which is loads unless you're regularly working with massive files.

I run everything off a NAS (1Gbps) and regularly copy video files around that are 4GB or more. It's honestly seamless, like it's locally connected.
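To put numbers on that, here's a rough transfer-time sketch (assuming ~110MB/s usable on gigabit and ~1000MB/s on 10GbE, with disks that can keep up on both ends):

```python
# Transfer-time sketch at realistic usable link rates (assumes the disks
# on both ends can keep up).

def transfer_seconds(file_gb, link_mb_s):
    """Seconds to move file_gb gigabytes at link_mb_s megabytes/second."""
    return file_gb * 1000 / link_mb_s

for link, mb_s in (("1GbE", 110), ("10GbE", 1000)):
    print(f"4GB file over {link}: ~{transfer_seconds(4, mb_s):.0f} s")
# -> roughly 36 s on gigabit vs 4 s on 10GbE; both fine for occasional copies.
```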

For sure there are edge cases that can justify a 10Gbps NAS, local SSDs, or Thunderbolt storage, but you'd need a very demanding workload to even notice, let alone benefit. If I were editing 4K video for a living, I would go with large SSDs, Thunderbolt, or, if sharing were required, a 10Gbps NAS with SSD caching. But apart from that, I'm honestly struggling to see a use case that could justify that kind of investment for a small household/business.

Make sure any cabling you fit or purchase is capable of 10Gbps for the future, because you don't want to have to re-run anything if the cost-benefit ratio makes more sense a few years down the track. Other than that, you're very likely absolutely fine with a standard gigabit network, IMHO.
 



Most applications/games (at least those that'd benefit from less latency) would live locally, but most data would reside on the NAS.


Also, I have bulk Cat 6 cable and 10GBASE-T switches to use. The NICs would be all that'd keep me from 10GBASE-T if I didn't want to spend the extra, but I see where you're coming from in terms of what is practical. Thanks for the insight.
 
Yeah, absolutely.

RAID 0 two decent SSDs and their theoretical speed will equal or exceed 10Gbps, assuming you actually get the maximum theoretical SSD speed all the time.

Bit of a long-shot scenario, though. A home NAS usually needs decent capacity first and foremost. For the cost of 1TB worth of SSDs in RAID 0, you could probably buy a barebones PC with more storage, or a decent NAS bay with many TB, and save $$, all while serving the intended purpose just fine.
 


No, for the most part the latency is not noticeable. I even have my NAS server set up to run some programs I rarely use remotely. For most programs it won't be an issue, but some, like photo editing and some content-creation software, will have a noticeable lag. For games and whatnot it should be fine.
 
Also, just FYI, I am custom-building a second PC for my NAS (basically a low-tier computer with lots of storage and FreeNAS), rather than purchasing a prebuilt enclosure.

So let me consolidate the feedback you all have given and my new understanding into a better-defined final question. Understanding that...

- games/applications that require intensive, instantaneous data transfers could take a small hit from the latency of external storage
- SSD RAID 0 setups *could* create bottlenecks depending on the situation
- interface speeds are greater than actual disk speeds

Would it be reasonable to have only a single M.2 drive in my PC for wicked-fast access to the OS and many of my frequently used and/or latency-sensitive applications, and to put eeeeeverythinnnngg else on the NAS, which would consist of WD Reds in RAID 50, all over a 10-gigabit (just for the fun of it) Ethernet connection?
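For reference, here's how the usable capacity of that RAID 50 array would tally up; the drive counts and sizes below are placeholders, not a final plan:

```python
# Usable-capacity arithmetic for RAID 50 (RAID 5 groups striped together):
# each RAID 5 group gives up one drive's worth of space to parity.
# Drive counts and sizes here are placeholders, not a recommendation.

def raid50_usable_tb(groups, drives_per_group, drive_tb):
    assert drives_per_group >= 3, "RAID 5 needs at least 3 drives per group"
    return groups * (drives_per_group - 1) * drive_tb

# e.g. six 4TB WD Reds arranged as two 3-drive RAID 5 groups striped together:
print(raid50_usable_tb(groups=2, drives_per_group=3, drive_tb=4), "TB usable")
# -> 16 TB usable of 24 TB raw; tolerates one drive failure per group.
```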
 

If you already have the 10Gbps switch, which is a big chunk of the cost, then I can see why it might make sense.

You're talking data though, so I'd suggest you're just fine with whatever. If I was in your shoes I would honestly start out with HDDs and 1Gbps and see how you go. I think you'll find it's absolutely fine. Give yourself capacity to upgrade, add an SSD cache and 10Gbps card to the NAS and desktops if you find you want them down the track.

Good luck.
 
Solution


I see no reason why not. It's what I do now with a gigabit NAS (Win 10 based) with no issues.
 


If you're going FreeNAS, make sure you use ZFS (i.e., RAID-Z rather than hardware RAID) and have plenty of RAM.

For me, a NAS is for files, while local storage is for applications and games. I personally wouldn't want applications and games running over a NAS. While that used to be about performance, now I'd argue it on reliability grounds. Some programs crash because they expect a local disk rather than a mapped network drive. I've also hit issues with network drives mapped to my user account: when a program requests admin privileges (required for an update, for example), it crashes because the "admin" version of my account doesn't have the mapped drive. There are workarounds for those sorts of issues, but it's time-consuming and frustrating, and just not worth it IMHO.

My advice:
- All data on your FreeNAS array.
- OS and as many programs and games as you can comfortably fit on your super-fast 950 Pro.
- If you have additional games/programs that can't fit on the 950 Pro, get yourself a larger, cheaper SSD (or even an HDD if your budget is tight) for your desktop and install them there.
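As a rough planning aid, here's a sketch of RAID-Z usable space plus the common community rule of thumb of roughly 1GB of RAM per TB of pool (a guideline, not a hard requirement; the drive counts are hypothetical):

```python
# FreeNAS/ZFS sizing sketch. The "~1GB of RAM per TB of pool" figure is a
# community rule of thumb, not a hard requirement; treat these as rough
# planning numbers only.

def raidz_usable_tb(n_drives, drive_tb, parity=1):
    """Usable space in one RAID-Z vdev: the parity drives' worth is lost."""
    return (n_drives - parity) * drive_tb

def suggested_ram_gb(pool_tb, base_gb=8):
    """Baseline RAM plus roughly 1GB per TB of pool (rule of thumb)."""
    return base_gb + pool_tb

usable = raidz_usable_tb(n_drives=6, drive_tb=4, parity=2)  # RAID-Z2 example
print(f"6x 4TB in RAID-Z2: {usable} TB usable, "
      f"plan for ~{suggested_ram_gb(usable)} GB RAM")
```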
 


Sounds reasonable to me.
 
I am contemplating doing the same: structuring one barebones PC to function as a NAS for two high-end PCs, and the idea of using FreeNAS sounds really good, with everything wired together through a 10Gbps switch and 10Gbps NICs. However, after thinking about it for a while, and taking into account the decreasing cost of SSDs, it seems like it's a better idea to add local SSDs to each machine instead, e.g., an 850 EVO/PRO 1TB.

The justification for a centralized NAS-type solution seems to make more sense when there are a plethora of users and/or devices; it doesn't really make sense for my individual use. However, who said it had to make sense? 🙂 The cool factor of setting it up makes it all worthwhile! Once built, I'm certain I could rationalize it to perfection.