Downside to using NAS drives for non-RAID desktop machines?

Stringtheory

Dec 3, 2014
I've read claims that NAS drives generally disable some error-checking in order to improve speed. The error-checking is apparently deemed unnecessary in RAID systems. Is there any truth to this? If so, are the disadvantages balanced out by the generally better MTBF specs of NAS drives?

The main drives I'm considering: HGST 8TB NAS vs. WD Gold or Black (not really spec'd for NAS, but with very high MTBF numbers). The application is general high-volume storage of video and other large files on a workstation. Throughput isn't really crucial.
 
Solution
NAS drives usually have a higher tolerance to vibration, so even if you don't intend to RAID, it might still be a better option to use NAS drives in a multi-drive configuration.

Seagate recommends their NAS drives for up to an 8-drive configuration and their NAS Pro drives for up to 16.
I assume it would be roughly the same for WD and their Red drives.
I think you have it the wrong way around: a NAS drive will usually give up some speed compared to a normal desktop drive in exchange for improved error recovery/checking and reliability.

If you are running multiple drives in one system, you are better off with NAS drives.
 


Thanks for your reply, Shady. I've heard the comment about some error-checking being disabled on NAS drives from a few sources, including tech support for one of the drive manufacturers. It didn't make complete sense to me, but the logic was that RAID already has the capability to correct errors, so speed was more important. OTOH, you'd think that data integrity would be even more important in server apps.
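
(For what it's worth, the feature that usually gets described this way is SCT Error Recovery Control, which WD markets as TLER and Seagate as ERC: NAS/RAID-oriented drives cap how long they'll spend retrying a bad sector so a RAID controller doesn't time them out and drop them from the array, while desktop drives keep retrying on their own. If you have smartmontools handy, you can check whether a drive exposes the setting. A minimal sketch, assuming smartctl is installed and /dev/sdX stands in for your actual drive:

import subprocess

# Query a drive's SCT Error Recovery Control settings via smartmontools.
# Needs root privileges; /dev/sdX is a placeholder for the real device.
result = subprocess.run(
    ["smartctl", "-l", "scterc", "/dev/sdX"],
    capture_output=True,
    text=True,
    check=False,
)
print(result.stdout)

A drive that reports the setting as disabled, or doesn't support the command at all, behaves like a plain desktop drive and will retry on its own for as long as it takes.)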

I've used NAS drives in non-RAID applications before with no problems. Usually HGST, due to their higher MTBF ratings at the time. But now WD's Gold series claims 2.5 million hour MTBF, so I was considering them.

Re your second comment:

Not sure if I follow that. I've got multiple drives running in all systems, but not configured as RAID. How does 'multiple drives' relate?

 
 
Solution
A few years ago, home hard drives were built better than they are now, and they also carried better warranties than the newer drives. The last two hard drive vendors, WD and Seagate, know that in a few years the old mechanical drives will be dead in retail PCs. Once the cost of SSDs drops to a certain point, people are going to build all-SSD rigs with just one drive. If I were building a rig today, I'd look at reviews to see which drives had the lowest DOA rates, even if that meant a server drive. Sometimes paying more for a higher-end server drive can be worth it if it's a work PC or one that's going to hold a lot of data. The question you have to ask yourself is how valuable your time and data are if your data drive goes south.
 


You raise a good point, but when HD prices plunged into the can along with their quality, I started mirroring everything just to save time in the event of drive failure. Backups are still mandatory, however.

 

Oh, so vibration from the other drives...that makes sense then. I know that NAS drives usually make a big deal of the fact that the spindle is supported on both ends. I would have thought that would be the case for all drives, but apparently not. That does sound like it would make regular drives more susceptible to vibration-related problems.
 

I usually buy drives in threes: main workstation, backup system (also used for running background tasks), and offline backup (an external drive case that only gets powered up during backups). Perhaps I should set up an actual server/NAS box configured with RAID, but it seems like an ordeal and a maintenance hassle.
 

I've been paying attention to MTBF ratings, which is why I was considering WD Gold. I've used HGST and WD Black in the past with very few problems, but WD Gold's 2.5 million hour MTBF and 5 year warranty indicate that they'll be even more reliable.
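
Just to put those MTBF numbers in perspective, here's a rough back-of-the-envelope conversion to an annualized failure rate. It assumes a constant failure rate over the drive's service life (a big simplification, and vendor MTBF figures come from large test populations rather than any single drive's lifespan), so treat it as ballpark only:

import math

def annualized_failure_rate(mtbf_hours: float) -> float:
    # Rough AFR estimate assuming a constant (exponential) failure rate.
    hours_per_year = 8766  # average year length, leap years included
    return 1 - math.exp(-hours_per_year / mtbf_hours)

# WD Gold's quoted 2.5 million hour MTBF works out to roughly 0.35% per year
print(f"{annualized_failure_rate(2_500_000):.2%}")

So on paper the Gold drives look excellent, though real-world failure rates don't always track the spec sheet.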

I don't know about SSDs replacing mechanical drives for high-volume storage just yet, unless there has been a breakthrough that I'm unaware of. I use SSDs for boot drives (Intel, as they seem to be the only ones with some protection against unexpected power loss). But it would be expensive to set up 20TB of SSD storage.
 


Most enterprise-class SSDs have large caps for dumping the cache back to storage. I have an old 960GB OCZ (from after the Toshiba takeover), and you should see the caps in it - huge.

 

I would have thought they'd all have large caps, for just that reason. But when I first looked into SSD drives, a few years ago, even Samsung was reported to lose data (or even brick) on power loss. I heard enough horror stories about other brands that I just stayed with Intel, since I had seen photos of their circuit boards with caps visible. I hadn't heard about the OCZ's. I'll keep them in mind the next time I'm buying SSDs.

 

Mark, if you're still tuned in: is your model of OCZ drive still available? I'm considering buying more SSDs, and I definitely want to make sure the caps are in place in the design. (I've heard that Samsung's regular SSD line does not have them.)

 


I doubt the 3600/3700 series drives are sold anywhere. However, I deal with enterprise drives on a daily basis, and most enterprise drives have fairly beefy caps to sustain power for that last little data write from cache. For example, Seagate... the key phrase you're looking for is not "capacitor", it's "power loss protection".

https://www.newegg.com/Product/Product.aspx?Item=9SIA1K666T5815
 

Well, look at that... an enterprise 500GB SSD for $240. Thanks for the recommendation! I didn't see "power loss protection", but the page says "data protection." I'll definitely follow up on that.