Which RAID setup for 26TB Media Server / HTPC?

steelbeast

Distinguished
Sep 3, 2011
209
0
18,710
Hello everyone,

I have a 13-bay (13x HDDs) HTPC / Media Server that I built and need to know which RAID setup will be best for me. I have already purchased the hardware (scroll to the bottom of the post for equipment). I understand this may be overkill, but I went with this card because of the 16 ports (I need 13), the huge cache, the built-in hot-spare support so I can sleep at night, and the SSD "upgrade" key that I'm connecting an Intel 320 Series SSD to (I'll just tape it to my case). The BBU is great for those occasional brownouts too. All of this is worth my peace of mind for years to come, IMO. Oh, and the 4 SAS-to-SATA cables are included, which I know are pricey.

Let me tell you what I do with it so you understand my needs.
Tasks and objectives from the HTPC / Server:
- Streaming 1080p content to two (MORE IN FUTURE) different HDTVs at the same time.
- Record up to two ATSC/QAM shows at the same time with a dual tuner card.
- Torrenting
- Central backup unit that 3 PCs report to in the middle of night for Images and folder syncs.
- 3D Blu-Ray Player
- I use an SSD to keep my OS separate from the RAID array.

Questions:


Do I go with RAID 5, 6, 10, 50 or 60? Keeping the most possible storage space is highly preferable because I already have 8TB of just media, not including backups. However, I want redundancy. I plan on using the hot-spare feature built into the card so that in case of an HDD failure, it will automatically replace the failed drive. I am thinking RAID 5 or 6 is my best option. Am I correct or incorrect?

Can I do a relatively painless transition from the 2TB HDDs I currently own (Seagate Greens) to 4TB HDDs (when available)? The card supports 3TB+ drives, and I have a UEFI BIOS.

RAID Equipment purchased:

Intel RS2WG160 PCI-Express 2.0 x8 SATA / SAS (Serial Attached SCSI) Controller Card $799 with cables
This is the exact same card as the LSI MegaRAID Internal 9260-16i, which costs $930 without cables.
Excellent Performance, Highly Scalable: LSI SAS2108 ROC technology, x8 PCI Express Generation 2 host interface and 512MB on-board DDR II 800 MHz cache enhance the performance of mainstream applications. Capable of connecting up to 16 drives directly or up to 128 using SAS expanders.

Supports data redundancy using SAS or SATA hard disk drives through mirroring, parity, and double parity (RAID levels 1, 5, and 6), plus striping capability for spans (RAID levels 10, 50, and 60).

BBU Support: This adapter supports the optional Intel Smart Battery AXXRSBBU7 or AXXRSBBU8 to maintain data in case the server or power fails, eliminating the need for an additional bulky power supply.

Hot Spare: Includes global hot spare support that automatically comes online to replace the first drive to fail on any array or disk group on the controller.

Intel RAID Smart Battery AXXRSBBU7 $169
This Intel RAID Smart Battery AXXRSBBU7 monitors the voltage level of the DRAM modules on the RAID controller. If the voltage drops below a predefined level, the Smart Battery switches the memory power source from the RAID controller to the battery pack. The battery pack provides power for the memory until the voltage returns to an acceptable level, at which time the Smart Battery circuit board switches the power source back to the RAID controller. Cached data is then written to the storage devices with no loss of data. This Smart Battery provides additional fault tolerance when used with a UPS.

Intel AXXRPFKSSD Activation Key $170
Uses solid-state drives (SSDs) as additional cache for the RAID controller by means of SSD flash tiering; frequently accessed information is stored in cache to allow for rapid access.

Accelerates SSDs using FastPath I/O, providing up to 465,000 I/O reads per second for small, random block-size I/O activity; this is a dramatic increase over solutions that do not use FastPath.

Thanks in advance :D
 

danraies

Distinguished
Aug 5, 2011
940
0
19,160
I honestly don't know how to answer your question. 13x2TB is a lot. RAID5 is a good choice, but finding the best setup for 13 drives is difficult. RAID6 would have a smaller capacity but would allow a fault tolerance of two drives. RAID5 will give you a capacity of 24TB and RAID6 will give you a capacity of 22TB. RAID50 is a strange situation and I'm not sure how well it would work with a prime number of drives.
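
If you set one drive aside (since 13 is prime and can't be split evenly), a 12-drive nested layout becomes possible. Here's the capacity math as a rough sketch in Python (raw decimal TB, before formatting; the nested-level figures are my assumption of two six-drive spans):

    # Usable capacity for 13x 2TB drives at various RAID levels
    DRIVE_TB = 2
    n = 13
    print("RAID 5 :", (n - 1) * DRIVE_TB, "TB")    # one drive's worth of parity
    print("RAID 6 :", (n - 2) * DRIVE_TB, "TB")    # two drives' worth of parity
    print("RAID 10:", (12 // 2) * DRIVE_TB, "TB")  # 12 drives in mirrored pairs
    print("RAID 50:", (12 - 2) * DRIVE_TB, "TB")   # two 6-drive RAID 5 spans
    print("RAID 60:", (12 - 4) * DRIVE_TB, "TB")   # two 6-drive RAID 6 spans

That prints 24, 22, 12, 20 and 16TB respectively.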

I would be interested to see your entire configuration if you wouldn't mind posting it.
 

ammaross

Distinguished
Jan 12, 2011
269
0
18,790
If you're looking to go purely for space, RAID 5. HOWEVER, since you have 13 drives in your array that were all likely bought at the same time, probably some even from the same batch, your safer bet would be RAID 6, since hard drives have a rough time during the rebuild process and sometimes an additional drive will fail. These aren't enterprise-level drives either, so bear that in mind.

Any form of RAID 0 layering (RAID 60, 50, 10, etc.) is purely for performance gains over the base level (6, 5, 1, respectively); it doesn't give you any additional protection and will most likely cost you space.

Any form of mirroring (RAID 10, 1, etc.) will HALVE your available space but give you good data protection. Since you're looking for space, avoid these.

In short, RAID 6 would be your best bet. You can lose up to two drives at any time and still have your data. Just remember, RAID is not a backup, so accidental/malicious deletions (think viruses), file corruption, partition/file-system corruption, etc. will still be your weak points. Granted, you'd have to have a complete second system to keep full backups, but you could at least keep copies of your most important/favorite things on a 3TB external drive just in case.
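
To put a rough number on that rebuild risk, here's a back-of-the-envelope sketch in Python. The error rate is the 1-per-10^14-bits unrecoverable read error (URE) figure commonly quoted on consumer-drive spec sheets; treat it as an assumption, since real-world rates vary:

    # Chance of hitting at least one URE while rebuilding a degraded
    # 13-drive RAID 5: the 12 surviving drives must be read end to end.
    URE_PER_BIT = 1e-14              # consumer spec-sheet figure (assumption)
    DRIVE_BYTES = 2e12               # 2TB per drive
    bits_read = 12 * DRIVE_BYTES * 8
    p_fail = 1 - (1 - URE_PER_BIT) ** bits_read
    print(f"P(URE during rebuild) ~ {p_fail:.0%}")   # roughly 85%

RAID 6 covers exactly that case: the second parity set can absorb a read error that surfaces mid-rebuild.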
 

steelbeast

Distinguished
Sep 3, 2011
209
0
18,710


13x 2TB Seagate Green Drives (Storage) with 2x Evercool HDD cooling boxes
1x 250GB Intel 510 Series SSD (OS)
1x 40GB Intel 320 Series SSD (RAID Cache)
Intel Core i7 2600k CPU
ASUS Maximus IV Gene-Z micro-ATX motherboard (remember, size matters here)
16GB DDR3 1600 RAM
Corsair H80 Water Cooler with fan resistors for ultimate quietness / coolness
PC Power & Cooling 650W PSU
12x external Blu-ray burner connected via eSATA.
NZXT GAMMA mid-tower case (a small case that can accommodate 13x HDDs and isn't all lit up; cheap too)

No GPU, as the iGPU provides all my needs, and of course my RAID card and a Hauppauge dual TV tuner. This is connected to a 47in. 3D HDTV from Vizio, and I currently use wireless headphones and/or the TV speakers; no sound system is needed yet.

I do plan on some overclocking, but I need to get a Kill-a-Watt so I can find the price/performance sweet spot.
 

steelbeast

Distinguished
Sep 3, 2011
209
0
18,710

I just might go RAID 6, given your recommendation. I would probably skip the hot-spare feature and instead use a "scratch drive" for my downloads and such, to avoid needless writes and possible malware on the RAID array. I don't think I could bring myself to use the hot-spare feature; that would put 3 drives completely dedicated to backup lol. That would leave me with 10 data drives without the hot spare, or 9 with it. Thanks for your input.
 

tomatthe

Distinguished
I wouldn't consider anything less than RAID 6 with 13x consumer-grade drives. I would highly recommend having a hot spare if you care about the data. That many drives running 24/7 will have failures.

Personally I think that case is a really bad idea to run 13x drives inside it. It's going to be one hell of a hotbox in there. The design also doesn't really encourage hot swapping drives when one does die.

Maybe something like this, http://www.pc-pitstop.com/sas_cables_enclosures/scsas156g.asp , attached to your main case would be a better idea. I realize that adds a lot to your cost, but if you are putting together something with that much storage, you should spend more than $40 on the case IMO. I just did a quick Google search to get to one of those devices; there are probably lots of options and that one may be a complete POS, I was just linking it as an example.
 

steelbeast

Distinguished
Sep 3, 2011
209
0
18,710

It doesn't get hot at all. The HDDs remain between 27°C and 38°C at any given time thanks to the cooling boxes. I'll admit, swapping drives is a bit of a chore, but it is regardless of whether I buy a $1,500 case or not. Plus, that adds a whole other box that I don't want to maintain. Having this all-in-one unit in a mid-tower is extremely awesome. I'll live with the sacrifice of the occasional drive swap.

EDIT: RAID 6 is looking like a better option, but if I have one drive failure on RAID 6, I still have to rebuild the array to bring back the double-parity protection, correct? So RAID 5 with a hot spare still sounds like a decent option if that holds true.

Also, a 128K stripe size will best suit my needs, correct?

I also thought that there were some tests showing enterprise drives are no more reliable than consumer drives; it's the firmware that is the primary difference, not the mechanics.

Choices, choices...
 

tomatthe

Distinguished


$1,500 is pretty excessive, I agree; it just seems like a pretty major setup to have in a standard mid-tower case. Normally a device with 26TB of storage would be placed somewhere completely out of the way and probably never even looked at. Your setup sounds a bit different since you also want to use the same machine as an HTPC.

You would have to rebuild the array, but the advantage with RAID 6 is that you could lose another drive while it was rebuilding and still have the chance to replace it. RAID 5 systems can definitely drop another drive while rebuilding, particularly when using as many drives as you've got in your set. Rebuilds are very hard work for the drives, which is why it's not that uncommon for another drive to fail during one.

Not sure on stripe size; it's probably fairly easy to Google and get some good comparisons.

I thought enterprise drives' hardware was actually better suited to running 24/7, and the warranty reflected that. I've never researched it, though.
 

steelbeast

Distinguished
Sep 3, 2011
209
0
18,710

Yeah, by having the RAID array set up on the HTPC, I am saving tons of bandwidth on the home network, since at least one HD stream stays off the network by passing straight through HDMI to the HDTV. (I have the home wired with Cat6a cabling throughout, by the way.) This also eliminates the need for building/buying a separate NAS enclosure, which is more overhead and would clog the network even more. Plus, I've used highly efficient PC components on this rig for good power efficiency. I don't know the idle wattage load, however, as I don't have a Kill-a-Watt.

I'll Google stripe size to verify the 128K choice.

You bring a great argument for RAID 6 and I'm quite certain I'll do it. I know my RAID controller can handle it no problem lol.
 

mavroxur

Distinguished
With that controller, with that many drives, and shooting for a large array, personally I'd go with one large RAID 5 and dedicate one drive as a hot spare. That way you have redundancy with maximum storage space, plus the peace of mind of a hot spare: if anything happens, the controller can automatically rotate the spare in and rebuild the array on the fly. If you want more peace of mind, you can go with RAID 6; you'll just lose one more drive of storage space (RAID 6's capacity is n-2, as opposed to RAID 5's n-1). How critical the data is should be your guide as to which direction you take. I don't see any reason at all for you to go with anything like a RAID 50 or 60, though. The RAID 6 solution with a hot spare (best redundancy) would give you a redundant unformatted capacity of 20TB. A RAID 5 solution with a hot spare would net you 22TB. By comparison, a RAID 50 setup with a hot spare would net you at most 20TB with that hardware configuration.
 

steelbeast

Distinguished
Sep 3, 2011
209
0
18,710

Thanks for giving me solid numbers on the available space I would have; that helps a lot. The 20TB I'd get with RAID 6, including a hot spare, will do me just fine. I've been vigilant/lucky in never losing any critical data, but my data storage requirements have exploded in the last few years, and I really like knowing I can have extra redundancy along with a hot spare for automation, leaving me with virtually ZERO headaches in the future. That is what's most important to me. I can live with losing an additional 2TB for that. I will be upgrading to 4TB drives when they are out and mainstream anyway, so 20TB should do just fine till then.

"The tribe has spoken" RAID 6 it is :D

Thanks again to the great Tom's Hardware community.
 

leandrodafontoura

Distinguished
Sep 26, 2006
898
0
19,060
Seriously, NOT RAID 5. That level only tolerates one faulty HDD, and with 13 HDDs of that size, that is a poor choice. Although RAID 5, 10, 0+1 and 0 are the most common consumer RAID options, they don't fit your case.

You need a RAID solution that gives you the benefit of not losing your data should more than one HDD fail at the same time. There are more than 10 RAID options available. Some already include a "spare" HDD: should any HDD fail, the array automatically rebuilds using the spare. I suggest you search online for all the RAID levels and study them carefully.
 

steelbeast

Distinguished
Sep 3, 2011
209
0
18,710
Well, one more question: should I use a "scratch drive" for my dual TV tuner and downloads, to reduce the load on the RAID array and vet downloads for viruses? This would put me down to an 18TB array using RAID 6 with a hot spare, yikes. Is it worth it?
 

keebs

Distinguished
Oct 13, 2011
2
0
18,510
Considering that you want to stream to multiple TVs simultaneously, as well as a range of other activities, while retaining adequate redundancy, I would definitely recommend RAID 50, but keep a small stripe size due to the number of large files. I once did a full work-up comparison on RAID 1, 10, 5, 5EE, and 50 using stripe sizes of 64K, 256K, and 512K. Granted, I was measuring this against Jetstress for Exchange; however, if I set the Exchange data aside and strictly examine the drive I/O numbers, my RAID 50 and RAID 5 had nearly identical read latency, the RAID 50 had inconsequentially higher write latency, and the RAID 50 handled nearly 300% of the read/write throughput of RAID 5. RAID 50 is impressive if you've never tried it, plus you can survive multiple simultaneous failures IF they are in the right slots. I was using enterprise-class Savvio drives, and the RAID 50 was nearly equal to a RAID 10 in performance. Plus, all those lights moving for one RAID set are a really awesome show to watch lol.
 

steelbeast

Distinguished
Sep 3, 2011
209
0
18,710

Sounds like quite a test, but RAID 50 seems like overkill for my case, since these loads are staggered through all hours of the day and don't all hit at the same time.

Why keep a small stripe size for large files? This is new to me.
 

keebs

Distinguished
Oct 13, 2011
2
0
18,510



I never heard anyone complain because something was too fast. But it's your config... You do what you want to do... Just passing along my results...
 

steelbeast

Distinguished
Sep 3, 2011
209
0
18,710

I didn't mean to insult you; I just want to know the logic behind these conclusions so that I understand them, is all. It seems the consensus is RAID 6 for my case, but you say RAID 50.
 

mavroxur

Distinguished





With that controller and with decent 2TB drives, you should see read/write speeds in excess of 90MB/sec easily. That should be sufficient for the I/O demands you will be throwing at it. You would only need a RAID 50/60 if you were going to be experiencing heavy disk demands (e.g. manipulating SQL databases, data sets, etc. on a server). As it sits, 90MB/sec is going to come close to saturating a gigabit ethernet connection (about 125MB/sec theoretical, less in practice) anyhow. If you were to go to a RAID 50/60, any external connections to the server through the network would be bottlenecked by the ethernet connection, so you would see absolutely NO difference in backups and file copy operations over the network.

And to the poster that said ABSOLUTELY NO RAID 5: I specifically mentioned "how critical is the data" in my reply. RAID 5 still provides fault tolerance. You will only have no redundancy while 1) a drive is dead in the array and 2) the controller is rebuilding the array onto the hot spare. If the OP didn't want to use a hot spare, then I'd definitely say "NO" to RAID 5. But with redundancy and a hot spare, you're only looking at a window of several hours where you'll be caught "with your pants down", and if the data set only contains DVR'ed TV shows and backups of home computers, that might be an acceptable risk. If it isn't, that's why I recommended RAID 6 for additional redundancy.
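
If you want to estimate that window yourself, it's roughly drive capacity divided by the sustained rebuild rate. A sketch, with the rates purely assumptions (actual speed depends on controller settings and concurrent load):

    # Rough rebuild-window estimate for one 2TB drive
    DRIVE_BYTES = 2e12
    for rate_mb_s in (50, 100, 150):   # assumed sustained rebuild rates
        hours = DRIVE_BYTES / (rate_mb_s * 1e6) / 3600
        print(f"~{hours:.1f} h at {rate_mb_s} MB/s")   # ~11.1, ~5.6, ~3.7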


 
Solution

steelbeast

Distinguished
Sep 3, 2011
209
0
18,710


This is again exactly the kind of reasoning I'm looking for, thank you. I really want to go with RAID 5, but I don't want to deal with the headache of the real possibility of a drive failing during the rebuild, since all these drives are probably from the same batch and whatnot. RAID 6 with a hot spare for sure. Yes, it is overkill for my data, but it's worth my peace of mind.

EDIT: My last question. Would a scratch drive be recommended (worth it) to take load off the array from my dual tuner (which writes to the array) and downloads, and to vet any possible viruses before transferring files to the array? Or does that defeat the purpose of using RAID 6 anyway? Thanks.
 

Zenthar

Distinguished
I don't know about the TV tuners, but a "scratch drive" for viruses sounds a bit weird to me. A virus on a disk isn't a problem in itself; it won't compromise the disk. It's when the virus activates that the problems start, and no matter which drive the virus is on, it can always "decide" to delete/corrupt stuff on other drives. The only things that can protect you from virus damage are a good AV and a good backup. And the latter is the best, as it also protects against human error, but backing up 20+TB of data is kind of complicated :p.
 

steelbeast

Distinguished
Sep 3, 2011
209
0
18,710

Yeah, I know that most viruses and such are coded to go to the "C:" drive, so that would be my OS drive, but I wouldn't want to be forced to deal with removing a virus from the array. Plus, with my tuner card, you're potentially looking at a lot of writes to the array. I'm guessing that with the RAID 6 setup, a scratch drive is overkill and another loss of 2TB of space, which is a real compromise given I'll be using RAID 6 with a hot spare. Or can I eliminate the hot spare in favor of the scratch drive? I just can't bring myself to sacrifice more than 3 drives. Going from 2 initially, now to 3 drives (RAID 6 and a hot spare) is my max for this data. That leaves me with 20TB total storage. I already have 8TB I need to migrate over, really only leaving me 12TB for expansion (really less, once you account for how drive sizes are reported; see the sketch below), which should be enough until 4TB or 5TB drives are mainstream.
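
For what it's worth, the "really less" part is mostly unit accounting rather than firmware: arrays are sized in decimal terabytes (10^12 bytes) while Windows reports binary units (2^40 bytes). A quick sketch:

    # What a "20TB" array looks like once Windows reports it
    raw_bytes = 20e12
    print(raw_bytes / 2**40)   # ~18.2, which Windows labels "18.2 TB"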
 
WyomingKnott

Distinguished
A thought: would you have any objection to having two separate RAID arrays, say two of RAID 5 or RAID 6 plus a hot spare (I love controllers that can swap in a hot spare)? You would end up with two separate, smaller volumes. If the content that you stream is not the content that you record, perhaps recording to one and streaming from the other would lower contention from concurrent access? Just an idea that occurred to me reading this; I have never built a RAID with more than four drives. You should see FireWire2's rig, though.
 

steelbeast

Distinguished
Sep 3, 2011
209
0
18,710

This sounds good, but I guess my logic in using a scratch drive is to avoid needless writes to the array (less wear and tear). Your solution partially addresses that reasoning, but it isn't a complete solution. I suppose that if I'm hunting for reasons to have a scratch drive and no one can come up with one, then there's no reason to have it.
 

mavroxur

Distinguished
I don't see any point to a scratch drive for your setup. Generally a scratch drive is used for video/photo editing, not plain recording. And as far as mitigating a virus threat, that isn't what a scratch drive does. A scratch drive is just temporary storage space that programs use when editing/modifying files (such as video/photo editing) and to store save points during the edit process.



@WyomingKnott -

Not a bad idea with the split array. The only negative I could point out is that it would reduce the available space, since you'd have parity disks in each array. A split RAID 5 with a global hot spare would net him 20TB total before formatting; a split RAID 6 would give him 16TB. Unless it's totally necessary to split the array, I'd just make one large GPT volume and move on. Just a reminder to the OP: you'll need to be running XP x64 or a newer OS on the server to use a GPT volume; otherwise, you'd have to split it into a dozen 2TB MBR volumes.
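
The 2TB MBR ceiling falls straight out of the on-disk format: partition sizes are stored as 32-bit sector counts, so with 512-byte sectors the largest addressable volume is 2^32 x 512 bytes. A quick check:

    # Largest volume MBR can describe with 512-byte sectors
    max_mbr_bytes = (2 ** 32) * 512
    print(max_mbr_bytes / 1e12)   # ~2.2 decimal TB (exactly 2 TiB)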
 

steelbeast

Distinguished
Sep 3, 2011
209
0
18,710

Thanks for confirming my suspicions about a scratch drive; I figured it wouldn't serve any real purpose for me. I will definitely be using Win7 x64 as the OS, too, since this is an HTPC.