Shopping for a RAID controller

David86_1608

Jan 26, 2016
I've been shopping for a RAID controller that will support 7+ 2.5" SATA drives. Ideally, I'd love to build a RAID 10 with ten 1TB 2.5" WD Red drives (I've already got 7 of them, so it'd be about $150-$190 for 3 more), but after looking for about 3 weeks, I feel more qualified to shop for feminine hygiene products than for RAID controllers.
I'm shooting for something I could use with a Windows 10 workstation and then run Hyper-V VMs (so I can use the hardware for gaming and for a virtual lab without futzing with something like dual boot).
Perhaps I am going about it the wrong way, but all of the devices I've found are either $450+ or have reviews that say "don't trust this with your RAID 5" or "starts fires, be careful."
Sounds like I need someone who has had a chance to get their hands dirty to let me tap into their expertise.

Please help.
 
1) RAID cards that support more than 8 drives WITHOUT SAS expanders are costly. They pretty much go from 4 to 8 and then 16 drive support. If you need more than that, controllers use SAS expanders to get up to 128 drives or more.

2) DO NOT do a software RAID. You are BETTER off spending money on a GOOD RAID card.

Something like this

http://www.newegg.com/Product/Product.aspx?Item=N82E16816118242

that has 4 SFF-8087 ports so that you can connect up to 16 drives without any extra hardware (besides the cables), but there are a lot of options out there.

And yeah, with a RAID 10 you would get the best read and write speeds, and you won't have to deal with a controller that doesn't have a battery backup or cache, since those are used for RAID 5/6 and not 1 and 0. The downside is you only get half of the total space of all the drives, but if you need speed over size then RAID 10 is the way to go.
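Just to put rough numbers on the "half the space" point, here is a quick back-of-the-envelope sketch in Python. The ~110 MB/s per-drive figure is only a guess for a 2.5" 1TB Red, not a benchmark, so treat the output as ballpark only.

```python
# Back-of-the-envelope numbers for a RAID 10 of 10 x 1TB drives.
# The ~110 MB/s sequential per-drive speed is an assumption, not a measurement.
drives = 10
drive_tb = 1.0
per_drive_mbps = 110.0

mirror_pairs = drives // 2                      # RAID 10 = stripe of mirrors
usable_tb = mirror_pairs * drive_tb             # half the raw capacity
seq_write_mbps = mirror_pairs * per_drive_mbps  # writes hit one drive per pair
seq_read_mbps = drives * per_drive_mbps         # reads can be served by either copy

print(f"usable space: ~{usable_tb:.0f} TB of {drives * drive_tb:.0f} TB raw")
print(f"rough sequential write: ~{seq_write_mbps:.0f} MB/s")
print(f"rough sequential read (best case): ~{seq_read_mbps:.0f} MB/s")
```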

I have been using a Dell SAS 5 HBA for a long time with my 4 drives, but I am upgrading to an LSI MegaRAID SAS 8888ELP. It is used, but it will do up to 8 drives, which is perfect for what I need. I will only be doing RAID 0s (then I have backup drives for those RAID 0s that I update all the time).

The other option would be to buy one good RAID card and then buy a SAS expander, which can multiply the number of drives you can attach.

They pretty much work like a network switch. You have a router with only 4 ports on it but you have 16 PCs. You connect a cable from the router to a 24-port switch, then connect all the PCs to the switch.
 


So basically, if I wanted to RAID 10 ten WD Red 1TB drives and then RAID 5/1E my 3 Kingston 120GB SSDs, I am assuming I would be better off buying a good RAID controller for the RAID 10 and then...maybe using my board's native RAID controller to handle the OS (the Kingston drives) RAID?

The end result I am trying to achieve is to be able to spin up a virtual environment with 5-10 VMs AND run WoW or StarCraft off of Windows 10. I haven't had a chance to work with Hyper-V in Windows 10, but I am hoping that if I do a RAID 10 it will work as well as my 2012 R2 Hyper-V with a RAID 5 (using 6 drives) that I had before on a server board (it's an old HP with an NVIDIA controller that you can't get drivers for anymore).

If I was going to do it right, arguably, I should just have two so-so boxes and make one exclusively a workstation for games and one exclusively a workstation for Hyper-V. But if possible, I would rather try to ride the fence and make it work without starting a fire or dropping $500 on a RAID controller card. I'm actually kind of curious to see how well games like WoW and SCII would run off of a RAID array, with a nice card, when my VMs are in a saved state (or when only a few are spinning).

I know one problem I had with the Windows 8.1 Hyper-V was that it hammered whatever disk the VMs were on, but if I am using a RAID with 10 drives in it, instead of a single SATA drive, I am hoping to avoid that particular problem.
 
Here is the thing: RAID 5/6 is best NOT done on a motherboard controller. Why? It isn't full hardware RAID, it's just hardware assisted. With RAID 5/6 you are using parity, so if a drive fails you can keep going. The thing is, if it is NOT an actual RAID controller, the CPU has to do all that parity work, and it can tax the CPU a lot if you have a lot going on. RAID 1/0 requires almost ZIP overhead, since all it is doing is either splitting the data or mirroring it; there's no extra work to be done.
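If you want to see what that parity work actually looks like, here is a minimal Python sketch. It's just the XOR math in the abstract, not any particular controller's or driver's code, and the 64 KiB chunk size is an arbitrary assumption:

```python
# Minimal illustration of the XOR parity work that a software / motherboard
# "fakeraid" RAID 5 pushes onto the host CPU on every write.
import os

CHUNK = 64 * 1024  # hypothetical 64 KiB chunk per data drive

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length blocks together; this is the parity calculation."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# One stripe across three "data drives", plus the parity the CPU has to compute.
d0, d1, d2 = (os.urandom(CHUNK) for _ in range(3))
parity = xor_blocks(d0, d1, d2)          # done on every write without a real controller
rebuilt_d1 = xor_blocks(d0, d2, parity)  # how a failed drive's chunk gets rebuilt
assert rebuilt_d1 == d1
```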

Also, with SSDs you are honestly better off buying one BIG SSD vs. doing any kind of RAID on them. Unless you were doing some intense calculation, or just had the extra SSDs and needed the space, you are only asking for trouble. Also, unless you are doing a RAID 1 with SSDs, you lose ALL TRIM support, which can degrade the life of an SSD quite a bit if there are a lot of writes to it.

To me it sounds like having two different boxes is better.

And a RAID 10 of 1TB drives still wouldn't be faster than something like a Samsung EVO SSD. RAID isn't as fast as you think it can be unless you have the right stuff. If you have a proper RAID card, then yes, you can get a lot of speed out of RAID if it is configured right. If it has a BBU, that helps A LOT with RAID 5/6 if you have write-back cache set up. What that means is that rather than writing directly to the hard drives, the controller writes to a cache of RAM that is on the RAID card. Once the data is there, the PC thinks the write is done, and then the RAID card writes the data out in the background. This is good for small files, because most RAID cards don't have more than 1-2GB of cache; once that fills up, you drop back to the speed of the actual RAID 5. So if your RAID 5's actual throughput is, say, 200MBps and you are writing at, say, 500MBps from an SSD, that one big file will start out SUPER fast, then once the cache is maxed out it becomes slow.
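To put that cache-filling behavior into numbers, here is a toy Python model using the same made-up 200MBps / 500MBps figures and an assumed 1GB of controller cache. It's simplified (it ignores the cache draining while it fills), so it only shows the shape of the effect:

```python
# Toy model of write-back cache: writes look fast until the cache fills,
# then drop to what the array can sustain. All numbers are assumptions.
CACHE_GB = 1.0        # assumed controller cache size
SOURCE_MBPS = 500.0   # how fast the source SSD can push data
ARRAY_MBPS = 200.0    # what the RAID 5 spindles can actually sustain

def copy_time_seconds(file_gb: float) -> float:
    """Rough wall-clock time for a copy as the host sees it."""
    fast_gb = min(file_gb, CACHE_GB)        # absorbed by cache at source speed
    slow_gb = max(0.0, file_gb - CACHE_GB)  # remainder throttled by the array
    return (fast_gb * 1024) / SOURCE_MBPS + (slow_gb * 1024) / ARRAY_MBPS

for size_gb in (0.5, 1, 5, 20):
    t = copy_time_seconds(size_gb)
    print(f"{size_gb:>4} GB copy: ~{t:6.1f} s (~{size_gb * 1024 / t:.0f} MB/s effective)")
```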

Now I must ask: why 5-10 VMs?
 


Well, full disclosure, the reason I was going to use my Kingston SSDs was because I already had them. So I was trying to be cheap. I hoped that if I kept them as RAID 1E (which basically sounds to me like a RAID 10 with 3 drives instead of 4) I would get the redundancy and speed out of them. If I might as well just RAID 0 the 3 of them together, and run frequent backups, then so be it. I am sure I could make that work. I just wanted to make use of what I already had.
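For reference, my understanding of RAID 1E on 3 drives is that every chunk gets striped and then mirrored on the next drive over, which is why it feels like RAID 10 with an odd drive count. Here's a quick sketch of that layout pattern as I understand it (the generic textbook description, not any particular controller's on-disk format):

```python
# Generic RAID 1E layout on an odd number of drives: each chunk is striped
# across the drives and its mirror copy lands on the next drive over.
# Textbook pattern only, not a specific controller's implementation.
def raid1e_layout(num_drives: int, num_chunks: int):
    placements = []
    for chunk in range(num_chunks):
        primary = chunk % num_drives
        mirror = (primary + 1) % num_drives
        placements.append((chunk, primary, mirror))
    return placements

for chunk, primary, mirror in raid1e_layout(num_drives=3, num_chunks=6):
    print(f"chunk {chunk}: drive {primary}, mirrored on drive {mirror}")
```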

It is sounding more and more like I am better off just biting the bullet and buying a server secondhand somewhere, and then building a nice gaming rig without all of the complications of trying to RAID anything. Having said that, is there any performance advantage I would see from putting together a machine with a RAID 0 of 3 120GB SSDs and then a RAID 10 of 4 of the 1TB drives? I figured RAID 10 would be safer than RAID 0, but would still be faster than a single drive or a JBOD configuration with 4 drives.

Then I could just find a system on eBay or Craigslist that will support Server 2012 R2 and has a healthy backplane for me to set up my other 6 WD 1TB Reds on, and just use that for a virtual system. It certainly couldn't be worse than what I have right now (this out-of-date HP system throwing my data RAID every few months).

That brings me to the thing about 5-10 VMs. The short answer is that I am slowly (so slowly) working on burning through my MCSA for Server 2012 R2 certifications (there are 3 of them, then 2 more for the MCSE). So I would need a DC (for starters). I was then going to get an FW server, an SCCM system, a SQL server, a SharePoint server, and a WSUS server (though I may put that on the same system as the SCCM system). So, that would be five out of the gate, and then I would end up spinning up about 2 other server boxes (for anything else I need) and at least 2-3 clients (I was hoping for a Windows 7, 8.1, & 10).

I am probably trying to scale a solid cliff face with my bare hands with this approach, instead of just taking it slow, but I figure if I set up everything correctly right out of the gate instead of having to go through the trouble of incrementally upgrading and changing my setup, it should save some time (and maybe I wouldn't have to go through these motions EVERY year because the manufacturer stops supporting my hardware).
 
AH! Gotcha! Yeah, I got an old server from a client who didn't need it. It uses the older socket 771 Xeons. It is a nice board; it can take up to 2 quad-core Xeon 5400s up to 3.2 GHz and 64GB of RAM, not too bad for a 10-year-old server, but it is the nice-but-outdated chassis it is in that makes it lose its value. It has a SCSI backplane, and the SATA/SAS backplanes for the same chassis don't fit this one 🙁 It has 6 hot-swap bays, so for the time being I just took out the backplane and have the power and SATA going to the drives directly. I'm playing around with things like roaming profiles, upgrading Windows 7 Pro to 10 while on a domain, folder redirects, etc.

Might also want to look into a WSUS server since I have a few clients on bonded T1's still.

But yeah, you might be better off finding a cheap server, or you can build a cheap PC with something like an 8-core AMD and lots of RAM to help with the VMs and then buy a cheap RAID card (I just bought an LSI MegaRAID 8888 for $13 + shipping on eBay to replace my Dell SAS 5 HBA, which only does 4 drives), and that can do any RAID level. It's only SATA II, but hell, you are only using HDDs and not SSDs, so you still can't max out the controller. Just find a case with lots of hard drive bays or find a server chassis.

I also have a Dell PowerEdge 2970 (single quad-core AMD with 4GB, I think) that has been sitting in our office for years, but it is heavy as hell and would be expensive to ship unless you live in SoCal lol. But yeah, I would look around for a good rack/tower server that can hold your hard drives, or at least a chassis or case that can store them.
 
Yeah, that should be fine there. Just BEWARE!! DO NOT update or reset the BIOS without doing a backup FIRST, as a BIOS update or reset will WIPE the RAID config. You should be able to go in and just turn the RAID back on in the BIOS, and it SHOULD pick the array back up, but a lot of people have issues with that. Just a warning is all.