File server / DIY NAS - Cabinet for 16x 3.5" disks

B4vB5

Oct 24, 2013
I'm planning a 56 TB RAID 5 NAS for personal use. Enclosures that do this for me are either too expensive or lack flexibility, or I have little faith that they will last 10 years, and I want the NAS to last.
I've set my mind so far on a cheap barebone LGA 1155 PC with just enough interfaces for dual Adaptec 2805s, each of those running 8x WD Green 4TB drives in RAID 5 at 7+1. This has been chewed on quite a bit and is the cheapest I can get the non-HDD part of my build, which is so far 23.5% of the total cost, meaning the 16 HDDs will make up the other 76.5% of the total expenditure. That's on par with the cheapest diskless 4x 3.5" RAID5-capable NAS enclosure I could find at 23.9%, and it offers the flexibility of being a real machine, has a Gbit LAN card etc., and the LAN features can be expanded later if needed. Furthermore, the Adaptecs should be very reliable.
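For a sense of the split, here is a tiny Python sketch with placeholder prices; the actual amounts aren't listed here, only the percentages, so every figure in it is an assumption:

```python
# Rough cost-split check for the planned build. All prices here are
# hypothetical placeholders -- the post only gives the percentages.

hdd_price = 150          # assumed price per 4 TB drive
hdd_count = 16
other_parts = 740        # assumed barebone PC + 2x controller cards etc.

total = hdd_count * hdd_price + other_parts
print(f"non-HDD share: {other_parts / total:.1%}")
print(f"HDD share:     {hdd_count * hdd_price / total:.1%}")
# roughly 23.6% / 76.4% with these placeholder numbers
```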

If people have comments or suggestions on the above, I'd like to hear them too; so far it's just a plan.

Now I need a case that will physically hold 16x 3.5" disks. Up to 8x 3.5" bays, cases are fine: they're cheap ($42) and have room for the intake and exhaust fans needed to cool that many disks.

But once I try to find a case with 10/12/16 3.5" bays, the price skyrockets from $42 to $250, so basically:

(1) How/what do I modify in a case to make it hold 16 HDDs anyway?
- Do side-by-side bays exist, so I can make something like an 8-high stack of 2x 3.5" in the same vertical space?
- Do any cabinets come wide enough to hold 3.5" drives side by side in "stock" form?

(2) Any sub-$80 cabinets recommended that can hold that many disks, or sub-$80 compound solutions that would make something hold 16 HDDs?

I could just get two cases of 8 each, but then I would need 2x MB/CPU/RAM/OS/PSU, and I'm not really interested in that. Hence the 2x $42-case-that-holds-8 = $84 case budget.
 
DO NOT use Green drives for RAID arrays: they park their heads so often that the controllers regularly lose them and break the array. You can use just about any other drive, just not the Greens. You can, however, use Green drives if you use ZFS (discussed below).

Use RAID 6 for an array that size if you want hardware RAID; RAID 5 is not very reliable at that scale. RAID 6, and skipping the hot spare, is the better way to avoid a failed rebuild, which is highly likely with an array that size.
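To put rough numbers on the rebuild risk, here is a quick Python sketch. It assumes the commonly quoted consumer-drive spec of one unrecoverable read error per 10^14 bits read; real drives vary, so treat it as an illustration only:

```python
# Back-of-envelope odds of hitting an unrecoverable read error (URE) while
# rebuilding a degraded 8 x 4TB RAID 5 set. Assumes the consumer-drive spec
# of 1 URE per 1e14 bits read; real-world rates vary.

URE_PER_BIT = 1e-14       # spec-sheet error rate per bit read
DRIVE_TB = 4
SURVIVING_DRIVES = 7      # all remaining drives must be read in full

bits_to_read = SURVIVING_DRIVES * DRIVE_TB * 1e12 * 8
p_clean = (1 - URE_PER_BIT) ** bits_to_read
print(f"Chance of at least one URE during the rebuild: {1 - p_clean:.0%}")
# -> roughly 89% under these assumptions. RAID 6 / raidz2 can still recover
#    that sector from the second parity, which is the point being made above.
```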

And with an array that size, I would strongly suggest you consider FreeNAS and ZFS with raidz2. You don't need RAID controllers, just less expensive HBAs like the IBM M1015 (a rebadged LSI product that is much cheaper with the IBM brand on it) with mini-SAS to SATA forward breakout cables like THESE.

And AFAIK the M1015 needs a firmware update to handle drives over 2TB, so be aware of that and flash it before doing much else.
 
Excellent info, thanks a lot.

I was thinking of going RAID 5E or 6, but I will now plan to go Z2, as the info seems to indicate it is basically RAID 6:
"RAID-Z2 doubles the parity structure to achieve results similar to RAID 6: the ability to sustain up to two drive failures without losing data".

I will plan for WD Reds (WD40EFRX) or Seagate NAS (ST4000VN000) then. Are there any issues with these drives in your experience, for my purpose? (Which would you pick? They cost the same at my location.)

I saw on the FreeNAS site that it would like 1GB of RAM per 1TB of disk space. That's... quite a lot. Will it actually use 32GB if I stick it in there (that would be the pain limit of what I'd spend on RAM at €7/GB), or is 4GB fine as long as I don't expect stellar results? (I planned on 4 or 8 GB for the cheap base PC that holds the HBAs.)
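Doing the rule-of-thumb math for my pool (a quick Python sketch, assuming the full 16x 4TB raw capacity counts toward the guideline):

```python
# FreeNAS rule-of-thumb RAM sizing (1 GB RAM per 1 TB of storage) applied
# to this build. The guideline mainly sizes ZFS's ARC read cache; it is a
# recommendation, not a hard requirement.

drives = 16
drive_tb = 4
raw_tb = drives * drive_tb

print(f"Raw pool size:       {raw_tb} TB")
print(f"Rule-of-thumb RAM:   {raw_tb} GB")   # 64 GB for this build
print("Common practical min: 8 GB")          # the reply below suggests 8 GB is workable
```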

On the firmware side, I found this post and it looks promising:
http://forums.servethehome.com/raid-controllers-host-bus-adapters/2648-ibm-serveraid-m1015-lsi-megaraid-9220-8i-maximum-hdd-size.html
At least a lot of other people have it working, and I always flash MBs/SSDs/HBAs/network cards (in the old days) when they're on old firmware.

Again, superb info, opened my eyes to another way :)
 
If you go ZFS you can use Greens; if you go RAID you could use either the Reds or the Seagate drives. With ZFS you are just using the cards to connect all the drives, not for any RAID purpose. It is far better not to use RAID cards with ZFS, so that the drive scrubbing has direct access to the drives and does not have to deal with finicky RAID controllers.

With ZFS I would accept 8GB for an array that size. Since it constantly checksums and corrects data on your individual drives, more is better, but I think that if it runs 24/7, 8GB should work fine. The checksums really reduce the chances of data corruption killing your array (unlike RAID, which has no similar feature, and which is why RAID 5 with an array that size is a bad idea; indeed, that is the real attraction of ZFS). Spend some time reading up on FreeNAS and ZFS; there are tons of good resources on the Internet.
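To make the checksumming point concrete, here is a toy Python sketch of the idea: checksum on write, verify during a scrub, repair from a redundant copy. It is only an illustration of the concept, not how ZFS actually stores its checksums.

```python
# Toy model of ZFS-style end-to-end checksumming (not real ZFS): every block
# is stored with a checksum, reads are verified against it, and a bad copy is
# repaired from a redundant copy during a scrub.
import hashlib

def write_block(data: bytes):
    # Real ZFS keeps the checksum in the parent block pointer, not next to
    # the data; storing them together here just keeps the sketch short.
    return {"data": data, "sum": hashlib.sha256(data).hexdigest()}

def scrub(copies):
    good = next(c for c in copies
                if hashlib.sha256(c["data"]).hexdigest() == c["sum"])
    for c in copies:
        if hashlib.sha256(c["data"]).hexdigest() != c["sum"]:
            c["data"] = good["data"]          # self-healing from the good copy
            print("corrupt copy repaired")

copies = [write_block(b"movie chunk"), write_block(b"movie chunk")]
copies[0]["data"] = b"bit rot!"               # simulate silent corruption
scrub(copies)
```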

As far as a motherboard goes, you need two PCIe x8 slots, and don't use MSI motherboards (everyone has issues with them). I've used ASUS and ASRock without issues, with the bottom-end i3 CPUs.

ZFS is IMO the only way to go for an array that size, and I would use raidz2 or maybe even raidz3, with no hot spares. The only problem is that it will take time for you to get up to speed, but it sounds like you've been around a while like me, and you will adapt to serve the ZFS collective. 😀
 
If you're looking to support that many drives, the M1015 might not be the best solution, as it only supports 8 drives directly and needs expander cards to add more. The expanders go for $300+, so you might just look for an HBA that supports 24 to 40 drives right from the start. I don't follow the used market well enough to know what's at the best price point, but the M1015 is usually about $100 and an excellent buy at that price. It's not a RAID 5/6 card though; it shines in RAID 0/1. You'd want to flash it to IT mode anyway, which makes it a basic SAS/SATA controller.
 
He would need 2 in IT mode, but they are still a lot cheaper if he goes with ZFS and needs just an HBA.

Personally, I would not try to build a cheap array of that size, but if I went "cheap" and wanted RAID, I would go with an Adaptec 72405, since I've used a lot of the Adaptec 7 series cards and spent some time playing with that specific card. I would use RAID 6 and some hot spares though, and high-quality drives. At that size and drive count I would be much more comfortable with ZFS, and it would allow the use of cheaper drives, although I might just go raidz3 for his desired array.

Like the child who just had to touch the stove, earlier this year, rather than just do a fresh build, I did a RAID 5 to 6 migration while adding a couple of drives on an Adaptec 6805 card -- a couple of weeks later it actually finished, despite a drive failure and replacement along the way. Ugh.

It's just hard to justify any expensive solution for a personal-use array of that size. I rip all my Blu-ray discs to a RAID 6 8x 3TB array on an Adaptec 7805, and with over 400 full-disc rips I still have loads of room. And I'm too lazy to set up a FreeNAS/ZFS box unless someone is paying for my time. :)
 
I am currently planning to go with RealBeast's suggestions, mostly, with what I know so far.

I was planning on using 2x M1015s, just like I started out planning for 2x 2805s, but just as-is, with no hardware RAID setup; FreeNAS is supposed to be doing all that, for the ZFS advantages.

Currently the plan has changed to 16 WD Reds, 8 on each M1015, in a 16x 4TB RAID-Z2 for 50.9TB of usable space, using FreeNAS to run the RAID-Z2 as software RAID. After flashing every green card in sight.

RAID-Z3 sounds like too much parity for me; it works out to 17TB of space lost to parity, which is over 25%.
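Rough math behind those numbers (a small Python sketch; drive sizes in decimal TB, usable space in the binary TiB that FreeNAS reports):

```python
# Quick usable-space check for the plan above. Drive sizes are decimal TB;
# FreeNAS reports binary TiB, which is where the 50.9 figure comes from.

DRIVES, DRIVE_TB = 16, 4
TB_TO_TIB = 1e12 / 2**40          # 1 TB is roughly 0.909 TiB

for level, parity_drives in (("raidz2", 2), ("raidz3", 3)):
    usable_tb = (DRIVES - parity_drives) * DRIVE_TB
    print(f"{level}: parity {parity_drives * DRIVE_TB} TB, "
          f"usable {usable_tb} TB = {usable_tb * TB_TO_TIB:.1f} TiB")

# raidz2: parity 8 TB, usable 56 TB = 50.9 TiB
# raidz3: parity 12 TB, usable 52 TB = 47.3 TiB
# (The ~17 TB "lost" to raidz3 compares 64 TB raw against 47.3 TiB usable,
#  so it folds the TB-to-TiB conversion into the parity cost.)
```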

I still need a hands-on DIY solution for the disks themselves though. If it becomes a dremel-this-and-that solution, I might just plan for half the build for now and maybe live with 2 cases.

Googling it basically turned up either (a) welding two cases together, (b) sucking it up and buying an über-expensive Lian Li case, or (c) getting a case that holds 8 drives, filling the 5.25" bays with metal shelves for 3.5" drives, and attaching a metal cage holding the remainder to the bottom of the middle of the case. That will require quite a bit of cooling throughput though.

So I am going to focus on the 8-drive case solution for now and just accept double overhead on the PC parts. This means I should start with a used Sandy Bridge era computer from the yellow pages or the like to save on overhead. That's the plan so far.
 