[SOLVED] Program for software raid 5?

Araphen

Distinguished
Oct 20, 2013
18
0
18,510
Howdy homies,

I hope this forum post finds you well. I recently upgraded my Plex server from four 4TB drives to six. They're all hooked up directly to my motherboard, and I was using the motherboard's fake RAID for RAID 5 on the original four drives, which took about 18 hours to initialize. I backed up all the data to an external drive, added the two new drives, and tried setting up the same thing, but it ran for 3 days without completing. For some reason, with 3-4 drives it shows a progress bar, but with 5-6 it doesn't. I had no idea if it was progressing, and 3 days felt like way too long to wait, so I canceled it and I'm looking for alternative solutions.

I read up on fake RAID, and most people recommend just using software RAID instead, but Win10 Pro doesn't offer RAID 5 in Disk Management. I set up a striped volume with plans of relying on my weekly backups, but that feels like too much of a compromise. Is there a go-to program for software RAID 5? Or does anyone have any other suggestions?

All suggestions are much appreciated

Forever yours,
Araphen
 
 
Windows 10 does allow software RAID 5, they just don't call it that.

Under Manage Storage Spaces, choose Create a new pool and storage space, select at least three drives, and click Create pool. RAID 5 is the "Parity" resiliency type. Set up correctly, you give up one drive's worth of capacity for resiliency, and the array survives the failure of any one drive.

Software RAID is safer in the sense that if hardware like the motherboard fails, you can recover the array without having to find the exact same hardware, as you would with fake RAID. But RAID is only for uptime, and it does not obviate the need for a backup.
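The capacity trade-off described above is easy to sketch. Here is a minimal Python illustration of the parity (RAID 5-style) math; the function name is my own, not part of any Windows tooling:

```python
# RAID 5 usable capacity: with n drives you keep (n - 1) drives' worth of
# space; one drive's worth is consumed by distributed parity.

def raid5_usable(n_drives: int, drive_tb: float) -> float:
    """Usable capacity of a RAID 5 / parity array, in the same unit as drive_tb."""
    if n_drives < 3:
        raise ValueError("RAID 5 needs at least three drives")
    return (n_drives - 1) * drive_tb

# Six 4TB drives: 5 x 4TB = 20TB usable (decimal TB, before unit conversion
# and filesystem overhead).
print(raid5_usable(6, 4.0))   # 20.0
```

The same formula applies regardless of drive count: three 6TB drives would give 12TB usable.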
 
Solution
Software or hardware, RAID is not a backup solution and should never be used as one.

Your best bet is to build yourself a cheap NAS running Linux.
I was considering that as well, but I've already spent too much money and I want to make this work with what I have. I also considered an external RAID enclosure since they're cheaper from what I found, but that's still an extra $200 USD.

I know RAID isn't a backup; that's why I'm running weekly backups to an external. I know that's not up to some people's standards and ransomware might get me, but at the moment I'd rather risk it than spend another few hundred dollars on more backups.


Windows 10 does allow software RAID 5, they just don't call it that.

Under Manage Storage Spaces and Create a new pool and storage space, select at least three drives and Create Pool. RAID 5 is Resiliency Type "Parity." If you set it up right you should lose a drive's worth of capacity for resiliency in the case any one drive should fail.

Software RAID is safer in the sense that if your hardware like the motherboard fails, you can recover the array without having to find the exact same hardware like you would have to with fake RAID. But RAID is only for uptime, and does not obviate the need for a backup.

I'm still running weekly backups to an external HDD. I think there's a higher chance of losing everything to ransomware or a house fire than losing two drives in the RAID plus the external. Offsite backups are way too pricey, and the data is just Plex media, so it's not critical; replacing it would be a pain and take a few weeks, but it's not impossible.

I tried setting it up but I'm only seeing 14.5TB size. Does this look right?

[Screenshot: Storage Spaces setup showing a 14.5TB storage space]
 
I tried setting it up but I'm only seeing 14.5TB size. Does this look right?
Yes.
The RAID 5 loses the size of one drive.
4 x 6TB = 24 (21.8)
3x 6TB = 18 (14.5)

You're seeing "14.5" instead of 18 simply because of different units.
Base 10 vs Base 2.
You're not 'losing' anything.


Having said that...
Why the RAID 5? Its only real purpose is continued uptime in the event of a physical drive failure.
If you have a reasonable backup routine, then that parity space is just wasted space.
 
Weekly backup to external is fine, especially if you can keep the external at a different location.

Looks about right. You're losing 1/3 of your capacity to the RAID. (The formatted capacity of each 4TB drive is 3.63TB because drives are marketed in decimal units, 1TB = 10^12 bytes, while Windows reports binary units, 1TB = 2^40 bytes; the decimal convention puts a larger number on the box.)
 
Yes.
The RAID 5 loses the size of one drive.
4 x 6TB = 24 (21.8)
3x 6TB = 18 (14.5)

You're seeing the "14.5" instead of 18 is simply due to different units.
Base 10 vs Base 2.
You're not 'losing' anything.


Having said that...
Why the RAID 5? Its only real purpose is continued uptime in the event of a physical drive fail.
If you have a reasonable backup routine, then that parity space is just wasted space.

Oh, I looked straight at the lowest number and got confused immediately, but I'm still confused. If RAID 5 uses one drive for parity and I'm using six 4TB drives (each with about 3.63TB of usable space), the total usable space without RAID is 21.8TB, and with RAID it should be about 18.17TB, right?

I wanted to go with RAID 5 since it takes 10+ hours to restore from backup (the external's max read is 200MB/s), and if one drive fails, it would be great to just order a new one and swap it in without 2-4 days of downtime (diagnose, order a replacement, swap it, restore from backup). I figured with six drives the chance of a drive failure isn't negligible, and I'm expecting this setup to last another 4-6 years at the rate I'm adding new content.
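The numbers in this post check out under the decimal-vs-binary unit conversion mentioned above; a quick sketch in Python:

```python
# Drives are sold in decimal terabytes (10**12 bytes) but Windows reports
# binary terabytes (2**40 bytes), so a "4TB" drive shows up as ~3.64TB.

TB = 10**12   # decimal terabyte (marketing unit)
TiB = 2**40   # binary terabyte (what Windows displays as "TB")

per_drive = 4 * TB / TiB          # capacity of one drive as Windows sees it
raw_total = 6 * per_drive         # six drives, no RAID
raid5_usable = 5 * per_drive      # one drive's worth lost to parity

print(round(per_drive, 2), round(raw_total, 2), round(raid5_usable, 2))
# 3.64 21.83 18.19
```

So ~18.2TB is indeed what a classic one-drive-parity RAID 5 of six such drives should show.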
 
Weekly backup to external is fine, especially if you can keep the external at a different location.

Looks about right. You're losing 1/3 of your capacity to the RAID (the formatted capacity of each 4TB drive is 3.63TB as that's the difference between 1024kB sized megabytes and the marketing megabyte that's 1000kB to allow a larger number on the box)

Since I'm using six drives, shouldn't the amount of storage used for parity be 1/6th? When I was using fake RAID 5 with four drives, the parity space was 1/4th of my total capacity.
 
Oh I looked straight at the lowest number and was confused immediately but I'm still confused though. If RAID 5 uses one drive for parity and I'm using 6 4TB drives (each has about 3.63TB usable space) the total usable space without RAID is 21.8 and with RAID it should be about 18.17TB, right?

I wanted to go with raid 5 since it takes 10+ hours to restore from backup (external's max read is 200MB/s) and if one drive fails, it would be great to be able to just order a new one and swap it without 2-4 days of down time (diagnose, order replacement, swap it, and restore from backup). I figured with 6 drives the chance of a drive failure isn't negligible and I'm expecting this set up to last another 4-6 years at the rate I'm adding new content.
Replacing a dead drive in a RAID 5 also takes a long time.
The entire array gets rebuilt, on the order of 1.5 hours per TB, and it is unusable during that time.
Yes, I've done this, and it DOES take that long.

And I misread the drive config.
I thought you moved from 4TB drives to 6TB drives.

Not 4 drives to 6.
 
Yes, with actual RAID 5 you are only supposed to lose the capacity of a single drive (3.63TB) and the array is only supposed to be able to recover from the failure of a single disk. I don't know if Microsoft's parity allows losing any two disks, or if they just programmed it expecting people to use only three disks, where 1/3 would make sense. That's a question you'd have to ask them.

With RAID 1 you lose a whopping 1/2 of the capacity but it's fast because no parity calculations are required. This even allows a degraded array with a dead drive to perform at fully normal speeds.
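One possible explanation for the 14.5TB figure, and this is my own assumption rather than anything Microsoft states in the UI: Storage Spaces stripes parity across a fixed number of "columns", and if the space was created with three columns, only 2/3 of the pool is data no matter how many disks are in it. The arithmetic at least lines up:

```python
# With c parity columns, (c - 1) / c of the striped capacity holds data.
def parity_usable(raw_tb: float, columns: int) -> float:
    return raw_tb * (columns - 1) / columns

raw = 6 * (4 * 10**12 / 2**40)          # six 4TB drives as Windows sees them, ~21.8
print(round(parity_usable(raw, 3), 2))  # 3 columns -> 14.55, close to the 14.5 shown
print(round(parity_usable(raw, 6), 2))  # 6 columns -> 18.19, classic one-drive parity
```

The small remaining gap to 14.5 would be pool metadata overhead, but again, the three-column default is a guess on my part.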
 
Replacing a dead drive in a RAID 5 also takes a long time.
The entire array gets rebuilt. On the order of 1.5 hour per TB, and is unusable during that time.
Yes, I've done this, and it DOES take that long.

And I misread the drive config.
I thought you moved from 4TB drives to 6TB drives.

Not 4 drives to 6.

Damn, I didn't know it took that long and would be unusable during the rebuild. Is that 1.5 hours per TB of the drive being replaced (so replacing a 4TB drive would take about 6 hours) or 1.5 hours per TB of content in the RAID? That's still sucky, but a drive failure wouldn't mean immediate downtime, and the rebuild would only take about 12 hours.

So 14.5 is an unexpected number, right? The space allocated for parity should be equivalent to one drive, but this looks like it's using two whole drives for parity. I could just change it to 18.17TB when I set it up, but would that cause problems?


Yes, with actual RAID 5 you are only supposed to lose the capacity of a single drive (3.63TB) and the array is only supposed to be able to recover from the failure of a single disk. I don't know if Microsoft's parity allows losing any two disks, or if they just programmed it expecting people to use only three disks, where 1/3 would make sense. That's a question you'd have to ask them.

With RAID 1 you lose a whopping 1/2 of the capacity but it's fast because no parity calculations are required. This even allows a degraded array with a dead drive to perform at fully normal speeds.

I have the option when setting it up to change the size manually, and I can set it to 18.17TB. Doing so actually changes the size of the usable storage space, but I don't know if it retains the parity.
 
1.5 hour per TB of the actual data space.

So if you had 10TB consumed, you're looking at a 15-hour rebuild, during which ALL drives are hammered 100%.
And if a bunch of these drives were bought at the same time, what are the chances of a second one dying during that rebuild? A RAID 5 cannot recover from two dead drives.

That is for an actual RAID 5, I am unsure of the specifics regarding Storage Spaces.
As well as unsure of their calcs for how much space is available with whatever drives.

If you can expand it to 18.1TB, presumably that will take into account the 2 new drives.
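The rebuild rule of thumb above, plus a back-of-envelope version of the "second drive dies mid-rebuild" risk, can be sketched like this. The 2% annual failure rate is purely an illustrative assumption of mine, as is the independent-failure model (drives from one batch often fail in correlated ways, which is the whole worry):

```python
# Rough RAID 5 rebuild time (~1.5 h per TB of stored data, per the rule of
# thumb above) and a crude estimate of a second failure during the rebuild.

REBUILD_H_PER_TB = 1.5

def rebuild_hours(data_tb: float) -> float:
    return data_tb * REBUILD_H_PER_TB

def second_failure_risk(surviving_drives: int, rebuild_h: float,
                        annual_failure_rate: float = 0.02) -> float:
    """Chance that any surviving drive fails during the rebuild window.

    Assumes independent failures at a constant rate; the 2% AFR default
    is an illustrative assumption, not a measured figure.
    """
    hours_per_year = 8760
    p_one_survives = (1 - annual_failure_rate) ** (rebuild_h / hours_per_year)
    return 1 - p_one_survives ** surviving_drives

hrs = rebuild_hours(10)      # 10TB of data -> 15.0 hours
print(hrs, round(second_failure_risk(5, hrs) * 100, 3))   # risk in percent
```

Under those (optimistic) assumptions the risk is small but nonzero; correlated batch failures make the real number worse.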
 
I have the option when setting it up to change the size manually and I can set it to 18.17TB. Doing so actually changes the size of the usable storage space but I don't know if it retains the parity
Yes, that should retain enough space for 1 drive resiliency then.

I would think that using an actual computer with a fast CPU to do the calculations would rebuild an array much quicker than whatever cheapo router SoC they put into NAS boxes! If it's fast enough, the drives themselves could be rate-limiting.
 
1.5 hour per TB of the actual data space.

So if you had 10TB consumed, you're looking at a 15 hour rebuild time. During which ALL drives are hammered 100%.
So if Drive #2 dies, and a bunch of these were bought at the same time, what are the chances of a second one going during this rebuild. A RAID 5 cannot recover from 2x dead drives.

That is for an actual RAID 5, I am unsure of the specifics regarding Storage Spaces.
As well as unsure of their calcs for how much space is available with whatever drives.

If you can expand it to 18.1TB, presumably that will take into account the 2 new drives.

Oh right, I was doing my math wrong. I think I'm just gonna set the size to 18.1TB, and if a drive fails and I actually don't have any parity, then I'll have to rely on my backup, I guess. Maybe I'll use the backup and the Storage Spaces array and put them in a mirrored volume in Disk Management; I wonder if I can even do that, and if so, how it would impact performance.


Yes, that should retain enough space for 1 drive resiliency then.

I would think that using an actual computer with a fast CPU to do the calculations would rebuild an array much quicker than whatever cheapo router SoC CPU they put into NAS boxes! If fast enough then the drives themselves could be rate-limiting

That's what I'm hoping for. The read/write speed of a single WD Red is about 200MB/s, so if the CPU can handle it, I would think it could be rebuilt in less than 6 hours, but USAFRet is saying it would take way longer than that, so idk.
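The "less than 6 hours" figure is the best case where the rebuild is purely limited by sequential write speed to the replacement drive; a quick sanity check of that bound (200MB/s is the poster's WD Red figure, not a measured value):

```python
# Best-case rebuild time: writing one full 4TB replacement drive
# at ~200 MB/s sequential, with no seek or CPU overhead.

drive_bytes = 4 * 10**12      # one 4TB replacement drive
throughput = 200 * 10**6      # ~200 MB/s

hours = drive_bytes / throughput / 3600
print(round(hours, 1))        # 5.6 hours, best case
```

Real rebuilds run slower because the array keeps serving I/O and the reads aren't perfectly sequential, which is why the 1.5 h/TB rule of thumb comes out higher.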
 
I would think that using an actual computer with a fast CPU to do the calculations would rebuild an array much quicker than whatever cheapo router SoC CPU they put into NAS boxes! If fast enough then the drives themselves could be rate-limiting
It's not just the CPU.

https://serverfault.com/questions/967930/raid-5-6-rebuild-time-calculation