
2 Big SSD in RAID1 or 4 Small SSD in RAID10

feezioxiii

Prominent
Feb 26, 2017
I'm preparing to build my new setup but still wonder if there will be an 'actual' performance difference between the two options below:

1. 2 x 1TB SSD - RAID 1
2. 4 x 500GB SSD - RAID 10

My main use case would be hosting VMs (not any kind of video editing, etc.)

Which one would be a better choice and why?
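In terms of usable space, the two options come out identical; only fault tolerance and throughput differ. A quick arithmetic sketch (sizes taken from the two options above):

```python
# RAID 1: n mirrored copies -> usable space = size of one drive.
# RAID 10: mirrored pairs, striped -> usable space = half the raw capacity.
raid1_usable_tb = min(1.0, 1.0)     # 2 x 1TB mirrored
raid10_usable_tb = (4 * 0.5) / 2    # 4 x 500GB in striped mirrors
print(raid1_usable_tb, raid10_usable_tb)  # 1.0 1.0
```

Either way you pay for 2TB of flash to get 1TB of usable space.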
 
Solution
One of the better discussions on raid and backups I have seen.
Good job.

My similar 2 cents:
The value of RAID 1 and its variants like RAID 5 is that you can recover from a drive failure quickly. It is for servers that cannot tolerate any interruption.
Modern hard drives have an advertised mean time to failure on the order of 500,000+ hours. That is something like 50 years. SSDs are similar.
With RAID 1 you are protecting yourself specifically from a hard drive failure, not from other failures such as viruses, operator error,
malware, RAID controller failure, fire, theft, etc.
For that, you need external backup. If you have external backup, and can tolerate some recovery time, you do not need RAID 1.
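For scale, that advertised MTTF converts to years like this (back-of-envelope sketch; note that 500,000 hours is a manufacturer's statistical average across a fleet of drives, not a per-drive lifespan guarantee):

```python
# Convert an advertised MTTF of 500,000 hours into years.
mttf_hours = 500_000
hours_per_year = 24 * 365           # 8,760
mttf_years = mttf_hours / hours_per_year
print(round(mttf_years, 1))         # ~57, i.e. "something like 50 years"
```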



3. Individual drives with no RAID, and an actual backup routine.

Performance is roughly the same among the three options (or, with #3, better).
 


I don't really need 100% uptime (as it's expensive), but if one drive fails, I would need to recover it within a few hours.

I have also read that before a drive fails outright, it can start corrupting data (so the mirrored data will be corrupted as well), and we will have to recover the data anyway. Is that true? If so, then why do we even have RAID 1/10 in the first place 😀
 


So should I go with a single 1TB SSD + a daily backup solution and save some bucks?
 


With an actual, tested backup routine, and a replacement drive...you can recover in about 30 minutes.
No RAID 1 needed.
And you get much finer-grained backup vs a RAID 1 mirror.

Done properly, a true backup can let you recover from some point in the past.
My current backup schedule lets me recover any specific drive from any day in the last 2 weeks.
As opposed to a RAID 1, which is only "now".

For anything other than a physical drive failure, the RAID 1 does nothing.
Accidental deletion, virus, ransomware...the far more common forms of data loss.

Read here: http://www.tomshardware.com/forum/id-3383768/backup-situation-home.html
 


So a single drive + daily backup solution would be good enough?
 


If you don't need real 24/7 100% uninterrupted ops, that is the preferred method.
Full + Incremental or Differential backups.

My systems are on a 2 week rotation.
Full, and then 14 days of Incremental.
Repeat, deleting the oldest ones as it goes.

I can recover any drive, or the whole system, or any individual file from any day in the last two weeks.

"Oh, I need a copy of my resume as it was last Tuesday? No problem."

And in a lot less drive space than a RAID 1.

My C drive is a 500GB SSD.
2 weeks of backups on my schedule takes up 775GB over on my NAS box.
A RAID 1 mirror would consume an entire duplicate 500GB drive, for a single "mirror".
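The full + 14-day incremental rotation described above can be sketched as a tiny scheduler (hypothetical Python, independent of any particular backup tool; day numbers stand in for real dates):

```python
CYCLE_DAYS = 15  # 1 full backup followed by 14 incrementals, then repeat

def backup_kind(day_index: int) -> str:
    """'full' on the first day of each cycle, 'incremental' otherwise."""
    return "full" if day_index % CYCLE_DAYS == 0 else "incremental"

def prune(backup_days: list[int], today: int, keep: int = CYCLE_DAYS) -> list[int]:
    """Delete the oldest sets as the rotation advances (2-week retention)."""
    return [d for d in backup_days if today - d < keep]

print(backup_kind(0), backup_kind(7), backup_kind(15))  # full incremental full
print(prune(list(range(0, 20)), today=19))              # keeps days 5..19
```

Restoring "my resume as it was last Tuesday" then means: restore the last full, then replay incrementals up to that day.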
 
RAID is NOT BACKUP.

You are correct, if "garbage" is written to SSD1, garbage automatically replicates to SSD2, so that won't help you with malware for example.

SSD failure rates are very low. It's SOLID state. RAID was created for mechanical drives. People concerned about 24x7 uptime use dual PSUs; those puppies will fail before an SSD will.

DISASTER RECOVERY is easy. This is your OS, and it typically doesn't change much. All you have to do is keep backup images; make C = OS + apps only to keep those images small. You can restore to a working system in 15 minutes. Great for malware/viruses, as you don't need to run a lengthy cleaner.

DATA BACKUP. OK this one takes more planning.

CONSTANTLY CHANGING DATA. Run some kind of backup procedure, full, incremental, the standard stuff, at intervals.

RARELY CHANGED DATA. E.g., an audio/video library. You can schedule this backup more infrequently.


I know, you want to RAID and forget about it. Real life doesn't work that way.
 




How about the performance difference? Will there be a huge difference in real-life usage between a single drive and RAID 0/10? As I will be hosting VMs, the usage will be above 'normal'.
 


Now you've branched off into RAID 0, striped. Which is a whole other thing.

With SSDs, RAID 0 performance does not scale as it did with HDDs.
At best, you'll see the same performance as individual drives. In some cases, RAID 0 + SSD is slower than individual drives.
 
As to performance, that is a different issue.
Normally a single large SSD will be the best performer.
A large SSD will have more free NAND blocks to accommodate fast updates.
But, I see that you will be running several VMs.
It is likely that they will all be operating independently. (How many would be typical?)

In such a scenario, I can see a performance benefit with several SSDs which can operate simultaneously.
What motherboard and processor will you be using?
A single 1TB drive is normally easiest to manage for a single user.
But, in your case, perhaps 4 x 250GB drives would be equally easy and have the benefit of simultaneous operation.
I do not know how many simultaneous SATA operations are possible with different systems, but it would be worth looking into.


 


Yeah, for that one you will have to Google it. I would want to see some graphed data myself.

Educated sense, though, tells me it helps, but no huge difference. If you've got the dough, go for it.

RAID performance gains were originally about file servers, with a bunch of people hitting those HDDs, pounding and pounding. Now you have solid-state SSDs, there is no longer a read/write head to move around, so no seek latency, and it's a single-user machine. Although you are running VMs, are those VMs constantly pounding the storage, crunching data? Maybe.
 





My VMs are most likely not running any I/O-intensive workload (such as database processing etc.). Just running some simple software 24/7, or even idle.

My typical number of VMs would be around 10-40 (highly depends on usage at the time) on an old server.
Mainboard: X9DRH-7TF (I saw the mainboard has an LSI 2208 too, not sure if it's a good one for SSDs?)
CPU: Dual E5-2670 v1
RAM: 128GB DDR3 ECC

I've heard people saying that more drives will spread the load better than a single drive when it comes to virtualization. However, there is no solid article that has proven the above statement yet. What do you guys think about this?
 


Again, this came from the time of mechanical drives, and it has validity, for MECHANICAL drives. Think about this: in general, PARALLEL processing is faster than SERIAL processing. Serial = one thing at a time; everybody must queue up in line. Parallel = more than one line at the checkout counter. So spreading your data across multiple drives = parallel processing. But once more, an intelligent person would ask: small or large difference, for the cost? Cutting straight to the chase: if your VMs are doing intensive I/O and you've got the dough, go for it.
 
Multiple SSDs, probably.
Multiple SSDs + RAID 0, probably no additional benefit.

The main benefit of the SSD is the near zero latency in accessing data. No waiting for the drive heads to move around.

In your use case, I would probably split the VMs across a few individual drives.
2 or 3 each.

But this depends on your usage and size requirements.
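The "split the VMs across a few drives" idea can be sketched as simple round-robin placement (hypothetical drive and VM names; in practice your hypervisor's storage configuration does this assignment):

```python
def assign_vms(vm_names, drives):
    """Round-robin VM disk images across individual SSDs,
    so each drive serves only a few VMs at once."""
    placement = {d: [] for d in drives}
    for i, vm in enumerate(vm_names):
        placement[drives[i % len(drives)]].append(vm)
    return placement

vms = [f"vm{i}" for i in range(6)]
print(assign_vms(vms, ["ssd0", "ssd1", "ssd2"]))
# {'ssd0': ['vm0', 'vm3'], 'ssd1': ['vm1', 'vm4'], 'ssd2': ['vm2', 'vm5']}
```

A size- or load-aware placement would be smarter, but for lightly loaded VMs round-robin is usually good enough.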
 
Your SSD performance will not be an issue.
Since your VMs are not I/O intensive, I see no need to try to optimize I/O performance.
A simple 1TB drive will do.
Your motherboard supports two 6Gb/s SATA ports.
If your budget permits, buy a 1TB Samsung 860 PRO. It is marginally faster than other drives, with better endurance.
You could buy two 500GB drives, one for each 6Gb/s SATA port.

Your motherboard supports up to 512GB of RAM.
You have 128GB.
With 40 VMs active, that allows about 3GB per VM.
Sounds reasonable to me.
Whatever you do, you do not want to get into the situation where an excessive number of VMs causes hard page faults.
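The memory headroom estimate above is just division (numbers from the posted spec; real per-VM usage varies with guest OS overhead):

```python
total_ram_gb = 128
peak_vms = 40
print(total_ram_gb / peak_vms)  # 3.2 GB per VM at the 40-VM peak
```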
 
10-40 VMs all relying on a single (mirrored) SSD?

I'd choose your initial idea of 4 drives in RAID 10... just because no single VM will be demanding constant disk access is no reason to have up to 40 VMs bottlenecked by a single SSD's read/write throughput.

(You might get better-informed virtualization-host design answers over at Spiceworks, vice a few folks perhaps speculating that one SSD is more than fast enough for up to 40 VMs based on...well, who knows what...)