One SSD Vs. Two In RAID: Which Is Better?

Page 3
Status
Not open for further replies.

bigdragon

Distinguished
Oct 19, 2011
1,145
620
20,160
I've had 2 Vertex 3 240GB SSDs in RAID 0 on my X79 system for a year and a half now. Love it. There's only ever been one little issue, where the Intel controller dropped a drive and caused a corruption issue with Chrome. That was a year ago; I haven't had any problems since upgrading the Intel RST drivers. It's probably overkill, but I like seeing one big drive that's a piece of cake to back up and consistently puts me in games faster than my friends. Speed is more of a priority to me than reliability.

My experience has been that the internet browser is the primary SSD-killer on a system. Putting its cache on a RAM disk has kept my system running strong. All those little cache writes really wreak havoc after a year or two. Yeah, the real-world performance isn't anything significant over a standalone SSD most of the time. I don't bother upgrading computers incrementally though.
 

flong777

Honorable
Mar 7, 2013
185
0
10,690
Great article. One thing not pointed out is that Intel RAID setups are a pain in the ass to keep running all the time. This article shows that they are unnecessary for normal desktop use. It is amusing that RAID is of little or no benefit to the average user.
 
My experience with RAID 0 ended with SATA drives. In years past, drives were smaller and much slower. Memory was also expensive in those days; only the wealthy could afford much more than 8 meg (yes, MEG) of memory, let alone 8 gig like the standard today. RAID 0 did realize some decent performance gains in those old systems. Since the SATA age, though, along with cheap memory, RAID 0 shows very little to no benefit whatsoever in a modern system, gaming or otherwise.

Drive failure: in my experience, again, RAID data loss is rarely because a drive went bad. Far more often it is simply because the array is degraded or broken, usually from something the user has done tinkering around, and before they realize it's broken, or figure out how to fix it correctly, they have damaged the array past the point of no return.
 

ipheegrl

Honorable
May 23, 2013
2
0
10,510
I use a workstation with 6x 256GB Samsung 840 Pros in RAID 0 on an LSI 9271-8iCC PCIe RAID controller.

It pulls around 100,000 IOPS on the IOMeter workstation benchmark pattern at high queue depths, so performance does scale as expected.

[citation][nom]Kamab[/nom]Putting them in RAID0 doubles your chance of data failure, aka either drive fails and you probably lose everything.[/citation]

Look at the data on return rates for Samsung SSDs:

http://www.behardware.com/articles/881-7/components-returns-rates-7.html

At 0.48%, the odds of at least one drive failing in even a six drive array are around 2.9%. Now compare that to HDD failure rates here:

http://www.behardware.com/articles/881-6/components-returns-rates-7.html

and it's in the same ballpark. Let's also not forget how differently HDD and SSD failure rates change with time:

http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923-9.html

Samsung lists consumer AFR for the 840 Pro at 0.16%, but the real number is likely in between the two.

IMO, the increase in failure rate is not really significant. I'm not saying you shouldn't keep backups of crucial data; I have an HDD RAID 1 array for exactly that purpose. But it's certainly not a deal breaker, especially given the performance it delivers.

[citation][nom]Vorador2[/nom]RAID0 considerably increases the wear and tear on the drives so they will fail earlier.[/citation]

This could only hold true for drives that use on-the-fly compression, like Sandforce SSDs. Samsung's in-house controller doesn't utilize on-the-fly compression. Moreover, given the same load, writes will be distributed across all drives in RAID 0 compared to being written to a single drive. So even if maximum wear scales linearly with drive capacity, you won't see any appreciable change in lifespan.

Not to mention that for 256 GB MLC drives with 3000 P/E cycles, you have to write at least 700 TB before you get close to the limit. Given how cautious most people have been told to be about writing to SSDs, it's more likely you'll buy new drives before your old ones wear out.
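The rough endurance estimate behind that figure is just capacity times rated P/E cycles (ignoring write amplification, which reduces it in practice):

```python
# Back-of-the-envelope NAND endurance: total host writes before wear-out
# is approximately capacity * rated P/E cycles. Write amplification
# (ignored here) means real drives tolerate somewhat less.

def endurance_tb(capacity_gb: int, pe_cycles: int) -> float:
    """Approximate total writes (in TB) before hitting the P/E limit."""
    return capacity_gb * pe_cycles / 1000  # GB -> TB

print(endurance_tb(256, 3000))  # 256 GB MLC at 3000 cycles -> 768.0 TB
```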

[citation][nom]tsondo[/nom]I am surprised that the article did not mention the biggest performance drawback of RAID0 with SSDs. TRIM is not supported for most RAID0 controllers. Over time, you are going to lose any performance advantage you initially had, at least with MLC. No one should use RAID0 with SSDs unless they can periodically delete everything and reset the drives. If you are using a recent Intel controller you may be ok, but it's worth checking before you invest in it.[/citation]

Garbage collection algorithms are a lot better than they were in the past. I'd agree with you if you were running a server under constant load, but if you give the drives some idle time, they'll recover. Just look at the review for the Samsung 840:

http://www.anandtech.com/show/6337/samsung-ssd-840-250gb-review/11

Some idle time restored write performance completely after torture tests.

Unless you need large and sustained I/O, garbage collection will suffice.
 

ipheegrl

Honorable
May 23, 2013
2
0
10,510
[citation][nom]Evolution2001[/nom]I have an Asus P9X79 with an i7-3930K and 32GB RAM. I initially built it with (2) 120GB OCZ Vertex 3's in a RAID 0 on the Intel SATA3 RAID controller. I was hitting max speeds over 1GB/s. This was built for video production. During the times I had to do video file conversions, the CPU had the biggest impact (along with software that would support multiple cores), but this shouldn't really come as any surprise.

I have since switched to a single Samsung 840 PRO 256GB. And there is no difference. And I didn't really expect there to be much difference. Standalone, the drive still pushes 500MB/s on sequential reads/writes. I don't expect any DVE or NLE workstations to be able to process more than the read/writes of a capable single SSD on a 6Gb controller.

And I'll tell you what... the primary reason I switched to the single drive? I was constantly getting parity errors. I got burnt twice with the RAID 0 failing. So I then switched to a RAID 1 setup. I figured that while my storage capacity just got cut in half, having my data saved in the event of a catastrophic drive failure was more important. Unfortunately, I was still getting parity errors. (I spent months troubleshooting this on various tech forums... OCZ, ASUS, Intel, wherever.) Eventually I dropped the idea of a RAID altogether and went with the Samsung 840 PRO. The system has been pretty much trouble-free since then. And the upside is now I have two Vertex 3's I was able to repurpose to some aging laptops.

So yeah, this article by Tom's simply confirms what has been said for years and what I knew deep down was a concern: a single drive failure ruins everything. And quite frankly, if I have data that's not important, I'm going to store it on slower HDDs and not on costlier (per MB) SSDs. I'll save my SSDs for the most important, mission-critical things like the OS, applications, and immediate data needs.

On a side note (re: threadjack), y'all need to check out the SK Hynix line of SSDs on Newegg (or wherever). The SH910's are, based on my benchmark results, the Strontium Hawk series. (I haven't cracked open the cases to confirm.) The 120GB drives are giving me better max speeds than my Sammy 840 PRO 256GB drive! Sometimes you can even find the SH910's for $90. Definitely worth a look![/citation]

Your problem was likely with the drives themselves, as OCZ is notorious for high return rates (see the Vertex 3 data in the link below).

http://www.behardware.com/articles/862-7/components-returns-rates-6.html



 

nekromobo

Distinguished
Jul 17, 2008
110
0
18,680
[citation][nom]Marcus52[/nom]Yes, but the actual MTBF of SSDs is so high that I personally consider it to be a moot point.[/citation]

Well, SSDs do fail, maybe not because the NAND is getting too many writes, but because of controller, firmware, or software bugs. I myself fried a Crucial m4 in a laptop, probably because of a BSOD/power loss for some other reason. It had 99.9% of its lifetime still "left".
 

Emil Simunovic

Honorable
May 27, 2013
1
0
10,510
First of all, RAID 0 is not meant to be a storage disk, so please don't bring up safety. Safety you get by buying quality parts; it has nothing to do with RAID 0. A single no-name SSD will fail too.
I used to run a single Vertex 3 128 GB for a long time and then decided to add another one in RAID 0. I can feel the difference here and there. Everything depends on what you do. I do some video editing, and for that alone RAID 0 makes me extremely happy. Huge difference compared to a single SSD. It all depends on what you do. If you browse all day long or just use Office apps, then sure, RAID 0 is not for you.
 

sna

Distinguished
BANNED
Jan 17, 2010
1,303
1
19,660


Do you know that all SSDs have RAID inside?

Why do you think more capacity gives more performance, then?

SSDs are not mechanical... don't worry much about it...

And always back up your important files; doesn't matter if it's one drive or 2 in RAID...

 

Paul Kucherka

Honorable
May 31, 2013
1
0
10,510
Try the latest Intel Rapid Storage Technology software. TRIM is now supported on older RAID systems. And why are we testing these shoddy drives? Try something with the superior SandForce SF-2281 controller. Then you could bust the 1GB/s barrier on BOTH read & write in RAID 0 - ALWAYS 2x SSD RAID 0 for the OS for me!!!!
 

bp_968

Distinguished
Nov 20, 2012
25
7
18,535
Someone mentioned SSDs having a very high MTBF (published by the drive makers), but that number is just a (slightly) scientific guess made by the manufacturer. Take a look at OCZ drives from a year or two ago if you're curious how much weight that number should hold for you. MTTDL is the number that's truly important, and in my experience with SSDs (especially OCZ) it's a matter of months, lol. Considering the pain and effort expended when you lose a drive/data, I simply can't fathom the mindset of people willing to risk it all with RAID 0 arrays and a simple, or no, backup scheme.

Here is what *I* would do if I felt the need for a RAID 0 SSD array for my gaming machine. Primary drive: 2 SSDs in RAID 0 (or however many SSDs you want). Secondary drive: 2 2/3/4TB drives in a RAID 1 mirror. Third drive: a single 2/3/4TB drive. Nightly backup of the primary SSD array to the mirror set; keep 2 nights' worth. Weekly backup to the single drive; keep 2 weeks.

If you don't like "wasting" that much space, then only keep one nightly and one weekly and keep them both on the single drive; drop the mirror and instead use 2 single drives and syncing software. Honestly, for home users a set of single drives synced with file-level syncing software is superior to a mirrored set anyway, unless you really need the extra read speed from the mirror. A mirror isn't a "backup" of two drives; it's a way to increase uptime, *not* provide a backup.
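The retention part of a scheme like that (keep the two newest nightly backups, prune the rest) could be sketched as follows; the filenames are hypothetical and assumed to sort chronologically:

```python
# Minimal sketch of a keep-N retention policy: given a list of backup
# image names that sort chronologically (ISO dates), return the ones
# that should be deleted. Filenames here are made up for illustration.

def prune(backups, keep=2):
    """Return the backup names to delete, keeping the `keep` newest."""
    ordered = sorted(backups)
    return ordered[:-keep] if len(ordered) > keep else []

nightly = ["ssd-2013-06-28.img", "ssd-2013-06-29.img", "ssd-2013-06-30.img"]
print(prune(nightly))  # only the oldest nightly image is pruned
```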
 


I agree completely that mechanical drives are doomed to become ancient history pretty quickly. But I have no idea why an average user, or even most power users, would need more bandwidth than a single SSD provides. Not to mention that if you break the array (no ifs, ands, or buts about it; it will happen), you lose everything on it.
 

qwerrewqqwerrewq

Honorable
Jun 3, 2013
1
0
10,510


I don't know what RAID controllers you've been using, but it's fairly simple to break arrays and reconfigure them (I do it all the time for drive firmware updates). They allow for this by saving configuration data to each of the drives. As long as drive data is uncorrupted, breaking the array is harmless.



 

mapesdhs

Distinguished


That's not true; some SSDs have very good garbage collection and in the long run perform perfectly OK without the need for TRIM, which includes the Vertex 2E (deliberately made that way since at launch many users were still on XP). It's why I use V2Es with my SGIs: the IRIX OS does not support TRIM, yet performance now (after 2 years) is actually faster than when I first installed them. Having said that, it could well be that certain types of RAID usage prevent such drives from carrying out their garbage collection effectively.

Who knows, though; perhaps there are now SSDs which don't function as well as they might unless TRIM is available, because the vendor has assumed the user's OS supports it and thus made less effort to include good garbage collection in the firmware.

Ian.

 

Well, no kidding! Of course if anyone breaks the array on purpose, they are most likely going to know what they're doing and can re-establish the array without problems. However, tell that to all the people who have lost everything on their RAID 0 array because they broke it somehow fiddling around and did not realize what was going on until they had messed up the data on the drives past the point of no return. Back in the day, when RAID was a popular solution for an enthusiast rig, this problem happened commonly and was posted on these forums countless times. The point is, most people don't need RAID in the first place. The benefit of a RAID array in a modern system, for MOST people, is moot.
 

Suferbus

Honorable
Jul 1, 2013
231
1
10,710
2 512 GB SSDs in RAID 0 give you 1 TB of storage..... If you want 1 TB of storage, there are better ways to get it; you are 2 times as likely to fail because you're running 2 drives in RAID 0. One drive goes, you lose everything. RAID 0 writes 1/2 of the data onto each disk.
 

sna

Distinguished
BANNED
Jan 17, 2010
1,303
1
19,660


simply wrong.

2 512GB in Raid 0 will give you 1TB .
 


I think maybe you have your RAID types confused....... RAID 1, or mirroring, is what you are thinking of here, where the second drive is simply an exact mirror of the first. Great for redundancy and reading large files (notice I said redundancy; it should never be considered a backup). RAID 0 is data striped across 2 or more drives.
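The distinction above can be illustrated with a toy sketch: RAID 0 deals out chunks round-robin across the drives (capacities add up), while RAID 1 writes the same data to every drive (capacity stays at one drive). The chunk size and data here are made up for the example:

```python
# Toy model of RAID 0 striping vs RAID 1 mirroring. Real controllers
# stripe in much larger chunks (e.g. 64-128 KB), but the layout idea
# is the same.

def raid0_stripe(data: bytes, drives: int, chunk: int = 4):
    """Distribute fixed-size chunks round-robin across the drives."""
    stripes = [bytearray() for _ in range(drives)]
    for i in range(0, len(data), chunk):
        stripes[(i // chunk) % drives] += data[i:i + chunk]
    return stripes

def raid1_mirror(data: bytes, drives: int):
    """Every drive holds a full copy of the data."""
    return [bytearray(data) for _ in range(drives)]

data = b"ABCDEFGHIJKLMNOP"
print(raid0_stripe(data, 2))  # half the bytes land on each drive
print(raid1_mirror(data, 2))  # each drive holds the whole thing
```

Losing one stripe in the RAID 0 case leaves only interleaved fragments, which is why a single failed member kills the whole array; losing one mirror copy in RAID 1 loses nothing.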
 

Suferbus

Honorable
Jul 1, 2013
231
1
10,710

No, I am not utterly wrong. In RAID 0, 1/2 of the data is written to each disk. So say you have 1 gig of data being written: 500 MB of data is written to each disk. So yes, in theory you have 1 TB if you have 2 512GB disks, but if one drive fails, which it will because SSDs fail, then you lose everything. I'm not confused, but maybe I do need to explain myself a little better for amateurs.
 

Suferbus

Honorable
Jul 1, 2013
231
1
10,710

Ya, thanks, I know what RAID 0 is; I just did not explain myself well enough. In theory you do have 1 TB, but 1/2 of the data being written is on each drive, and if one drive fails (hence the 0 in RAID 0), you lose everything... Sorry, maybe I need to explain my comments further. I was trying to say 1/2 of the data is written on one of the 512s and 1/2 on the other. Not utterly wrong in any way, and a very unstable setup, and useless in an SSD configuration.
 

mapesdhs

Distinguished
Suferbus writes:
> No I am not utterly wrong. ...

Let me quote your earlier post:

"2 512 gb ssd in raid 0 gives you 512 gb of storage.....If you want 1 tb of storage, you cannot run 2 in raid 0."

That is WRONG, 100% incorrect. Whatever you may have been trying to say about reliability was ruined by truly appalling wording. :D


> ... which it will because ssd's fail, ...

You'll find plenty here who will say they're using 2+ SSDs in RAID0 perfectly
happily. Of course a single unit failing will lose all the data, but those not
using RAID1/10 will most likely just be manually backing up to some other
device on a regular basis, such as an image backup with Macrium to a
normal mechanical drive.


> ... I'm not confused, but maybe I do need to explain myself a little better for amateurs.

ROFL. :D Perhaps you underestimate the level of knowledge here.
I can't speak for others, but I own over a thousand drives, and my largest RAID 0 just now has 36 x 146GB 15K SCSI drives for Flame (SGI).

Read your original post, the wording is just wrong, period. Maybe
that's not what you meant to say, but that's what you did say.
Indeed, your post before that was wrong as well.


> ... I know what raid0 is. ...

Doesn't sound like it. Your later comments suggest you do get the way data is spread across multiple drives (though do you know how it's affected by request size, boundary alignment, page alignment, etc.? These are important for some uses of RAID 0).


> ... and one drive fails ...

IF one drive fails. You seem to be implying it always will, and that somehow such an occurrence is directly linked to using SSDs, which is false.


> ... Not utterly wrong in any way, ...

We all know data is striped across both drives, but you
quite clearly said earlier that the total space is then just
equivalent to one of the drives, which is indeed wrong.
Are you trolling or something?


> ... and very unstable set up, and useless in ssd configuration.

The stability of RAID 0, whether it uses SSDs or not, depends on a whole host of factors, and it's certainly not "useless" if one's purpose in using RAID 0 is speed; I expect most users would confirm that was their intent.

I suggest you read your original posts & just concede that
whatever it was you were trying to convey at the time, the
literal, implied and inferable meaning of what you said was
not true. Then we can move on.

Ian.

 

Suferbus

Honorable
Jul 1, 2013
231
1
10,710
I am not admitting sh*t because that is not what I meant, and you're not right either. RAID 0 is not needed and not recommended in most cases when SSDs are being utilized. The performance gain a user gets just from upgrading to an SSD is enough, and running RAID 0 is pointless in most cases due to double the failure rate and data loss. It is posted everywhere, on every forum, and you cannot change that fact; I did not mean my first comment any other way than that way, and you're full of sh*t, period.

There you go, putting words in my mouth again. You're such a loser, dude. I never said or implied SSDs fail more than any other drive; I said you are twice as likely to fail since you're using at least 2 drives in RAID 0. Do you understand that, or should I repeat it one more time? Here, let's try it one letter at a time: i f y o u r u n r a i d 0 w i t h 2 d r i v e s, y o u a r e 2 t i m e s a s l i k e l y t o f a i l, hence not worth the performance gain if you are already using SSDs... Holy sh*t, you're dense. And just because you have a lot of hard drives doesn't make you a hard drive expert, which I also never said that I was either, so leave me alone, you a**.
 