RAID 0 with 4 120GB SSDs?

pazsion

Distinguished
Feb 9, 2009
155
0
18,690
we all need that speed =c

Need and could use... are two different things. I'm still mad the internet went backwards in terms of megabytes vs megabits... and nobody noticed or complained, basically because some people decided we didn't need that much speed. It's so frustrating when I want to watch HD videos and neither the provider nor my ISP has the bandwidth to play them properly.

So I download it to my 5400RPM hard drive, which is also inadequate... sigh.

Let's get a theoretical number here for the drives.

If one normally gets 20-40MB/s random read and write, which is closest to what actual users see during multitasking, these SSDs are not far above that base number.

You'd get 80-160MB/s sustained, in theory. That still doesn't meet basic needs for any PC user, but it works, and it sure beats waiting around for one drive. If you're lucky the drives will have their own controllers and won't hog CPU cycles; the OS can use all of that up by itself if it's the drive the OS is installed on.
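
Putting that arithmetic in one place, here is a quick Python sketch; the 20-40MB/s per-drive figure and the scaling factor are assumptions from this post, not benchmarks:

# Back-of-the-envelope RAID 0 throughput; inputs are assumptions, not measurements
def raid0_throughput(per_drive_mb_s, drives, scaling=1.0):
    # scaling < 1.0 models controller and striping overhead; 1.0 is the ideal case
    return per_drive_mb_s * drives * scaling

print(raid0_throughput(20, 4), raid0_throughput(40, 4))            # 80.0 160.0 MB/s ideal
print(raid0_throughput(20, 4, 0.8), raid0_throughput(40, 4, 0.8))  # 64.0 128.0 MB/s with overhead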

You could play three, four or more HD videos and 320kbps MP3s at the same time from disk, depending on your CPU and graphics setup, with almost no hiccups, and record a few RAW music files as well. You can easily do this from the net, because you're only getting 600-800kbps per video or MP3, and maybe a 10KB-100KB cache file is all that hits your hard drive; there it's mostly using RAM, CPU and graphics.
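
For scale, a rough bandwidth budget for that local playback/recording mix; the bitrates here are ballpark assumptions, not measured figures:

# Rough disk-bandwidth budget for the mixed playback/recording scenario above
hd_video  = 8_000_000 / 8 / 1e6    # assume ~8 Mbps for a local HD file -> 1.0 MB/s
mp3_320   = 320_000 / 8 / 1e6      # 320 kbps MP3                       -> 0.04 MB/s
raw_audio = 44_100 * 3 * 2 / 1e6   # 24-bit/44.1 kHz stereo recording   -> ~0.26 MB/s

total = 4 * hd_video + 4 * mp3_320 + 2 * raw_audio
print(f"{total:.1f} MB/s")  # ~4.7 MB/s, well within even 20-40 MB/s of random throughput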

The other need these days that would be met by a setup like this: you could play multiple files while also streaming, uploading or downloading the exact same file.

Another need this could possibly solve: loads of RAM. The peak speeds of such a setup could exceed 2GB/s, which is approaching the speed of RAM, if only we could tell Windows to use it that way. There are instances, even with 40+ GB of RAM, where it gets fully used and hangs and bottlenecks happen. Adding a page file to each drive simply adds more bandwidth.

Try all that with a single drive and you may end up with a "file in use" or "does not exist" error. You may even crash =c

What would cause a RAID 0 to fail, other than one of the drives itself failing?

Could this be corrected by having a separate RAID 1 array of 4 drives, 8 in total?
 
^ A couple of things:
1) A pair of drives in RAID 0 is NOT double the speed of a single drive, and 4 drives is nowhere near 4x.
2) RAID 0 provides a big boost to sequential reads/writes, little if any change to access time, and not a large change to small-file random reads/writes. RAID 0 makes sense if you do a lot of work with LARGE data/file structures.
3) Even 10 SSDs in RAID 0 will not be as fast as using RAM. A RAM drive is about 10x a single SSD, and 10 SSDs in RAID 0 is way less than 10x a single SSD (see the rough numbers sketched below).
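
A rough illustration of points 1-3; the per-drive figures and scaling factors are guesses for illustration, not measurements of any particular SSD:

# Illustrative only: assumed per-drive speeds (MB/s) and scaling factors
seq_read, rand_4k, ram_drive = 500, 25, 5000

def raid0(n, base, scaling):
    # first drive at full speed, each extra drive adds only a scaled share
    return base * (1 + (n - 1) * scaling)

print(raid0(4, seq_read, 0.9))  # ~1850 - sequential scales well
print(raid0(4, rand_4k, 0.2))   # ~40   - small-file random barely moves
print(ram_drive)                # 5000  - a RAM drive still leaves both behind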

As to failure rate / data loss: it is not just a single drive failure. There are other failure modes that can cause you to lose your data, even on RAID 0 setups. How about a virus that requires a reformat to get rid of? How about the hiccups that wipe drives out or destroy motherboards, such as a PSU failure? If the motherboard is replaced and the RAID controller is not the same, there's a good chance it's bye-bye data. I prefer external HDD backup over RAID 1.
SATA III SSDs have not been out long enough for user-tested RAID 0 reliability data to be available, which is not the same as manufacturer-stated claims.
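
The "one dead drive kills the whole stripe" part can at least be put into numbers. A minimal sketch, using an illustrative per-drive failure probability rather than any vendor figure, and ignoring the controller/PSU/virus failure modes above:

# RAID 0 loses data if ANY member fails; a 2-way mirror only if BOTH copies fail.
# p is an illustrative annual per-drive failure probability, not vendor data.
p, drives = 0.03, 4
raid0_loss  = 1 - (1 - p) ** drives   # ~11.5% per year for the 4-drive stripe
mirror_loss = p ** 2                  # ~0.09% for a RAID 1 pair
print(f"RAID 0 x{drives}: {raid0_loss:.1%}  single drive: {p:.0%}  mirror pair: {mirror_loss:.2%}")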

Bottom line: I don't think that "consumer" SSDs are ready for prime-time 4x RAID 0.
 

a4mula

Distinguished
Feb 3, 2009
973
0
19,160
SSDs have a much lower failure rate than mechanical hard drives, and they also scale better when striped. It is quite possible to see sequential reads and writes that scale at near 100%; this is determined by the RAID controller more than by the drives themselves.

Hence the problem with trying to get a 4x RAID 0 setup going. AMD's SB950 southbridge supports up to 6 drives in RAID 0, but it's a far inferior controller to Intel's. The problem with Intel's controller is that there are only two 6Gb/s ports on any given 1155/2011 board. There are boards out there, like the Extreme7 2011, that can drive 4x SSDs, but unfortunately it's done via the Marvell 9128 controller, which will cap at about 320MB/s per drive. Might as well use SATA 3Gb/s.
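
Treating that ~320MB/s cap and the drive speeds as assumptions, the bottleneck arithmetic looks roughly like this:

# Which link is the limit: the drives or the controller they hang off?
# All numbers are illustrative assumptions, not measured figures.
def effective_seq(per_drive_mb_s, drives, controller_cap_mb_s):
    return min(per_drive_mb_s * drives, controller_cap_mb_s)

print(effective_seq(500, 2, 1000))  # 1000 - two fast SSDs roughly fill Intel's two 6Gb/s ports
print(effective_seq(500, 2, 640))   # 640  - a Marvell-class cap becomes the limit
print(effective_seq(500, 4, 4000))  # 2000 - a discrete HBA pushes the limit back to the drives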

The only decent solution is to buy a discrete RAID controller or HBA like the LSI 9211-8i. Of course, now you're talking about adding a considerable amount of money to a project that really boils down to playing outside of enterprise use.

I strongly considered and researched this route for a while; at the end of the day, however, it was just too much money and too much of a headache for gains that you'll likely never even notice.

Now I'm convinced that SSD caching is the hassle-free way to go, and I'll be picking up the OCZ Synapse drive. It turns an entire hard drive into a 500/500 beast for your most-used applications. The really great thing about it is that after you install it, you never have to think about it again. I'm so tired of constantly worrying about my current SSDs.