madmanbmw

Distinguished
Dec 31, 2007
101
0
18,680
I am getting ready to build a new system for myself and I would like to have two hard drives in a RAID 0 config, probably (2) 45 gig IBM 7200 rpm HDs. What I have been reading on this board about RAID has confused me. Will running RAID 0 from a RAID mobo cause my system to slow? (1.4 GHz Athlon, overclocked that is, 256-512 MB PC133.) I do not want to get into big $$ for this HD config. Would my money be better spent getting a non-RAID mobo and a FastTrak RAID ATA 100 card? What kind of performance difference would I get? The system will be used for video + audio editing + gaming. Please leave SCSI out of the equation; I want to stay fairly simple.

Thanks

Madison
mmcmajor@apex.net
 
G

Guest

Guest
>>Will running RAID 0 from a RAID mobo cause my system to slow? (1.4 GHz Athlon, overclocked that is, 256-512 MB PC133)<<

No..

>>Would my money be better spent getting a non RAID mobo and a Fasttrack RAID ATA 100 card?<<

Technically it should be about the same.
(The discussions where people bring in the SuperTrak, among others, are probably what confuses you. Leave those out of the equation as well.)




***Hey I run Intel... but let's get real***
 
G

Guest

Guest
The only time a system is noticeably slowed by a RAID setup is if you're running RAID at the software level rather than the hardware level. Since your mobo has the RAID function built in, this would be considered hardware-level RAID and will have minimal impact on system performance.
 
G

Guest

Guest
>>>The only time a system is noticeably slowed by a RAID setup is if you're running RAID at the software level rather than the hardware level. Since your mobo has the RAID function built in, this would be considered hardware-level RAID and will have minimal impact on system performance.<<<

That's false. The typical RAID found on the Abit boards and the Promise FastTrak is software-based RAID. There is no dedicated processor handling RAID functions; they must be handled by the CPU. Still, with the processor you plan to use, software RAID on a 2-disk RAID 0 array is just as good as, and in some instances better than, some of the expensive IDE hardware RAID controllers you could buy.

They may (I doubt it) come up with an IDE hardware RAID controller that will outperform the FastTrak RAID on a 1.4 GHz system, but you can bet it would cost serious beaucoup $.





***Hey I run Intel... but let's get real***
 
G

Guest

Guest
Any way you set up RAID 0 without a mirror is very risky. RAID 0 is a striping of your drives and leaves no redundancy for a drive failure. If either drive fails, it will be as if both drives failed, since RAID 0 splits the data across the drives. RAID 0+1 is the option you should look at: more expensive, since you need two more disks, but at least if you have a drive fail you won't be dead in the water. RAID 1 is the mirror option. RAID 0, striping, will increase hard disk performance, but you cannot control which data gets written to a particular drive; the OS may be spread across both drives.
Performance: the RAID built onto your motherboard is a software RAID; the chip on the board just allows this kind of setup but does none of the processing, your CPU does. You will see a slight hit in CPU usage, but nothing dramatic. As for the Promise SuperTrak controller card (thanks Chord), it does have its own processor, controller, and memory to handle RAID functions. Whether this controller is better than the motherboard's own software RAID option has not been confirmed, but it is only sensible to think it would be slightly better, since the main system's CPU and memory are relieved of these functions, which are left to the SuperTrak.
 
G

Guest

Guest
How is running two drives in RAID 0 any more 'risky' than running one drive without a backup? In either config, if a drive fails, you're hosed.


The glass isn't half empty, it isn't half full. There just isn't enough in it.
 
G

Guest

Guest
Simple: it comes down to MTBF (mean time before failure). Let's say that one drive has an MTBF of 500,000 hours; in theory it should run for 500,000 hours before it fails. This is not the case most of the time, since there are a lot of other factors that can and do affect this predicted life span. Now to the part that concerns you, that of the dual-drive setup being less reliable. If you have 2 drives, then the array's MTBF is cut in half, to 250,000 hours. With 3 drives it becomes about 166,666 hours, and so forth. Don't ask me to explain the math behind it; it's a hell of a path to go down and I'd rather not (even if I could remember it). So the more drives you add to your array, the less reliable it becomes, hence the reason for a RAID 5 array.
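A quick way to sanity-check those numbers is a small simulation. Under the simplifying assumption that drive lifetimes are independent and exponentially distributed (a common reliability model, not a claim about real drives), a RAID 0 array dies at the first drive failure, and the simulated mean lifetime comes out near MTBF divided by the number of drives:

```python
import random

def simulated_array_mtbf(single_mtbf: float, n_drives: int,
                         trials: int = 200_000, seed: int = 1) -> float:
    """Estimate array MTBF by Monte Carlo, assuming independent
    exponentially distributed drive lifetimes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # RAID 0 has no redundancy: the array dies when the FIRST drive dies.
        total += min(rng.expovariate(1 / single_mtbf) for _ in range(n_drives))
    return total / trials

print(round(simulated_array_mtbf(500_000, 2)))  # close to 250,000 hours
print(round(simulated_array_mtbf(500_000, 3)))  # close to 166,666 hours
```

So the "cut in half" figure above falls out of the minimum-of-independent-lifetimes math, at least under that exponential assumption.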

Mike
 
G

Guest

Guest
Actually, I've seen some pretty knowledgeable people debate over whether it's 2x or 4x the chance of array failure for 2 drives in RAID 0. They each had their own math to prove it. (I'm sure you'd like to see it, and I'll try to get it, but don't hold your breath.) Either way, you increase the probability of array failure when you increase the number of drives. For a small example, let's say you buy a drive that has a 5% chance of failure within 5 years. Adding another drive doesn't leave that at 5%, because that percentage is for one drive, and here either drive failing causes array failure. Whether it increases to 10% or 20% due to probability I don't know. (I don't think 20% is correct myself; otherwise we'd all find out very quickly that RAID 0 is not practical.)
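For what it's worth, the standard independence assumption gives an answer just under the 2x figure: the array fails if any drive fails, so with p = 5% per drive the two-drive probability is 1 - (1 - 0.05)^2 = 9.75%, not 10% and nowhere near 20%. A one-liner to check (my own sketch, assuming independent drive failures):

```python
def array_failure_prob(p_single: float, n_drives: int) -> float:
    """Probability a RAID 0 array fails within some period, given each
    drive independently fails with probability p_single in that period.
    The array survives only if EVERY drive survives."""
    return 1 - (1 - p_single) ** n_drives

print(array_failure_prob(0.05, 2))  # about 0.0975, i.e. 9.75%
```

The doubling intuition is only an approximation that works when p is small; for large p the combined probability grows more slowly than n times p.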


***Hey I run Intel... but let's get real***
 

bw37

Distinguished
Jan 24, 2001
244
0
18,680
I'm no statistician, so judge the following for yourself.

Failure rates for most things have a probability distribution that changes with time. For example: one in a million fails within the first minute, one in 100,000 within the first hour, 1 in 10,000 within the first month, etc. Every individual drive is different, EVEN when used under the exact same conditions. Estimated "mean time between failure" (MTBF) is the average for the whole "population" of that model drive out there. Any one drive could fail at any time, very soon or very late. But on average, they will fail at the "mean time between failure". I don't know how they calculate this, but to test it empirically would require testing a whole lot of drives to failure, to get a statistically valid sample.

By having two drives instead of one (assuming the same conditions otherwise and all that stuff), you effectively double your chance of a drive failure at any point in time, like buying 2 lottery chances instead of one. However, in my estimation, a 2 drive RAID array (identical drives) would have the SAME "MTBF" as a single drive because they're both part of the same population of "identical" drives. The problem is that even though the MTBF is the same, the probability that one drive will fail at any given time is doubled, and since the drives are effectively "in series", you lose the whole array, like x-mas tree lights strung in series vs. parallel.

I don't know if ANY data can be recovered from a busted RAID 0 array. I think it's possible to salvage a good part of the data from a busted single drive. This may be more important than MTBF. How critical is your data?

Any statisticians out there? Set us straight!

the more I learn, the less I'm sure I know... :eek: