slvr_phoenix
<i>"Show me the reviews."</i>
I've read them all years ago, but I suppose I could try to dig one up. In <A HREF="http://techreport.com/reviews/2002q4/ideraid/index.x?pg=17" target="_new">this</A> review you'll see that even with IDE drives and software XOR processing, RAID10 and RAID5 have about the same write performance with four drives.
<i>"If only my mobo worked properly I'd show you how RAID 5 sux arse."</i>
Do you have onboard RAID or a card? Do you have IDE, SATA, or SCSI? Do you have hardware or software XOR? (If it's SATA/IDE, then it's about 99% likely to be software.) How much memory does your RAID hardware have? (If it's SATA/IDE, it's about 80% likely to be <i>none</i>.) How much of your CPU is used by your RAID5 array?
Chances are that if you're seeing RAID5 suck, it's because you have IDE/SATA RAID with software XOR and little or no memory in your RAID controller. Not only is this slow compared to a proper solution, but it sucks up a CPU's resources like ... well, I'll leave that to your imagination.
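To make the software-XOR cost concrete, here's a minimal sketch (plain C with made-up names, not any real driver's code) of the parity math a software RAID5 implementation has to grind through for every stripe it writes:
<pre>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* RAID5 parity is the byte-wise XOR of every data block in the
 * stripe. With software XOR, the host CPU touches every byte. */
static void raid5_parity(const uint8_t *const data[], size_t ndata,
                         size_t blocksize, uint8_t *parity)
{
    memset(parity, 0, blocksize);
    for (size_t d = 0; d < ndata; d++)
        for (size_t i = 0; i < blocksize; i++)
            parity[i] ^= data[d][i];   /* one XOR per byte per disk */
}

int main(void)
{
    /* A 4-drive RAID5 stripe: 3 data blocks + 1 parity block. */
    enum { BLOCK = 64 * 1024 };
    static uint8_t d0[BLOCK], d1[BLOCK], d2[BLOCK], p[BLOCK];
    const uint8_t *stripe[] = { d0, d1, d2 };

    raid5_parity(stripe, 3, BLOCK, p);
    printf("XORed %d bytes just to write one stripe\n", 3 * BLOCK);
    return 0;
}
</pre>
That inner loop grinds through 192K of data for every single stripe written, all on the host CPU. A controller with a hardware XOR engine does that math itself and steals no cycles at all.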
A <i>good</i> RAID5 solution (i.e. a SCSI RAID5 controller with hardware XOR and lots of cache running 15K drives) is expensive, but it kicks the pants off of a cheap SATA RAID5 setup <i>and</i> uses almost no CPU resources while doing it.
The biggest advantage of RAID0, RAID1, and RAID10 is that they take very little processing power and no cache to run, so cheap (most IDE/SATA) solutions using them work well, whereas cheap solutions for RAID5 are awful. But pay the money to do RAID5 right and it's completely different.
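For contrast, here's an equally rough sketch of why mirroring costs next to nothing; submit_write() is a made-up stand-in for whatever request-queueing primitive a real driver uses:
<pre>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for a real driver's request-queueing primitive; the
 * point is only how many requests get issued, and that no math
 * happens on the way. */
static void submit_write(int disk, uint64_t lba,
                         const uint8_t *buf, size_t len)
{
    (void)buf;
    printf("disk %d: write %zu bytes at LBA %llu\n",
           disk, len, (unsigned long long)lba);
}

/* A RAID1/RAID10 write: queue the same buffer to both members of
 * the mirror. No reads, no XOR, no cache required. */
static void raid1_write(int disk_a, int disk_b, uint64_t lba,
                        const uint8_t *buf, size_t len)
{
    submit_write(disk_a, lba, buf, len);  /* primary copy  */
    submit_write(disk_b, lba, buf, len);  /* mirrored copy */
}

int main(void)
{
    static uint8_t buf[4096];
    raid1_write(0, 1, 2048, buf, sizeof buf);
    return 0;
}
</pre>
The only CPU cost is issuing two requests instead of one, which is why even the cheapest IDE/SATA controllers handle RAID0, RAID1, and RAID10 just fine.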


<font color=red><i>The Devil made me do it, but I <b>liked</b> it.</i></font>
@ 196K of 200K!