Six 15K Ultra320 SCSI drives in a RAID 5, configured correctly, should get you 300+ MB/s sustained throughput. Transferring to the RAID 1 will be slower, though, since its throughput is much lower.
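Back-of-the-envelope, assuming roughly 70MB/s sustained per 15K drive (that per-drive figure is my assumption, not something from your specs):

# Rough sequential-throughput estimate; the per-drive rate is assumed, not measured.
per_drive = 70          # MB/s sustained, assumed for a 15K SCSI drive
u320_ceiling = 320      # MB/s ceiling per Ultra320 channel

# Large sequential reads on RAID 5 touch all spindles but skip the rotating
# parity block, so you get roughly (n - 1) drives' worth of throughput.
raid5_read = min((6 - 1) * per_drive, u320_ceiling)
print(f"RAID 5 sequential read estimate: ~{raid5_read} MB/s")

Which is where the 300+ MB/s figure comes from: the channel itself becomes the cap before the spindles do.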
Thanks. I expected about 300MB/sec read throughput and slightly slower writes as a result of parity calculation.
Thanks for the help. I have the specs now that I'm back at work.
I checked the logs (IBM ServeRAID lacks a UI, so it took some time).
The card is a ServeRAID 6M on PCI-X at 133MHz with 256MB of installed cache. The two mirrors are on channel 1, and the RAID 5 array is on the second channel. The RAID 5 array actually uses an 8K stripe size.
I copied a single sequential 938k file from the RAID 5 to the RAID 1 (on different channels) in ~35 seconds (~35MB/s).
That seems REALLY low to me. What would this typically be?
Is that 938k or 938M? Copying to RAID 1 gets you the effective write speed of a single disk, so 35MB/sec is not bad. Your RAID 1 is the bottleneck here.
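To make the write side concrete (a sketch, using the same assumed ~70MB/s per-drive figure as above):

# Effective sequential write ceilings; the per-drive rate is assumed, as before.
per_drive = 70       # MB/s, assumed for a 15K SCSI drive
u320_ceiling = 320   # MB/s per Ultra320 channel

raid1_write = per_drive                                # a mirror writes the same data to both disks
raid5_write = min((6 - 1) * per_drive, u320_ceiling)   # best case: cached full-stripe writes
print(f"RAID 1 write ceiling: ~{raid1_write} MB/s")
print(f"RAID 5 write ceiling: up to ~{raid5_write} MB/s")

So a copy from the RAID 5 to the RAID 1 is bounded by the single-disk write speed of the mirror, whatever the source array can deliver.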
What kind of performance problem are you having? If you need very high read speeds in a database, I wouldn't choose RAID 5. If the performance has dropped over time, it may be that a redesign of the tables is required.
Whoa. Sorry. 938M (938,000k and change).
But yes, you've exposed the crux of the problem: I need to benchmark the actual throughput of the RAID 5 array. Granted, I expected a bottleneck with the RAID 1 volume, but I expected more like a 60MB/s-80MB/s sustained transfer rate, as opposed to < 40MB/s, mainly because it's a sequential write.
I didn't choose RAID 5, but I probably would have had I been involved at the time. Six physical volumes allow for a hefty amount of parallel reads, which would allow us to saturate the U320 bus with large sequential reads (or so we thought).
Also, we're running at ~400GB for that volume. We would have needed ten drives as opposed to six to get that same storage in RAID 1.
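The rough math (a sketch; the ~73GB per-drive size is my assumption for illustration):

import math

# Usable capacity: RAID 5 vs mirrored pairs. Drive size is assumed for illustration.
drive_gb = 73
raid5_usable = (6 - 1) * drive_gb                          # ~365GB usable from 6 drives in RAID 5
mirrored_drives = 2 * math.ceil(raid5_usable / drive_gb)   # each usable drive needs a mirror partner
print(f"RAID 5: 6 drives -> ~{raid5_usable}GB usable")
print(f"Mirrored pairs for the same space: {mirrored_drives} drives")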
(I appreciate the help, which is why I'm going into so much detail. Any alternative opinions are welcome.)
Our database is primarily reads, and most writes are batch jobs run offline. We're trying to squeeze the entire database into the 7GB memory footprint we have available. We've physically partitioned the table between the RAID 5 and RAID 1 volumes. The temporary database is on the RAID 1 volume, where most random writes will occur. Once the data is processed, it's copied sequentially to the RAID 5 volume.
This provides us with the optimal architecture while still keeping costs down.
Now for benchmarking, I tried a program called DiskBench, a small .NET utility that just creates a file from random data, so there's no source-disk bottleneck. This is what I got (there's a rough sketch of this kind of test at the end of this post):
RAID 5 ARRAY
----------------------------
Create 1GB file (10x100MB blocks): 10 seconds. ~95MB/sec.
Create 2x1GB files (simultaneous): 28 seconds ea. ~75MB/sec.
Read 1GB file (32MB buffer): 34 seconds. ~30MB/sec.
Read 2x1GB files (32MB buffer): 118 seconds ea. ~16MB/sec.
-----------------------------
The array actually appears to perform faster writes than reads, which seems backwards to me. I tried the same program on the RAID 1 array:
RAID 1 ARRAY
----------------------------
Create 1GB file (10x100MB blocks): 18 seconds. ~54MB/sec.
Create 2x1GB files (simultaneous): 40 seconds ea. ~50MB/sec.
Read 1GB file (32MB buffer): 18 seconds. ~53MB/sec.
Read 2x1GB files (32MB buffer): 50 seconds ea. ~40MB/sec.
-----------------------------
The RAID 1 array is more in line with what I'd expect, though it still seems a little low. Bear in mind that I have 256MB of cache with write-back enabled, and the server is currently under no load.
Could we perhaps be running in wide mode on both channels?
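And here's the rough sketch I mentioned above of the kind of sequential test DiskBench is doing, in case anyone wants to reproduce the numbers without it (the file path and sizes are just placeholders):

import os
import time

def write_test(path, block_mb=100, blocks=10):
    # Write blocks of random data sequentially and return MB/s.
    block = os.urandom(block_mb * 1024 * 1024)   # random data, so compression can't flatter the result
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())                     # force the data out of the OS cache
    return (block_mb * blocks) / (time.time() - start)

def read_test(path, buffer_mb=32):
    # Read the file back sequentially and return MB/s.
    # Note: if the file was just written, much of it may still be in the OS
    # cache; use a file larger than RAM (or reboot) for an honest read number.
    size_mb = os.path.getsize(path) / (1024 * 1024)
    start = time.time()
    with open(path, "rb") as f:
        while f.read(buffer_mb * 1024 * 1024):
            pass
    return size_mb / (time.time() - start)

# Placeholder path; point it at the RAID 5 or RAID 1 volume.
target = r"E:\bench.tmp"
print("write:", round(write_test(target)), "MB/s")
print("read: ", round(read_test(target)), "MB/s")

It won't be as polished as DiskBench, but it should give comparable sequential numbers.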