RAID Scaling Charts, Part 3: 4-128 kB Stripes Compared

Hi,

Great article.
But when you say that the controller is limited to 500 MB/s: if we use the new Samsung Spinpoint F1 disk, which can deliver 91 MB/s (about 50% more than the Samsung HD321 disk), do we saturate the controller with only four disks?
So instead of using eight drives in RAID 0, could we achieve the same results with eight Spinpoint F1 drives in RAID 0+1?

And what happens if we spread the disks across two controllers instead of one?
Especially with two Opteron CPUs, where each one communicates with its own controller...
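For what it's worth, here is a rough back-of-the-envelope sketch of that saturation question, assuming throughput scales linearly with the number of striped drives until the 500 MB/s controller ceiling is hit (the per-drive figure is the 91 MB/s quoted above; real arrays scale somewhat worse, so treat the numbers as upper bounds):

import math

CONTROLLER_LIMIT_MBS = 500   # controller ceiling quoted in the article
F1_RATE_MBS = 91             # quoted Spinpoint F1 sustained rate

def drives_to_saturate(per_drive_mbs, limit_mbs=CONTROLLER_LIMIT_MBS):
    # Smallest whole number of drives whose combined rate meets the ceiling.
    return math.ceil(limit_mbs / per_drive_mbs)

def array_throughput(drives, per_drive_mbs, mirrored=False,
                     limit_mbs=CONTROLLER_LIMIT_MBS):
    # RAID 0 stripes all drives; RAID 0+1 effectively stripes half of them.
    striped = drives // 2 if mirrored else drives
    return min(striped * per_drive_mbs, limit_mbs)

print(drives_to_saturate(F1_RATE_MBS))                  # -> 6 drives
print(array_throughput(8, F1_RATE_MBS))                 # RAID 0: capped at 500 MB/s
print(array_throughput(8, F1_RATE_MBS, mirrored=True))  # RAID 0+1: 364 MB/s

By that crude estimate it takes about six F1 drives to reach the ceiling, and eight of them in RAID 0+1 (four striped pairs) land around 364 MB/s rather than matching the capped RAID 0 result.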
 
Nice article, Muk, thanks...

Sort of re-affirms what I already knew about stripe sizes, which is why I chose 64 kB for my RAID 5. I'm a little concerned about the 500 MB/s limitation of the Areca, as my card is an Adaptec and I would think it tops out at less than that 🙁
 
Nice follow-up to this series.

On the stripe size discussion page, should the filesystem block size be equal to the stripe size?

- If the FS block size is greater than the stripe size, you end up with superfluous reads or writes
- If the FS block size is less than the stripe size, you end up with suboptimal performance

Or maybe this is now managed by the RAID controller layer?
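To make the block/stripe mismatch concrete, here is a toy model of the write penalty on parity RAID, written for illustration only (real controllers cache and coalesce writes, so the counts are a simplification): a write smaller than a full stripe forces the controller to read the old data and parity before writing the new ones, while a full-stripe write does not.

def raid5_write_ios(write_bytes, stripe_unit, data_drives):
    # Full-stripe writes can compute parity from the new data alone.
    full_stripe = stripe_unit * data_drives
    if write_bytes > 0 and write_bytes % full_stripe == 0:
        stripes = write_bytes // full_stripe
        return stripes * (data_drives + 1)      # write all data chunks + parity
    # Partial-stripe write: read old data chunk(s) and old parity,
    # then write the new data chunk(s) and the recomputed parity.
    chunks = -(-write_bytes // stripe_unit)     # ceiling division
    return 2 * (chunks + 1)

print(raid5_write_ios(4 * 1024, 64 * 1024, 7))    # 4 kB write on a 64 kB stripe: 4 I/Os
print(raid5_write_ios(448 * 1024, 64 * 1024, 7))  # full-stripe write: 8 I/Os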

I'd love to see a follow-up including RAID 3 and 4, with specifics about file size distribution (e.g. < 64 kB, 1 MB, 100 MB, 4 GB).

Another follow-up would be combining LUNs for different workloads on a single array.
 
I believe this statement, "The stripe size also defines the amount of storage capacity that will at least be occupied on a RAID partition when you write a file," is incorrect with regard to Windows. The cluster size determines how much space a file occupies, not the stripe size. My two-drive RAID 0 setup occupies the same space as it does when I clone it off to a single drive. The same goes for all the Windows servers at work, where we set the data drive cluster size to 64 kB and the stripe size under RAID 5 is also 64 kB. We don't see sizes vary with the stripe size, just performance.

As I understand it, the stripe size is mostly a logical block of clusters handled by the XOR parity engine. The drive still reads and writes clusters of data, but the stripe size controls the fetch size for the vertical block of data the XOR engine pulls. It might be interesting to see what happens when the cluster size is varied within various stripe sizes. Most controller performance is limited by the XOR engine's ability to maintain parity lock on a certain size of data (4 kB to 256 kB, for example). I believe there is more overhead in cluster size management/decoding than in the striping.

I haven't changed my desktop C: drive cluster size, because that would waste a lot of space and hurt performance, but on servers the larger cluster size increases performance. Just a thought.
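That matches a quick sketch of how NTFS allocation actually works: the on-disk footprint of a file is rounded up to the cluster (allocation unit) size, and the stripe size never enters the calculation. (Simplified illustration; it ignores small files stored resident in the MFT and compression.)

def allocated_size(file_bytes, cluster_bytes):
    # Space the filesystem reserves: whole clusters, rounded up.
    clusters = -(-file_bytes // cluster_bytes)   # ceiling division
    return clusters * cluster_bytes

for size in (2000, 5000, 70000):
    print(size, "bytes ->",
          allocated_size(size, 4096), "with 4 kB clusters,",
          allocated_size(size, 65536), "with 64 kB clusters")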
 
The article falsely states that a smaller stripe size would conserve disk space when storing many small files. This is a common misconception, and Tom's Hardware should not disseminate nonsense like that.

Since the RAID stripe size is in no way known to the OS (it is entirely transparent to it), there is no such thing as a 2 kB file occupying an entire 64 kB stripe. As the earlier parts of this series correctly point out, RAID essentially distributes the raw data blocks across several disks, so it fills the stripes up entirely with whatever file sizes it gets from the OS.
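A minimal sketch of that transparency argument, assuming a plain RAID 0 layout with a 64 kB stripe unit (the mapping function is illustrative, not any particular controller's firmware): consecutive logical blocks from the filesystem pack back to back inside a stripe unit, so nothing is reserved per file at the stripe level.

def raid0_map(lba, block_size=512, stripe_unit=64 * 1024, disks=4):
    # Map a flat logical block address to (disk index, byte offset on that disk).
    byte_addr = lba * block_size
    stripe_no, within = divmod(byte_addr, stripe_unit)
    disk = stripe_no % disks
    offset = (stripe_no // disks) * stripe_unit + within
    return disk, offset

# Blocks 0..127 all land densely in the first stripe unit on disk 0;
# the next stripe unit (on disk 1) is only used once the first one is full.
for lba in (0, 1, 2, 127, 128):
    print(lba, "->", raid0_map(lba))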

This statement would only be true if someone changed the NTFS cluster size to a value larger than 4 kB.

The rule of thumb for stripe size in 2007 is: the bigger the better.

You can check out an article written by the Adaptec Storage Advisors on this subject; this very problem is discussed in the comments:
http://storageadvisors.adaptec.com/2006/06/05/picking-the-right-stripe-size/

A small stripe size had only one advantage in the past: it required somewhat fewer calculations by the RAID controller, but even that became irrelevant a couple of years ago.