Raid 0 vs Raid 10 performance

djrags

Sep 23, 2009
Ok, I'm currently in the process of building a new PC. My specs are an Asus Rampage II Extreme motherboard, 6GB of Corsair triple-channel memory, and an i7 920 processor. I have four 750GB WD RE3 drives and two regular 1TB WD drives. My system is mainly used for gaming, but I also do work on it as well.

I guess I have 2 questions.
1) Putting redundancy aside, would the performance be the same between a 2-disk RAID 0 and a 4-disk RAID 10 setup? I plan on using my 750GB RE3 drives.

2) How much of a performance increase would I see from using all four RE3 drives in a single RAID 0 vs only using two disks?

My goal is performance: speed, speed and more speed. I was going to get some SSDs but just couldn't fit them into my budget, so I got the RE3s instead.

My original idea was to use the 4 RE3 drives in one RAID 0 and then put my two 1TB drives in RAID 1 and use them to back up the RAID 0 every night. I don't need the 3TB of space I'd get from using all 4 drives in RAID 0, and would only use all 4 if the performance increase is that great.

My other idea is to put 2 drives in RAID 0 and 2 drives in RAID 1 and basically have a RAID 10, but separately; then I could use the two 1TB drives in a RAID 1 solely for storage of files and data. This is all assuming a 2-disk RAID 0 outperforms a 4-disk RAID 10.

I hope I haven't confused you all too much and really look forward to your replies.

 
For what you are doing, you don't need RAID at all. Put your OS on one drive and your games and programs on another, and you will be just fine. Going with RAID is not going to make an iota of difference for what you do. If you really just have to go RAID, put your OS on one drive, put two more drives in RAID 0, and load your games and apps on the array. Use another drive to back up the RAID array. RAID 10 will be slower than RAID 0. But as I said, for what you are doing, you will notice very, very little difference no matter what you do, if any at all. Go ahead, set them up a couple of different ways and play with it. You'll see what I mean.
 
I think you'll see a disk I/O performance increase, but because you have an integrated controller, that RAID-10 is going to decrease CPU performance due to the parity calculations. So you'll see a shift in performance vs an increase in overall performance.

The results will be: if you are running anything that uses 3 or more cores, you will leave either 1 or zero cores open for background processes. Any kind of rendering or high-profile photo editing may be affected. Gaming may be hard to address because if you have a lot of RAM and the game is coded well, you probably won't be using a lot of CPU/GPU during texture loads (when you're opening the game or entering a zone). But if the game loads textures during actual gameplay, you may see some increased lag. Also, due to the increase in latency when RAID is implemented, you may experience a performance reduction while reading smaller files.

Generally RAID-10, as you know, is useful for high-performance storage where redundancy is vital. Overall you will notice a performance boost with RAID-0/1 (while reading) but RAID-10 should be reserved purely for storing important data, unless you have a dedicated controller, at which point you might see an overall increase with RAID-10 regardless of parity calculations (which are done on the controller, not your CPU).
 
Backups need to be offline to protect your data from common-mode risks like power hits and theft. Put each of your 1TB drives into an external USB enclosure and backup your data to them on a regular basis (weekly, perhaps). Keep one drive offsite (at work?) and swap them each time you do a backup.

The golden rule for backups is: keep TWO copies of your data OFFLINE, one of which is stored OFFSITE.
 
I'm afraid this is completely untrue. RAID 5 uses XOR parity for data protection, but with RAID 0+1 the protection comes solely from the fact that you have data stored on two different drives - there is NO parity calculation going on and there is NO CPU overhead to calculate parity.

The biggest extra overhead for RAID 0+1 using an integrated controller is that every write operation needs to be issued to TWO drives instead of one. Reads are still done from only one drive or the other. There's a slight bit of extra work to determine which drives to read or write from, but it's very trivial and not something you'd need to worry about for 4 drives.
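To make the distinction concrete, here's a minimal Python sketch (with made-up data blocks, not anything from a real controller) of the XOR parity a RAID 5 array has to compute, and why a mirror has nothing comparable to calculate:

```python
# Sketch: the XOR parity RAID 5 computes on every write, and why
# RAID 0+1 involves no such math. Data blocks here are made up.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks striped across three drives...
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"

# ...plus one parity block on a fourth drive. This XOR is the CPU
# work a software RAID 5 does on writes.
parity = xor_blocks(d1, d2, d3)

# If the drive holding d2 dies, its block is rebuilt by XOR-ing the survivors:
rebuilt = xor_blocks(d1, d3, parity)
assert rebuilt == d2

# RAID 0+1 does none of this: the controller simply issues each write to
# both halves of the mirror, so the redundancy cost is a second write, not math.
```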
 
Is that an Intel ICH10R?

You could make a 4-drive RAID 0 that's only, say, 200GB, then partition the rest of the drives as a 0+1 or whatever.

Intel Matrix RAID supports some powerful options.

RAID won't speed up games much except in level/area/map load times.

I'd keep one 1TB drive in the computer and another in an external case.

If you have really critical data, you can use an online backup service like Mozy or keep a second copy offsite.

Also, as stated above, RAID 0+1 is just a striped, mirrored array;
there is no parity.
 
Thanks all for the info. I would like to state that I really am not concerned about power surges with my PC as I have a Smart UPS hooked up which solves that problem.

I do a lot of gaming on my PC, but I also use it as my work PC during the day. I store a lot of critical data on my PC and am constantly loading up new files, saving, using Windows Explorer, etc.

Obviously I have several options and what I mainly want is to set up my system to have the best performance possible for all the tasks that I do throughout the day. My original plan was to set up a Raid 0 with the 4 drives and then use my 2 1TB drives in Raid 1 as pure backup. I have Acronis True Image and planned on setting up a nightly backup so I'm always secured if there is corruption with my Raid 0 drives.

So, would there really be a huge performance increase in using 4 drives in Raid 0 vs 2 drives? And from a performance standpoint, would a standard 2 Drive Raid 0 Read/Write faster than a 4 drive Raid 10?

thanks again for all the help.
 
That's very nice, but you're still taking a risk if your "backup" consists of drives that are permanently attached to your computer. If your data is really that important, and since you're going to buy the drives anyway, the best way to mitigate risk is to use them for offline backup, and preferably offsite as well.

So, would there really be a huge performance increase in using 4 drives in Raid 0 vs 2 drives? And from a performance standpoint, would a standard 2 Drive Raid 0 Read/Write faster than a 4 drive Raid 10?
Benchmarks will show that for RAID 0, four drives are faster than two, and that four drives in RAID 1+0 are faster than two drives in RAID 0 (with the exception of writes). But whether you're actually going to see any difference running games depends entirely on the game and the type of I/O mix it presents to the disks.
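As a rough illustration of why benchmarks come out that way, here's a back-of-envelope Python sketch. The 100 MB/s per-drive figure is an assumed round number, not a measured RE3 result, real arrays never scale perfectly, and RAID 1+0 read scaling depends on how well the controller alternates between mirror halves:

```python
# Idealized sequential-throughput model for RAID 0 vs RAID 1+0.
# PER_DRIVE_MBS is an assumption for illustration, not a drive spec.

PER_DRIVE_MBS = 100  # assumed single-drive sequential rate, MB/s

def raid0(n_drives):
    # Reads and writes both stripe across every drive.
    return (n_drives * PER_DRIVE_MBS, n_drives * PER_DRIVE_MBS)

def raid10(n_drives):
    # Reads can be serviced from either copy of each mirror pair;
    # every write must land on both halves, halving write scaling.
    return (n_drives * PER_DRIVE_MBS, (n_drives // 2) * PER_DRIVE_MBS)

print(raid0(2))   # (200, 200) read/write MB/s
print(raid0(4))   # (400, 400)
print(raid10(4))  # (400, 200): reads match 4-drive RAID 0, writes match 2-drive
```

This is exactly the pattern described above: 4-drive RAID 1+0 keeps up with 4-drive RAID 0 on reads but falls back to 2-drive RAID 0 speed on writes.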
 
RAID is great for sustained disk throughput, working with or moving large files. RAID is not good for small random reads. Using RAID will make games load a little faster. Using RAID will make Windows load a little faster. RAID used to make quite a difference, because drives were painfully slow. That was compounded by the fact that about 1 gig of memory was the standard, so your PC hit the drives a lot more often. Today's drives are pretty darn fast compared to drives only a few years old. Most people have at least 4 gigs of memory, so applications and games can load a lot more information into memory, and your PC simply won't go to the drives nearly as often. An average user or gamer will SEE NO BENEFIT AT ALL using RAID over a couple of single fast drives, other than very slightly improved load times; that is it. As I said, you have the drives: load up Windows, install some games, and try it a few different ways. I think what I'm saying here will mostly bear itself out. Yes, 4 drives in RAID 0 are faster than 2 drives in RAID 0, but you will NEVER EVER see the difference. In fact, the overhead of 4 drives will create slightly longer latencies, so you might actually see a decrease in performance playing games with 4 drives in RAID.
 
In my opinion, the best configuration is:
1. 2x WD750 in RAID 0 (first partition 80GB for the system) on the ICH10
2. 2x WD750 in RAID 0 (first partition 200GB for games) on the ICH10
3. 1x WD1000 on the JMicron internal port, but in SATA IDE mode.
4. Use the second WD1000 as an external drive; in this mode it is not hot-pluggable.
I use a defragmenter only for the system partition; for the data partitions I use a copy-delete technique, zigzagging from partition to partition with different volumes. And don't forget the Blu-ray and DVD-RW drives on ports 5 and 6 of the ICH10.
This is my experience over two years with two computers and many HDDs.
 
Your 750GB drives have 32MB of cache. For the best speed, I'd suggest you look into short-stroking the 750GB drives and running them in RAID 0. I believe the RAID 0 performance benefit goes like this: every drive you add increases speed by almost the same margin until you add the fifth drive, which adds less than the previous four, and after the fifth the returns diminish because you have essentially saturated your SATA II bandwidth. I used to run five 7200rpm SATA II drives in RAID 0, and my personal observation was that it was blindingly fast. I had no level-load stutters in games, and in general my PC performed at the level of the best SLC SSDs, which is why I have not changed from conventional drives in RAID 0 to SSDs... yet.

I heard they started shipping SATA 3 (SATA 6Gb/s) drives this week. Once the bandwidth limit goes from 3Gb/s to 6Gb/s, I'd need to run way too many conventional drives in RAID 0 to beat two SATA 3 SLC SSDs running in RAID 0.

RAID 1 (and therefore RAID 10) is normally a total waste if the reason you're using it is "in case a drive fails." If you need that kind of protection, use RAID 5 (or, better yet, run your 750s in RAID 0 and your 1TB drives as your backup). RAID 1/RAID 10 will just copy corrupt files onto both drives; Wikipedia it if the technology is still mystifying. RAID 5 is slightly slower than RAID 0.
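The diminishing-returns scaling described a couple of paragraphs up can be sketched as a simple cap model. Both numbers here are assumed round figures for illustration only: the per-drive rate is not an RE3 spec, and the usable aggregate bandwidth of an integrated controller like the ICH10R varies with workload:

```python
# Sketch: why adding RAID 0 drives stops helping once the controller's
# aggregate bandwidth is saturated. Both constants are assumptions.

PER_DRIVE_MBS = 100       # assumed single-drive sequential rate, MB/s
CONTROLLER_CAP_MBS = 500  # assumed usable aggregate controller bandwidth

def array_throughput(n_drives):
    # Each added drive contributes its full rate until the cap is hit.
    return min(n_drives * PER_DRIVE_MBS, CONTROLLER_CAP_MBS)

for n in range(1, 9):
    print(n, "drives ->", array_throughput(n), "MB/s")
# 1..5 drives each add ~100 MB/s; from the 6th onward, nothing improves,
# which matches the "fifth drive adds less, then it flattens" observation.
```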
 
Just to add one thing to the above:

Today's drives are *not* fast, and they never will be. Mechanical hard drives will be stuck at high latencies forever, since that's inherent to the mechanical nature of the technology. Even the most expensive 15k SAS drives can't process much more than 200 I/O operations per second in a truly random fashion. An SSD has no such mechanical limit; in theory it can go to a million IOps or many times that, while mechanical hard drives can never be fast in this type of workload. Mechanical drives are fine for sequential read/write, but any more random access pattern will drop them to a meager ~200 IOps. In the time it takes to service a single random 512-byte sector, your CPU has already executed millions of instructions.

So I would argue mechanical drives are not suited for any task that focuses on performance rather than data capacity and price-per-GB, the two areas where the HDD does continue to dominate SSDs.
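The ~200 IOps ceiling falls straight out of seek time plus rotational latency. Here's the arithmetic as a quick Python sketch, using typical assumed figures rather than any particular drive's datasheet numbers:

```python
# Rough arithmetic behind the "~200 random IOps" ceiling for mechanical
# drives. Seek times are assumed typical values, not measured specs.

def random_iops(rpm, avg_seek_ms):
    # Average rotational latency is half a revolution.
    avg_rotational_ms = 0.5 * 60000 / rpm
    service_time_ms = avg_seek_ms + avg_rotational_ms
    return 1000 / service_time_ms

# 15k SAS drive, ~3.5 ms average seek: half a turn is 2.0 ms,
# so each random I/O costs ~5.5 ms.
print(round(random_iops(15000, 3.5)))  # ~182 IOps

# Same math for a 7200 rpm desktop drive (~8.5 ms seek, 4.17 ms rotation):
print(round(random_iops(7200, 8.5)))   # ~79 IOps
```

No amount of RAID striping changes these per-spindle numbers; striping multiplies spindles, while an SSD removes the mechanical delay entirely.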