Multiple RAID configurations on a single motherboard?

dawgma

Distinguished
Feb 17, 2007
I'm working out the details for my first performance workstation. My initial plan includes using multiple RAID configurations. I would just like to double check that it is possible to create the following setup using a single board:

- 6-drive RAID 01 data array,
- 2-drive RAID 0 OS array,
- Single HDD for page file
- Single HDD for photoshop scratch disk

By the way, if you're wondering what board has 10 SATA connections, I'm looking at this: http://www.newegg.com/Product/Product.asp?Item=N82E16813128037
 

darkstar782

Distinguished
Dec 24, 2005
It's possible, but you can't span controllers with an array unless you go software RAID.

The board has 6 SATA-300 connectors on the NF680i and 4 on the JMicron controller.

You need to use the NF680i controller for the 6-drive array; it's the only way you can get all the drives on one controller.

You can then use the JMicron controller for your 2-drive array and your other two HDDs.

It is my opinion, however, that you are better off with a 6-drive RAID 1/0 array and a 4-drive RAID 0 OS/scratch/page array. This would give better performance for all three than separate drives for scratch and page.

The JMicron controller sits on a PCIe x1 link, which will limit your 4-drive array to 250 MB/s; that should be enough unless it is 4 Raptors.

The JMB363 controller, which is supposedly what this board uses, only supports 2 SATA ports and 1 PATA channel, however. I'm a little concerned that the last two SATA ports on this board hang off the PATA channel through SATA-to-PATA bridge chips. That would drastically limit those two devices, the same way two PATA drives sharing a channel are limited.
 

TeraMedia

Distinguished
Jan 26, 2006
Questions:
- Where are you housing all of those drives? They will generate a considerable amount of heat, and will also (with their cables) reduce your airflow. What kind of case are you planning?
- I'm guessing you're connecting your DVD drive via PATA? They're starting to come out in SATA now, and that will be the future direction, so it's something to consider.
- Why not RAID 5 for your data array? If you don't need to worry as much about write-speed, then it's as fast as a RAID 0 (-1 disk) for reads and will increase your data storage (or reduce disk count) by 2 drives.

I run my O/S and other programs on RAID 5 (4 disks) which works well because read-speed is the same as 3-disk RAID 0, and write performance is only an issue when I'm doing a Windows Update or installing new software. And if a drive fails then I have a very easy recovery plan. With your setup, you could put your O/S, valuable data, programs and such on a RAID 5 (4-disk) array (cut back 2 drives from your planned RAID 10 6-disk array), and get essentially the same performance and storage space for reads, a performance hit for writes (when you only write once, not a big deal), and save yourself 2 SATA slots, 3.5" drive bays, and power connectors.
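
Here's a rough sketch of the arithmetic behind that suggestion, assuming (purely for illustration) ~70 MB/s sequential throughput and 250 GB per 7200 rpm drive; real numbers will vary with the drives and controller:

# Back-of-the-envelope comparison, assuming ~70 MB/s and 250 GB per drive
# (illustrative figures, not benchmarks of any particular model).
per_drive_speed = 70      # MB/s
per_drive_capacity = 250  # GB

# 6-disk RAID 10: capacity halves, reads stripe across the three mirrored pairs.
raid10_capacity = 3 * per_drive_capacity   # 750 GB usable
raid10_read     = 3 * per_drive_speed      # ~210 MB/s

# 4-disk RAID 5: one disk's worth lost to parity, reads stripe across the rest.
raid5_capacity = 3 * per_drive_capacity    # 750 GB usable -- same as the RAID 10
raid5_read     = 3 * per_drive_speed       # ~210 MB/s -- same reads, two fewer drives
raid5_write    = 1 * per_drive_speed       # writes pay the parity penalty (~70 MB/s)

print(raid10_capacity, raid10_read, raid5_capacity, raid5_read, raid5_write)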

On the same drives I use for RAID 5, I also have RAID 0 (4 disks) for the temp directories and pagefile. This is really fast, but gets a bit unnerving when you lose a disk. As long as you have a recovery plan it's ok. My setup uses ICH7R Intel Matrix; I don't know if the nVidia array has the same matrixing capability.

I set up a separate 2-disk RAID 0 array for non-critical data. In your case, I would put the temp directories, page file, photoshop scratch disk and (if you're comfortable with it) working directories on a RAID 0 (4 disks) array, so that you have 4 disks for "write-once, read-many" stuff, and 4 disks for "intensive read-write" stuff. That should minimize drive-head thrashing (your disks won't have to alternate between loading program code and writing to data files in different tracks), provide you with adequate data security, and you'll get kicking performance.

Caution: Read up on Tom's about the 680i. There is a performance issue with the SATA arrays that will constrain your performance. You might consider a board with an ICH8R southbridge (e.g. based on Intel 965) for optimal disk performance.

Other considerations:
- You may need to buy the more powerful version of Diskeeper software to defrag your partitions. Some defragging software won't go above 500 GB (or 1 TB, not sure where the limit is these days), so that is something to consider.
- If you're using Vista, you might be able to put your pagefile on a USB memory fob for a considerable performance gain. I don't know much about this ReadyBoost feature, but worth considering.
 

Mondoman

Splendid
nVidia disk controllers always seem to have issues. For a big drive-intensive system like yours, I'd stick with a better-performing and less error-prone Intel chipset board.
 
That's a very serious setup you want there with 10 HDDs. You *could* do it on one board, but I'd advise against it. There are a few things that you need to consider:

1. Ten HDDs will draw roughly 120-150W in power and make a significant amount of heat. You'll need a darn big server case or some external drive enclosure, as well as a good, solid PSU with at least 10 SATA power connectors delivering clean, stable power.

2. I don't know what drives you'll use in your array, but I'd think that the OS, page file, and scratch disk will be 10,000 rpm units and the RAID 10 data array will be large 7200 rpm units. Figure on the 7200 rpm units pumping out 75 MB/sec in throughput and the 10k units pushing 90. That means you'll be pushing at most 450 MB/sec through the RAID 01 array, 180 MB/sec through the RAID 0, and 90 MB/sec through each of the pagefile and scratch disks (rough numbers are sketched out after this list).

3. The six RAID 01 drives need to be on one controller, and the two RAID 0 drives also need to be on one controller. The two single disks can sit anywhere.

4. If your board goes belly-up, you'll need to get another NVIDIA 680 board with a JMicron controller to retrieve any of your data, as motherboard-based RAID is not recoverable unless the drives are hooked up to a similar motherboard.

5. NVIDIA southbridges have severe problems handling a lot of disk I/O. My NForce 4 had trouble running a mere 3-disk RAID 5: read speed was about half of what it should have been and write speed seemed capped at 20 MB/sec. Tom's showed that's still a problem with the NVIDIA 600-series chipsets too.

6. RAID 0 is good for pagefile disks and scratch disks, but I'd not put an OS on it unless I had an image readily available to reload when one of the two disks dies.
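
Rough numbers behind point 2, assuming the per-drive figures above (75 MB/sec for the 7200 rpm units, 90 MB/sec for the 10k units); these are idealized sequential ceilings, not measurements:

# Idealized throughput ceilings for the proposed arrays, using the per-drive
# figures assumed above (75 MB/s for 7200 rpm, 90 MB/s for 10,000 rpm drives).
speed_7200 = 75   # MB/s
speed_10k  = 90   # MB/s

# 6-drive RAID 01 data array: at best, all six spindles serve a read at once.
raid01_read_max  = 6 * speed_7200   # ~450 MB/s
# Writes hit both halves of the mirror, so only three drives' worth of new data.
raid01_write_max = 3 * speed_7200   # ~225 MB/s

raid0_os   = 2 * speed_10k          # ~180 MB/s for the 2-drive RAID 0 OS array
single_10k = speed_10k              # ~90 MB/s each for the pagefile and scratch disks

print(raid01_read_max, raid01_write_max, raid0_os, single_10k)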

So my recommendations are the following:

1. Get a server-class socket 775 board with at least one PCIe x4 slot and one x8 slot in addition to the PCIe x16 GPU slot. Three PCIe x16 slots, or two x16 and an x8, will work equally well. PCI-X will work if it's a 133 MHz 64-bit slot, but PCIe is faster.

2. Get an external drive enclosure box connected by a high-speed connection like multiple eSATA or SAS connections. Put your six RAID 01 disks in there. Note that these are NOT cheap: a good, fast one that won't be a bottleneck, like the Promise VTrak J300s, is over $2000.

3. Put your OS on a single HDD, as RAID 0 doesn't do that much for an OS. However, RAID 0'ing your scratch or pagefile disk *will* make a good improvement.

4. Put enough RAM in your system so that you don't have to swap much. Swapping *kills* performance. If you need more than the 8GB supported by 1P systems, then bite the bullet and get a real DP server board and stick as much memory in it as you need. Needless to say, you'll need a 64-bit OS.

5. Get a reasonable case that can hold your 3 or 4 OS + scratch disks as well as a good PSU and two RAID cards.

6. Get an external SAS RAID controller card to feed the RAID 01 drives in the external enclosure. You'll want at least two SAS ports, four is optimal. Something like LSI Logic's PCIe x4 LSI0011 or Intel's PCIe SRCSAS144E would do the job. Order two because your data is toast unless you have a second identical unit to reconnect if the original card dies.

7. Get a 4-port PCIe SATA RAID controller for the internal RAID 0 drives. This does not need to be exceedingly expensive or fancy but it will be better than onboard. Again, buy two.

That should make for a MUCH better setup than what you were originally proposing.
 

dawgma

Distinguished
Feb 17, 2007
First of all, thanks for all the feedback guys. I will try to address each of your questions and recommendations in this post. Skip ahead to your name if there is too much to read.

It is my opinion, however, that you are better off with a 6-drive RAID 1/0 array and a 4-drive RAID 0 OS/scratch/page array. This would give better performance for all three than separate drives for scratch and page.

I thought it was best practice to separate the page file and scratch disks from the OS.

I understand that if my page/scratch disks were part of a 4-Disk RAID 0 array this would allow for higher sustained transfer rates – but in the case of page/scratch disks, I thought fast seek times were more important than massive read/write performance. I don’t think the windows page file needs to push 100MBps for any reason. The Photoshop scratch disk on the other hand could benefit from higher transfer rates when working with very large files… but most reads/writes would be less than 50MB anyway, and I would be pleased with the performance of a single drive at 50MBps read/write (especially compared to my current setup: a single 4200RPM laptop drive).

Under normal conditions I would have a number of applications open (Photoshop, Illustrator, browser, music player, Office applications). When all of these programs are running together, my OS, page file and scratch drives will want to access data at the same time. If they were on separate disks they could seek the data concurrently, but in a single RAID 0 array they would each have to wait their turn. Plus, I would be worried about any extra overhead caused by calculations made by the controller.

So, I would assume that including the page/scratch disks within my OS RAID 0 array would create longer apparent seek times. I also believe that it is more important to be able to access smaller chunks of data quickly rather than larger 100MB-500MB pieces. Please correct me if these assumptions are in error.

NOTE: I just realized that the need for a page file may not be as important as I originally thought. Since I will be using at least 6GB of RAM (Photoshop can utilize an upper limit of 6GB on 64-bit XP), Windows will only rarely ever need to access the virtual memory. However, it is commonly recommended not to disable the page file no matter how much RAM you have. So perhaps it would be reasonable to partition the page file on the same drive as the OS, and not expect any sort of performance loss. What do you think? I still believe the Photoshop scratch disk should be on a separate drive… but overall, combining the page file with the OS would reduce the total number of drives by 1.

POSSIBLE NEW HDD ARRANGEMENT:
- 6 7200’s RAID 01 data array,
- 2 7200’s RAID 0 OS array (with partitioned page file)
- Single 10,000RPM for Photoshop scratch disk

Questions:
- Where are you housing all of those drives? They will generate a considerable amount of heat, and will also (with their cables) reduce your airflow. What kind of case are you planning?
- I'm guessing you're connecting your DVD drive via PATA? They're starting to come out in SATA now, and that will be the future direction, so it's something to consider.
- Why not RAID 5 for your data array? If you don't need to worry as much about write-speed, then it's as fast as a RAID 0 (-1 disk) for reads and will increase your data storage (or reduce disk count) by 2 drives.

1) I was planning to house all components within a full-sized ATX case. Cables will be combined and bound tightly together wherever possible. I'm also working on establishing a more efficient HDD arrangement that requires fewer disks.

2) I had not thought about it, but I am rethinking my HDD arrangement so there may be some spare SATA connections after all.

3) I am very worried about write speeds. I will be saving my Photoshop files to the data array, and those files are commonly between 50MB-500MB or more. I save files often while working on them and currently it takes about 20 seconds to a minute per save. Sometimes if everything is overloaded a save will take 5 minutes or longer. These times need to be reduced - it is one of the main problems that I must get rid of with my new system.

I was initially considering RAID 5 as a data storage solution. But RAID 5 suffers from serious write penalties due to the parity calculations. The overall write performance of RAID 5 is at least 3 times less than the read performance. And that's if you're lucky. I have read many complaints that write speeds on RAID 5 can regularly fall below half the speed of a single drive if you are using an on-board controller. It is often recommended that you should buy a controller specifically made for RAID 5 (and I would like to avoid that). Also, I have noticed that RAID 5 is not often quoted for performance systems… it is much more common for RAID 5 to be recommended as a storage solution or for databases. So for all these reasons, I have so far concluded that the write performance of RAID 5 is slow and unreliable.

Please let me know if you have alternative, reliable information about RAID 5 write speeds. My goal is to have at least 100MBps write speeds. If it is possible to achieve this with 4 or 5 drives in RAID 5 using an on-board controller, then I would be very interested (but at this point, it’s going to take a lot to convince me that I can trust RAID 5). This would reduce the number of drives required for my data and I think it would also increase my overall read performance (I think with a 6-Disk RAID 01 setup, read performance would be 3x a single drive, but a 4-Disk RAID 5 setup would be 4x a single drive?).


With your setup, you could put your O/S, valuable data, programs and such on a RAID 5 (4-disk) array…

I would prefer to keep my OS/Applications separate from my Data disks. If my operating system fails (I tend to screw around with it sometimes) and I am required to format the disk (not just the partition), then I believe it would be better for me to have the OS/Applications on their own disk(s).

Also, I thought it was beneficial to install the OS/Applications on a separate disk from your data because it provided better performance (less defragmentation, and faster seek times because your programs will always be closer to the edge of your HDD). Please correct me if I am wrong, or if the performance gains are actually negligible.


Caution: Read up on Tom's about the 680i. There is a performance issue with the SATA arrays that will constrain your performance. You might consider a board with an ICH8R southbridge (e.g. based on Intel 965) for optimal disk performance.

Thanks for the tip. I'm considering waiting for the release of the new Intel Bearlake motherboards. These are most likely to be compatible with Intel's future 45nm chip designs over the next few years. Also, they sport the new ICH9R southbridge… I don't know jack about it, but it has to be at least as good as the ICH8R. MSI already showed a working model at CeBIT this week, and they are expected to come to market within 1-2 months.


6. RAID 0 is good for pagefile disks and scratch disks, but I'd not put an OS on it unless I had an image readily available to reload when one of the two disks dies.

I’m not worried about redundancy with my OS since I plan on taking images of the drive once a week.


2. Get an external drive enclosure box connected by a high-speed connection like multiple eSATA or SAS connections. Put your six RAID 01 disks in there.

I would prefer to have everything contained within my case, attached to a single motherboard. If this is not possible/recommended with my current setup, then I will have to reduce the number of disks to make it work.


3. Put your OS on a single HDD, as RAID 0 doesn't do that much for an OS. However, RAID 0'ing your scratch or pagefile disk *will* make a good improvement.

I would be happy to have the OS on just a single drive. I assumed it would perform much better in RAID 0, but I suppose an OS does not need the higher throughput. Would the OS benefit more from a single, fast-seeking Raptor? What do you think about combining the page file with the OS drive (considering that there will be at least 6GB of RAM and the page file will not be accessed very often because of that)?

What about adding the Photoshop scratch disk to the OS drive as well? This is probably not a good idea. I’ve always heard that it is much better to have the scratch drive separate from the OS.


5. Get a reasonable case that can hold your 3 or 4 OS + scratch disks as well as a good PSU and two RAID cards.

6. Get an external SAS RAID controller card to feed the RAID 01 drives in the external enclosure. You'll want at least two SAS ports, four is optimal. Something like LSI Logic's PCIe x4 LSI0011 or Intel's PCIe SRCSAS144E would do the job. Order two because your data is toast unless you have a second identical unit to reconnect if the original card dies.

7. Get a 4-port PCIe SATA RAID controller for the internal RAID 0 drives. This does not need to be exceedingly expensive or fancy but it will be better than onboard. Again, buy two.

External HDD enclosures, RAID cards, controllers, SAS ports… all of these are things I know little or nothing about, did not consider, and sound expensive.

My overall budget is no more than $4000 CDN. Preferably between $3000-$3500 CDN. My budget for HDDs/storage solutions is between $1000-$1500 CDN.


Summary

So, I now face a number of considerations:
- If a page file is not very important when there is 6GB of RAM available, then I could combine the page file with the OS drive.

- Combine the Photoshop scratch disk with the OS drive and Page file? This is probably not a good idea, is it?

- If the OS drive will not benefit from the high throughput of RAID 0, then I could substitute the 7200 RAID 0 array for a single drive. Would the faster seek times of a Raptor provide obvious performance gains over a 7200 (considering that the drive would be used for the OS/Applications/Page file)? Or could the Raptor be considered overkill?
(Note: I also plan on creating an image of my OS drive once a week)

- Can it be confirmed that 4 or 5 drives in RAID 5 would be able to provide at least 100MBps write speeds? The on-board controller must be able to provide consistent write performance over long periods and not drop to extremely low levels due to the parity calculations. Purchasing a separate controller is practically out of the question. I would rather spend the money on additional HDDs for a RAID 01 array. I would love to try RAID 5 over RAID 01, but ultimately I will need some good, reliable information for me to trust in RAID 5.

Assuming the best-case scenario for each consideration, my system would look more like this:

- 4-Drive 7200RPM RAID 5 data array
- Single 74GB 10,000RPM for OS/Applications/Page file
- Single 32GB 10,000RPM for Photoshop scratch disk

6 HDDs total, which would be quite reasonable.
 

darkstar782

Distinguished
Dec 24, 2005
So, I now face a number of considerations:
- If a page file is not very important when there is 6GB of RAM available, then I could combine the page file with the OS drive.
 
I thought it was best practice to separate the page file and scratch disks from the OS.

Generally the page file does not get used heavily if you have sufficient RAM, so it's fine to put the page file on the OS's hard drive. In fact, that's default Windows behavior. Having a separate scratch disk from the OS's hard drive is a good idea, especially if you have more than one program open or want to boost the speed of your scratch disk by using RAID or a RAM drive.

I don’t think the windows page file needs to push 100MBps for any reason.

Your RAM runs at gigabytes per second and data written to swap/pagefile is data that had been pushed out of RAM. Your pagefile will thus be accessed as quickly as the hard drive will handle. I've pulled well over 100 MB/sec for my swap partitions as I have them set up in striped mode across several drives (I run Linux.)

The Photoshop scratch disk on the other hand could benefit from higher transfer rates when working with very large files… but most reads/writes would be less than 50MB anyway, and I would be pleased with the performance of a single drive at 50MBps read/write (especially compared to my current setup: a single 4200RPM laptop drive).

A single 4200 rpm laptop drive can generally write at about 20 MB/sec or so, most 7200 rpm desktop drives will be 60-75 MB/sec, and a 10,000 rpm Raptor will be near 90 MB/sec. So get yourself a 74 GB WD Raptor 740ADFD with 16MB cache and be happy with nearly 90 MB/sec read/write speeds :D

Under normal conditions I would have a number of applications open (Photoshop, Illustrator, browser, music player, Office applications). When all of these programs are running together, my OS, page file and scratch drives will want to access data at the same time. If they were on separate disks they could seek the data concurrently, but in a single RAID 0 array they would each have to wait their turn. Plus, I would be worried about any extra overhead caused by calculations made by the controller.

Generally when an application is loaded, it will not need to touch the OS hard drive again as it loads itself into RAM (huge programs like games excepted.) If you have enough RAM, then the programs can be kept in RAM and you'll not need to touch the pagefile. But you will be accessing the scratch disk and the data disk when you save your data and you will always be accessing the data disk when you play music. But playing music makes very minimal demands on a drive- a few dozen KB every couple of seconds.

RAID 0 does not require much in the way of calculations, as it's simply striped with no parity. RAID 1 is mirrored, which also has no parity. Only RAID 4, 5, 6, and any nested RAID built on one of those levels have parity. Computing parity takes some CPU, but when you figure that a reasonably modern CPU like my X2 4200+ can checksum 7 GB/sec being run through two software RAID 5 arrays at the same time, it's hardly any stress at all. I never see more than a few percent of one core being used for the XOR calculations when I write to my 3-disk RAID 5 at maximum speed, which is about 70 MB/sec. The OS will use much more CPU time to handle the disk I/O than it will for RAID checksumming, so don't worry about that. And with a hardware RAID controller, all of the checksumming is pushed onto the RAID controller and the host CPU doesn't have to do any of it; it just has to handle the disk I/O, which every single computer with an HDD has to do.
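
To make the parity math concrete, here's a tiny sketch of the XOR scheme RAID 5 uses, with made-up byte strings standing in for the stripes on three data members:

# Minimal illustration of RAID 5-style XOR parity. The three byte strings are
# hypothetical stripe contents, not real disk data.
d0 = bytes([0x10, 0x22, 0x3C])
d1 = bytes([0xA5, 0x01, 0xFF])
d2 = bytes([0x7E, 0x80, 0x0D])

# Parity is just the byte-wise XOR of the data members -- cheap for any modern CPU.
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# Lose any one member (say d1) and XORing the survivors with the parity gets it back.
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(d0, d2, parity))
assert rebuilt == d1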

So, I would assume that including the page/scratch disks within my OS RAID 0 array would create longer apparent seek times. I also believe that it is more important to be able to access smaller chunks of data quickly rather than larger 100MB-500MB pieces. Please correct me if these assumptions are in error.

Putting the scratch disk on the OS drive does not necessarily increase the apparent seek times. It won't affect them at all if the OS doesn't need to read or write to its drive at that second. But if you're accessing the OS drive at the same time the program is writing to scratch, then it will slow things down considerably. I haven't used Photochop, so I can't tell you about how it uses its scratch disk.

NOTE: I just realized that the need for a page file may not be as important as I originally thought. Since I will be using at least 6GB of RAM (Photoshop can utilize an upper limit of 6GB on 64-bit XP), Windows will only rarely ever need to access the virtual memory.

Correct.

However, it is commonly recommended not to disable the page file no matter how much RAM you have.

Also correct. Windows expects that there is a page file available and complains if there is not one. But if you have enough RAM, the OS will not need to use the page file even though it is there.

So perhaps it would be reasonable to partition the page file on the same drive as the OS, and not expect any sort of performance loss. What do you think?

Sounds good. Just make sure that you have enough RAM, and I think 6GB will be enough to ensure that.

I still believe the Photoshop scratch disk should be on a separate drive… but overall, combining the page file with the OS would reduce the total number of drives by 1.

Yes, it sure will. But you do not need to (and should not) make a separate partition for the page file. Windows expects the page file to be on C:\ and putting it on a separate partition can make Windows act goofy. Putting in a dedicated swap partition would not be any faster than leaving it on C:\.

POSSIBLE NEW HDD ARRANGEMENT:
- 6 7200’s RAID 01 data array,
- 2 7200’s RAID 0 OS array (with partitioned page file)
- Single 10,000RPM for Photoshop scratch disk

The 6-disk RAID 01 sounds good. I'd change the two 7200 rpm disks in RAID 0 to either a single 10,000 rpm disk or two 10,000 rpm disks in RAID. An OS does not need an enormous amount of room, not even Vista. Go for speed over capacity.

I was initially considering RAID 5 as a data storage solution. But RAID 5 suffers from serious write penalties due to the parity calculations. The overall write performance of RAID 5 is at least 3 times less than the read performance.

This is because the data as well as the parity is spread out over many disks. When a write happens, the data plus its parity needs to be updated. Generally you can expect that read speeds will be the sum of the read speeds of all drives less one, and write speeds will be roughly that of a single drive. Your 3x figure would be correct for a 4-disk RAID 5. My 3-disk RAID 5 has read speeds of about twice what the individual disks have and write speeds similar to one disk.

And that's if you're lucky. I have read many complaints that write speeds on RAID 5 can regularly fall below half the speed of a single drive if you are using an on-board controller. It is often recommended that you should buy a controller specifically made for RAID 5 (and I would like to avoid that).

That's true as well. I set up a Linux md (software) RAID 5 using my NForce4-based motherboard's SATA ports. Read speeds topped out at about 80 MB/sec and writes were capped at 20 MB/sec. I then bought a relatively inexpensive 4-port SATA RAID controller that sits on a PCIe x4 slot, and now my speeds are 120 MB/sec read and 60-ish MB/sec write, which is what they should be. Note that I use the controller as a "dumb" controller and don't use its RAID functions; my OS handles those. Most southbridges have internal bottlenecks when being hammered with data the way a RAID can hammer them.

I'd recommend that you buy a separate controller for any RAID that has data on it you'd like to keep or that uses many disks. So a 2-disk RAID 0 is fine for motherboard RAID, but other arrays need a separate controller unless your OS handles its own RAID. Windows XP handles RAID 0 and 1, and Windows Server 2003 can handle RAID 5 in addition to that, so OS software RAID is not an option if you want RAID 01 (although Linux will do it). If your motherboard goes belly-up, then you've lost your data; with an add-in card, you can migrate the card, and thus the array, from machine to machine and keep your data. Your RAID 0 that holds your OS would be fine using motherboard RAID. Also, get TWO identical RAID controllers for your RAID 01, as the array and its data die if you have to use a different controller.

Also, I have noticed that RAID 5 is not often quoted for performance systems… it is much more common for RAID 5 to be recommended as a storage solution or for databases. So for all these reasons, I have so far concluded that the write performance of RAID 5 is slow and unreliable. Please let me know if you have alternative, reliable information about RAID 5 write speeds. My goal is to have at least 100MBps write speeds. If it is possible to achieve this with 4 or 5 drives in RAID 5 using an on-board controller, then I would be very interested (but at this point, it's going to take a lot to convince me that I can trust RAID 5). This would reduce the number of drives required for my data and I think it would also increase my overall read performance (I think with a 6-Disk RAID 01 setup, read performance would be 3x a single drive, but a 4-Disk RAID 5 setup would be 4x a single drive?).

RAID 5's advantages are that it gives a decent ratio of available storage to the sum of the disk capacities, as you only "lose" one disk, and it has decent read speeds. Write speeds aren't that great, so it is primarily used for file and database storage, where reads are more important than writes and disk space is at a premium. Write speeds on a RAID 5 are about that of a single drive. By adding drives to a RAID 5, you increase capacity and read speed, but do nothing for write speeds. So to get 100 MB/sec write out of a straight RAID 5, you'd need drives that can push 100 MB/sec or more. With 4 or 5 of those, you'd be looking at a staggering 300 or 400 MB/sec read speed.

Without using RAID 0, the very best you're going to do with a single RAID level is writing at the speed of one disk. You have to go to a nested RAID: either a redundant level like 1 or 5 made out of RAID 0 sets, OR a RAID 0 made from redundant sets like 1 or 5. Performance of 01 and 10 is similar, as is the performance of 05 and 50. Generally it is suggested to make a RAID 0 of redundant volumes (10 or 50) rather than a redundant volume of RAID 0s (01 or 05), but both are fine as long as you replace a dead HDD right when it dies. Assuming your drives have 70 MB/sec read/write speeds, which is typical, RAID 01/10 in your case would give a read speed of about 3 times a single drive and a write speed a little slower than 3 times a single drive, so the read and write speeds would be roughly 200 MB/sec. A RAID 05/50 has read speeds of 4 times a single drive and write speeds of twice a single drive, so reads are about 280 MB/sec and writes about 140 MB/sec. RAID 01/10 would give you 3 drives' worth of capacity and RAID 05/50 would give you four drives' worth. Since you should be buying an 8-port controller anyway, most of those support 01 or 10 and also 05 or 50, so it's just a question of whether write speed or read speed + capacity is more important.
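
A quick sketch of those rules of thumb, assuming six drives at ~70 MB/sec and 250 GB apiece (illustrative numbers only, not measurements):

# Rule-of-thumb numbers for six drives in RAID 10 vs. RAID 50, assuming
# ~70 MB/s and 250 GB per drive. Idealized estimates, not benchmarks.
speed, cap = 70, 250

# RAID 10: three mirrored pairs, striped.
raid10_read     = 3 * speed    # ~210 MB/s (about 3x a single drive)
raid10_write    = 3 * speed    # a little under this in practice
raid10_capacity = 3 * cap      # 750 GB -- half the raw space

# RAID 50: two 3-drive RAID 5 sets, striped.
raid50_read     = 4 * speed    # ~280 MB/s ((3-1) data drives per set, two sets)
raid50_write    = 2 * speed    # ~140 MB/s (roughly one drive's write speed per set)
raid50_capacity = 4 * cap      # 1000 GB -- one drive lost to parity per set

print(raid10_read, raid10_write, raid10_capacity)
print(raid50_read, raid50_write, raid50_capacity)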

I would prefer to keep my OS/Applications separate from my Data disks. If my operating system fails (I tend to screw around with it sometimes) and I am required to format the disk (not just the partition), then I believe it would be better for me to have the OS/Applications on their own disk(s).

I do this too. Mine is primarily because I need the OS to boot before I can access my array as it's an OS-controlled one, but it's a good idea anyway.

I’m not worried about redundancy with my OS since I plan on taking images of the drive once a week.

Then RAID 0 away. RAID 0 is fine when it's data that's easily replaceable and the down time to reload it isn't an issue.

I would prefer to have everything contained within my case, attached to a single motherboard. If this is not possible/recommended with my current setup, then I will have to reduce the number of disks to make it work.

Then you will want to get a nice server case with 10 or so HDD bays. Also, make sure to get a good, solid PSU with enough SATA power connectors. Server cases generally have slots for redundant PSUs, so you might need to buy two PSUs to get enough power connectors; my 500 W PSU only came with four SATA power connectors. Some drives can use Molex connectors as well, especially Western Digital drives.

I would be happy to have the OS on just a single drive. I assumed it would perform much better in RAID 0, but I suppose an OS does not need the higher throughput. Would the OS benefit more from a single, fast-seeking Raptor? What do you think about combining the page file with the OS drive (considering that there will be at least 6GB of RAM and the page file will not be accessed very often because of that)?

Putting the pagefile on the same drive as the OS is good and was discussed above. With as many HDDs as you want all in one case, by all means put the OS on one 10,000 rpm HDD.

What about adding the Photoshop scratch disk to the OS drive as well? This is probably not a good idea. I’ve always heard that it is much better to have the scratch drive separate from the OS.

Keep them separated, as discussed in depth above. Make this drive a 10,000 rpm unit too.

My overall budget is no more than $4000 CDN. Preferably between $3000-$3500 CDN. My budget for HDDs/storage solutions is between $1000-$1500 CDN.

You should be able to pull this off for that amount, which is something like US$3000.

Summary of my recommendations:
1. Get 6 7200 rpm enterprise-grade HDDs for your data volume like Seagate NL or WD Caviar RE2.

2. Put them in RAID 10 for ~200 MB/sec r/w speed and 3 disks' worth of capacity OR in RAID 50 for ~280 MB/sec read and ~140 MB/sec write speeds and 4 disks' capacity.

3. Get two identical 8-port PCIe x4 or x8 SATA RAID controllers that can handle RAID 10 and 50. I'd spring for a hardware RAID controller like the 3ware 9650SE. That will set you back a grand US, but it'll be far better than motherboard RAID. You could get just one for $500, but if your card dies on you (not that common, but it does happen.) you'd be wishing you got two.

4. Get a motherboard of your choice but make sure that it has at least a PCIe x4 slot in addition to a PCIe x16 slot for the GPU. An x16 and an x8 work, as do two PCIe x16 slots. I strongly suggest a workstation board, be it a "desktop" workstation board with a standard socket 775 or AM2 or a dual-socket 771 or 1207 unit. It will be a little more stable on you and the dual-CPU ones allow for many more cores than the single-CPU boards.

5. Get an appropriate processor for your board. If you're going socket 775 Intel, spend the extra few dollars and get a Xeon E30x0 instead of the Core 2 Duos as they run a little cooler. Ditto with the Opteron 1200 series over the Athlon 64 X2s if you want an AM2 board.

6. Get two 10,000 rpm Western Digital Raptors. One for the OS, one for the scratch disk. Keep the pagefile on the OS's disk.
 

WizardOZ

Distinguished
Sep 23, 2006
You are overlooking two very critical points here. First, on-board RAID is essentially software-controlled, which is OK as long as you understand the performance consequences. Second, on-board RAID controllers are mutually incompatible, not only within a brand but between brands. Even newer revisions of a specific model are very frequently unable to recognize or support RAID set-ups created on previous versions of that model. There is no good reason for this, but that's how it is. Equally dubious is that standalone hardware controllers also exhibit this tendency toward proprietary implementations; they are portable, however. In your specific case, I would strongly urge you to get a standalone hardware controller for the array you are contemplating. You would be wise to seriously consider getting two identical controllers, in case one goes south, or to expand your capabilities.

The suggestion to set up a RAID 5 array is very good and will provide you with an excellent level of data security. Unless you are doing a lot of streaming video work, there is little overall benefit from a RAID 0 set-up, while it significantly increases your risk of data loss. Note that the expected failure rate of any individual hard drive is a statistical probability. When 2 or more drives are combined in something like RAID 0, the overall probability of failure increases: the more drives, the higher the probability. This is because the combined probabilities are multiplicative, not additive. In other words, the chance of losing the array is 1-(P1 x P2 x ... x Pn), NOT 1-(P1+P2+...+Pn), where Px is the probability that a given drive survives the period (something you can estimate from its MTBF). The first expression is the correct one, and it gets bigger with every drive you add.
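
For example, a quick sketch with a made-up per-drive survival probability shows how fast the risk grows as you stripe more drives together:

# Worked example of the multiplicative rule above. The 97% per-drive survival
# probability over some period is a made-up figure for illustration only.
p_survive = 0.97

for n in (1, 2, 4, 6):
    p_array_fails = 1 - p_survive ** n   # 1 - (P1 x P2 x ... x Pn)
    print(f"{n} drive(s) in RAID 0: {p_array_fails:.1%} chance of losing the array")

# 1 drive: 3.0%, 2 drives: 5.9%, 4 drives: 11.5%, 6 drives: 16.7%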

One last detail: RAID is NOT a substitute for a proper data back-up procedure. That is not the purpose of RAID. Other than level 0, all RAID set-ups are designed and intended to provide real-time protection against data loss due to hardware failure. The higher levels include the ability to hot-swap a defective drive (or two) and rebuild the array on the fly while continuing operation. This characteristic is complementary to proper backups of the data in the array.
 

dawgma

Distinguished
Feb 17, 2007
MU_Engineer

Thanks again for your reply. This thread has really helped me polish my initial design and narrow down the options.

For the sake of brevity, I will post my thoughts just based on your list of recommendations:


1. Get 6 7200 rpm enterprise-grade HDDs for your data volume like Seagate NL or WD Caviar RE2.

2. Put them in RAID 10 for ~200 MB/sec r/w speed and 3 disks' worth of capacity OR in RAID 50 for ~280 MB/sec read and ~140 MB/sec write speeds and 4 disks' capacity.

I think you've convinced me to go with a nested RAID level and spring for some RAID controllers :D

I believe RAID 50 offers everything I am looking for because it meets my basic read/write goals of 150/100MBps and it allows for higher capacity over RAID 10. With 4 drives worth of storage I will be able to go with cheap (~$100CDN) drives that are either 250GB or 320GB each (I am not looking for much more than 1 TB in total).

These look pretty good: 250GB Seagate NL, 250GB 7200.10 Barracuda, 250GB WD Caviar RE.


3. Get two identical 8-port PCIe x4 or x8 SATA RAID controllers that can handle RAID 10 and 50. I'd spring for a hardware RAID controller like the 3ware 9650SE. That will set you back a grand US, but it'll be far better than motherboard RAID. You could get just one for $500, but if your card dies on you (not that common, but it does happen.) you'd be wishing you got two.

Right away I can tell you that my budget has no room for high-end RAID controllers - let alone 2 of them! Based on my original budget of $1000-$1500 for storage, so far I have:
+ $600 for the 6-disk RAID 50 array
+ $220 74GB Raptor
+ $145 36GB Raptor
TOTAL: ~$965

This leaves me with about $500 CDN for two RAID cards. So, what I would like to do is buy one mid-range card for now and a backup card in 4-6 months' time.

Perhaps one of these cards would suit my needs for the 6-drive array? HighPoint RocketRAID 2322, or the LSI LOGIC LSI00042-F.


4. Get a motherboard of your choice but make sure that it has at least a PCIe x4 slot in addition to a PCIe x16 slot for the GPU. An x16 and an x8 work, as do two PCIe x16 slots. I strongly suggest a workstation board, be it a "desktop" workstation board with a standard socket 775 or AM2 or a dual-socket 771 or 1207 unit. It will be a little more stable on you and the dual-CPU ones allow for many more cores than the single-CPU boards.

I was hoping that since I am getting a RAID card, I would be able to save some money on the motherboard. I am not interested in a server-class/workstation board. I doubt I would ever fill out more than half the available slots, plugs and sockets on boards like those.

This system is primarily a Photoshop workstation, and my main concern is fast data access and security. Beyond that, I do not need multiple super-computing CPUs or top-of-the-line graphics cards. I would feel like an idiot if I overbuild this system. If I ever decided to move onto video editing in the future, then I would rather build a new system from scratch instead of future-proofing this current system.

Considering that the RAID card can support up to 8 drives, I no longer need a motherboard with 8 or 10 SATA connections. I believe a board with 4-6 SATA connections should be sufficient. Beyond that, I just need a motherboard that can handle:
- the RAID card
- 4 DDR2 slots for 6GB RAM
- 1 PCI-E x16
- Native dual-core support
- Overclocking an E6600 to 3.2GHz

Preferably it would also be compatible with Intel's next generation of chips that will be released in '07-'08.


5. Get an appropriate processor for your board. If you're going socket 775 Intel, spend the extra few dollars and get a Xeon E30x0 instead of the Core 2 Duos as they run a little cooler. Ditto with the Opteron 1200 series over the Athlon 64 X2s if you want an AM2 board.

The plan has always been to go with the Core 2 Duo E6600, overclocked using air to approx. 3.2Ghz. I don't think I want to get into server-class hardware just yet.


6. Get two 10,000 rpm Western Digital Raptors. One for the OS, one for the scratch disk. Keep the pagefile on the OS's disk.

Done, and done!
 
I believe RAID 50 offers everything I am looking for because it meets my basic read/write goals of 150/100MBps and it allows for higher capacity over RAID 10. With 4 drives worth of storage I will be able to go with cheap (~$100CDN) drives that are either 250GB or 320GB each (I am not looking for much more than 1 TB in total).

These look pretty good: 250GB Seagate NL, 250GB 7200.10 Barracuda, 250GB WD Caviar RE.

The Seagate NL and WD Caviar RE drives are made for RAID arrays and have a few features that might help when running RAID arrays with parity (having to do with error correction). So I'd recommend them over the consumer-grade 7200.10.

Right away I can tell you that my budget has no room for high-end RAID controllers - let alone 2 of them! Based on my original budget of $1000-$1500 for storage, so far I have:
+ $600 for the 6-disk RAID 50 array
+ $220 74GB Raptor
+ $145 36GB Raptor
TOTAL: ~$965

This leaves me with about $500 CDN for two RAID cards. So, what I would like to do is buy one mid-range card for now and a backup card in 4-6 months' time.

Perhaps one of these cards would suit my needs for the 6-drive array? HighPoint RocketRAID 2322, or the LSI LOGIC LSI00042-F.

The HighPoint RocketRAID 2322 has been reviewed by several sites as a pretty decent card. However, it has external "mini-SAS" ports which are used to connect to its X4 4-drive and X8 8-drive external enclosures; it has no internal SATA ports. The 2322 + X8 external enclosure might be a good combo, but you want your drives inside the computer, so I'd suggest the 8-port RocketRAID 2330, which has the same PCIe x4 connector and RAID capabilities but 8 internal SATA connectors. It's also about $250 or so. I have a HighPoint RR 2310 that I use as a "dumb" adapter, where the card is not set up in RAID mode because my OS handles that. It is much, much faster than using the onboard motherboard ports, and I have to say HighPoint seems to have pretty decent drivers and keeps them updated, at least in my experience.

I was hoping that since I am getting a RAID card, I would be able to save some money on the motherboard. I am not interested in a server-class/workstation board. I doubt I would ever fill out more than half the available slots, plugs and sockets on boards like those.

This system is primarily a Photoshop workstation, and my main concern is fast data access and security. Beyond that, I do not need multiple super-computing CPUs or top-of-the-line graphics cards. I would feel like an idiot if I overbuild this system. If I ever decided to move onto video editing in the future, then I would rather build a new system from scratch instead of future-proofing this current system.

Considering that the RAID card can support up to 8 drives, I no longer need a motherboard with 8 or 10 SATA connections. I believe a board with 4-6 SATA connections should be sufficient. Beyond that, I just need a motherboard that can handle:
- the RAID card
- 4 DDR2 slots for 6GB RAM
- 1 PCI-E x16
- Native dual-core support
- Overclocking an E6600 to 3.2GHz

Preferably it would also be compatible with Intel's next generation of chips that will be released in '07-'08.

You can run a RAID card in a standard motherboard if it has the appropriate slots. I run mine in an ABIT KN8-SLi board in the second PCIe x16 slot and it works just dandy. Since you want an Intel CPU, look for a board with an Intel 975X*, NVIDIA 650i SLi, NVIDIA 680i SLi, or AMD (ATi) CrossFire 3200 chipset. An Intel 3000 chipset will work too, but those are on socket 775 server boards that run $200+. These chipsets all have enough PCIe lanes to run a standard PCIe x16 graphics card and still leave a slot for the RAID card. They also support 8GB of RAM and have support for dual-core and often quad-core chips as well. At least 4 SATA ports will be offered. *Note: some older Intel 975X boards DO NOT support Core 2 Duo/Core 2 Quad chips. If you buy a 975X board, look for Core 2 support. Other chipsets, like Intel's popular P965/G965, only have one PCIe x16 slot for a GPU and then a couple of PCIe x1 slots, which will not fit a PCIe x4 card.

The plan has always been to go with the Core 2 Duo E6600, overclocked using air to approx. 3.2Ghz. I don't think I want to get into server-class hardware just yet.

I'm not a real big proponent of overclocking, especially on a machine with your kind of intended usage. I'd not exactly want my machine acting strange on me while I'm trying to get real work done. The E6600 is faster than my X2 4200+, and I think the Manchester is plenty quick. If you want a faster CPU, then get an E6700 or X6800, or go with one of the two Core 2 Quads. I think Photoshop is multithreaded, and if it is, you'd see a bigger boost from plopping in a Core 2 Quad Q6600 at stock than from anything you'd do overclocking an E6600 short of liquid nitrogen. The Q6600 is about US$850 and the E6600 is a little over US$300, so if it were me, I'd stick with the stock E6600 and not worry a whit about stability problems. But it is your choice.

Good luck with the build! One piece of advice I think you'll need: if you get an SLi board, put the GPU in the bottom PCIe x16 slot and the RAID controller in the top slot. Otherwise it will not boot. I learned this one...
 

dawgma

Distinguished
Feb 17, 2007
MU_Engineer

I'm going to take everything you said under consideration. Your advice is going to go a long way in helping me create a 'smarter' build.

- I will take a look at the RocketRAID 2330... so far it sounds like a strong contender.

- I will do a little more research on the difference between socket 775 and 975X. If it means saving a few bucks for basically the same performance, 975X sounds good.

- I will reconsider overclocking for now. I should put the system together first and see how it performs before I do anything risky to the CPU. But in all likelihood, any C2D should be blistering fast for Photoshop. So I will hold off on OCing for now.

--------------------------------

Well, thanks for your help. In the end I think we managed to establish a better HDD arrangement than what I had originally posted... and we also optimized a few other things along the way. It's hard to talk about a single aspect of a new system without getting into almost everything else...

I hope this post will be useful to a few other people out there looking for fast data access + security in their next system as well.

Cheers,

Art.
 
The 975X is an Intel chipset FOR socket 775. A chipset is what allows the processor to talk to the attached devices like RAM, hard drives, PCI and PCIe slots, etc. The chipset comes with the motherboard and there are several different kinds of chipsets that work with socket 775. I happened to list several that would do the job. If it were me, I'd look at a socket 775 board with the NVIDIA 650i SLi chipset as it's going to be less expensive than the boards with Intel 975X chipsets.
 
