jhopper

Distinguished
Nov 17, 2003
I have waited a long time for the day when an alternative to SCSI RAID cards finally becomes available that can match or beat the performance of SCSI RAID. I just finished reading the article/review of the new SATA RAID cards and was pretty excited, but one huge factor was left out, and it seems to be left out of most of the discussions I see about IDE/SATA vs. SCSI: CPU utilization.

I am a UNIX systems engineer for a hosting company, and we pay an incredible amount for the I/O subsystem in our servers. Once the 73GB U320 SCSI drives are added in, plus the $400 SCSI RAID card, it makes up about 1/3 to 1/2 of the price of the server. But the performance, as far as I've seen, is unparalleled. And with dozens of virtual private servers all running on the same machine, each containing perhaps hundreds of processes, the I/O subsystem must be nothing short of blazing. One of the most important aspects of this is CPU utilization: it simply would not be feasible to manage hosting the way we do if the CPU utilization from I/O were not minimal.

Herein lies the problem I've seen with IDE so far. The transfer rates have been competitive with SCSI for some time, but the CPU utilization is monstrous by comparison. Even with a dedicated controller card from 3Ware, the CPU utilization will bring the box to its knees under heavy access.

So, my question to the people on the forum is: does anyone have a statistical analysis of the CPU utilization of these new SATA cards, or know of a review on some other site with that information? I'd still rather just read articles on THG, though. I wonder if they would be interested in adding this information to their article?
 

sjonnie

Distinguished
Oct 26, 2001
Go to Storage Review (http://www.storagereview.com) and check out their leaderboard.

From their database I see that the Maxtor Atlas 15K (73GB Ultra320 SCSI) makes 1,880 requests/s, whereas the Western Digital Raptor WD740GD (74GB SATA) makes 2,330. Not that much of a difference. The lowest drive in their table is the Seagate Barracuda 7200 (160GB SATA) with 1,730 requests and the highest the Western Digital Caviar WD800AB (80GB ATA-100) with 4,350.

Clearly there is a difference in CPU utilization, but it's not so great as to make IDE drives an unfeasible prospect. Having said that, the figure for the WD740GD is without tagged command queuing, which is essential in your server. I understand that TCQ will be a firmware implementation in the WD740GD, as opposed to the hardware currently used in SCSI drives. I would expect that to increase the CPU load marginally, but of course also the overall performance.
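
For what it's worth, the relative gaps in those figures work out like this (a throwaway Python snippet; whatever unit Storage Review's database uses, only the ratios matter):

# Relative comparison of the Storage Review figures quoted above.
drives = [
    ("Maxtor Atlas 15K (73GB U320 SCSI)", 1880),
    ("WD Raptor WD740GD (74GB SATA)", 2330),
    ("Seagate Barracuda 7200 (160GB SATA)", 1730),
    ("WD Caviar WD800AB (80GB ATA-100)", 4350),
]
baseline = float(drives[0][1])
for name, score in drives:
    print("%-36s %5d  (%.2fx the Atlas 15K)" % (name, score, score / baseline))

So even the worst drive in the table is within a factor of about 2.5 of the best, nothing like the gap you're describing.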

 

jhopper

Distinguished
Nov 17, 2003
Thank you for the link. I'll be sure to check that out.

From the figures in your post, perhaps there is more to the problem than I'm thinking, because in real-world use I have otherwise identical servers here where the only difference is the drives and controllers, and the CPU utilization under disk load seems to differ by more like 100 to 1 instead of nearly 2 to 1 (1800/4350). My next guess was the difference between the SCSI and IDE command sets, or perhaps TCQ, but it sounds like you took that into account. Perhaps it's FreeBSD's driver code being better optimized for SCSI than for IDE.

Perhaps I can convince the company I work for to buy a sample of these SATA drives and controllers so that I can see the results for myself...
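
If that happens, the first thing I'd run on each box is something crude like this Python sketch (the scratch path is just a placeholder): it writes a big file and compares the process's CPU seconds against the wall clock, and the system-time component is where I'd expect the IDE overhead to show up.

import os, time

# Crude CPU-per-I/O probe: write a large file and compare the CPU
# seconds (user + system) this process consumes against wall-clock time.
PATH = "/tmp/io_probe.bin"   # placeholder scratch path
CHUNK = b"\0" * (1 << 20)    # 1MB buffer
TOTAL_MB = 512

t0 = time.time()
u0, s0 = os.times()[:2]
f = open(PATH, "wb")
for _ in range(TOTAL_MB):
    f.write(CHUNK)
f.flush()
os.fsync(f.fileno())         # force the data out to the disks
f.close()
u1, s1 = os.times()[:2]
wall = time.time() - t0

cpu = (u1 - u0) + (s1 - s0)
print("wrote %d MB in %.1f s" % (TOTAL_MB, wall))
print("CPU seconds: %.2f (%.1f%% of wall clock)" % (cpu, 100.0 * cpu / wall))
os.remove(PATH)

Run the same thing on the SCSI box and the SATA box, and the ratio of the two CPU figures should be a lot more telling than raw transfer rates.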
 

jhopper

Distinguished
Nov 17, 2003
Actually, one thing I forgot to mention is that I was originally concerned not so much about the drives as about the controller cards. A vanilla SCSI setup, like the ones built into mobos, generally doesn't get ultra-low CPU utilization either. But when you throw on the equivalent of an Adaptec 2100S or 2200S, the utilization drops tremendously, to nearly 0 even under the heaviest loads. This is probably due to all the logic on the card reducing the workload left for the CPU. Are there any reviews available that compare SATA controllers vs. SCSI controllers, with a full breakdown of CPU utilization?
 
Guest
That is, I guess, unless you want the results of your data to deceive people...
 

jim552

Distinguished
May 1, 2003
Here is some information that may help you think about this.

Compaq server with a 500MHz Pentium III CPU, an Adaptec 131 RAID controller card, and 16GB Compaq 10,000 RPM hard drives.

Athlon 1800 server with a Promise SATA TX4 RAID card and 36GB Raptor 10,000 RPM hard drives, using the Asus A78X motherboard.

Opteron 246 server with onboard Promise SATA RAID and 36GB Raptor 10,000 RPM hard drives, using the Asus SK8N motherboard.

All systems are using their respective onboard 10/100 network interfaces.

Tests were done using a 617MB zip file. All systems have Windows 2000 Service Pack 4, and all tests were run in Windows 2000 attached to the network using Terminal Services. All systems also had a single pair of mirrored hard drives. (The Opteron system was serving at the time.)

Compaq disk to disk: 210 seconds, CPU 10-21%, mostly 10%
Compaq network to disk: 107 seconds, CPU 18-23%, mostly 22%

Athlon disk to disk: 52 seconds, CPU 18-22%, mostly 18%
Athlon network to disk: 90 seconds, CPU 69-74%, mostly 69%

Opteron disk to disk: 47 seconds, CPU 7-11%, mostly 7%
Opteron network to disk: 60 seconds, CPU 3-9%, mostly 9%

So I am interpreting this as follows.

The Compaq is able to save the file from the network about as fast as its hard drives can work. Notice that disk-to-disk takes about twice as long, which backs up that assumption, since the file must be read from one location and then rewritten to a different one. It is likely that the network throughput is actually better than the hard drive throughput.

The best that can be hoped for via network transfer seems to be around 60 seconds, so the Opteron is actually being slowed down by the network throughput. If this theory is correct, then with a 1Gb/s network card the writing of the file should actually be closer to 24 seconds. (I don't have one, and won't until next year, so I guess we will have to wait on that one.)
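
The 60-second figure is easy to sanity-check with a couple of lines of Python (100Mb/s is the nominal wire speed; real-world is a bit lower):

# Lower bound for pushing a 617MB file over Fast Ethernet.
file_mb = 617
link_mbps = 100                    # nominal 100Mb/s
ideal = file_mb * 8.0 / link_mbps  # seconds, assuming zero overhead
print("ideal transfer time: %.0f s" % ideal)   # ~49 s

With protocol overhead you realistically get 80-90% of wire speed, so 55-60 seconds really is the floor, which matches the Opteron result. On gigabit the wire stops being the bottleneck (around 5 seconds ideal) and the disk write takes over, which is where my 24-second guess comes from: roughly half the 47-second disk-to-disk time, since that involves a read plus a write.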

This particular Opteron server will also be tested with a Promise SATA TX4 card, which will help me determine whether that card is useful for me. (I do like the MOST RECENTLY released software for it, though. The previous software wasn't all that good.)

Likely in January, I will have Opteron systems with 2 pairs of mirrored drives. I am waiting for the new 74GB Raptor drives for that.

Our databases are not all that large here. So basically, I had servers with pairs of mirrored drives.

I have been waiting for about a month now for the 74GB Raptor hard drives, as they are supposed to offer up to 30% better throughput. (I don't really believe anything until I see it, but that is what I read.)

When the new Raptors come out, I will be testing with 2 pairs of mirrored drives to see if that offers any benefits. (I talked to Western Digital, and they said OEMs will get a batch in late November/December, with end-user availability after that.)

I hope that helps you in thinking about what your options are, or what you may want to experiment with.

I am a FIRM BELIEVER that you need to use a system for its intended use in order to see how it actually functions. There are just too many variables otherwise.

I was pretty shocked at the percentages on the Athlon system, though. Right now I am attributing that to the chipset drivers. (I hope to h*** it is NOT the Promise card...) I never really was that interested in measuring performance before, since we didn't have any throughput problems. I am in test mode now because I am justifying to myself standardizing on Opterons and Raptors.

I hope that helps.....
 

advent

Distinguished
Dec 15, 2003
If you intend to use RAID5, make sure the RAID adapter has a hardware XOR engine; otherwise the CPU has to calculate the parity. For example, I tried a 4-drive RAID5 array with the HighPoint RocketRAID 1640 adapter: the CPU jumps to 100% (!) when writing to the array. And if you ever lose a disk, the array becomes so slow (5MB/s) that it is unusable, and it's even worse when rebuilding the array.
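
For anyone wondering what the XOR engine actually offloads: RAID5 parity is just a byte-wise XOR across the data blocks of each stripe, and rebuilding a lost block is the same XOR again. A toy Python sketch (3 data disks plus 1 parity):

# Toy RAID5 parity: the parity block is the XOR of the data blocks.
def xor_blocks(*blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i in range(len(block)):
            out[i] ^= block[i]
    return bytes(out)

d0, d1, d2 = b"disk0ABC", b"disk1DEF", b"disk2GHI"
parity = xor_blocks(d0, d1, d2)

# Lose d1: rebuild it from the survivors plus parity. Doing this for
# every stripe on the array is the work that pins a software-XOR
# setup at 100% CPU during writes and rebuilds.
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1

Trivial per stripe, but a hardware engine does it for free while a software implementation burns host CPU on every single write.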

IMHO, the CPU utilization is not of great importance as long as it's reasonable: with part of the money you save by using SATA instead of SCSI, just buy yourself a slightly faster CPU.

I don't really believe in benchmarks, as they only give a rough idea and don't reflect what you will experience in your specific application. For example, you might see that CPU usage is 15% when reading a big file (sequential read), but that doesn't tell you what the CPU usage will be with lots of random accesses. My suggestion: choose an adapter and disks that should match your requirements a priori, and then perform tests in real conditions.
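
To make that concrete, a rough test along these lines shows the gap between the two access patterns (Python sketch; the test file path is a placeholder, and the file should be bigger than your RAM, otherwise the cache flatters both numbers):

import os, random, time

PATH = "/tmp/io_probe.bin"   # placeholder: a large pre-created test file
BLOCK = 64 * 1024
COUNT = 2000

def cpu_during(fn):
    # Returns (CPU seconds used by this process, wall-clock seconds).
    u0, s0 = os.times()[:2]
    t0 = time.time()
    fn()
    wall = time.time() - t0
    u1, s1 = os.times()[:2]
    return (u1 - u0) + (s1 - s0), wall

def sequential():
    f = open(PATH, "rb")
    for _ in range(COUNT):
        f.read(BLOCK)
    f.close()

def scattered():
    size = os.path.getsize(PATH)
    f = open(PATH, "rb")
    for _ in range(COUNT):
        f.seek(random.randrange(0, max(1, size - BLOCK)))
        f.read(BLOCK)
    f.close()

for name, fn in (("sequential", sequential), ("random", scattered)):
    cpu, wall = cpu_during(fn)
    print("%-10s %.2f CPU s over %.1f s wall" % (name, cpu, wall))

Same amount of data read in both cases, but the CPU cost and elapsed time can differ wildly, which a single sequential benchmark number will never show you.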

My test system for reference:
A7N8X Deluxe PCB1.4, AthlonXP 1800+
2x512MB DDR266 in dual channel, CL2.5
Creative GeForce3 Ti 500 w/ 64MB
Seagate Barracuda V 160GB SATA drives