AHCI - CPU usage among hard drives

Kewlx25

Distinguished
More of a trivia question than anything, but it's something that has been bothering me, and I want to know why.

Back before AHCI, there were many different drivers and implementations, so CPU usage could vary quite a bit. But now that AHCI is used for nearly everything, why does CPU usage vary among hard drives for a given throughput and block size?

Assuming two drives are using the exact same generic built-in driver, what would cause one drive to consume more kernel time than another for a given I/O load?

Thanks :)
 
Chances are that one drive is receiving more I/O requests than the other. Two drives can be transferring data at the same rate yet consume different amounts of CPU time if one drive is, for example, processing 100 reads of 1MB each per second while the other is processing 1000 reads of 100KB per second. The CPU time required to manage the I/O is related to the number of requests per second, not the number of bytes per second.
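To put rough numbers on that, here's a minimal Python sketch using the figures above. The fixed per-request kernel cost is an assumption; the real cost depends on the driver and the rest of the storage stack:

```python
# Minimal sketch with assumed numbers: two drives can move the same
# bytes per second while generating very different request rates, and
# kernel CPU cost tracks requests, not bytes.

PER_REQUEST_CPU_US = 10  # assumed fixed kernel cost per I/O request

workloads = {
    "drive A (1 MB reads)":   {"iops": 100,  "bytes_per_req": 1_000_000},
    "drive B (100 KB reads)": {"iops": 1000, "bytes_per_req": 100_000},
}

for name, w in workloads.items():
    throughput_mb = w["iops"] * w["bytes_per_req"] / 1e6
    cpu_us = w["iops"] * PER_REQUEST_CPU_US
    # Same ~100 MB/s either way, but drive B costs 10x the CPU time.
    print(f"{name}: {throughput_mb:.0f} MB/s, {cpu_us} us kernel CPU/sec")
```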
 

Kewlx25

Distinguished
While I agree with your post, what I don't understand is how hard drive benchmarks can produce completely different results on different hard drives.

If the OS requests data block 0xABC123, then the hard drive should return that and nothing else, unless a given drive has different prefetch behavior and returns extra data that the OS blindly accepts.

Example: some of the HD benchmarks will transfer X amount of data. If you take the total MB/s and divide it by kernel CPU time, you get MB/s per CPU%.

Assuming the exact same amount of data is transferred and both drives use the same block sizes, the number of I/O requests should be nearly, if not exactly, identical.

This means kernel time per I/O varies among AHCI drives.
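A quick sketch of that metric, with made-up benchmark numbers, just to show how two drives moving the same data can still score differently:

```python
# Hypothetical benchmark results: same data, same block size, yet the
# efficiency metric (MB/s per CPU%) comes out different per drive.

def mbps_per_cpu_pct(total_mb, seconds, kernel_cpu_pct):
    """Throughput divided by kernel CPU usage: MB/s per CPU%."""
    return (total_mb / seconds) / kernel_cpu_pct

# Both drives transfer 4096 MB in 40 s (102.4 MB/s); only kernel time differs.
print(mbps_per_cpu_pct(4096, 40, kernel_cpu_pct=2.0))  # drive A: 51.2
print(mbps_per_cpu_pct(4096, 40, kernel_cpu_pct=3.5))  # drive B: ~29.3
```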
 
OK, I didn't understand you to be referring to identical benchmarks running on the two drives. If you run the same benchmark on both drives, and both are using the same transfer sizes, then I agree the CPU time should normally be the same. Some reasons I can think of that could cause a difference:

- if the drives are attached to different controllers, then different drivers may be involved (see the sketch after this list for one way to check).
- if the drives are configured through the motherboard's BIOS RAID, then different RAID organizations imply different amounts of CPU work.
- if one of the drives is running in PIO mode, it would burn a lot of extra CPU time to transfer each byte read or written, since the CPU moves the data itself instead of the DMA engine doing it.
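On Linux, one rough way to check the first point is to walk up the sysfs tree from each block device and list every driver bound along the path. This is only a sketch: the sysfs layout varies by kernel and bus, and the device names here (sda, sdb) are placeholders:

```python
# Rough sketch for Linux: starting from /sys/block/<dev>/device, walk up
# the sysfs device tree and report each 'driver' symlink, so you can see
# whether two drives share the same controller driver (e.g. ahci).

import os

def drivers_for(dev):
    path = os.path.realpath(f"/sys/block/{dev}/device")
    drivers = []
    while path not in ("/", ""):
        link = os.path.join(path, "driver")
        if os.path.islink(link):
            drivers.append(os.path.basename(os.path.realpath(link)))
        path = os.path.dirname(path)
    return drivers

for dev in ("sda", "sdb"):  # placeholder device names
    print(dev, drivers_for(dev))
```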

I don't really agree with this, because caching is internal to the drive. Caching could improve the transfer rate if it's effective, and a faster transfer rate implies higher CPU utilization to service the I/O requests (other factors such as transfer size being equal), but it wouldn't raise the CPU utilization per transfer.
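Put another way, with assumed numbers: if a drive's cache lets it complete twice the requests per second, total CPU use doubles, but the cost per transfer is unchanged:

```python
# Sketch with assumed figures: effective caching raises the completion
# rate and thus total CPU%, but the kernel cost per transfer is flat.

PER_REQUEST_CPU_US = 10  # assumed kernel cost per request

for label, iops in (("cache-miss heavy", 400), ("cache-assisted", 800)):
    cpu_pct = iops * PER_REQUEST_CPU_US / 1e6 * 100  # share of one core
    print(f"{label}: {iops} IOPS -> {cpu_pct:.2f}% CPU, "
          f"still {PER_REQUEST_CPU_US} us per transfer")
```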
 