More of a trivia question, but it's something that has been bothering me and I want to know why.
Back before AHCI, there were many different drivers and implementations, so CPU usage could vary quite a bit. But now that AHCI is used for nearly everything, why does CPU usage vary among hard drives for a given throughput and block size?
Assuming two drives are using the exact same generic built-in driver, what would cause one drive to incur more kernel time than the other for a given I/O load?
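To make the comparison concrete, here's a rough sketch of the kind of measurement I mean (the paths, block size, and total size are just placeholders): read the same amount of data at the same block size from each drive and compare the kernel CPU time reported by os.times(). In practice you'd want to drop the page cache or use O_DIRECT first so both drives actually hit the hardware rather than RAM.

```python
import os
import time

# Hypothetical test files, one on each drive being compared.
PATHS = ["/mnt/driveA/testfile", "/mnt/driveB/testfile"]
BLOCK_SIZE = 64 * 1024      # fixed block size for the comparison
TOTAL_BYTES = 1 * 1024**3   # read 1 GiB from each drive

for path in PATHS:
    start_cpu = os.times()          # user/system CPU time consumed so far
    start_wall = time.monotonic()
    read = 0
    # Unbuffered binary reads so each loop iteration is a real read() syscall.
    with open(path, "rb", buffering=0) as f:
        while read < TOTAL_BYTES:
            chunk = f.read(BLOCK_SIZE)
            if not chunk:
                break
            read += len(chunk)
    end_cpu = os.times()
    end_wall = time.monotonic()

    sys_time = end_cpu.system - start_cpu.system   # kernel CPU time spent
    elapsed = end_wall - start_wall
    print(f"{path}: {read / elapsed / 1e6:.1f} MB/s, "
          f"{sys_time:.2f} s kernel CPU "
          f"({sys_time / elapsed * 100:.1f}% of wall time)")
```

Running something like this against two AHCI drives with the same throughput and block size is where I see the kernel-time difference I'm asking about.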
Thanks