Considering that in the real world nobody turns off their computer after finishing one task, efficiency readings should really be taken by measuring every computer over the same wall-clock window, with the window set by the slowest machine. For example, if the machines finish a task in 1, 2, 3, 4, and 5 minutes, the real-world comparison would be 1 min work + 4 min idle, 2 + 3 idle, 3 + 2 idle, 4 + 1 idle, and 5 + 0. That gives an actual comparison based on the same user workload, and we would likely find that all the computers end up at more or less the same efficiency. While the current workload-only efficiency method is fine for extremely long work times (3D rendering of a movie, Folding@home, etc.), it does not reflect how typical individuals use their machines (and IT departments probably couldn't care less).
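A rough sketch of what I mean, with made-up power numbers just for illustration (none of these wattages come from any real review):

```python
# Compare energy over a common wall-clock window, set by the slowest machine.
# Each machine is (task_minutes, active_watts, idle_watts); all values are
# hypothetical placeholders, not measurements.
machines = {
    "A": (1, 200, 40),
    "B": (2, 150, 35),
    "C": (3, 120, 30),
    "D": (4, 100, 25),
    "E": (5, 80, 20),
}

# The slowest task time defines the shared window everyone is measured over.
window = max(task for task, _, _ in machines.values())

for name, (task, active_w, idle_w) in machines.items():
    # Energy in watt-minutes: active power while working, idle power while waiting.
    energy = active_w * task + idle_w * (window - task)
    print(f"{name}: {task} min work + {window - task} min idle = {energy} W·min")
```

With numbers like these, the fast-but-hungry machines and the slow-but-frugal ones land much closer together than a work-time-only measurement would suggest, which is the whole point.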