Blandge :
8350rocks :
That depends on the ratio of CPUs to GPGPUs...it also depends on the workload: some workloads will use more of the CPUs and some will use more of the GPGPUs. The power draw from the two sources can vary depending on the benchmark (often dramatically).
GPGPUs may be able to sustain more FLOPS than CPUs because of their parallel nature, but they still draw more power. When you're talking about 17.59 petaflops...things are on a different level.
They do not draw more peak power per FLOP when processing the massively parallel workloads that they are designed for, otherwise nobody would use them. What is the point of putting 18,688 K20Xs in Titan if they are not as efficient as CPUs? Why not just add 40,000 Opterons and increase efficiency?
The benchmark used in the LINPACK Benchmark is to solve a dense system of linear equations. For the TOP500, we used that version of the benchmark that allows the user to scale the size of the problem and to optimize the software in order to achieve the best performance for a given machine. This performance does not reflect the overall performance of a given system, as no single number ever can. It does, however, reflect the performance of a dedicated system for solving a dense system of linear equations. Since the problem is very regular, the performance achieved is quite high, and the performance numbers give a good correction of peak performance.
Source
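To put "the performance achieved is quite high" in numbers: Titan's quoted LINPACK result can be compared against its theoretical peak. The 17.59 PFLOPS figure is from the thread; the ~27.1 PFLOPS Rpeak is an assumed figure, not stated above.

```python
# Rough HPL efficiency: achieved LINPACK rate vs. theoretical peak.
rmax = 17.59   # petaflops, LINPACK (Rmax) figure quoted in the thread
rpeak = 27.1   # petaflops, Titan's theoretical peak (assumed figure)

efficiency = rmax / rpeak
print(f"HPL efficiency: {efficiency:.0%}")  # roughly 65%
```

That ~65% of peak on a real workload is what makes HPL such a flattering benchmark for GPU-heavy machines.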
GPUs are good at solving linear equations.
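For a concrete sense of the kernel HPL actually times, here is a toy-scale sketch in plain NumPy (this stands in for, and is not, the real distributed blocked-LU HPL code):

```python
import numpy as np

# HPL solves A x = b for a dense N x N matrix A via LU factorization
# with partial pivoting. Real HPL runs use N in the hundreds of
# thousands, distributed across the whole machine.
n = 1000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)  # LAPACK LU solve under the hood

# The benchmark's nominal operation count is (2/3)n^3 + 2n^2 FLOPs.
flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
residual = np.linalg.norm(A @ x - b)
print(f"residual: {residual:.2e}, nominal FLOPs: {flops:.3e}")
```

The O(n^3) FLOPs on O(n^2) data is exactly the regular, compute-dense pattern GPUs excel at.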
http://www.advancedhpc.com/gpu_computing/tesla_gpu_s2050_s2070.html
The 4 GPGPUs used in that HPC solution consume 900 W of power...while 4 Opterons consume an average of 600 W.
Why do you think the most efficient HPC on the "green list" doesn't have them...?
The answer is
DENSITY...you can do far more double-precision FLOPS with GPGPUs bundled together in a tighter space. Running the same number of Opterons would be more efficient in terms of raw power consumption. However, depending on the performance you're after, GPGPUs become a clear advantage for reaching massive numbers, at the cost of power consumption.
If you want to get into FLOPS/watt...that's a different animal, and GPGPUs will do more FLOPS/watt...but your power consumption still increases roughly linearly as you add performance.
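A back-of-the-envelope FLOPS/watt comparison using the thread's power figures. The peak double-precision rates are assumptions not stated in the thread (roughly a Tesla K20X and a 16-core Opteron), so treat the exact numbers as illustrative:

```python
# Peak-rate assumptions (TFLOPS, double precision) -- not from the thread:
#   Tesla K20X ~ 1.31 TFLOPS, 16-core Opteron ~ 0.141 TFLOPS.
gpu_tflops, gpu_watts = 4 * 1.31, 900.0    # 4 GPGPUs at 900 W total
cpu_tflops, cpu_watts = 4 * 0.141, 600.0   # 4 Opterons at 600 W total

gpu_eff = gpu_tflops * 1000 / gpu_watts    # GFLOPS per watt
cpu_eff = cpu_tflops * 1000 / cpu_watts
print(f"GPU: {gpu_eff:.2f} GFLOPS/W, CPU: {cpu_eff:.2f} GFLOPS/W")
```

Under these assumptions the GPUs win peak FLOPS/watt by a wide margin, which is the point: they draw more watts per socket, but deliver far more FLOPS per watt on the workloads they suit.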
As I said,
it depends on the workload. If GPGPUs were so clearly more efficient, then why have a system with CPUs at all? Why not use a low-level, DOS-style OS coded specifically for GPGPUs?
Because GPGPUs are less efficient at some workloads than CPUs. Different benchmarks will stress different components differently.
EDIT: Xeon Phi is not a GPGPU, either...that would be like saying Kaveri is a GPGPU, and it's not.