That is likely because you don't work with data centers.
Each server brought in must be tested for peak and average power usage.
Then there are cooling costs and battery backup costs (no, not little dinky units, the ones that are pickup-truck-size or bigger, with lots of them side by side).
When you have 500+ servers lined up and running side by side, this is critical.
The point is not Intel vs AMD. The power usage of the 5100 is much better than the 5000. However, many data centers were switching to Optys because of the power savings, and those savings are much higher than what the graphics show.
As for the 5100 series, the 5100s likely have the advantage over the Optys with 4 DIMMs and trail with 8 DIMMs.
If I average the 4- and 8-DIMM cases and assume 6 DIMMs per server across my 500 servers, I would be looking at roughly 66 W x 500 = 33,000 W.
This article is about servers, and power usage is really about data centers, not single machines. A 33,000 W difference is extreme once you factor in cooling needs, battery backups, circuits, and so on.
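To make the back-of-envelope math concrete, here is a minimal sketch. The 66 W per-server delta and the 500-server count come from the paragraph above; the cooling overhead factor and electricity price are purely illustrative assumptions, not figures from the article.

```python
# Back-of-envelope fleet power delta.
SERVERS = 500
DELTA_W_PER_SERVER = 66          # per-server power difference (W), from the 4- vs 8-DIMM averaging above
COOLING_OVERHEAD = 0.5           # assumed: ~0.5 W of cooling per W of IT load
PRICE_PER_KWH = 0.10             # assumed electricity price, USD/kWh

it_delta_w = SERVERS * DELTA_W_PER_SERVER            # 33,000 W of extra IT load
total_delta_w = it_delta_w * (1 + COOLING_OVERHEAD)  # add the cooling load on top
annual_kwh = total_delta_w / 1000 * 24 * 365         # servers run 24/7 all year
annual_cost = annual_kwh * PRICE_PER_KWH

print(f"IT load delta:      {it_delta_w:,} W")
print(f"With cooling:       {total_delta_w:,.0f} W")
print(f"Annual energy:      {annual_kwh:,.0f} kWh")
print(f"Annual cost (est.): ${annual_cost:,.0f}")
```

With those assumed numbers the 33 kW IT-load difference becomes roughly 50 kW at the facility level and on the order of 400,000 kWh per year, which also has to be matched by UPS and circuit capacity.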
Mind you, this is not a critique of the 5100; it just shows that a 5100 system could be a little better or worse than the Opty depending on configuration. The point is that it does not crush the Opty on power the way the charts would seem to show.
Before the 5100 I would not have considered a 5000, and in fact we switched to 100% Optys. Now that the 5100s are out, we would consider a move back. However, much of the work we do is very memory- and bus-intensive, and these tests do not cover that.
The information posted was useful, but it is clear that for the most part the testing was done from the perspective of a desktop system rather than a server system.