Yeah, and there are black dots above the 10^2 mark, which means the graph includes CPUs or systems with well over 100 cores.
Yeah, a look at the underlying data shows some weird stuff after 2018. Before that, the outliers were both generations of Xeon Phi, which is debatable but fair enough to include. After that, he bizarrely included two data points for HPC GPUs (one AMD, one Nvidia), even though they can't possibly run SPEC int benchmarks!
He also included data points for Apple M1 and M1 Max, which is really strange, as they're laptop SoCs. Basically everything else included is a server CPU. They introduce outliers for no good reason.
So, I've removed those and added Milan, Genoa, Bergamo, Sapphire Rapids, and Grace. This is the result:
I couldn't exactly match the SPEC2017_int scores used to compute the Single-Thread Perf, so there's an artificial dip in those scores. I also didn't try too hard to find transistor counts for the new samples; the only one I added was Milan, and I'm not even sure whether that figure includes the IOD.
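Roughly what that filtering and re-plotting step looks like, as a minimal sketch (the file and column names are placeholders, not the real dataset):

```python
# Minimal sketch of the filtering described above, assuming a CSV with
# hypothetical columns "name", "year", "cores", and "category".
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("cpu_trends.csv")  # placeholder file name

# Keep only server CPUs: drop the two HPC GPU rows (they can't run SPECint)
# and the Apple M1 / M1 Max laptop SoCs.
df = df[df["category"] == "server_cpu"]

# Core counts over time on a log scale, matching the original chart's style.
plt.scatter(df["year"], df["cores"], s=12, color="black")
plt.yscale("log")
plt.xlabel("Year")
plt.ylabel("Cores")
plt.tight_layout()
plt.show()
```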
So you are bitter about CPUs with hundreds of cores using more than 100 W, or some imaginary number that you would like to be the max?!
I wouldn't say I'm bitter. I'd characterize it more as concern about efficiency, which I'd subdivide into two separate concerns. First, I'm concerned about systems being run too far above their efficiency sweet spot. For datacenter systems, I wouldn't say that's a major concern, since I know they're minding TCO and that should probably keep them in the more efficient range. My bigger concern is just that efficiency-scaling seems to be breaking down, overall.
Here is the power data plotted on its own. You can clearly see that it's still on the increase:
The two peaks are Genoa-X (400 W) and the Nvidia Grace 144-core superchip, whose power I had to estimate because the 500 W total I found includes all of the LPDDR5X memory.
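In rough terms the estimate is just a subtraction; a sketch below, where the memory power budget is a pure placeholder and not a number from any source:

```python
# Back-of-the-envelope version of the Grace estimate. The 500 W total is the
# published figure for the 144-core superchip including memory; the LPDDR5X
# budget here is a placeholder assumption, NOT a real spec.
superchip_total_w = 500   # total incl. on-package LPDDR5X
assumed_lpddr5x_w = 80    # placeholder assumption for the memory's share

cpu_only_estimate_w = superchip_total_w - assumed_lpddr5x_w
print(f"Estimated CPU-only power: ~{cpu_only_estimate_w} W")
```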