TerryLaze
So what's your point?
Except that you know as well as I do that if datacenters were only looking to buy machines to consolidate the same amount of processing power they already have, they'd probably be down to a single rack by now. The problem is that compute demands go up at least as fast as density increases, which is why they build more & bigger datacenters and keep looking for better ways to cool them.
Compute demands go up anyway, and meeting them with fewer total systems is still better than meeting them with more.
I mean that you only show power going up, and that alone doesn't tell you anything: if power increases at the same rate as performance, then the efficiency scaling isn't changing.
I don't know exactly what you mean by that.
i7-7700K 91W TDP divided by 4 cores = 22.75W per core
That basically means you have to make smaller and smaller chips as you move to each new node if you don't want power consumption to go up. We actually saw this back during Intel's "quad core era", which was the last time they achieved consistent TDP reductions from one generation to the next. As soon as they started adding more cores, the efficiency improvements were overwhelmed and TDPs started to go up.
i9-13900K 253W TDP divided by 24 cores = 10.5W per core
You can't take "TDP for the full CPU going up" as a number in a vacuum for efficiency scaling, because power alone tells you nothing about performance. Efficiency is the power needed for a given level of performance, i.e. power divided by performance; it's not power divided by nothing.
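To put numbers on that (made-up numbers, purely for illustration, not measurements of any real chip), a minimal sketch in Python:

```python
# Hypothetical figures for illustration only, not measured data.
# The point: efficiency = performance / power, so power alone isn't efficiency.

old = {"perf": 100.0, "power_w": 91.0}    # older, smaller chip
new = {"perf": 400.0, "power_w": 253.0}   # newer chip: more power AND more performance

def perf_per_watt(chip):
    return chip["perf"] / chip["power_w"]

print(perf_per_watt(old))  # ~1.10 perf/W
print(perf_per_watt(new))  # ~1.58 perf/W -> power went up, yet efficiency improved
```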
TDP per core has dropped to less than half of what it was during the "quad core era".
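For anyone who wants to check the arithmetic, here it is worked out (treating P- and E-cores as equal, which is a simplification):

```python
# Per-core TDP from the figures quoted above.
# Note: the 13900K's 24 cores are 8 P-cores + 16 E-cores, so an even
# split is a simplification; this just reproduces the arithmetic.

tdp_7700k, cores_7700k = 91, 4      # i7-7700K
tdp_13900k, cores_13900k = 253, 24  # i9-13900K

per_core_old = tdp_7700k / cores_7700k    # 22.75 W/core
per_core_new = tdp_13900k / cores_13900k  # ~10.54 W/core

print(per_core_old, per_core_new, per_core_new / per_core_old)  # ratio ~0.46 -> less than half
```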
If you don't want to use the full 253W, then get a smaller CPU or run the 13900K at 90W (see der8auer's video); both of those options still exist. Nobody, including physics, is forcing you to do anything here.
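If you want to try the 90W route without touching the BIOS, here is a hedged sketch assuming a Linux box with the intel_rapl powercap driver; the exact sysfs path varies by machine, it needs root, and firmware limits may still override it:

```python
# Sketch: cap the CPU package's long-term power limit (PL1) to 90 W via the
# Linux powercap/intel_rapl sysfs interface. Assumptions: intel-rapl:0 is the
# package domain and constraint_0 is the long-term limit on this machine;
# run as root. BIOS/firmware settings may take precedence.

RAPL_PL1 = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"

def set_pl1_watts(watts: float) -> None:
    with open(RAPL_PL1, "w") as f:
        f.write(str(int(watts * 1_000_000)))  # the file takes microwatts

if __name__ == "__main__":
    set_pl1_watts(90)
```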