News Intel Core i9-14900KS alleged benchmarks leaked — up to 6.20 GHz and 410W power draw

Well sure ya can. I mean, I can't, and maybe you can't, but some manufacturer somewhere sure can measure thermal losses. It's not that hard; all one needs to do is watch for the temperature change, either per transistor or per package. Anything above ambient temperature is a thermal loss.
How do you figure?!
Switching uses energy to switch, and then the ones and zeroes are electrons flowing or not flowing (actually both are flowing, just one more and the other less). When they flow they spend energy, so even a 100% efficient CPU, where 100% of the energy went into switching and producing zeroes and ones, would still heat up by the same amount as a slab of conductive material that does nothing while you push the same amount of energy through it.
 
How do you figure?!
Switching uses energy to switch, and then the ones and zeroes are electrons flowing or not flowing (actually both are flowing, just one more and the other less). When they flow they spend energy, so even a 100% efficient CPU, where 100% of the energy went into switching and producing zeroes and ones, would still heat up by the same amount as a slab of conductive material that does nothing while you push the same amount of energy through it.
No, not even... different materials could easily be worlds apart from one another. But I think I'm beginning to see our division here. You're still looking at how much work 1 watt equals, and I'm looking at how much of that 1 watt was lost to resistance/heat. But silicon and tin, for example, are nothing alike; they have wildly different conductive properties. So the material does matter.

I'm not an engineer, just a lowly technician. So I'm just thinking out loud here below.

To find a 1, you have to find a zero first. In theory, I would slow a CPU down while reducing its voltage/current along the way. How slow can it go, and at what voltage, while remaining rock-solid stable? Once identified, that would be my ground zero. Then I would slowly begin to speed it up until failure, noting voltage/current and temperature along the way.

In this way, a layman could plot an efficiency graph of both work done per watt and thermal waste per watt, and we do have tools to monitor this, one of them being Hardware Info.
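For what it's worth, here's a minimal sketch of how that sweep could be tabulated once logged. All of the numbers below are made-up placeholders, not measurements; you'd substitute the package power and temperature your monitoring tool reports at each frequency step, plus a fixed benchmark's score at that step.

```python
# Hypothetical sweep data: (frequency_ghz, package_power_w, package_temp_c, benchmark_score).
# None of these values are real measurements; they stand in for logged readings.
samples = [
    (1.4,  18.0, 38.0, 140),
    (2.4,  35.0, 47.0, 240),
    (3.4,  62.0, 58.0, 330),
    (4.4, 120.0, 74.0, 410),
    (5.4, 240.0, 95.0, 460),
]

ambient_c = 22.0  # assumed room temperature; rise above this is the "thermal waste" proxy

for ghz, watts, temp_c, score in samples:
    perf_per_watt = score / watts   # work done per watt
    delta_t = temp_c - ambient_c    # temperature rise over ambient
    print(f"{ghz:.1f} GHz: {perf_per_watt:5.2f} score/W, +{delta_t:.0f} C over ambient at {watts:.0f} W")
```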

All of the benchmarks I've seen are just that, benchmarks. They are only comparing one CPU to another. They are not looking at thermal losses at all; they only care about work done per watt in comparison to other CPUs. There may be efficiency improvements in the chip design, but how that equates to improved thermal losses is what I really want to know.
 
All of the benchmarks I've seen are just that, benchmarks. They are only comparing one CPU to another. They are not looking at thermal losses at all; they only care about work done per watt in comparison to other CPUs. There may be efficiency improvements in the chip design, but how that equates to improved thermal losses is what I really want to know.
This is where I am not sure you listened in your high school science class. Almost all of them cover the basic principles of thermodynamics.

100% of the watts you put into a CPU chip are converted to heat, so they are directly related. The only thing that comes out of a CPU, if you want to call it that, is information. The benchmarks are trying to put a number to that. It is just stated in watts rather than BTU, but there is a direct conversion formula.
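For reference, that conversion is just a fixed multiplier: one watt is about 3.412 BTU per hour. The 250 W figure below is only an illustrative package power, not a measurement.

```latex
1\ \mathrm{W} \approx 3.412\ \mathrm{BTU/h}
\qquad\Rightarrow\qquad
250\ \mathrm{W} \approx 250 \times 3.412 \approx 853\ \mathrm{BTU/h}
```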
 
I'm looking at how much of that 1 watt was lost to resistance/heat.
That would be a characteristic of silicon and whatever other materials they use to make gates from, which you can look up in textbooks, and as long as everybody uses silicon/materials of the same purity, everyone would have the same efficiency numbers.
Because you need a semiconductor to have zeroes and ones, you need to have a material that isn't very efficient; otherwise everything would be a one and you wouldn't have a computer.
To find a 1, you have to find a zero first. In theory, I would slow a CPU down while reducing its voltage/current along the way. How slow can it go, and at what voltage, while remaining rock-solid stable? Once identified, that would be my ground zero. Then I would slowly begin to speed it up until failure, noting voltage/current and temperature along the way.

In this way, a layman could plot an efficiency graph of both work done per watt and thermal waste per watt, and we do have tools to monitor this, one of them being Hardware Info.

What would be the thermal waste in this example? Any temp above "ground zero"? Because ground zero already has thermal waste, and saying that a CPU is so-and-so much hotter than when running at the absolute lowest is saying nothing; at least it doesn't tell you how much of that heat is actual work or not.
It only tells you that a CPU gets hotter the faster it runs.
 
Yes, I think it's clear I'm looking at efficiency from a different point of view here. I see it more like an electric motor or a toaster than a car engine, but same difference I guess.
The difference is that motors and engines convert one form of energy (i.e. electrical or chemical) to kinetic energy. You can make direct, physical measurements of the resulting kinetic energy and heat energy, allowing you to say how much of the input energy resulted in kinetic energy vs. heat. There will be a 1:1 relation between input energy and output energy, so it's just a question of what form.

With a computer, you're emitting heat and radio waves. We don't really care about the radio emissions, but I guess those should comprise some proportion of its output. Maybe there are also chemical transformations, like perhaps electromigration, that store some of that energy in chemical bonds, but that should be a negligible proportion of the total.

The CPU is an electrical switch or series of switches,
A CPU is essentially a device for reducing entropy. The second law of thermodynamics holds that you can't reduce entropy in one system without increasing it in another. The heat output of the CPU can be seen as the latter increase in entropy.

Any amount of energy spent or wasted on internal resistance will be a net summed loss.
It's not "spent", but rather transformed into heat or radiation. Whatever electrical energy isn't lost... isn't lost.

When the CPU does almost the same amount of work on 65 watts as it does on 250 (for example), then where do you think all that extra power is going?
The extra power stays in the source. If you're only consuming 65 Watts vs. 250 Watts, that corresponds to a net reduction in the amount of electron movement in the wires. You talk as if it's going somewhere, but what's really happening is that more of the electrical charge just stays put.

BTW, remember that a Watt is Volts * Amperes; an Ampere is 1 Coulomb of charge per second. So, if you reduce power while holding voltage constant, then you must be reducing current, hence less movement of electrical charge.
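To put illustrative numbers on that (assuming a 1.2 V core voltage purely for the arithmetic; the real voltage varies with load and settings):

```latex
P = V I \quad\Rightarrow\quad I = \frac{P}{V},
\qquad
\frac{65\ \mathrm{W}}{1.2\ \mathrm{V}} \approx 54\ \mathrm{A},
\qquad
\frac{250\ \mathrm{W}}{1.2\ \mathrm{V}} \approx 208\ \mathrm{A}
```

So the 250 W case moves roughly four times as much charge per second as the 65 W case at the same voltage.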
 
You cannot physically measure how much energy was used for switching and how much was waste; you have to look at performance per watt.
image-151.png
The independent variable in that data is frequency (not shown in that plot). They're measuring both the package power and performance at different frequency limits.
 
Well it would be stupid to increase power on a CPU while forcing it to stay at the same clocks.
I'm just explaining the mechanism they're using to cause the power and performance increase, so that people are aware.

For instance, you could increase CPU power dissipation by over-volting. However, unless you use it to enable overclocking, it won't translate into a performance difference. Thus, the relation they appear to show between power & performance is an indirect one. That was my only point.
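The textbook first-order model for CMOS switching power makes the same point (this is a standard approximation, not something taken from the benchmark data):

```latex
P_{\mathrm{dyn}} \approx \alpha\, C\, V^{2} f
```

where \alpha is the activity factor, C the switched capacitance, V the core voltage, and f the clock frequency. Raising V at a fixed f increases power roughly quadratically with no performance gain; performance only moves if the higher voltage is used to support a higher f.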
 
That would be a characteristic of silicon and whatever other materials they use to make gates from, which you can look up in textbooks, and as long as everybody uses silicon/materials of the same purity, everyone would have the same efficiency numbers.
Because you need a semiconductor to have zeroes and ones, you need to have a material that isn't very efficient; otherwise everything would be a one and you wouldn't have a computer.

Well yes, the materials used matter greatly in a CPU. But it's still a series of switches, made up of billions of "transistors" in this case, not resistors.

What would be the thermal waste in this example? Any temp above "ground zero"? Because ground zero already has thermal waste, and saying that a CPU is so-and-so much hotter than when running at the absolute lowest is saying nothing; at least it doesn't tell you how much of that heat is actual work or not.
It only tells you that a CPU gets hotter the faster it runs.

Yes, exactly. Anything above zero is where a 'tech reviewer' (for example) could begin to plot a graph. And also yes, the moment a single watt is applied to the first transistor, losses would begin to occur. But again, the testing method I described was just a thought; it was seeing a problem and attempting to find a solution. The point in trying to plot this graph is to see the divergence between performance and thermal losses.

For example: at what point do you have a 1:1 correlation between performance and heat, and then at what point does the scale begin to tip, where the CPU just gets hotter without any real net gain? And every point in between.

I want to see this in a graph, so I thought of a way to do it. Again, it was just an out loud thought, it doesn't have to be correct nor the only way of doing the job. It was and still is, just an out loud thought.
 
Well it would be stupid to increase power on a CPU while forcing it to stay at the same clocks.
Just to expound further on this topic, the shape of these curves is determined almost entirely by GHz/W.

image-151.png


Here's the inverse relation, which I think is a little more interesting.

b9l9lDl.png


Of course, there's a remaining variable, which is how well performance scales with clock speed. It should be fairly linear, but it's a given there will be some dropoff. So, the question is just how much.

SUdXdDM.png


It's a little flatter than I expected, especially towards the top end.

So, to sum it up, perf/W essentially works out to GHz/W * perf/GHz. Since perf/GHz is pretty flat, the shape of the curves in the first plot (theirs) is essentially GHz/W.
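Here's a tiny sketch showing that the decomposition is an exact identity, using made-up (frequency, power, score) points rather than anything from the charts above:

```python
# Hypothetical (frequency_ghz, package_power_w, benchmark_score) points.
points = [(1.4, 18.0, 140), (3.4, 62.0, 330), (5.4, 240.0, 460)]

for ghz, watts, score in points:
    perf_per_watt = score / watts
    perf_per_ghz = score / ghz
    ghz_per_watt = ghz / watts
    # perf/W factors exactly into (perf/GHz) * (GHz/W), up to float rounding.
    assert abs(perf_per_watt - perf_per_ghz * ghz_per_watt) < 1e-9
    print(f"{ghz:.1f} GHz: {perf_per_watt:.2f} perf/W = "
          f"{perf_per_ghz:.1f} perf/GHz * {ghz_per_watt:.4f} GHz/W")
```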
 
you broke the code.
I didn't think I was saying anything terribly non-obvious, but the point was to show exactly what the respective graphs actually looked like. My explanation was just supporting the graphs.

In fact, what's even more interesting is to divide out the clock speed from the Y-axis. This shows Golden Cove uses about 3.6 times as much power per clock cycle at 5.0 GHz as at 1.4 GHz!

xvrtwg2.png


That's specific to x264, I should add. With the 7zip workload, the ratio is only 2.8.
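For anyone wanting to reproduce that: "power per clock cycle" is just package power divided by frequency, which has units of energy per cycle, and the 3.6x figure is the ratio of that quantity at the two frequency points (the actual power values come from the measured data, not from anything shown here):

```latex
E_{\mathrm{cycle}} = \frac{P}{f},
\qquad
\frac{P_{5.0\,\mathrm{GHz}} / 5.0\ \mathrm{GHz}}{P_{1.4\,\mathrm{GHz}} / 1.4\ \mathrm{GHz}} \approx 3.6
```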
 