> A CPU turns all of its power into heat, no? What are you talking about?

If that were true we would call it a light bulb, not a CPU.
> If that were true we would call it a light bulb, not a CPU.

What else do they turn it into, my man?
> That's not it, read the review, he said he set PPT numbers in the bios.
> Starting with the peak power figures, it's worth noting that AMD's figures can be wide of the mark even when restricting the Package Power Tracking (PPT) in the firmware. For example, restricting the socket and 7950X to 125 W yielded a measured power consumption that was still a whopping 33% higher.

He said that he did restrict PPT, but the fact that max power still went 30% above TDP, up to the normal PPT levels, shows that this setting did nothing.
> A 400 watt CPU that wastes 200 watts in heat is only 50% efficient.

And these numbers are based on what?!
> This whole debate is caused by a fundamental misunderstanding of how to measure efficiency. I hope we all know voltage and clock speeds don't scale linearly, therefore the CPU that runs at lower power will have a fundamental advantage.
> ...
> And that's why you test efficiency at ISO wattage.

ComputerBase did that:
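As an aside, the ISO-wattage argument is easy to make concrete with a toy calculation. In the sketch below, every score and wattage is invented purely for illustration (not a measurement from any review): efficiency is just score divided by power, and which chip "wins" can flip depending on the power limit you compare at.

```python
# Hypothetical benchmark scores (points) at several package power limits (W).
# All numbers are invented for illustration; real curves come from measurement.
cpu_a = {65: 20000, 125: 28000, 250: 33000}  # scales poorly past its sweet spot
cpu_b = {65: 17000, 125: 27500, 250: 36000}  # keeps scaling at high power

def points_per_watt(scores, limit):
    """Efficiency at a given power limit: work done per watt."""
    return scores[limit] / limit

# ISO-wattage comparison: hold the power limit equal for both chips.
for limit in (65, 125, 250):
    a = points_per_watt(cpu_a, limit)
    b = points_per_watt(cpu_b, limit)
    print(f"{limit:>3} W: A = {a:.0f} pts/W, B = {b:.0f} pts/W")
```

Comparing at stock limits instead (say A at 65 W against B at 250 W) conflates each chip's efficiency curve with its chosen operating point; holding wattage equal removes that variable.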
> ISO wattage testing is all that matters, and when you do that Intel CPUs in general are much more efficient than their AMD counterparts.

That's only true when you push Zen 4 outside of its efficiency window, which is unfortunately something that happens with the X-series CPUs.

> The only exception is the 7950X / 7950X3D, which is 5-7% more efficient than the 14900K.

The main reason for that is they had to dial back the boost clocks in order to accommodate the 3D V-Cache die. This had the effect of keeping Zen 4 closer to its efficiency window. Sure, the extra cache helps, but I think you can get similar efficiency by taking a non-X3D equivalent and just limiting its clock speeds in the same way.
> ComputerBase did that:
> However, I took issue with how they presented their data. I think this gives a much clearer sense of what's going on.
> Zen 4 doesn't scale up or down very well, but it's very efficient in its sweet spot.
> That's only true when you push Zen 4 outside of its efficiency window, which is unfortunately something that happens with the X-series CPUs.
> The main reason for that is they had to dial back the boost clocks, in order to accommodate the 3D V-Cache die.

That's a nice graph, haven't seen it before. Sadly it doesn't have 14th gen, but it's okay.
> That's a nice graph, haven't seen it before. Sadly it doesn't have 14th gen but it's okay.

Thanks! It's nice to hear some positive feedback, since I probably spent a couple hours on it.
> Thanks! It's nice to hear some positive feedback, since I probably spent a couple hours on it.
> : )
> I can only work with the data available to me. If you find similar data on Gen 14, please let me know!
> If I were them, I'd be automating these plots. Shouldn't be hard for them, given what they already did with their interactive bar charts. That way, we could view each of their metrics in this way. The single metric I plotted is just the overall figure (average?).

It's not the best data because, as far as I'm aware, their graphs don't measure power draw but power limits. These should obviously match for heavy workloads, but I think they are using some relatively light workloads in their test suite (Agisoft PhotoScan?), and in those I assume power draw will be a lot lower than the power limit. Still, it's the best data we currently have.
> It's not the best data because as far as I'm aware their graphs don't measure power draw but power limits.

Yes, but for multithreaded workloads, my experience tells me those CPUs will have no trouble saturating most of the limits. It's only when the lower core-count CPUs get into the higher-limit territory that they might under-utilize it.

> I think they are using some relatively light workloads in their test suite (Agisoft PhotoScan?)

That's not a light workload!! It's a photogrammetric 2D -> 3D model generator!
> Yes, but for multithreaded workloads, my experience tells me those CPUs will have no trouble saturating most of the limits. It's only when the lower core-count CPUs get into the higher-limit territory that they might under-utilize it.
> That's not a light workload!! It's a photogrammetric 2D -> 3D model generator!

Ah, cool, lots of apps I don't know in their testing. I'll check that one out 😁
> Ah, cool, lot of apps I don't know in their testing. I'll check that one out 😁

Anandtech used to have it in their test suite. I don't know why they dropped it - maybe license fees were an issue, or they simply wanted to streamline the test suite?
> Anandtech used to have it in their test suite. I don't know why they dropped it - maybe license fees were an issue or they simply wanted to streamline the test suite?
> Their last review it featured in was the i9-12900K one, but the Rocket Lake review had more details about it:
> They do mention part of it is single-threaded, which I didn't realize. I wonder if that's still true of the version ComputerBase used.

It's sad that they stopped; they're very educational for people who don't understand how efficiency works. This thread is a prime example: everyone ditching the 14900KS when in fact it's going to be an efficiency monster.
> ... everyone ditching the 14900KS when in fact it's going to be an efficiency monster.
> Just for comparison's sake, the normal 14900K required 80 watts less than the 13900K at the same clock speeds. Locked both of them to 5.5 GHz: the 13900K was struggling and hitting the thermal limit at 330 watts, while the 14900K was just chilling at 260-270 watts.

I'd love to see some good sources on this, if you've got any to share.
> I'd love to see some good sources on this, if you've got any to share.

There are some people that have videos comparing a 13900K and the 13900KS. I'm too bored to do the comparison right now, since I have the 12900K installed in my system; the other CPUs are sitting on the shelf 😁
> And these numbers are based on what?!
> Ok, you take the 400W number from the article, but where does the 200W number come from? Or is that just an example?

The numbers were just an example, to make it easy to see that these charts are not measuring power efficiency but rather work done per watt.
> The numbers were just for example to make it easy to see these charts are not measuring power efficiency but rather work done per watt.

That's what efficiency is.
> If a CPU was 100% "power efficient" then it would generate no heat!
> I don't know what you are missing or how else to spell this out.

You are majorly confused. Efficiency is the measure of work done per the energy spent.
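To illustrate "work done per energy spent" with a toy calculation (the wattages and times below are invented, not taken from any review):

```python
# Same fixed job (1000 frames rendered) completed at two power limits.
# Numbers are hypothetical, purely to illustrate the definition.
FRAMES = 1000

def frames_per_joule(frames, watts, seconds):
    """Efficiency = work done / energy spent; energy (J) = power (W) x time (s)."""
    return frames / (watts * seconds)

fast = frames_per_joule(FRAMES, 250.0, 400.0)  # high limit, finishes sooner
slow = frames_per_joule(FRAMES, 125.0, 650.0)  # low limit, takes longer
print(f"250 W run: {fast:.5f} frames/J")
print(f"125 W run: {slow:.5f} frames/J")
```

The 125 W run takes longer but spends 81,250 J against the fast run's 100,000 J, so it does the same work on less energy. That is the whole content of "efficiency" in this sense; it needs no split of the input power into a "useful" share and a "wasted" share.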
> If a CPU was 100% "power efficient" then it would generate no heat!
> I don't know what you are missing or how else to spell this out.

You are looking at it like an engineer looking at a car engine.
> In terms of physics, all CPUs are the same. They are big resistors with 100% efficiency.

I think you mean they're 100% efficient at turning electrical energy into heat energy? If yes, then we're agreed.
An interesting aside: I once heard someone claim that CPUs' ability to destroy information is what makes them so inefficient. So, they designed (I'm not sure if this was purely hypothetical or if it was ever experimentally demonstrated) a CPU that would instead essentially act as a sorting machine, with the benefit being that it was supposedly much more efficient. I'm not sure how well that could work, in practice, since it seems like it would need an infinite amount of memory to use at scale.
> You are looking at it like an engineer looking at a car engine.
> In a CPU the effective power and the waste power are not distinguishable; you cannot measure them in different ways.
> In an engine the output is the work performed plus the excess heat, and the two are very different and easy to measure separately; in a CPU they are in the same units, and you can only measure them all together.

Yes, I think it's clear I'm looking at efficiency from a different point of view here. I see it more like an electrical motor or toaster than a car engine, but same difference I guess.
> Yes, I think it's clear I'm looking at efficiency from a different point of view here. I see it more like an electrical motor or toaster than a car engine, but same difference I guess.
> The CPU is an electrical switch, or a series of switches; it is not a resistor or a space heater, even though my 13900K would definitely argue otherwise! Its primary function is to switch, not resist. Electrical resistance is part of every circuit and creates electrical losses.
> If 50% of the electrical input power is converted to heat within a CPU due to electrical resistance, then it is at best only 50% power efficient.
> Its primary function is to switch. Any amount of energy spent or wasted on internal resistance will be a net summed loss. So for example: 400 watts consumed, 50% goes to actually switching and the other 50% is wasted on resistance or other electrical losses. That means out of 400 watts consumed, only 200 watts of that power goes into actual switching, and the other 200 watts is wasted in the form of heat.
> Evidence of this shows up in some of the charts posted in this thread. When the CPU does almost the same amount of work on 65 watts as it does on 250 (for example), where do you think all that extra power is going? It's not going into work, but rather wasted heat energy. And that absolutely must be taken into account when talking about power efficiency.
> You guys are talking about work done per watt, one CPU compared to another. I have acknowledged this. What I'm talking about is the power wasted for the work done per watt.
> I hope what I'm looking at makes perfect sense now; I really don't know how to spell this out any better.

This makes sense, but it's not measurable.
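The 65 W vs 250 W scenario can be put into numbers either way (all scores below are invented for illustration): performance per watt collapses at the high limit, whichever mental model you attach to the "missing" watts.

```python
# Hypothetical score vs package power limit; nearly flat past the sweet spot.
points = {65: 30000, 125: 36000, 250: 39000}

for watts, score in sorted(points.items()):
    print(f"{watts:>3} W -> {score} pts ({score / watts:.0f} pts/W)")

# Marginal cost of the last step: roughly 4x the power for ~30% more work.
extra_watts = 250 - 65                   # 185 W more
extra_points = points[250] - points[65]  # 9000 pts more
print(f"marginal: {extra_points / extra_watts:.1f} pts per extra watt")
```

Both camps agree on numbers like these; the disagreement in the thread is only about whether the shortfall can be called a separately measurable "waste" term.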
> When the CPU does almost the same amount of work on 65 watts as it does on 250 (for example),

You cannot physically measure how much energy was used for switching or was waste; you have to look at performance per watt.
> You cannot physically measure how much energy was used for switching or was waste, you have to look at performance per watt.

Well sure ya can. I mean, I cannot, and maybe you cannot, but some manufacturer somewhere sure can measure thermal losses. It's not that hard: all one needs to do is watch for temperature change, either per transistor or per package. Anything above ambient temperature is a thermal loss. It may not be linear, but it is certainly measurable.