News Intel issues statement about CPU crashes, blames motherboard makers — BIOSes disable thermal and power protection, causing issues


TheHerald

No, because it was technically possible for AMD to have launched the 7700 when they launched the 7700X - they simply chose not to.
It was also technically possible to launch the 7700X at $300, but they didn't.

That's what the data in my plot shows. If you have other data, feel free to share it. I'm sure there are some workloads where the i5-13600K is faster, but not the multithreaded ones that ComputerBase.de chose to include in their test set.

Doesn't really matter, since the 13600K is indeed more efficient at most wattages even in your graph. But as I've said, I've seen a stock 13600K limited to 77 W topping the entire Corona chart against overclocked and LN2'd 7700X samples.
Efficiency is work per unit of energy (with energy approximated here via average power). You can define other metrics, but efficiency allows a slower CPU to win if its power usage decreases by even more than its speed does.
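A minimal sketch of that definition, with made-up numbers (nothing below is measured data):

```python
# Efficiency as work per unit of energy, the pts/W metric used in these charts.
# Illustrative, hypothetical numbers: a slower CPU wins if its power drops
# proportionally more than its score does.
def efficiency(score_pts: float, avg_power_w: float) -> float:
    return score_pts / avg_power_w

cpu_fast = efficiency(score_pts=2000, avg_power_w=140)  # ~14.3 pts/W
cpu_slow = efficiency(score_pts=1600, avg_power_w=90)   # ~17.8 pts/W
print(cpu_slow > cpu_fast)  # True: the slower CPU is the more efficient one
```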


You must not have looked at them very carefully, because the only difference is that they're sorted in reverse order. The one I quoted from the Ryzen 7700 review put the most efficient at the bottom, while the one you quoted put the most efficient at the top. Also, maybe they changed the Cinebench version, because the 7700 went from 105.2 Pts/W to 112.1 Pts/W in yours!

Huh? Your graph didn't include a lot of CPUs; the one I posted is more comprehensive. It doesn't matter what the 7700 does: the 13400 is at the top, easily I might add, while offering the same ST performance as the 7800X3D.
It's not insane - it's exactly like I said. The 7950X has higher clockspeed limits than the 7700, which allows a single threaded job to push well past the point of diminishing returns, in terms of perf/W. The 7950X also has the disadvantage of the extra compute die, which is just dead weight in any single-threaded efficiency comparison.

I'm sure you can close much of the gap between the 7950X's efficiency and that of the 7700, if you just limited the former to a single-core limit of 5.3 GHz, instead of the 5.7 GHz turbo limit it has by default. This is what one should do, if they wanted the multithreaded performance of the 7950X, but didn't want to take such a hit on single-threaded efficiency.

I said this before: Zen 4 is efficient in its sweet spot. The 7950X's boost limit is well outside that sweet spot.

But none of that matters, because the 13400 is leading the charts regardless. No matter what tinkering you do to the 7950X, it will still get booty blasted by any Intel CPU at ST efficiency; that much is obvious from the graph I posted. You can limit it to 5.3 GHz and it will still lose in both performance and efficiency to an Intel CPU limited to 5.3 GHz.
Your conceptual model is completely broken, here. Any overclocker would tell you that the power increases needed to push higher frequencies grow at a nonlinear rate. You can't use an efficiency ratio to predict how much power one or the other CPU would need at a given performance level. It might be that no amount of power could extract equivalent performance from the 7950X, at a given task.
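A rough first-order model of that nonlinearity (the constants below are assumptions picked for illustration, not measurements of any real part): dynamic power scales roughly with frequency times voltage squared, and voltage has to rise as frequency climbs.

```python
# Assumed first-order model: P_dyn ~ C * f * V^2, with V rising along an
# assumed V/f curve. All constants are invented for illustration only.
def dynamic_power(f_ghz: float, base_f: float = 4.0, base_v: float = 1.1,
                  v_per_ghz: float = 0.08, c: float = 20.0) -> float:
    v = base_v + v_per_ghz * (f_ghz - base_f)  # voltage needed at this clock
    return c * f_ghz * v * v

for f in (4.0, 4.6, 5.2, 5.7):
    print(f"{f:.1f} GHz -> {dynamic_power(f):5.0f} W (model)")
# Output climbs from ~97 W to ~174 W: the last ~10% of clock costs far more
# than 10% extra power, which is why efficiency ratios don't extrapolate.
```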

I'm an overclocker, and that's exactly my point: if you either boost the 7950X to match 14900K performance, or limit the 14900K down to the 7950X's performance, the already huge efficiency advantage of Intel will grow even bigger. The 13700K is already twice as efficient as the 7900X; what would happen if you limited it even further to match performance?


The rest of your post is the usual AMD defense force stuff, for no freaking reason. I don't care about servers; I don't use servers. If AMD CPUs are so horrible at ST efficiency because they are made for servers, good for them; it still doesn't change the results, which is that they are pretty horrible at it. The whole argument started because you said Intel doesn't have a ST perf/watt advantage; now you're telling me they do have the advantage, but it's because AMD is making CPUs for servers. LOL, who cares...

But since you mentioned it, Xeon SPs are focusing on AI and ML workloads due to AMX, so let's see what happens in those workloads against AMD's leading server parts.

[Image: image-2024-05-03-143148669.png]

[Image: image-2024-05-03-143236247.png]

Ouch
 
The whole argument started because you said Intel doesn't have a ST perf/watt advantage; now you're telling me they do have the advantage, but it's because AMD is making CPUs for servers. LOL, who cares...

Uh... If I tell you AMD has an infinite advantage in AVX512 workloads in both ST and MT, would you care?

Performance is application-dependent, as it'll come down to how applications stream instructions and interact with memory.

In games, especially thread-bound ones, AMD's V-Cache'd CPUs have an insane advantage over Intel, and that has nothing to do with the microarchitecture driving either.

Talking about the efficiency side of a CPU nowadays is super nuanced and kind of moot, so I completely agree with your closing statement at least (hence the quoted part) of "LOL, who cares".

The industry knows AMD is more efficient for the vast array of metrics it cares about, and Intel is suffering for it (hence the "accelerators" shenanigans they are pushing). What you, someone else, or I interpret or nitpick on is absolutely moot.

Regards
 

bit_user

Doesn't really matter, since the 13600K is indeed more efficient at most wattages even in your graph,
We started this whole tangent with you musing about the i5-13600K being limited to lower wattage. The point of my chart is to show that up to the 7700's PPT, the 7700's efficiency is overall the same or better. Going above that starts to push Zen 4 out of its efficiency window, while the Gracemont cores of the i5-13600K are probably still well within theirs.

but as I've said, I've seen a stock 13600K limited to 77 W topping the entire Corona chart against overclocked and LN2'd 7700X samples.
There are definitely workloads which favor one CPU vs. the other. The ComputerBase.de scores I used are a composite across many workloads. It didn't include Corona.

It doesn't matter what the 7700 does: the 13400 is at the top, easily I might add, while offering the same ST performance as the 7800X3D.
The i3-13400 achieves better efficiency by having lower clockspeed limits, lower power limits, and a smaller die. This is another example of the point I made about why the 7700 is more efficient than the 7700X and why they're both a lot more efficient than the 7950X.

However, it's rather silly to compare the i3-13400 against larger CPUs, since it's only a little 4-core model and much lower cost. It makes sense to compare it to other CPUs in that market segment, which would be something like the 8600G, which not only beats the i5-14600K in overall Geomean, but also does it while using just 2/3rds of the power.

Of course, I'm sure AVX-512 is helping out the 8600G. Absent that, I really wouldn't expect it to keep hanging with the i5-14600K.

But none of that matters because the 13400 is leading the charts regardless.
Those charts don't even show all of the models TechPowerUp has tested. Notably, that chart excludes AMD APUs. The proper point of comparison for it is something like AMD's 8500G, which TechPowerUp unfortunately hasn't yet tested.

No matter what tinkering you do to the 7950x it will still get booty blasted by any Intel CPU at ST efficiency
If you really cared just about single-threaded efficiency, then why are you buying a 16-core CPU in the first place?

Server CPUs are also bad at single-threaded efficiency! Why don't you pick on them? That would make almost as much sense as targeting the 7950X!

It's like trying to compare fuel efficiency between a compact car, a sports car, and a truck. You're all over the map, here. These comparisons are only meaningful within a market segment!

The 13700K is already twice as efficient as the 7900X,
Single-threaded, only. Again, why buy either of these CPUs, if you're principally concerned with single-threaded efficiency?

However, when it comes to multi-threaded workloads, the 7900X is 30.9% more efficient than the i7-13700K, and that's what typically matters to users of these respective CPUs.

[Image: efficiency-multithread.png]


The rest of your post is the usual AMD defense force stuff, for no freaking reason.
The reason I discussed details of the microarchitecture is because you went there first. You tried to claim that Zen 4's lack of single-threaded performance meant their cores were simply behind and needed to "catch up". You fundamentally misunderstood what AMD was trying to do with Zen 4, which was not to build a single-threaded screamer, but to create a single microarchitecture that would be adept at both mobile and server applications. Desktop performance was largely an afterthought.

if AMD CPUs are so horrible at ST efficiency because they are made for servers, good for them,
You're overgeneralizing. I already showed you the 7700 has decent single-threaded efficiency. Zen 4 can have good efficiency, as long as you don't try to turn up the clocks too high.

You're trying to take the most extreme examples, which are single-threaded efficiency on the 7900X and 7950X, and treat them as if they characterize the entire microarchitecture across the entire product range. They don't. They're outliers.

Yet, for Intel, you cherry-pick the i3-13400 as your representative sample. It's also an outlier, but at the other end of the spectrum.

I really don't know why you'd even expect Intel to be particularly good on efficiency, against Zen 4. Intel has admitted their Intel 7 node is behind TSMC's N5 node. Zen 4 targets efficiency where it counts, which is well below the single-thread clockspeed limits of gaming CPUs.

BTW, if someone really cared about efficiency, what they should actually do is look at the Alder Lake N100. That's 4 Gracemont cores with a PL1 of 6 W. I have an N97 (Intel hater that you believe me to be), and can confirm the performance is viable as a low-end desktop.

since you mentioned it, Xeon SPs are focusing on AI and ML workloads due to AMX, so let's see what happens in those workloads against AMD's leading server parts.
GPUs are way better. The only reason anyone even cares about AMX is due to how difficult and expensive GPUs are to source, currently. Over time, I wouldn't be surprised if even Intel turns their back on AMX, just like they're now doing with 512-bit vector instructions.
 
However, it's rather silly to compare the i3-13400 against larger CPUs, since it's only a little 4-core model and much lower cost. It makes sense to compare it to other CPUs in that market segment, which would be something like the 8600G, which not only beats the i5-14600K in overall Geomean, but also does it while using just 2/3rds of the power.
The i5-13400 is 6P+4E, but the frequencies are significantly lower than the 13600/K's.
 

bit_user

The i5-13400 is 6P+4E, but the frequencies are significantly lower than the 13600/K's.
Ah, yes it's an i5 - that's why. Harold dropped the prefix, so I got mixed up and took it for an i3.

Interestingly enough, ark.intel.com suggests there are variants of it made from both Alder Lake C0 stepping dies and Raptor Lake B0 stepping dies! You'll find this in the "Ordering & Compliance" info. So, I wonder which TechPowerUp actually tested. Chaining Raptor Cove down to 4.6 GHz does sound like a very good way to balance single-threaded performance and efficiency!

Also, that puts the current street price at about $229, which is exactly what the Ryzen 5 8600G is selling for. That's a 4 + 4 setup, with Radeon 760M and a Ryzen AI NPU, made on TSMC N4.
 
Ah, yes it's an i5 - that's why. Harold dropped the prefix, so I got mixed up and took it for an i3.

Interestingly enough, ark.intel.com suggests there are variants of it made from both Alder Lake C0 stepping dies and Raptor Lake B0 stepping dies! You'll find this in the "Ordering & Compliance" info. So, I wonder which TechPowerUp actually tested. Chaining Raptor Cove down to 4.6 GHz does sound like a very good way to balance single-threaded performance and efficiency!
I'm tempted to see if I can borrow my friend's 13600K box to see which is making the bigger efficiency impact, the high P or E core clocks, because the 13400 seems to be almost in the sweet spot. While I would never put up with those clocks in my own machine, the analytic part of my brain wants to mess around with P/E clock scaling. I'm sure in all-core heavy loads it's the P cores, but I just wonder about all those in-between cases.

Though another thing that has stuck in my head is the efficiency improvements that also happen with APO, which makes me wonder how much of the efficiency issues current Intel CPUs have in those lighter multithreaded loads is just scheduling.
 

TheHerald

The i3-13400 achieves better efficiency by having lower clockspeed limits, lower power limits, and a smaller die. This is another example of the point I made about why the 7700 is more efficient than the 7700X and why they're both a lot more efficient than the 7950X.
That's how every CPU achieves better efficiency: by having lower clockspeeds, lol. Our disagreement was whether or not Intel has a lead in ST efficiency. According to all the graphs posted, they do, and a huge one.
However, it's rather silly to compare the i3-13400 against larger CPUs, since it's only a little 4-core model and much lower cost. It makes sense to compare it to other CPUs in that market segment, which would be something like the 8600G, which not only beats the i5-14600K in overall Geomean, but also does it while using just 2/3rds of the power.
The 13400 IS the larger CPU. It's 6+4, compared to the 7700's 8 cores. Which again demonstrates exactly my point: even with the disadvantage of being a bigger CPU, it still freaking demolishes everything AMD has in ST efficiency, which you disagreed with.
If you really cared just about single-threaded efficiency, then why are you buying a 16-core CPU in the first place?

Server CPUs are also bad at single-threaded efficiency! Why don't you pick on them? That would make almost as much sense as targeting the 7950X!

It's like trying to compare fuel efficiency between a compact car, a sports car, and a truck. You're all over the map, here. These comparisons are only meaningful within a market segment!
Did I ever, anywhere, say that I care about ST efficiency? I said that Intel is demolishing AMD on that front; you disagreed, and instead of admitting you were wrong, you are now pulling a "why does it matter".

But since you asked, 90% of the daily tasks of a home computer are ST tasks: browsing the web, Excel sheets, etc. I've seen a 7950X hitting spikes of 65 W with an average above 40 W, compared to single digits on a competing Intel product. That tells me all I need to know about how great AMD CPUs are for home usage, lol.

Single-threaded, only. Again, why buy either of these CPUs, if you're principally concerned with single-threaded efficiency?
However, when it comes to multi-threaded workloads, the 7900X is 30.9% more efficient than the i7-13700K, and that's what typically matters to users of these respective CPUs.
No, it's not 30% more efficient unless you are comparing at different power limits, which is just fundamentally flawed. But again, since you asked: if you really care about MT efficiency, why would you get a 7900X? I'm sure a 13700T / 14700T claps it. They won't even be in the same ballpark.
You're overgeneralizing. I already showed you the 7700 has decent single-threaded efficiency. Zen 4 can have good efficiency, as long as you don't try to turn up the clocks too high.

Decent is subjective. Objectively, any AMD part (besides the APUs, I guess) is getting clapped, being multiple generations behind Intel's counterparts. You have every right to consider them decent, but the fact remains that if AMD follows the same cadence, it's not going to catch up to Intel in ST efficiency until 2040. They increased their efficiency by 20% from 2019 to today (3900X to 7900X), while the 13700K is still 100% in front (while also being faster, lol).
You're trying to take the most extreme examples, which are single-threaded efficiency on the 7900X and 7950X, and treat them as if they characterize the entire microarchitecture across the entire product range. They don't. They're outliers.

Yet, for Intel, you cherry-pick the i3-13400 as your representative sample. It's also an outlier, but at the other end of the spectrum.
And now you are literally making stuff up. I compared the 13400 to the 7800X3D, since those two have the same ST performance. I also compared the 7900X to the 13700K, and the latter won by 100%. I also compared the 7950X to the 13900K, and again, Intel wiped the floor. How are these outliers? What are you even talking about?

I really don't know why you'd even expect Intel to be particularly good on efficiency, against Zen 4. Intel has admitted their Intel 7 node is behind TSMC's N5 node. Zen 4 targets efficiency where it counts, which is well below the single-thread clockspeed limits of gaming CPUs.
Intel is not just particularly good, it's outstanding in efficiency. For the home user that does ST tasks, the monolithic approach has a tremendous lead in efficiency, and for MT workloads the fact that they have a lot more cores in most segments makes them faster / more efficient than their AMD counterparts.

As I'm posting this, while I'm watching 2 streams (YouTube and Twitch), the CPU is sitting at 7 watts. I can call my brother and ask him how much his 7950X is pulling for the same task, but I already know it's going to be over 50. I mean, my freaking 3700X needs 45. The only thing that can actually compete in efficiency for the home user is AMD's mobile chips; those are actually insane, tying or even beating Intel (I know, I have 4 laptops with them). But the desktop AMD parts? Lolno; only the 7950X makes sense, if you are doing lots and lots of MT, and that's about it.

After a full day's work, with 3-4 hours of gaming, the average power draw (14 hours of HWiNFO running) is 26 watts. That's how much the most efficient AMD chip needs at freaking IDLE. If you have one, you know it.
 

TheHerald

Ah, yes it's an i5 - that's why. Harold dropped the prefix, so I got mixed up and took it for an i3.
I used the 13400 as a comparison simply because it PERFECTLY matches the 7800X3D's performance. So we can see how much power each CPU needs to deliver the same performance. And who would have guessed: Intel is a couple of light-years ahead.
 

bit_user

That's how every CPU achieves better efficiency: by having lower clockspeeds, lol. Our disagreement was whether or not Intel has a lead in ST efficiency. According to all the graphs posted, they do, and a huge one.
As I said, the data is limited to what TechPowerUp tested. For some reason, they have yet to test any Ryzen 8000G APUs. Traditionally, the APUs are where AMD is competitive on single-threaded efficiency, since they don't have the overhead of a separate I/O die.

What I did show was Phoronix' testing of the 8000G APUs, where they outranked Intel on both performance and power consumption. Unfortunately, Phoronix doesn't classify benchmarks based on single vs. multithreaded.

The 13400 IS the larger CPU. It's 6+4, compared to the 7700's 8 cores. Which again demonstrates exactly my point: even with the disadvantage of being a bigger CPU, it still freaking demolishes everything AMD has in ST efficiency, which you disagreed with.
True, I got confused and thought it was an i3, because you didn't bother to say. Something I'm not sure you understand is that Intel uses only 3 different silicon dies for the entire Gen 12, Gen 13, and Gen 14 desktop CPU lineup. The smallest die is Alder Lake stepping H0, which has 6P + 0E cores. This is the die I thought it used, when I mistook it for an i3. The next larger is Alder Lake stepping C0, which has 8P + 8E cores. The largest is Raptor Lake stepping B0, which has 8P + 16E cores. For every single one of the LGA 1700 products, they take one of these dies and disable portions (which often have defects, at least until the process matures).

So, when you see the i5-13400F beat those AMD CPUs on single-threaded efficiency, what you're actually seeing is a (presumably) Raptor Lake B0 die's P-core, limited to 4.6 GHz. You could take any Raptor Lake CPU that's based on a B0 stepping die and test its ST efficiency, with a 4.6 GHz frequency limit, and it would behave similarly to that i5-13400F, in both performance and efficiency. What it tells us isn't anything absolute about Raptor Cove's efficiency, but rather how efficient it is at 4.6 GHz. If the other CPUs with that core are using higher limits, then this fact doesn't really help you.

The same is actually true for AMD. The R7 7700 uses one of the same compute dies and the same I/O die as the R9 7950X. However, what's notable about that lineup is that it's missing any AMD models with such a low boost limit. Even the R5 7600 has a Boost clock of 5.1 GHz!

You seem to be trying to assert something about the intrinsic efficiency of the microarchitecture, but the data you're trying to use doesn't support such comparisons. For that, you would need to measure the energy usage of each CPU at different frequencies, like this:

[Image: Golden Cove frequency vs. efficiency measurements]

What it actually shows is that Golden Cove's peak efficiency is all the way down at 1.4 GHz. Unfortunately, they never took such measurements of Zen 4, as far as I can find.

Did I ever - anywhere - say that I care about ST efficiency?
To focus on something presumes that it matters to someone. I think it's not an area of much interest to people buying CPUs with lots of cores. What they care about is multi-threaded efficiency, and that's where we begin to catch glimpses of Zen 4's intrinsic efficiency.

I said that Intel are demolishing AMD on that front, you disagreed, instead of admitting you were wrong you are now pulling a "why does it matter".
It's like the analogy I gave of comparing compact cars with trucks. Someone who buys a truck doesn't care that a small car gets better fuel mileage, because they presumably have needs which cannot be met with a small car. So, that makes the comparison irrelevant.

But since you asked, 90% of the daily tasks of a home computer are ST tasks: browsing the web, Excel sheets, etc.
Web browsing hasn't been single-threaded for more than a decade. Editing Excel sheets tends not to be CPU-intensive, so it doesn't even count as a computational workload. To the extent it is, I suspect Excel actually does use multithreading to evaluate large spreadsheets.

More to the point: if someone is just doing such light computational tasks, why even buy a 24-thread or 32-thread CPU? On the flip side, if you do buy such a CPU, then it's probably because you do some heavily-threaded work, and that's where you care about efficiency since now we're talking about some rather large amounts of energy usage.

No, it's not 30% more efficient unless you are comparing at different power limits, which is just fundamentally flawed.
It's right there in the chart:

[Image: efficiency-multithread.png]


7900X gets 157.0 pts/W, while the i7-13700K (stock) gets 119.9. Divide those two numbers and you get 1.309. So, the 7900X (stock) achieves about 1.31 times as many points per W on CineBench multi-threaded. Another way of saying that is: it's about 31% more efficient!
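Checking that arithmetic (same numbers as the chart above):

```python
# Quick check of the pts/W ratio cited above (numbers from the chart).
amd_7900x_ptsw    = 157.0  # Cinebench MT pts/W, stock
intel_13700k_ptsw = 119.9  # Cinebench MT pts/W, stock
print(amd_7900x_ptsw / intel_13700k_ptsw)  # ~1.309 -> ~31% more points per watt
```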

But again, since you asked: if you really care about MT efficiency, why would you get a 7900X? I'm sure a 13700T / 14700T claps it. They won't even be in the same ballpark.
That's exactly where this chart comes into play, as it goes all the way down to a power limit of 45 W. It shows the R9 7900X still beats an i7-13700K, at that power limit.

[Image: cj1qY3F.png]


Decent is subjective. Objectively, any AMD part (besides the APUs, I guess) is getting clapped, being multiple generations behind Intel's counterparts. You have every right to consider them decent, but the fact remains that if AMD follows the same cadence, it's not going to catch up to Intel in ST efficiency until 2040.
The multi-threaded efficiency of these CPUs implies single-threaded efficiency. When you run a multi-threaded workload, the amount of power available to each core is reduced, which forces it to run at a lower frequency (assuming power limits are actually being enforced). If Zen 4 wasn't efficient at lower frequencies, it could not deliver superior multithreaded efficiency.
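As a back-of-the-envelope illustration of that point (every number below is assumed for illustration, not a measurement):

```python
# Split an assumed fixed package power limit across active cores: the per-core
# budget collapses under all-core load, forcing each core down the V/f curve.
package_limit_w = 142   # assumed PPT-style package limit
uncore_w        = 25    # assumed I/O die + fabric overhead
for active_cores in (1, 8, 16):
    per_core_w = (package_limit_w - uncore_w) / active_cores
    print(f"{active_cores:2d} active cores -> {per_core_w:5.1f} W per core")
# 1 core can chase peak clocks; at 16 cores each gets ~7 W, so cores must run
# near their efficiency sweet spot for the chip to score well in MT tests.
```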

The mere fact that we don't have data from an isolated, single-threaded efficiency test of a Zen 4 processor with a low boost clock limit doesn't mean they're not efficient at lower clocks, it just means we don't have the data to clearly show it.

I compared the 13400 to the 7800X3D, since those two have the same ST performance.
Again, that's like comparing a compact car to a truck, just because they both have the same maximum highway speed. They're in different price and performance categories, even if they happen to line up on a single metric (which I'm sure is also very benchmark-specific).

I also compared the 7900X to the 13700K, and the latter won by 100%.
Ah, but this drives home the point about clock speeds. The 7900X has a ST efficiency of 46.2 pts/W, but just shaving off 200 MHz enables the 7900 to achieve a ST efficiency of 72.8 pts/W! A couple hundred MHz more would certainly bring it right up to the i7-13700K, if not past it.

[Image: efficiency-singlethread.png]


for MT workloads the fact that they have a lot more cores in most segments makes them faster / more efficient than their AMD counterparts.
As that chart I made from the ComputerBase data shows, it doesn't help the i7-13700K beat the R9 7900X on MT efficiency, nor does it help the i9-13900K beat the R9 7950X.

As I'm posting this, while I'm watching 2 streams (YouTube and Twitch),
Maybe you should be paying more attention to what I'm saying, and then we wouldn't have to keep going back-and-forth like this.

I can call my brother and ask him how much his 7950X is pulling for the same task, but I already know it's going to be over 50.
Depends on whether his GPU accelerates AV1 decoding. If you're using hardware decoding and he's using software decoding, that would explain the difference. When software decoding is used in both places, there shouldn't be such a gap in power consumption.
 

bit_user

I used the 13400 as a comparison simply because it PERFECTLY matches the 7800X3D's performance.
Not at all!

So, the i5-13400F is:
  • 78.9% as fast at "Applications"
  • 69.4% as fast at 720p gaming on a RTX 4090 - yes, this is an artificial scenario, but it's designed to show performance in CPU-bottlenecked gaming situations.

So we can see how much power each CPU needs to deliver the same performance.
No, since the performance is not equal, you need to look at a proper efficiency metric (i.e. perf/W or total Joules per defined workload).
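Those are two equivalent ways of slicing the same run; a quick sketch with hypothetical run data:

```python
# Two views of the same (hypothetical) benchmark run: total energy for a fixed
# job, and the pts/W figure the review charts use.
runtime_s   = 120.0   # time to finish the fixed workload
avg_power_w = 80.0    # average package power during the run
score_pts   = 1800.0  # benchmark score for that run

joules_per_run = avg_power_w * runtime_s  # 9600 J to complete the job
pts_per_watt   = score_pts / avg_power_w  # 22.5 pts/W, the chart metric
print(joules_per_run, pts_per_watt)
```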
 

TheHerald

As I said, the data is limited to what TechPowerUp tested. For some reason, they have yet to test any Ryzen 8000G APUs. Traditionally, the APUs are where AMD is competitive on single-threaded efficiency, since they don't have the overhead of a separate I/O die.
I agree, the APUs and mobile chips from AMD are great, probably better than Intel's. That's why I literally have 4 of them. But they don't have an I/O die (and have reduced cache). The desktop parts with the I/O die, though? Oh my god, for home applications they are drinking power like vampires drink blood.

So, when you see the i5-13400F beat those AMD CPUs on single-threaded efficiency, what you're actually seeing is a (presumably) Raptor Lake B0 die's P-core, limited to 4.6 GHz. You could take any Raptor Lake CPU that's based on a B0 stepping die and test its ST efficiency, with a 4.6 GHz frequency limit, and it would behave similarly to that i5-13400F, in both performance and efficiency. What it tells us isn't anything absolute about Raptor Cove's efficiency, but rather how efficient it is at 4.6 GHz. If the other CPUs with that core are using higher limits, then this fact doesn't really help you.

The same is actually true for AMD. The R7 7700 uses one of the same compute dies and the same I/O die as the R9 7950X. However, what's notable about that lineup is that it's missing any AMD models with such a low boost limit. Even the R5 7600 has a Boost clock of 5.1 GHz!

You seem to be trying to assert something about the intrinsic efficiency of the microarchitecture, but the data you're trying to use doesn't support such comparisons. For that, you would need to measure the energy usage of each CPU at different frequencies, like this:
What are you even talking about, my man? Frequencies are irrelevant here. The 13400F, at the exact same ST performance as the 7800X3D, wipes it in efficiency. That's what matters.
More to the point: if someone is just doing such light computational tasks, why even buy a 24-thread or 32-thread CPU? On the flip side, if you do buy such a CPU, then it's probably because you do some heavily-threaded work, and that's where you care about efficiency since now we're talking about some rather large amounts of energy usage.
You think people that buy e.g. a 13700K are running Blender all day? You buy a high-end CPU because WHEN you need it to be fast, it will be fast; that doesn't mean you are running it maxed out 24/7.
It's right there in the chart:
[Image: efficiency-multithread.png]

7900X gets 157.0 pts/W, while the i7-13700K (stock) gets 119.9. Divide those two numbers and you get 1.309. So, the 7900X (stock) achieves about 1.31 times as many points per W on CineBench multi-threaded. Another way of saying that is: it's about 31% more efficient!
And according to that chart the 5950X is the most efficient CPU of them all, which is laughably wrong. That's why you don't test efficiency at different power limits. It's at the very least asinine. Both the 7950X and the 13900K/14900K wipe the 5950X off the face of the Earth in MT efficiency. I don't know why you keep bringing up that chart; it's totally irrelevant.
That's exactly where this chart comes into play, as it goes all the way down to a power limit of 45 W. It shows the R9 7900X still beats an i7-13700K, at that power limit.
[Image: cj1qY3F.png]
You compared a 13700K to a 7900X at out-of-the-box power draw to conclude that the 7900X is 30% more efficient. But if you use that logic, the 13700T WIPES it. So now you've changed your argument from out-of-the-box to ISO power again...


The mere fact that we don't have data from an isolated, single-threaded efficiency test of a Zen 4 processor with a low boost clock limit doesn't mean they're not efficient at lower clocks, it just means we don't have the data to clearly show it.
If we don't have the data, then you can't claim they are more efficient (which you did). But lower clocks won't do anything regardless. The 13400F clearly demonstrates that at 4.6 GHz it can match the ST performance of the higher-clocked AMD parts while drawing less power. Dropping the clocks on the AMD parts will just make them lose in performance, in which case you can just drop the clocks even further on the Intel parts.

Ah, but this drives home the point about clock speeds. The 7900X has a ST efficiency of 46.2 pts/W, but just shaving off 200 MHz enables the 7900 to achieve a ST efficiency of 72.8 pts/W! A couple hundred MHz more would certainly bring it right up to the i7-13700K, if not past it.
I refuse to accept that you still don't understand the problem with your argument. The 7900X is already SLOWER than the 13700K. The 7900 is even further behind. Dropping the clocks even more just makes it a stupid comparison, because the 7900 is going to be 20%+ behind in ST performance. So what stops you from dropping the clocks on the 13700K by 20% to match the performance? It will still wipe the floor with the 7900.
As that chart I made from the ComputerBase data shows, it doesn't help the i7-13700K beat the R9 7900X on MT efficiency, nor does it help the i9-13900K beat the R9 7950X.
I've already acknowledged that the 7950X is indeed more efficient than the 13900K. But the 7900X doesn't compete with the 13700K. The 7900X had an MSRP of $549; the 13700K had an MSRP of $409. It was directly comparable to the $399 R7 7700X (hence the name, i7 vs. R7), and it beat it very handily, as your graph shows. That's exactly what I meant when I said Intel beats AMD in efficiency (even MT efficiency) due to the core-count advantage. Their 16-core part was price-matching AMD's 8-core part.
Depends on whether his GPU accelerates AV1 decoding. If you're using hardware decoding and he's using software decoding, that would explain the difference. When software decoding is used in both places, there shouldn't be such a gap in power consumption.
Have you actually used a 7950X? Just browsing the YouTube comments spikes the CPU to 65 W constantly. It's insanely, incredibly inefficient for home users.

The only problem Intel CPUs have is stock BIOS settings. Other than that, they are insanely efficient in both ST and MT tasks, and quite fast to boot. But if you don't know how, or don't want, to tinker with the BIOS, they're not a great choice.
 

TheHerald

No, since the performance is not equal, you need to look at a proper efficiency metric (i.e. perf/W or total Joules per defined workload).
It's you that isn't paying attention. We were comparing ST efficiency. The 13400F has the same ST performance as the 7800X3D. Here you go, 1796 vs 1817.

https://tpucdn.com/review/amd-ryzen-7-7800x3d/images/cinebench-single.png

So the performance is equal, and the 13400F basically cleaned the floor, being 50% more efficient.

It's called the I/O die and IF; I don't know why we are still arguing about it. It's great for servers, sucks monkey balls for home users (unless you don't care about power draw, in which case it's fine). There is a reason they are not using the desktop chips in laptops (well, they do in those huge desktop-replacement monsters, but that's about it). It's because they GULP down power even for simple tasks like browsing.
 

bit_user

The 13400F at the exact same ST performance as the 7800X3D
It doesn't. Maybe in one benchmark (which you have yet to cite), but they're not so precisely matched as you state.

wipes it in efficiency. That's what matters.
Again, I disagree that single-threaded efficiency is so important as you're making it out to be.

You think people that buy e.g. a 13700K are running Blender all day? You buy a high-end CPU because WHEN you need it to be fast, it will be fast; that doesn't mean you are running it maxed out 24/7.
Similarly, when not running a multi-threaded workload, people aren't sitting around with a single core maxed, the rest of the time! Most of the time, the CPU in a desktop machine tends to be mostly idle.

And according to that chart the 5950X is the most efficient CPU of them all, which is laughably wrong.
You don't appear to understand what efficiency means. The 5950X has a PPT of only 142 W, so it just needs to perform well relative to that power budget!

That's why you don't test efficiency at different power limits.
Efficiency is efficiency. What you're describing is something different.

I don't know why you keep bringing up that chart; it's totally irrelevant.
It's totally relevant! If you have better data, then show it. If not, then we're done. Unsubstantiated claims are worthless.

You compared a 13700K to a 7900X at out-of-the-box power draw to conclude that the 7900X is 30% more efficient.
Yes... well, stock settings, to be precise. Once you open the doors to tweaking the settings, no definitive conclusion can be drawn, because it's too hard to agree on what settings should be used. Plus, it would make it very hard to find good benchmarks that are exactly comparable.

But if you use that logic, the 13700T WIPES it. So now you've changed your argument from out-of-the-box to ISO power again...
The chart I posted with ISO-power data is just another way to compare multithreaded efficiency, except it gives you a view of how efficiency varies over different power limits.

If we don't have the data, then you can't claim they are more efficient (which you did).
I'm saying the multithreaded efficiency implies better per-core efficiency, except that we cannot confirm it without the detailed measurements themselves.

What you seemed to be saying is that the lack of any such data proves that Zen 4 is worse. I'm saying that's partially contradicted by the multithreaded efficiency data, but we can't know for sure because we don't have the data.

But lower clocks won't do anything regardless.
They do. I already cited the example of ST efficiency between the R9 7900 and R9 7900X, where the 200 MHz lower clocks of the former equate to 57.6% better efficiency! Lowering the boost clock limit even further would yield even better efficiency, just as it does with the i5-13400F!

The 13400F clearly demonstrates that at 4.6 GHz it can match the ST performance of the higher-clocked AMD parts
No, you haven't demonstrated that.

Dropping the clocks on the AMD parts will just make them lose in performance, in which case you can just drop the clocks even further on the Intel parts.
You seem so concerned with ST efficiency. Limiting the single-core boost clock on an AMD part would indeed sacrifice some ST performance in favor of disproportionately larger efficiency gains; however, it wouldn't hurt MT performance. Presumably, the reason someone bought a chiplet-based CPU instead of an APU is that they actually needed the additional MT performance. Therefore, it's not a nonsensical thing to do, if someone is as obsessed with ST efficiency as you seem to be.

The 7900X is already SLOWER than the 13700K.
Not at multithreaded workloads, which is the whole reason someone would even buy a 7900X.

Dropping the clocks even more just makes it a stupid comparison, because the 7900 is going to be 20%+ behind in ST performance. So what stops you from dropping the clocks on the 13700K by 20% to match the performance? It will still wipe the floor with the 7900.
Ah, so here's another one of your invalid assumptions. You think performance on both CPUs will be impaired at the same rate! Take another look at the line chart I keep posting and notice how the shape of the curves differs between the AMD and Intel CPUs. I'm predicting that the ST efficiency of Zen 4 will exceed the efficiency of Golden Cove, as you scale back the clocks on each. That's consistent with what we see in the superior efficiency of the Genoa EPYC vs. Sapphire Rapids Xeon.

the 7900X doesn't compete with the 13700K. The 7900X had an MSRP of $549; the 13700K had an MSRP of $409.
The price difference is justified by the performance difference. In MT workloads, the R9 7900X performs between the i7-13700K and the i9-13900K.

It was directly comparable to the $399 R7 7700X (hence the name, i7 vs. R7),
You read too much into launch prices, but you forget that Zen 4 launched before Raptor Lake! So, it made sense for AMD to start out with higher pricing and then discount them to match Raptor Lake on perf/$.

Their 16-core part was price-matching AMD's 8-core part.
E-cores don't count the same as P-cores. If anything, you should be comparing on the basis of the number of threads. That at least gets you in the ballpark.

Have you actually used a 7950X? Just browsing the YouTube comments spikes the CPU to 65 W constantly. It's insanely, incredibly inefficient for home users.
Again, this says nothing without knowing whether you have hardware decoding of AV1. "Just browsing the youtube comments" means you're scrolling the window past other video thumbnails and ads, for instance. So, it's definitely not as if one is just scrolling text.

The 13400F has the same ST performance as the 7800X3D. Here you go, 1796 vs 1817.

[Image: cinebench-single.png]
That's one benchmark, though. Not even an average. You can't just show equal performance on one benchmark, and then compare power consumption on a completely different one!
 

TheHerald

It doesn't. Maybe in one benchmark (which you have yet to cite), but they're not so precisely matched as you state.


Again, I disagree that single-threaded efficiency is so important as you're making it out to be.

That's one benchmark, though. Not even an average. You can't just show equal performance on one benchmark, and then compare power consumption on a completely different one!
Man, are you actually paying any attention at all? I'm using both performance and efficiency metrics on the SAME application. Both the power and the performance measurements from TPU are from CBR23. I'm NOT using different benchmarks; what the hell are you even talking about? Why the heck are you constantly making stuff up?

Similarly, when not running a multi-threaded workload, people aren't sitting around with a single core maxed, the rest of the time! Most of the time, the CPU in a desktop machine tends to be mostly idle.
And guess which CPUs idle at less than 5 watts and which ones idle at 25+...

You don't appear to understand what efficiency means. The 5950X has a PPT of only 142 W, so it just needs to perform well relative to that power budget!
Oh, better than you, trust me. The only proper measurement of efficiency is either power- or performance-normalized. Anything else is fundamentally flawed testing.

Yes, a car going 100 km/h is going to be more efficient than a car going 200, but why even compare the two? If you are happy enough going 100, then drive car B at 100 km/h as well and then compare their efficiency. Apply the same logic to the CPUs, and voilà.
Yes... well, stock settings, to be precise. Once you open the doors to tweaking the settings, no definitive conclusion can be drawn, because it's too hard to agree on what settings should be used. Plus, it would make it very hard to find good benchmarks that are exactly comparable.
At stock settings, the T lineup of Intel CPUs is literally (not exaggerating) completely decimating anything AMD has on the desktop. At strictly stock settings, not a single AMD CPU can make it into the top 10 of an efficiency chart; it's all dominated by the various T models restricted to 35 watts. You don't wanna play that game.

I'm saying the multithreaded efficiency implies better per-core efficiency, except that we cannot confirm it without the detailed measurements themselves.

What you seemed to be saying is that the lack of any such data proves that Zen 4 is worse. I'm saying that's partially contradicted by the multithreaded efficiency data, but we can't know for sure because we don't have the data.
Multithreaded efficiency only implies the obvious: that the I/O die plays a much smaller role when all cores go full blast, since the percentage of the total power the I/O die uses gets much smaller. On the contrary, the I/O die plays the biggest role in ST workloads; that's why AMD is eating so much dust in ST efficiency.

They do. I already cited the example of ST efficiency between the R9 7900 and R9 7900X, where the 200 MHz lower clocks of the former equate to 57.6% better efficiency! Lowering the boost clock limit even further would yield even better efficiency, just as it does with the i5-13400F!
Man, are you for freaking real? ANY CPU will get better efficiency by limiting the clocks. You can limit clocks ad nauseam. Point is, at the SAME performance, AMD eats dust in ST efficiency. The point is freaking obvious: the 7800X3D at 5.05 GHz matches the 13400F at 4.6 GHz. You can reduce the clocks on the 3D part, but you can also reduce them on the 13400F (even more, btw, since it's faster per clock). The gap will not get any smaller; in fact, it will get larger.
You seem so concerned with ST efficiency. Limiting the single-core boost clock on an AMD part would indeed sacrifice some ST performance in favor of disproportionately larger efficiency gains; however, it wouldn't hurt MT performance. Presumably, the reason someone bought a chiplet-based CPU instead of an APU is that they actually needed the additional MT performance. Therefore, it's not a nonsensical thing to do, if someone is as obsessed with ST efficiency as you seem to be.
The only thing I'm obsessed with is reality. You claimed Zen 4 parts are more efficient in ST workloads than Intel parts, and all the data we currently have shows the exact opposite: that Intel is in fact literally light-years ahead in ST efficiency, being anywhere from 50 all the way to 100% more efficient at ISO performance. Why this keeps going is beyond me; it's absolutely freaking OBVIOUS from the TPU graph.

Ah, so here's another one of your invalid assumptions. You think performance on both CPUs will be impaired at the same rate! Take another look at the line chart I keep posting and notice how the shape of the curves differs between the AMD and Intel CPUs. I'm predicting that the ST efficiency of Zen 4 will exceed the efficiency of Golden Cove, as you scale back the clocks on each. That's consistent with what we see in the superior efficiency of the Genoa EPYC vs. Sapphire Rapids Xeon.

Your prediction is most definitely wrong, but it wouldn't matter even if it were correct. How much are you going to cripple the Zen 4 part? The 7800X3D is ALREADY losing to entry-level Alder Lake chips from 2021 in ST performance, and you are suggesting making it even slower. So even if your argument is correct, you are basically saying that if you make Zen 4 even slower than Zen 3, it might be as efficient as Intel in ST. Well... that's freaking great!

The price difference is justified by the performance difference. In MT workloads, the R9 7900X performs between the i7-13700K and the i9-13900K.
What? According to TPU, the 7900X loses to the 13700K in both applications and games, while being a lot more expensive. It's even losing in purely MT workloads, like Blender, CBR23, and Corona. Are you just making stuff up now?

https://www.techpowerup.com/review/intel-core-i7-13700k/6.html

You read too much into launch prices, but you forget that Zen 4 launched before Raptor Lake! So, it made sense for AMD to start out with higher pricing and then discount them to match Raptor Lake on perf/$.
They literally launched within 20 days of one another. What other method is there to compare them besides prices and names? AMD decided to use Intel's naming scheme, so they seemed to want the direct comparisons. I mean, the R7 7700X perfectly name-matched and price-matched the i7-13700K. And it gets bootyblasted.

Again, this says nothing without knowing whether you have hardware decoding of AV1. "Just browsing the youtube comments" means you're scrolling the window past other video thumbnails and ads, for instance. So, it's definitely not as if one is just scrolling text.

Man, I'll ask again: have you actually used a dual-CCD Zen 4? From your comments, it appears you haven't. It's not software decoding that's causing it; I just disabled HW decoding on mine: 29 watts for a 4K video, 7 watts with HW acceleration on.

Believe what you want man, I don't care anymore at this point.
 

bit_user

Man, are you actually paying any attention at all?
Yes, I am quite familiar with concepts of performance and efficiency, as well as practical aspects of measuring them.

I'm using both performance and efficiency metrics on the SAME application. Both the power and the performance measurements from TPU are from CBR23.
You only cited the performance graph. I looked at the Power metrics from the same article and they cited MP3 encoding. So, if you're looking at power data, then you should cite what you're looking at.

I'm NOT using different benchmarks; what the hell are you even talking about? Why the heck are you constantly making stuff up?
I'm not making up anything. I'm trying to figure out what you're talking about, which could be solved by you actually citing the data.

The only proper measurement of efficiency is either power- or performance-normalized. Anything else is fundamentally flawed testing.
Efficiency is a measure of the energy expended per unit of work. So, when you say "normalized", there obviously must be division involved. Furthermore, the workload must be the same and the energy should be measured in a similar way.

At stock settings, the T lineup of Intel CPUs is literally (not exaggerating) completely decimating anything AMD has on the desktop. At strictly stock settings, not a single AMD CPU can make it into the top 10 of an efficiency chart; it's all dominated by the various T models restricted to 35 watts. You don't wanna play that game.
Again, you're failing to cite any actual data. I know the PL1 of the T variants is 35 W, but I also know their PL2 is about 3x of that. So, you cannot simply rely on specs. Furthermore, you don't know how much performance loss occurs when scaling down to that power limit. I'm not saying you're wrong about them beating the efficiency of Zen 4 products, but the data actually matters, here.

Edit: Found this:

[Image: 130507.png]

According to that, the R9 7950X beats the i9-13900K on CB R23, when both are running at 35 W.

I'm also not convinced they would all surpass the efficiency of some of the Ryzen 8000G products, particularly if they were also restricted in either boost frequency or their power limit.

Multithreaded efficiency only implies the obvious: that the I/O die plays a much smaller role when all cores go full blast, since the percentage of the total power the I/O die uses gets much smaller. On the contrary, the I/O die plays the biggest role in ST workloads; that's why AMD is eating so much dust in ST efficiency.
There's another aspect of the chiplet-based CPUs I think you're missing, which is that the multi-die CPUs have a cache coherency domain spanning 3 dies. So, it's not entirely a matter of the I/O die that's holding back single-threaded efficiency.

Man, are you for freaking real? ANY CPU will get better efficiency by limiting the clocks. You can limit clocks ad nauseam. Point is, at the SAME performance, AMD eats dust in ST efficiency. The point is freaking obvious: the 7800X3D at 5.05 GHz matches the 13400F at 4.6 GHz. You can reduce the clocks on the 3D part, but you can also reduce them on the 13400F (even more, btw, since it's faster per clock). The gap will not get any smaller; in fact, it will get larger.
You're missing a key detail. I pointed out where the reduction in boost clock from the R9 7900X to the R9 7900 by a mere 200 MHz resulted in a 57.6% increase in efficiency. Yet, that's going from 5.6 GHz to 5.4 GHz. So, a nominal performance deficit of just 3.5%! So, just think about it for a minute. If such a minor reduction in performance yields such a big reduction in power, then maybe you're wrong about clock reductions in Zen 4 gaining efficiency slower than Raptor Cove.
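Checking those two figures against each other (numbers from the chart cited above):

```python
# The nonlinearity in one division: a ~3.5% clock reduction vs. a 57.6%
# efficiency gain, using the figures quoted above.
eff_7900x, eff_7900 = 46.2, 72.8   # ST pts/W, from the TechPowerUp chart
f_7900x,   f_7900   = 5.6, 5.4     # GHz single-core boost limits

print(eff_7900 / eff_7900x - 1)    # ~0.576 -> 57.6% better efficiency
print(1 - f_7900 / f_7900x)        # ~0.036 -> the "just 3.5%" nominal deficit
```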

The only thing I'm obsessed with is reality. You claimed Zen 4 parts are more efficient in ST workloads than Intel parts,
Apparently not this reality, because I never said that. Go ahead and quote where you think I did.

That Intel is in fact literally light-years ahead in ST efficiency,
This doesn't mean what you think it does. It primarily means AMD clocked the CPUs in those TechPowerUp charts well above their efficiency sweet spot. As I've said (and you seem to have agreed), the APUs do a better job of showcasing Zen 4's potential.

being anywhere from 50 all the way to 100% more efficient at ISO performance. Why this keeps going is beyond me; it's absolutely freaking OBVIOUS from the TPU graph.
Because that's an artificial metric. There's no logical or factual basis for saying that you can only compare efficiency between two CPUs if they perform the same on a given benchmark.

Your prediction is most definitely wrong, but it wouldn't matter even if it were correct. How much are you going to cripple the Zen 4 part? The 7800X3D is ALREADY losing to entry-level Alder Lake chips from 2021 in ST performance, and you are suggesting making it even slower.
If someone were as obsessed with single-threaded efficiency as you seem to be, I'm just saying what they could do about it. As we saw with a < 3.5% performance deficit yielding a 57.6% efficiency improvement, in the case of the R9 7900X and R9 7900, clocks vs. efficiency is a highly nonlinear relationship.

What? According to TPU, the 7900X loses to the 13700K in both applications and games, while being a lot more expensive. It's even losing in purely MT workloads, like Blender, CBR23, and Corona. Are you just making stuff up now?
I'm referring to the ComputerBase.de data, which shows the R9 7900X outperforming the i7-13700K at ISO-power on MT workloads. The TechPowerUp games and applications aggregates include some heavily-threaded samples, but also lightly-threaded ones.

They literally launched within 20 days of one another.
Still, if you launch first, you get to set your price and milk those preorders and initial early adopters. Plus, they won't know exactly how the Intel parts will perform, and it's much easier to drop your prices than raise them. So, it makes sense for them to put their launch prices on the higher side.

The key thing is to look at how they're priced, today.

It's not software decoding that's causing it; I just disabled HW decoding on mine: 29 watts for a 4K video, 7 watts with HW acceleration on.
29 W makes more sense. I'd further suggest you try to ensure you're using the same video, bitrate, and codec to measure it. Also, Windows' power settings should be configured comparably.
 

TheHerald

Yes, I am quite familiar with concepts of performance and efficiency, as well as practical aspects of measuring them.


You only cited the performance graph. I looked at the Power metrics from the same article and they cited MP3 encoding. So, if you're looking at power data, then you should cite what you're looking at.


I'm not making up anything. I'm trying to figure out what you're talking about, which could be solved by you actually citing the data.
I'm talking about this graph; I've already posted it in my first post.

[Image: efficiency-singlethread.png]

So the 13400 is matching the ST performance of the 7800X3D while being 45% more efficient.
Efficiency is a measure of the energy expended per unit of work. So, when you say "normalized", there obviously must be division involved. Furthermore, the workload must be the same and the energy should be measured in a similar way.
Normalized means that you are running the products you are comparing either at the same speed or at the same power draw, the same way you'd compare fans by normalizing for dBA. Any other method of comparing efficiency is just fundamentally flawed.
Again, you're failing to cite any actual data. I know the PL1 of the T variants is 35 W, but I also know their PL2 is about 3x of that. So, you cannot simply rely on specs. Furthermore, you don't know how much performance loss occurs when scaling down to that power limit. I'm not saying you're wrong about them beating the efficiency of Zen 4 products, but the data actually matters, here.
Both the 12900 and the 13900 score around 15.5k @ 35 W when undervolted, and around 12k at stock. TPU also has a test like that; at 35 W, nothing gets anywhere near them in efficiency:

[Image: efficiency-multithread.png]

The PL2 is irrelevant; it lasts for a minute. After that, the CPU drops to PL1.
You're missing a key detail. I pointed out where the reduction in boost clock from the R9 7900X to the R9 7900 by a mere 200 MHz resulted in a 57.6% increase in efficiency. Yet, that's going from 5.6 GHz to 5.4 GHz. So, a nominal performance deficit of just 3.5%! So, just think about it for a minute. If such a minor reduction in performance yields such a big reduction in power, then maybe you're wrong about clock reductions in Zen 4 gaining efficiency slower than Raptor Cove.
The same applies to any CPU, though. The very last few hundred MHz need an absurdly high amount of power. It applies to AMD, Intel, and any other CPU on planet Earth. Still, it's really not relevant; the 7800X3D is already clocked 400 MHz lower than the 7900, and it still loses massively vs. the 13400.

Apparently not this reality, because I never said that. Go ahead and quote where you think I did.
True, I said Intel has better ST efficiency, and you said you aren't sure / don't believe that.
This doesn't mean what you think it does. It primarily means AMD clocked the CPUs in those TechPowerUp charts well above their efficiency sweet spot.
Really? You think AMD is the only one that has clocked their CPUs beyond their efficiency sweet spot? Do you realize that the 14900KS is clocked way beyond anything resembling sanity and it still beats the 7950x and the 7950x3d (and the 7900x) in ST efficiency, while at the same time vastly outperforming them? Doesn't that EASILY drive the point home that Intel has a huge lead in ST efficiency? What else can possibly convince you, if even the 14900K-freaking-S doesn't?

Because that's an artificial metric. There's no logical or factual basis for saying that you can only compare efficiency between two CPUs if they perform the same on a given benchmark.
Of course there is. Actually, one of the fundamental laws of logic dictates that only ISO metrics are useful. Facts dictate that device A going faster will consume more power than the same device going slower. Therefore Device A is more efficient than Device A. That's a logical contradiction, and it's caused by the fact that we are not comparing normalized.
Edit: Found this:
130507.png

According to that, the R9 7950X would still beat the i9-13900K on CB R23 at 35 W.
First of all, the graph is obviously flawed; the author himself says so. He is supposedly setting PPT values on the AMD chip, but the CPU pulls a ton more wattage than its limits dictate. Go to page 3 of the review and you'll find out. I'm quoting the reviewer:

Starting with the peak power figures, it's worth noting that AMD's figures can be wide off the mark even when restricting the Package Power Tracking (PPT) in the firmware. For example, restricting the socket and 7950X to 125 W yielded a measured power consumption that was still a whopping 33% higher.
There's certainly more performance at 65 W from our compute testing on the Ryzen 9 7950X, but it's drawing more power than it should be. It's also running hotter despite using a premium 360mm AIO CPU cooler, which is more than enough even at full load.


BUT, the IMPORTANT part is - it doesn't matter even if the graph is correct. You are constantly flip-flopping and changing your argument. The 7950x doesn't have a stock PL of 35 watts. The 13900t does. You were arguing that only STOCK efficiency is important, and when you realized that not a single AMD CPU would make the top 10 of an efficiency chart, you changed your argument again (for the 3rd time) to BIOS tuning. Dude, just make your mind up and stick with it.

The important part from that graph is the STOCK efficiency (you are the one who brought up stock configurations). A supposed 13900t would score 12700 @ 35w, which ends up at about 363 pts/W. The 7950x at stock needs 230 watts for 38.5k points, which translates to 167 pts per watt. So the Intel CPU is literally more than twice as efficient in MT workloads out of the box, LOL
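As a quick sanity check on that arithmetic, here is a minimal sketch; the scores and wattages are the figures quoted in this thread (including the hypothetical 13900t score), not fresh measurements:

```python
# Efficiency = score / average package power, in pts/W.
def pts_per_watt(score: float, watts: float) -> float:
    return score / watts

i9_35w = pts_per_watt(12_700, 35)    # hypothetical 13900t-like result @ 35 W
r9_stock = pts_per_watt(38_500, 230) # 7950x stock figure quoted above

print(f"13900t-like @ 35 W: {i9_35w:.0f} pts/W")   # ~363
print(f"7950x @ stock:      {r9_stock:.0f} pts/W") # ~167
print(f"ratio: {i9_35w / r9_stock:.2f}x")          # ~2.2x
```
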
If someone were as obsessed with single-threaded efficiency as you seem to be, I'm just saying what they could do about it. As we saw with a < 3.5% performance deficit yielding a 57.6% efficiency improvement, in the case of the R9 7900X and R9 7900, clocks vs. efficiency is a highly nonlinear relationship.
And the same applies to the Intel parts: you can drop their clocks as well and they'll still be vastly more efficient than the AMD parts. We went over this 5 times already...
I'm referring to the ComputerBase.de data, which shows the R9 7900X outperforming the i7-13700K at ISO-power on MT workloads. The TechPowerUp games and applications aggregates include some heavily-threaded samples, but also lightly-threaded ones.
You were talking about PERFORMANCE. You literally said the 7900x justified its much higher price because it's faster in MT than the 13700k. It isn't.
The key thing is to look at how they're priced, today.
Today 14th gen exists, though; the 14700 is more efficient than the 13700 simply because of the extra cores.
29 W makes more sense. I'd further suggest you try to ensure you're using the same video, bitrate, and codec to measure it. Also, Windows' power settings should be configured comparably.
The 29 watts is from my Intel setup. The 7950x is at 40-45w average with spikes up to 65, WHILE using the GPU. My Intel part is at 7 watts while using the GPU.
 
Last edited:

bit_user

Polypheme
Ambassador
Normalized means that you are running the products you are comparing either at the same speed or at the same power draw.
No, it doesn't.

Furthermore, the fact that TechPowerUp put efficiency data from multiple disparate CPUs into a single graph shows they also disagree with you. If numbers between CPUs with different performance and power draw weren't comparable, why put them in the same graph?

The same way you'd compare fans by normalizing for dBA.
When trying to measure a relationship between two quantities, like airflow per dB (A-weighted), one way to do it is to physically control for the independent variable by adjusting the system so they're all equal. This can make sense if it varies a lot from one sample to the next and the relationship is significantly nonlinear. You could also simply measure and divide out the noise, but with limited predictive value if the way that you're measuring them differs substantially from how they behave under typical use.
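To illustrate the difference between the two approaches, here is a minimal sketch of normalizing fan results to a common noise level by interpolating each fan's measured curve; all of the curves below are made-up numbers, purely for illustration:

```python
import numpy as np

# Hypothetical (dBA, CFM) measurement points for two fans.
fans = {
    "Fan A": {"dba": [25, 30, 35, 40], "cfm": [40, 55, 68, 78]},
    "Fan B": {"dba": [28, 33, 38, 43], "cfm": [50, 62, 72, 80]},
}

# Normalize: interpolate every fan's curve to the same noise level,
# then compare airflow at that common operating point.
target_dba = 35.0
for name, curve in fans.items():
    cfm = np.interp(target_dba, curve["dba"], curve["cfm"])
    print(f"{name}: ~{cfm:.0f} CFM at {target_dba:.0f} dBA")
```

The "measure and divide" alternative would take each fan's stock airflow and noise and form a single ratio, which is the analogue of the stock pts/W numbers being argued about here.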

Both the 12900 and the 13900 score around 15.5k @ 35 W when undervolted and around 12k at stock.
Unless Intel is undervolting the T-series CPUs, that test methodology is fundamentally flawed. The point of the test would be to predict how these 35 W CPUs would compare at their respective stock settings.

TPU also has a test like that; at 35 W nothing gets anywhere near it in efficiency.

efficiency-multithread.png
...only because the chart contains no other samples tested in the same way.

PL2 is irrelevant; it lasts for a minute. After that, the CPU drops to PL1.
Without knowing how long the test is run for, and whether they average in those samples or only take data from after the machine settles at PL1, how can you say PL2 is irrelevant?
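To make that concrete, here is a toy average-power calculation for a 35 W PL1 part with an assumed ~106 W PL2 window lasting 60 seconds; all values are illustrative, and real turbo behavior is messier:

```python
# Average package power over a benchmark run that starts in the PL2
# boost window and then settles to PL1 (illustrative values only).
pl1_w, pl2_w, tau_s = 35.0, 106.0, 60.0  # PL1, PL2, PL2 duration

for run_s in (60, 120, 600):
    boost_s = min(run_s, tau_s)
    avg_w = (pl2_w * boost_s + pl1_w * (run_s - boost_s)) / run_s
    print(f"{run_s:>4} s run: average package power ~{avg_w:.0f} W")
```

A benchmark that finishes inside the boost window averages near PL2; a ten-minute run averages near PL1. That's why the methodology matters.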

I said Intel has better ST efficiency and you said you aren't sure / don't believe that.
Show me the quote. I tend to be pretty careful with my words, whereas you seem given to overstatement and exaggeration. If I expressed doubt, it was almost certainly in the extremity or the precise nature of your characterization.

Really? You think AMD is the only one that has clocked their CPUs beyond their efficiency sweet spot?
No, I think the falloff curve for Zen 4 is steeper than Golden Cove & Raptor Cove. The data I have cited (most recently from Anandtech) supports this.

Here's where a key point I made earlier in the exchange becomes relevant. Remember the table I showed from Chips & Cheese's analysis of Zen 4 vs. Golden Cove? It shows a lot of internal structures that are larger in Golden Cove. Do you have any idea why someone might make a reorder buffer or register file larger? It has a lot to do with the fact that when you run a CPU core at a higher clock speed, code running on that core sees the rest of the world slowing down. In other words, cache and DRAM latency doesn't change in terms of nanoseconds, but because the CPU core is completing more clock cycles per nanosecond, the latencies increase in terms of clock cycles. In order to compensate for that, a CPU core needs to be able to keep more operations in flight, which means all of the out-of-order machinery needs to be scaled up.
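A rough illustration of that scaling, using Little's law (work in flight = rate x latency). The 80 ns DRAM latency and 6-wide issue are illustrative placeholders, not measured Zen 4 or Golden Cove figures:

```python
# Memory latency is roughly fixed in nanoseconds, so in *cycles* it grows
# linearly with clock speed. By Little's law, the instruction window needed
# to hide that latency grows with it. Illustrative numbers only.
dram_latency_ns = 80.0
issue_width = 6  # peak ops issued per cycle (placeholder)

for ghz in (4.0, 5.0, 5.7):
    latency_cycles = dram_latency_ns * ghz
    window = issue_width * latency_cycles
    print(f"{ghz:.1f} GHz: a DRAM miss costs {latency_cycles:.0f} cycles; "
          f"fully hiding it would need ~{window:.0f} ops in flight")
```

Real reorder buffers are far smaller than those figures, which is exactly why bigger out-of-order structures pay off as clocks rise.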

When Intel decided to build a really fast core that had "long legs" (i.e. the ability to clock high), they needed to make it bigger and more complex. That comes at a cost in terms of die area and energy usage. The E-cores are there to compensate for these things. Because Ryzen 7000 didn't have any sort of E-core, AMD targeted its efficiency window at a lower clockspeed and sized their out-of-order structures accordingly. That doesn't mean you can't get more performance out of a Zen 4 core at higher clock speeds, but it means that you hit a point of diminishing returns more quickly and the falloff is more rapid.

Still, because AMD used a more efficient manufacturing process, they could afford to set the default clock frequency of these cores above that sweet spot, and still be within power budget. That's why we don't see any AM5 CPUs with a boost limit of 4.6 GHz, like Intel had to do with the i5-13400F.

In this whole exchange, I've never said Zen 4 can match Raptor Cove in single-thread performance. However, when you start talking about efficiency, you're talking perf/W. When applied to an entire microarchitecture, that's not a single datapoint, but rather a curve. So, when you make sweeping statements about Intel being more efficient, I'm pointing out that we don't have those perf/W curves needed to support such a statement, and furthermore that the data we do have on multicore scaling suggests that the efficiency of Zen 4 is indeed quite competitive at lower power levels.

How else can you explain benchmarks like where Genoa EPYC obliterated Sapphire Rapids Xeon, while using the same or less power?

First of all, the graph is obviously flawed; the author himself says so. He is supposedly setting PPT values on the AMD chip, but the CPU pulls a ton more wattage than its limits dictate. Go to page 3 of the review and you'll find out. I'm quoting the reviewer:

Starting with the peak power figures, it's worth noting that AMD's figures can be wide off the mark even when restricting the Package Power Tracking (PPT) in the firmware. For example, restricting the socket and 7950X to 125 W yielded a measured power consumption that was still a whopping 33% higher.
<snip>
There's certainly more performance at 65 W from our compute testing on the Ryzen 9 7950X, but it's drawing more power than it should be. It's also running hotter despite using a premium 360mm AIO CPU cooler, which is more than enough even at full load.
It's cute how you omitted the very next sentence (where I marked <snip>), which is:

"By comparison, the 13900K exceeded its set limits by around 14% under full load."

Another key point you're glossing over is how they're talking about peak power figures. Anandtech has an annoying habit of reporting peak power figures that occurred even for less than a second, which is how they ended up with a graph showing i9-12900K used 272 W, when even the 95th percentile is much lower.

So, unless they give us more detail about average power usage, I have to discount what they're saying about these CPUs' excursions beyond the configured limits, especially since what I've seen elsewhere shows that Ryzen 7000 CPUs are pretty good about respecting the configured PPT limits.

BUT, the IMPORTANT part is - it doesn't matter even if the graph is correct. You are constantly flip-flopping and changing your argument. The 7950x doesn't have a stock PL of 35 watts. The 13900t does. You were arguing that only STOCK efficiency is important,
When we don't have performance & efficiency data on a i9-13900T, the best we can do is use approximations like the test I cited. The actual data will certainly differ, as I said in my prior post, and that's why it's important to use actual test data. However, unless/until we have that, tests like the one Anandtech did get us in the ballpark and are a lot closer than we get by simply relying on your imagination.

Dude, just make your mind up and stick with it.
I'm trying to use the available data in the way that best supports conclusions about the claims being made. It has nothing to do with changing my mind about anything. If you don't understand that, then you haven't been paying attention.

The important part from that graph is the STOCK efficiency (you are the one who brought up stock configurations). A supposed 13900t would score 12700 @ 35w, which ends up at about 363 pts/W. The 7950x at stock needs 230 watts for 38.5k points, which translates to 167 pts per watt. So the Intel CPU is literally more than twice as efficient in MT workloads out of the box, LOL
Why are you comparing a 35 W CPU to a 170 W CPU?

In this case, it's like you're comparing a tractor to a big rig truck. Someone who needs a tractor cannot even consider using a truck, instead, and vice versa.

This entire thread can be boiled down to two things:
  1. You're making nonsensical comparisons between products of different classes.
  2. You're making lots of invalid assumptions that aren't supported by any data.

Also, it doesn't help that you don't understand the meaning of terms like normalization.

You were talking about PERFORMANCE. You literally said the 7900x justified its much higher price because it's faster in MT than the 13700k. It isn't.
Yes, I apologize for that. I quickly referred to my ISO-power graph rather than actually looking up benchmarks of each at stock settings.

The 29 watts is from my Intel setup. The 7950x is at 40-45w average with spikes up to 65, WHILE using the GPU. My Intel part is at 7 watts while using the GPU.
But how do you know that it's actually using the GPU? All you know is that it's configured to, but it still might not use it for some codecs. Also, the power management settings should affect how quickly each CPU kicks into "high gear" and how long it stays there before dropping back down. So, I'd also check those.
 
  • Like
Reactions: rfdevil
Feb 2, 2024
33
19
35
Actually, one of the fundamental laws of logic dictates that only ISO metrics are useful. Facts dictate that device A going faster will consume more power than the same device going slower. Therefore Device A is more efficient than Device A. That's a logical contradiction, and it's caused by the fact that we are not comparing normalized.
There is no such thing as "a fundamental law of logic"; you are really making yourself look very unqualified, and it doesn't bode well alongside your lack of argumentation. In standard academia, logic is the system that studies inference, that is, objectively correct steps in reasoning. Saying that a law dictates an ISO metric is also incorrect, for it makes assumptions about the nature of the a priori and the a posteriori with regard to gathering knowledge. Saying "only useful" is at best naive and pragmatic.
I'll cut your vegetables up for you: what you're really grasping for is the Principle of Bivalence, but its use here is invalid. A logical contradiction does not reside in our everyday arguments, only in deductive reasoning; a logical contradiction has to do with necessary truth. But in inductive reasoning, conclusions don't logically follow from the premises; they are merely supported by them and can only ever be probably true. And of course there are other systems in logic which have abandoned the principle of bivalence.
Again, logic is the mechanics of reasoning, or more specifically syntax (the manipulation of symbols). The law of non-contradiction resides inside the structure of a deductive argument, and no such argument was presented here.

Saying "and it's caused by the fact that we are not comparing normalized" is wrong. Because you are confused and don't understand the difference between deductive and inductive reasoning. Aswell causation as we know it has been derived from the senses, and as per Hume we can't possible know whether things really do need a cause.

No, facts don't dictate that device A will go faster; a scientific theory (a model that includes mathematics) does. Facts are derived from the senses, even if they seem a priori, again per Hume.
The conclusion "Therefore device A is more efficient than Device A" is a non sequitur.

The 7950x is at 40-45w average with spikes up to 65, WHILE using the GPU. My intel part is at 7 watts while using the GPU.
OK, back to a bit of naive realism and empiricism.
Sorry, you are wrong here. My 7950x averages 26-35 W consumption on the desktop with a video paused, some browser tabs open, and Photoshop open.

Saying "with spikes up to 65" means very little and shows a basic lack of understanding between Power (the rate at which work is done) and energy, something they teach high school physics students.
Yes mine peaks up to 45-55 when alt tabbing but we are not concerned about peaks, they could be be a 100 or more watts, what we are concerned about is the energy (the watthours). Since that is what they bill us for.
So when someone says my Intel peaks at 350W this can give the false impression that it will give a large electricity bill.

I haven't got the will to argue with someone who uses loaded language and talks on behalf of others, making blanket assertions. That's fine, but it ends up being just empty words. You keep using the word efficiency in a careless and equivocating way.
And saying things like "it wipes the floor" is so far from rationality and science I fear you may never understand your errors.
 

TheHerald

Upstanding
Feb 15, 2024
263
64
260
No, it doesn't.

Furthermore, the fact that TechPowerUp put efficiency data from multiple disparate CPUs into a single graph shows they also disagree with you. If numbers between CPUs with different performance and power draw weren't comparable, why put them in the same graph?
Are you asking me why TPU puts useless data on their graph? I don't know. Ask them.

I've explained I think very thoroughly why non ISO efficiency tests are useless, but I'll do it once more.

CPU A scores 10k at 30 watts and CPU B scores 15k at 2000 watts. CPU A looks obviously more efficient, but if you are happy with buying a CPU that can only score 10k, then you need to drop CPU B to that same score and then measure its efficiency. If CPU B managed to score 10k at 22 watts, why in the world would you buy A?

This comes back to what I said about the 5950x. Yes, in charts it looks more efficient than the 7950x, but if I'm happy with the 25k score it's getting, then I might as well drop the 7950x to 25k and have it be twice as efficient. So how useful is the stock efficiency test at the end of the day? Not much.


Unless Intel is undervolting the T-series CPUs, that test methodology is fundamentally flawed. The point of the test would be to predict how these 35 W CPUs would compare at their respective stock settings.


...only because the chart contains no other samples tested in the same way.


Without knowing how long the test is run for, and whether they average in those samples or only take data from after the machine settles at PL1, how can you say PL2 is irrelevant?
It doesn't need any undervolting; on the Anandtech graph you posted, a 13900 at 35 W gets a score of 12700, easily beating anything AMD has in the efficiency department. PL2 is irrelevant because MT workloads run for a long time; after all, that's why you'd care about efficiency. If you are running those workloads for 10 seconds, then why are we even talking about it? Efficiency is useless since it won't make a difference for that small a workload.
In this whole exchange, I've never said Zen 4 can match Raptor Cove in single-thread performance. However, when you start talking about efficiency, you're talking perf/W. When applied to an entire microarchitecture, that's not a single datapoint, but rather a curve. So, when you make sweeping statements about Intel being more efficient, I'm pointing out that we don't have those perf/W curves needed to support such a statement, and furthermore that the data we do have on multicore scaling suggests that the efficiency of Zen 4 is indeed quite competitive at lower power levels.
But we do have plenty of data on ST efficiency at ISO performance, and Intel is in fact a couple of generations ahead. What exactly don't you like about TPU's ST efficiency charts?
How else can you explain benchmarks like where Genoa EPYC obliterated Sapphire Rapids Xeon, while using the same or less power?
Because the IF scales much better at higher core counts than whatever Intel is using for SR. It's a completely irrelevant comparison because the SR parts aren't monolithic with a ring bus. The whole reason Intel is smashing AMD in ST efficiency is that it's monolithic with a ring bus.
It's cute how you omitted the very next sentence (where I marked <snip>), which is:
"By comparison, the 13900K exceeded its set limits by around 14% under full load."​

Another key point you're glossing over is how they're talking about peak power figures. Anandtech has an annoying habit of reporting peak power figures that occurred even for less than a second, which is how they ended up with a graph showing i9-12900K used 272 W, when even the 95th percentile is much lower.

So, unless they give us more detail about average power usage, I have to discount what they're saying about these CPUs' excursions beyond the configured limits, especially since what I've seen elsewhere shows that Ryzen 7000 CPUs are pretty good about respecting the configured PPT limits.
If you've used a 7950x, you'd know that the performance figures in Anandtech's graphs are with PPT, not TDP. I can assure you, the 7950x does not score 31k at 65 watts, lol. If it did, I'd have one. Actually, scratch that, I'd have two. It scores EXACTLY 22k at 65 watts, the same exact score as the 13900k. From what I remember of my testing, it also scored 28-29k at 90 watts and 34k at 145 watts, and it completely dies at 35 W, scoring something like 7-8k.
When we don't have performance & efficiency data on a i9-13900T, the best we can do is use approximations like the test I cited. The actual data will certainly differ, as I said in my prior post, and that's why it's important to use actual test data. However, unless/until we have that, tests like the one Anandtech did get us in the ballpark and are a lot closer than we get by simply relying on your imagination.


I'm trying to use the available data in the way that best supports conclusions about the claims being made. It has nothing to do with changing my mind about anything. If you don't understand that, then you haven't been paying attention.
Good, and all of the available data shows that Intel has a huge advantage in ST efficiency at whatever performance you want to target. They also have a huge advantage in MT efficiency if you restrict it to out-of-the-box settings, since basically all of their T lineup of CPUs will top the charts.
Why are you comparing a 35 W CPU to a 170 W CPU?

Because you wanted out-of-the-box, stock comparisons. I didn't, but you insisted. I kept telling you that comparing efficiency at different power limits is silly, but you kept on going. Well, this is where the stock comparisons take you: to Intel being vastly more efficient at every possible metric. That's exactly why I insisted they are silly...
This entire thread can be boiled down to two things:
  1. You're making nonsensical comparisons between products of different classes.
  2. You're making lots of invalid assumptions that aren't supported by any data.
Or it can be summed up as: Intel is booty blasting AMD in ST efficiency, and if someone for whatever reason really cares about STOCK behavior, Intel is also booty blasting in MT efficiency.

You don't even need the T lineup, honestly; hardwareand.co has tested the non-K versions. In H.264 the 13700 non-K is the 2nd most efficient CPU measured from the 12 V rail and the most efficient when measuring at the wall. We can safely assume that the T CPUs would take the top 10-15 spots in efficiency charts.

image-2024-05-05-001707408.png

Honestly, it doesn't matter to me because I find out-of-the-box efficiency comparisons absolutely useless, but since you seem to like them, there you go.
 
Last edited:

TheHerald

Upstanding
Feb 15, 2024
263
64
260
There is no such thing as "a fundamental law of logic"; you are really making yourself look very unqualified, and it doesn't bode well alongside your lack of argumentation.
Imagine how unqualified someone looks when they call someone else unqualified because "there is no such thing as a fundamental law of logic". I hate to break it to you: there are three of those: the law of identity, the law of the excluded middle, and the law of non-contradiction. Look them up, because you literally cannot have a rational thought without them.
OK, back to a bit of naive realism and empiricism.
Sorry, you are wrong here. My 7950x averages 26-35 W consumption on the desktop with a video paused, some browser tabs open, and Photoshop open.
Great, and that's 5 times higher than a competing Intel product...
Saying "with spikes up to 65" means very little and shows a basic lack of understanding between Power (the rate at which work is done) and energy, something they teach high school physics students.
Yes mine peaks up to 45-55 when alt tabbing but we are not concerned about peaks, they could be be a 100 or more watts, what we are concerned about is the energy (the watthours). Since that is what they bill us for.
So when someone says my Intel peaks at 350W this can give the false impression that it will give a large electricity bill.
Did I ever mention anything about power bills? The simple fact that the CPU - as you have admitted yourself spikes to 65w - it does it for a reason. It's not spiking up there because it feels like it, it needs that power to do the specific task that it did while pulling that power. From my experience just clicking on youtube comments makes it consistently spike to 65 watts. Im sure there are plenty other simple tasks that home users do that have the same effect. In general, AMD cpus pull a lot more power for the home user workloads due to IF and the IO die not really being a great idea for simple home computing. That's why they don't use those on laptops. I can't make it any clearer than that.
 

bit_user

Polypheme
Ambassador
Are you asking me why TPU puts useless data on their graph? I don't know. Ask them.
The answer is obvious: they understand that efficiency is efficiency, and therefore the scores are comparable. While someone in the market for an i5-13400F might not even consider an i9-13900K or vice versa, it does show the efficiency tradeoff between the two. This is potentially useful information for someone, if they're not even sure what class of machine they should buy.

I've explained I think very thoroughly why non ISO efficiency tests are useless, but I'll do it once more.
Your wording is sloppy. You clearly mean to say "non iso-performance efficiency tests".

In my experience, sloppy wording is often accompanied by sloppy thinking. If someone can't take the time to say exactly what they mean, perhaps it's because they're not even sure what it is they're trying to say, or maybe they're afraid of being pinned down on shaky facts.

CPU A scores 10k at 30 watts and CPU B scores 15k at 2000 watts. CPU A looks obviously more efficient, but if you are happy with buying a CPU that can only score 10k, then you need to drop CPU B to that same score and then measure its efficiency. If CPU B managed to score 10k at 22 watts, why in the world would you buy A?
Assuming you're talking about average power, then A has an efficiency of 333 pts/W, while B has an efficiency of 7.5 pts/W and sounds like a mainframe CPU from about a decade ago. The mistake is even putting them in the same category, in the first place. Sure, you can perform the same test on them and even put them in the same table, but it's like comparing an old Raspberry Pi with a Raptor Lake i9. Such a comparison is of academic interest, but not practical because nobody would consider using a Raptor Lake i9 in the same exact application as a Raspberry Pi. They're different sorts of machines, made to suit different purposes.

This comes back to what I said about the 5950x. Yes, in charts it looks more efficient than the 7950x,
It doesn't just "look" more efficient. It is more efficient. If you're not in a hurry and you want to save on energy costs, you could use one for doing some overnight rendering jobs. That's what the score means. That's what efficiency means!

if I'm happy with the 25k score it's getting, then I might as well drop the 7950x to 25k and have it be twice as efficient. So how useful is the stock efficiency test at the end of the day?
It tells you how efficient the CPU is at stock settings, so that you can either buy something else or at least know that you need to spend some time tweaking around with undervolting and power or frequency-limiting to get the efficiency up.

PL2 is irrelevant because MT workloads run for a long time.
The workloads do, but not necessarily the benchmarks. That's why, when looking at benchmarks, you can't dismiss PL2 without some understanding of the methodology used to run the benchmark.

Because the IF scales much better at higher core counts than whatever Intel is using for SR.
You don't actually know how much of the power is spent on interconnect vs. cores. Without knowing that, you can't jump to such conclusions. However, you can be pretty sure the bulk of the power is expended by the cores and not the interconnect.

It's a completely irrelevant comparison because the SR parts aren't monolithic with a ring bus.
Again, without knowing how much overhead the interconnect is adding, you can't jump to such conclusions. Emerald Rapids backtracks on chiplets most of the way, but it's still not enough.

The whole reason Intel is smashing AMD in ST efficiency is that it's monolithic with a ring bus.
What data do you have to support that?

Because you wanted out-of-the-box, stock comparisons. I didn't, but you insisted. I kept telling you that comparing efficiency at different power limits is silly, but you kept on going.
You're leaving out a key part, which I've stated repeatedly, which is to compare the same class of products! A 35 W CPU is not in the same class as a 170 W CPU. They're not direct competitors.

Omitting a point I've made so many different times, in so many different ways, strongly suggests that you're arguing in bad faith.

this is where the stock comparisons take you: to Intel being vastly more efficient at every possible metric.
Or try comparing a stock R9 7900 to stock i9-13900K and now AMD is more efficient in every metric!
efficiency-singlethread.png


efficiency-multithread.png


efficiency-gaming.png
Do you now see why it's silly to compare products from different market segments?

if someone for whatever reason really cares about STOCK behavior, Intel is also booty blasting in MT efficiency.
Again, that's only true if you do the equivalent of comparing fuel mileage of a compact VW car with a Ford pickup truck and then try to say a "Volkswagen is more fuel-efficient than a Ford." It doesn't follow from the data, because it ignores that they're fundamentally different classes of vehicles that were never expected to have comparable efficiency. Therefore, it has no predictive power because one can't use it to accurately infer the efficiency relationship between any other Volkswagen and Ford pairing.

It's statements like these where you give the game away. It's clear that you're making disingenuous arguments, here.

You don't even need the T lineup, honestly; hardwareand.co has tested the non-K versions. In H.264 the 13700 non-K is the 2nd most efficient CPU measured from the 12 V rail and the most efficient when measuring at the wall. We can safely assume that the T CPUs would take the top 10-15 spots in efficiency charts.
Of course, that's workload-dependent. H.264 encoding is fairly specific and not a broad-based metric. I'm assuming you cherry-picked that result, since you didn't include a link to the review.
 
Last edited:

TheHerald

Upstanding
Feb 15, 2024
263
64
260
The answer is obvious: they understand that efficiency is efficiency, and therefore the scores are comparable. While someone in the market for an i5-13400F might not even consider an i9-13900K or vice versa, it does show the efficiency tradeoff between the two. This is potentially useful information for someone, if they're not even sure what class of machine they should buy.
The scores are useful when your priority is stock operation. If your priority is efficiency, the stock results are useless, since you should know, or can Google, how to change power limits. Say the 14900T costs $550 right now and the 14900k costs $499, and I'm in the market for an efficient CPU; would you suggest that I buy the 14900T? That would be silly. I should just buy the 14900k, limit it to whatever wattage I want, and save $50.
Your wording is sloppy. You clearly mean to say "non iso-performance efficiency tests".

In my experience, sloppy wording is often accompanied by sloppy thinking. If someone can't take the time to say exactly what they mean, perhaps it's because they're not even sure what it is they're trying to say, or maybe they're afraid of being pinned down on shaky facts.
Not at all; I said exactly what I meant. ISO tests can be either ISO performance or ISO power; both are fine. I personally prefer ISO power, but I won't complain.

Assuming you're talking about average power, then A has an efficiency of 333 pts/W, while B has an efficiency of 7.5 pts/W and sounds like a mainframe CPU from about a decade ago. The mistake is even putting them in the same category, in the first place. Sure, you can perform the same test on them and even put them in the same table, but it's like comparing an old Raspberry Pi with a Raptor Lake i9. Such a comparison is of academic interest, but not practical because nobody would consider using a Raptor Lake i9 in the same exact application as a Raspberry Pi. They're different sorts of machines, made to suit different purposes.
I gave you an exaggerated example, but you missed the point completely. The point is: it doesn't matter if it's inefficient while being faster. Let me give you a more practical example so you don't miss the point: CPU A scores 18k at 120 W, CPU B scores 35k at 300 watts. Both cost $400, and I care about MT efficiency. Buying A, contrary to what it looks like, would be a mistake. And it would be a mistake because 1) CPU B can score 18k at 70 watts and 2) in order for CPU A to score 35k and match B, it needs 900 watts and LN2 canisters.

There isn't any MT scenario in which buying A would be the better choice for efficiency.
The workloads do, but not necessarily the benchmarks. That's why, when looking at benchmarks, you can't dismiss PL2 without some understanding of the methodology used to run the benchmark.
If you're implying that the 13900 scored 12700 because it boosted to 106 watts, no, that's not the case. I have tested it myself and Anandtech's results are legit; at stock operation, all of the i9s (12900, 13900, 14900) score around 13k at 35 W.

You don't actually know how much of the power is spent on interconnect vs. cores. Without knowing that, you can't jump to such conclusions. However, you can be pretty sure the bulk of the power is expended by the cores and not the interconnect.


Again, without knowing how much overhead the interconnect is adding, you can't jump to such conclusions. Emerald Rapids backtracks on chiplets most of the way, but it's still not enough.


What data do you have to support that?
Are you asking me what data I have to support that being monolithic is better than chiplets for efficiency, and more specifically for ST efficiency? It stands to pure reason that multiple dies with an interconnect consume more power, but we do actually have data: namely, AMD's mobile chips and APUs compared to their desktop counterparts.

Also, I've used them. Both. My 6900hs does ST workloads at lower power than what a 5800x idles at. Not even kidding: the 6900hs boosts to 22 watts; the 5800x idles at 25...
You're leaving out a key part, which I've stated repeatedly, which is to compare the same class of products! A 35 W CPU is not in the same class as a 170 W CPU. They're not direct competitors.

Omitting a point I've made so many different times, in so many different ways, strongly suggests that you're arguing in bad faith.
The CPUs I compared are direct competitors, in both naming scheme and price. You are now kind of implying what I said from the beginning: that tests at different power limits are beyond silly. Welcome to my world; that's what I've been saying all this time, when you were comparing the 7900x to the 13700k.
Or try comparing a stock R9 7900 to stock i9-13900K and now AMD is more efficient in every metric!
efficiency-singlethread.png


efficiency-multithread.png


efficiency-gaming.png
Do you now see why it's silly to compare products from different market segments?
No, what I find silly is comparing Intel's worst product in efficiency with AMD's best. If you cared about out-of-the-box efficiency, sure, you would be considering a 7900, but you wouldn't be considering a 13900k; you'd consider a 13900 or a 13900t instead. In which case, again, Intel is literally a couple of gens ahead in stock efficiency.
Again, that's only true if you do the equivalent of comparing fuel mileage of a compact VW car with a Ford pickup truck and then try to say a "Volkswagen is more fuel-efficient than a Ford." It doesn't follow from the data, because it ignores that they're fundamentally different classes of vehicles that were never expected to have comparable efficiency. Therefore, it has no predictive power because one can't use it to accurately infer the efficiency relationship between any other Volkswagen and Ford pairing.

It's statements like these where you give the game away. It's clear that you're making disingenuous arguments, here.
In what way is it disingenuous to say that, if I were looking for MT-efficient out-of-the-box CPUs, my only viable choice would be Intel (either the non-K or the T lineup)?

Just to clarify, I AGREE with you - it's silly to compare the two. But it's important that you tell me exactly WHY it's silly, because then you'll realize what I've been saying all along: tests should be done at ISO wattage / performance. Anything else is fundamentally flawed, unless the user doesn't have any idea how to change those limits.

Do I have to remind you that most reviewers are doing exactly what you call silly? They compare the efficiency of a fully unlocked, power-unlimited 14900k (or even KS) that can go up to 350-400 watts vs a thermally and power-limited 7950x that draws almost half of that. It's exactly your example of a compact car vs a Ford pickup truck.

Of course, that's workload-dependent. H.264 encoding is fairly specific and not a broad-based metric. I'm assuming you cherry-picked that result, since you didn't include a link to the review.
You'd assume wrong. I didn't cherry-pick the result; that's the only one they tested efficiency with. The point wasn't to convince you that the 13700 is the most efficient CPU; it isn't. The point was to show you that dropping power draw obviously increases efficiency, and that's why it's silly to compare CPUs with different power limits: it makes CPUs that aren't particularly efficient (like the 13700) top the charts.

This is the review

https://hardwareand.co/dossiers/cpu/test-amd-ryzen-5-7500f

Edit 1: The H.264 is kind of the worst scenario for the 13700. It has much bigger leads in other MT tasks like AV1, Cinema4D, Arnold, 7zip, and Blender, where it would be topping the chart in both at-the-wall and 12 V efficiency. So it wasn't cherry-picking at all.



These are the main points, and I don't know if you still disagree with me or not:

1) Intel has a huge advantage in ST efficiency, probably due to being monolithic, but frankly the reason doesn't matter.

2) If you are purely interested in out-of-the-box, stock efficiency, Intel has a huge lead here too. At every price segment they have a vastly more efficient CPU with the T and non-K lineup. I disagree about the importance of stock efficiency, but you seem to be adamant about it.
 
Last edited:

TheHerald

Upstanding
Feb 15, 2024
263
64
260
It tells you how efficient the CPU is at stock settings, so that you can either buy something else or at least know that you need to spend some time tweaking around with undervolting and power or frequency-limiting to get the efficiency up.
Knowing how efficient the CPU is at stock settings is useless information, though, unless I don't know how, or don't think it's possible, to change the power limits. And sure, in my example (5950x vs 7950x), the two CPUs are in completely different price segments, but there are CPUs in very similar price segments where the analogy still applies, like the 5950x vs the 7900x. Platform costs aside, if someone cared about MT efficiency they should buy the 7900x and power-limit it to 5950x levels. There is no freaking reason to buy the 5950x just because it is more efficient at stock.
 

bit_user

Polypheme
Ambassador
Say the 14900T costs $550 right now and the 14900k costs $499, and I'm in the market for an efficient CPU; would you suggest that I buy the 14900T? That would be silly. I should just buy the 14900k, limit it to whatever wattage I want, and save $50.
That presumes the 14900T isn't binned to support lower voltages. What if it is?

The point is: it doesn't matter if it's inefficient while being faster.
To a lot of people, it does. Just because someone would like more performance doesn't mean they don't care at all about efficiency.

CPU A scores 18k at 120 W, CPU B scores 35k at 300 watts. Both cost $400, and I care about MT efficiency. Buying A, contrary to what it looks like, would be a mistake. And it would be a mistake because 1) CPU B can score 18k at 70 watts and 2) in order for CPU A to score 35k and match B, it needs 900 watts and LN2 canisters.
Except:
  1. What if CPU B can't score 18k at 70 W? That data was never given. If you look at something like the Sapphire Rapids Xeon W CPUs, it's not at all clear they will scale down like that! In fact, the same can probably be said for Threadrippers.
  2. What if the user doesn't need 35k worth of performance?

If you're implying that the 13900 scored 12700 because it boosted to 106 watts, no, that's not the case.
No, they said it only exceeded TDP by 14%. So, that means it peaked at 40 W.

However, I wasn't talking about their test, but rather the need for one to actually test an i9-13900T, rather than make simple assumptions about how it performs.

I have tested it myself and Anandtech's results are legit; at stock operation, all of the i9s (12900, 13900, 14900) score around 13k at 35 W.
That's interesting. Why would you do that and how do you have access to all that hardware?

Are you asking me what data I have to support that being monolithic is better than chiplets for efficiency,
No, I'm asking what data you have to support the specific claim that the Zen 4 EPYCs are much more efficient simply due to the efficiency of their respective interconnects and not the relative efficiency of the cores themselves. To make this assertion, you must know how much power is consumed by those interconnects. So, how much is it and where did you find such information?

In which case, again, Intel is literally a couple of gens ahead in stock efficiency.
This again? Zen 4 is plenty efficient, as you seem to acknowledge (via their APUs). However, it's only apparent in their desktop products when it really counts, which is on MT workloads. If Zen 4 had no window where it could run more efficiently than Raptor Cove + Gracemont, then there's no way the Ryzen 7950X would be competing (and winning) on MT efficiency, yet it does!
cinebench-multi.png

efficiency-multithread.png

Just how you like it: a stock 7950X has nearly the same CB23 MT performance as the i9-14900K (just 1.8% below) while being 13.9% more efficient!

if I were looking for MT-efficient out-of-the-box CPUs, my only viable choice would be Intel (either the non-K or the T lineup)?
And not Ryzen 9 7900?

efficiency-multithread.png


Do I have to remind you that most reviewers are doing exactly what you call silly? They compare the efficiency of a fully unlocked, power-unlimited 14900k (or even KS) that can go up to 350-400 watts vs a thermally and power-limited 7950x that draws almost half of that. It's exactly your example of a compact car vs a Ford pickup truck.
Of the reviews I've seen, I'd have to say that TechPowerUp's approach is the most sane: they include a test at Intel's recommended settings (and I'd add: with the level of cooling that most people would actually use, not a high-end 480 mm radiator).

2) If you are purely interested in out-of-the-box, stock efficiency, Intel has a huge lead here too. At every price segment they have a vastly more efficient CPU with the T and non-K lineup.
Needs data. Show efficiency comparisons (package power, not system power) between Intel non-K and corresponding AMD non-X products.
 
  • Like
Reactions: rfdevil

TheHerald

Upstanding
Feb 15, 2024
263
64
260
That presumes the 14900T isn't binned to support lower voltages. What if it is?
Unless you have any actual data, "what ifs" are invalid. What if the 14900k is binned to support lower voltages?
To a lot of people, it does. Just because someone would like more performance doesn't mean they don't care at all about efficiency.
Again - that's NOT the point.
Except:
  1. What if CPU B can't score 18k at 70 W? That data was never given. If you look at something like the Sapphire Rapids Xeon W CPUs, it's not at all clear they will scale down like that! In fact, the same can probably be said for Threadrippers.
  2. What if the user doesn't need 35k worth of performance?
1) EXACTLY. And unless that data IS given, it's completely pointless to compare efficiency at stock settings and buy off of that.

2) Then he doesn't need to run CPU B at 300 watts.

Do you know that, for example, the Phanteks T30 are the best fans currently in existence? Both in terms of pure performance and in terms of "efficiency" (performance to noise). You know how we know that? Because reviewers tested them NORMALIZED (same dBA). If they just tested them at stock, out of the box, they would rank as one of the worst, if not the worst, fans in existence in terms of performance / noise. Now apply the same logic to CPUs: when you only compare them out of the box, you'll end up thinking the 7700x is more efficient than the 13700k when in fact it isn't, not even close.

No, they said it only exceeded TDP by 14%. So, that means it peaked at 40 W.

However, I wasn't talking about their test, but rather the need for one to actually test an i9-13900T, rather than make simple assumptions about how it performs.


That's interesting. Why would you do that and how do you have access to all that hardware?
I don't expect the 13900t to have any meaningful difference from the K version at the same power. Even if the 13900t is 10% worse than the K (highly unlikely), it will still easily TOP the efficiency chart.

I don't have access to hardware; I just buy the stuff. Testing and benching is a hobby of mine. I have a 3090, a 4090, 4 AMD laptops, around 10 or more CPUs, 10 GPUs, etc.

No, I'm asking what data you have to support the specific claim that the Zen 4 EPYCs are much more efficient simply due to the efficiency of their respective interconnects and not the relative efficiency of the cores themselves. To make this assertion, you must know how much power is consumed by those interconnects. So, how much is it and where did you find such information?
It's been known for many years that Intel has trouble scaling core counts due to interconnect issues. And you can tell from reviews: a 16 P-core Xeon should be smashing an 8+8 in efficiency, but it doesn't. I've seen a Phoronix review where the SR Xeons draw more than 100 W just sitting idle, which is an obvious interconnect thing, since the desktop parts sit at 3 W or lower.

This again? Zen 4 is plenty efficient, as you seem to acknowledge (via their APUs). However, it's only apparent in their desktop products when it really counts, which is on MT workloads. If Zen 4 had no window where it could run more efficiently than Raptor Cove + Gracemont, then there's no way the Ryzen 7950X would be competing (and winning) on MT efficiency, yet it does!
Haven't I already agreed that the 7950x is indeed more efficient at ISO power than the 14900k? It's not a huge difference (around 10%), but it does indeed win. That is not the case for the rest of the segment, though, and it's definitely not the case at STOCK, since Intel's T lineup tops the first 10-20 spots on an efficiency chart.
Just how you like it: a stock 7950X has nearly the same CB23 MT performance as the i9-14900K (just 1.8% below) while being 13.9% more efficient!
It's still a flawed test, because take a look at the 7950x 3d: the 7950x needs 60% more power for 4% more performance. So it stands to reason that in order for the 7950x to score 2% higher (to match the 14900k), it would require a lot more than just 2% more power. But regardless, I already agreed that at most power limits that make sense, the 7950x is more efficient than the 14900k at ISO power.
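One way to put a rough number on that intuition is to fit a power law, perf ~ power^alpha, to the quoted +4% performance / +60% power pair and extrapolate. This is a crude curve-fit assumption (the exponent is fit at one point on the curve, not measured across it):

```python
import math

# Fit perf ~ power**alpha to the quoted pair: +4% perf for +60% power.
alpha = math.log(1.04) / math.log(1.60)   # ~0.083
# Power factor needed for +2% performance under the same power law.
power_factor = 1.02 ** (1 / alpha)
print(f"alpha ~ {alpha:.3f}")
print(f"+2% performance -> ~{power_factor - 1:.0%} more power")  # ~27%
```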

And not Ryzen 9 7900?

For stock, out-of-the-box efficiency, no: the 7900 loses to Intel's T lineup. You've seen what Intel can do at 35 W.
Needs data. Show efficiency comparisons (package power, not system power) between Intel non-K and corresponding AMD non-X products.
I've already linked you a review of a non-K 13700 measured from the 12 V rail. It stands to reason that the T CPUs are even more efficient than the non-K.

Are you actually trying to argue that T CPUs are not by far the most efficient CPUs in existence? Come on now, that's just arguing in bad faith...