News Intel issues statement about CPU crashes, blames motherboard makers — BIOSes disable thermal and power protection, causing issues


bit_user

Unless you have any actual data, "what ifs" are invalid. What if the 14900K is binned to support lower voltages?
My conjecture isn't completely arbitrary. I once assumed the K-series CPUs also required the least voltage of the lineup, but then I read a rather convincing explanation of why they aren't as efficient at lower power limits, and I think it had something to do with being binned to withstand higher voltages rather than requiring lower voltages... or something like that.

So, I'd say that, absent any data, my conjecture is at least as valid as yours. In fact, if you back up a step, my point was just that the i9-14900T's efficiency should be tested, and my question was in support of that conclusion. Given your appeal to data, I'll take that as an agreement that testing is necessary to arrive at any firm conclusions.

Do you know that, for example, the Phanteks T30 are the best fans currently in existence? Both in terms of pure performance and in terms of "efficiency" (performance to noise). You know how we know that? Because reviewers tested them NORMALIZED (same dBA).
Again, you're confusing a point measurement with a relation that's actually a function. Here's Anandtech, doing it right:

[Chart: noise-normalized fan performance comparison]

Source: https://www.anandtech.com/show/21198/the-alphacool-apex-stealth-metal-120-mm-fans-capsule-review

See how the slope of the Noctua fan differs from that of the AlphaCool fan? They're fairly similar, but different enough that you can see it - and there's even a cross-over! I have a Noctua low-profile fan that starts buzzing at a certain speed - I'm sure the noise-efficiency curve of that fan looks substantially different!

It's the same thing for CPUs. If you want to characterize the efficiency of a microarchitecture, then you need to measure it across the frequency range, like Chips & Cheese did, when comparing Golden Cove and Gracemont.

See how their efficiency varies over the frequency range? At its worst, Golden Cove is much worse than Gracemont, but if someone did like you said and measured them both at 3.8 GHz, in an attempt to achieve some sort of superficial parity, they would conclude the opposite.
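To make that concrete, here's a toy model of why efficiency has to be treated as a curve. All the constants are made up purely for illustration (not measured silicon data): power grows roughly as C·V²·f, and voltage rises with frequency, so perf/W depends heavily on where you sample.

```python
# Toy model: efficiency (perf/W) as a function of clock speed.
# All constants are made up for illustration - not measured silicon data.

def power_w(freq_ghz, c_eff, v0, k):
    """Dynamic power ~ C * V^2 * f, with voltage rising with frequency."""
    v = v0 + k * freq_ghz              # simplistic linear V/f curve
    return c_eff * v * v * freq_ghz

def efficiency(freq_ghz, ipc, c_eff, v0, k):
    perf = ipc * freq_ghz              # performance ~ IPC * clock
    return perf / power_w(freq_ghz, c_eff, v0, k)

# "Big" core: higher IPC, but more capacitance. "Small" core: cheaper per
# clock, but its voltage ramps faster when pushed hard.
for f in (1.0, 2.0, 3.0, 4.0):
    big   = efficiency(f, ipc=10, c_eff=2.00, v0=0.7, k=0.10)
    small = efficiency(f, ipc=6,  c_eff=0.55, v0=0.7, k=0.25)
    print(f"{f:.1f} GHz: big = {big:.2f}, small = {small:.2f} perf/W")
```

With these made-up numbers, the small core wins easily at low clocks but the curves cross a bit above 3 GHz, so a single-frequency measurement tells you a different story depending on where you sample.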

If they just tested them at stock out of the box, they would rank as one of the worst if not the worst fans in existence in terms of performance / noise. Now apply the same logic to CPUs.
Bad analogy. PWM fans are designed to be used across a wide range of speeds, so "out-of-the-box" doesn't characterize a fan the way it does a CPU. CPUs and motherboards are designed to be used out-of-the-box, which is how the overwhelming majority of people use them, whereas a PWM fan isn't expected to be run at 100% duty cycle all the time.

Just to reiterate the underlined point: if most of a publication's readers just use stuff with the out-of-the-box defaults, then it absolutely makes sense to test them that way. Sure, go ahead and include one or two more data points with tuned settings, like TechPowerUp does, but not testing hardware the way most buyers use it is doing them a disservice.

Testing and benching is a hobby of mine.
That's cool. I'd definitely do a bit more reading on the subject, if it were me.

It's been known for many years that Intel has trouble with scaling core counts due to interconnect issues.
Quantify that, please. Also, Intel has been working on their mesh interconnect since Knights Landing (almost 10 years ago!), which seems to me like plenty of time to get it tuned up or switch to something different.

a 16 P core Xeon should be smashing an 8+8 in efficiency,
Why would you expect so? You can see from the above curve that Gracemont is typically more efficient than Golden Cove. As long as you scale up their frequencies intelligently, having E-cores in the mix should always yield higher efficiency.

I've seen a phoronix review where the SR xeons draw more than 100w just being idle - which is an obvious interconnect thing, since the desktop parts sit at 3w or lower.
If you're talking about this article:

It shows a 1P 8490H with a min power of 78.41 W, while the 8380 has a min power of 31.32 W. They both use a mesh interconnect, but Sapphire Rapids uses DDR5 and PCIe 5.0, as well as having more PCIe lanes. How do you know that doesn't have anything to do with it? Maybe PCIe ASPM is disabled or broken? The thing about the interconnect is that its power consumption should scale with use, rather than just burning tons of power at idle.

What you noticed does indeed look like an anomaly, but instead of jumping to conclusions, it usually pays off to dig deeper. I'm sure others have noticed as well, so maybe you can find out more about its idle power problems with a little bit of digging! Perhaps it's even been fixed or mitigated, since that review was performed.

Haven't I already agreed that the 7950x is indeed more efficient at ISO power than the 14900k? It's not a huge difference (around 10%) but it does indeed win.
Okay, so in order for that to be true, the efficiency curves of the cores must cross! Because, you're arguing that its I/O die puts it at a disadvantage, which you claim is demonstrated in the ST efficiency score, but then it's able to overcome that, when hit with a MT workload. Mathematically, the only way that can happen is if the per-core efficiency of Zen 4 is better than the i9-14900K's cores, under those conditions!

It's still a flawed test - because take a look at the 7950X3D. The 7950X needs 60% more power for 4% more performance. So it stands to reason that in order for the 7950X to score 2% higher
You can't just take two CPUs that differ in ways beyond power & frequency limits and use data from one to extrapolate anything about the other. Even if they did just differ in terms of power & frequency limits, because the perf/W curve is nonlinear, you still couldn't accurately extrapolate from two data points! I'm quite frankly amazed at how, after all the times we've gone around and around, you still don't see that!
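A quick sketch of why two points aren't enough. The cube-root curve and the wattages below are hypothetical stand-ins, not real CPU data; the point is that on a concave perf-vs-power curve, the straight line through two measurements systematically overestimates what more power buys you.

```python
# Sketch: linear extrapolation from two (power, performance) points on a
# nonlinear perf/W curve. The cube-root curve is a stand-in, not real data.

def perf_at(power_w):
    return 10_000 * power_w ** (1 / 3)   # assume perf ~ power^(1/3)

p1, p2 = 125, 230                        # two hypothetical measured limits
perf1, perf2 = perf_at(p1), perf_at(p2)

slope = (perf2 - perf1) / (p2 - p1)      # chord slope between the points
linear_guess = perf2 + slope * (330 - p2)
actual = perf_at(330)

print(f"linear guess @ 330 W: {linear_guess:.0f}")
print(f"actual       @ 330 W: {actual:.0f}")   # the guess overshoots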

Are you actually trying to argue that T CPUs are not by far the most efficient CPUs in existence?
Well, it depends on whether you mean at the microarchitecture level, or if you're talking about stock settings and what's available to buy off-the-shelf. I would agree that a 35 W Raptor Lake is probably more efficient than a 65 W AMD processor that's priced and performs similarly, but I'd rather see the data (including details about the testing methodology) than speculate.
 

TheHerald

My conjecture isn't completely arbitrary. I once assumed the K-series CPUs also required the least voltage of the lineup, but then I read a rather convincing explanation of why they aren't as efficient at lower power limits, and I think it had something to do with being binned to withstand higher voltages rather than requiring lower voltages... or something like that.

So, I'd say that, absent any data, my conjecture is at least as valid as yours. In fact, if you back up a step, my point was just that the i9-14900T's efficiency should be tested, and my question was in support of that conclusion. Given your appeal to data, I'll take that as an agreement that testing is necessary to arrive at any firm conclusions.
I've read that kind of nonsense many times (about high-leakage chips becoming the K SKUs and not being able to drop voltage). It's pure nonsense. Most of the time the K parts have better binning and can indeed underclock / undervolt better. There is some limited data supporting that, usually tests between K and non-K CPUs: at similar power the K is always faster and can hold higher clocks.
See how their efficiency varies over the frequency range? At its worst, Golden Cove is much worse than Gracemont, but if someone did like you said and measured them both at 3.8 GHz, in an attempt to achieve some sort of superficial parity, they would conclude the opposite.
The problem is the E-cores are more efficient at clocks you wouldn't use on a desktop CPU, i.e. below 3 GHz. Also, there is a huge delta between integer and vector workloads.
Bad analogy. PWM fans are designed to be used across a wide range of speeds, so "out-of-the-box" doesn't characterize it in an analogous way as it does CPUs. CPUs and motherboards are designed to be used out-of-the-box, which is how the overwhelming majority of people use them, whereas a PWM fan isn't expected to be run at 100% duty cycle all the time.
Just like CPUs, most people use fans at their motherboard defaults. Are you telling me people know how to tune fan curves in the BIOS but don't know how to change their power limits? And no, I never said 100%, but most mobo defaults have the fan curve pretty high. The T30 would be one of the worst fans, if not the worst, if used stock out of the box.

Just to reiterate the underlined point: if most of a publication's readers just use stuff with the out-of-the-box defaults, then it absolutely makes sense to test them that way. Sure, go ahead and include one or two more data points with tuned settings, like TechPowerUp does, but not testing hardware the way most buyers use it is doing them a disservice.
I agree, but I don't think that is the case. I'm fairly confident that people who actually participate in tech forums and read these kinds of reviews surely know that changing their power limits is a thing, just like XMP is a thing.

But at the end of the day, what most people do is kinda irrelevant. If I told you I bought X CPU because it's the most efficient CPU at this power limit, you can't then post a graph at stock showing it isn't. For example, my 14900K hits a wall at 90 W in the heaviest game in terms of power draw, at 1080p with a 4090. That's the absolute max I see in any game; most of the time it's actually sitting below 50 watts. You can't then tell me "why did you get the 14900K, it draws 400 watts in games". Which is what's going on in forums a LOT.

Quantify that, please. Also, Intel has been working on their mesh interconnect since Knights Landing (almost 10 years ago!), which seems to me like plenty of time to get it tuned up or switch to something different.
I'm not following server news closely because I really don't care about it, but I've read lots of articles over the years saying that Intel has some serious trouble with scaling that AMD doesn't have with their IF approach.

Why would you expect so? You can see from the above curve that Gracemont is typically more efficient than Golden Cove. As long as you scale up their frequencies intelligently, having E-cores in the mix should always yield higher efficiency.
The graph compares efficiency in relation to clock speeds, which is frankly a little bit useless. Clock speeds are really useless; it should be comparing efficiency in relation to performance. A P-core can offer the same performance as an E-core at much lower frequency. 8 P-cores would always win at ISO power against 8 E-cores; it won't even be close, even at integer workloads. Especially considering how highly clocked the E-cores are on ADL and RPL, they are way outside their efficiency range anyway. 8 P-cores at 65 W score 16-17k in CB R23; 8 E-cores score half of that. They are really not comparable.

If you're talking about this article:

It shows a 1P 8490H with a min power of 78.41 W, while the 8380 has a min power of 31.32 W. They both use a mesh interconnect, but Sapphire Rapids uses DDR5 and PCIe 5.0, as well as having more PCIe lanes. How do you know that doesn't have anything to do with it? Maybe PCIe ASPM is disabled or broken? The thing about Interconnect is that its power consumption should scale with use and not just burn tons of power at idle.

What you noticed does indeed look like an anomaly, but instead of jumping to conclusions, it usually pays off to dig deeper. I'm sure others have noticed as well, so maybe you can find out more about its idle power problems with a little bit of digging! Perhaps it's even been fixed or mitigated, since that review was performed.
As I've said, I'm not very interested in servers, so I just gloss over these reviews. It's impressive what AMD has managed in the server space, but it's something I sincerely have no interest in.

Okay, so in order for that to be true, the efficiency curves of the cores must cross! Because, you're arguing that its I/O die puts it at a disadvantage, which you claim is demonstrated in the ST efficiency score, but then it's able to overcome that, when hit with a MT workload. Mathematically, the only way that can happen is if the per-core efficiency of Zen 4 is better than the i9-14900K's cores, under those conditions

Of course. Zen 4 cores >>>> E-cores, by a huge margin.

From my testing, a 12900K with the 8 E-cores off basically matches a 7700X in efficiency. The 7700X has a lead at lower power limits, the 12900K has a lead at higher power limits. I fully expect the RPL P-cores to win at every power limit vs Zen 4. Of course that comparison involves the I/O die, but since that's a thing, that's how we have to compare. Purely comparing the cores, I think they are both very close, with the disadvantage that P-cores take up a lot more space.
You can't just take two CPUs that differ in ways beyond power & frequency limits and use data from one to extrapolate anything about the other. Even if they did just differ in terms of power & frequency limits, because the perf/W curve is nonlinear, you still couldn't accurately extrapolate from two data points! I'm quite frankly amazed at how, after all the times we've gone around and around, you still don't see that!

True, I can't precisely extrapolate, but it stands to reason that the 7950X is pushed to the edge already; pushing it even further to hit 40+k would need extraordinary amounts of extra power. We don't even need to compare it to the 3D; we have power charts of the 7950X. How much performance do the last 100 watts give it? Going from 150 to 230 watts, it gains 5.5% performance. It needs another 5% performance to go from 38k to 40k, and that will most likely take more than 80 watts.

Well, it depends on whether you mean at the microarchitecture level, or if you're talking about stock settings and what's available to buy off-the-shelf. I would agree that a 35 W Raptor Lake is probably more efficient than a 65 W AMD processor that's priced and performs similarly, but I'd rather see the data (including details about the testing methodology) than speculate.
See, those kinds of cherry-picks are what bothers me; the internet is full of them. When people say "the 14900K is inefficient blablabla" they are clearly talking about stock, out-of-the-box operation, and I haven't seen ANYBODY asking them if they're talking about the microarchitecture level. But when it comes to Intel smashing it with the T lineup, of course we have to make the distinction to protect our beloved AMD, protector of the poor. Come on man...

Look at the 13900K: the CPU literally smashed it out of the park in terms of efficiency, yet everyone was calling it a space heater. The 13900K was 50 to 80% more efficient than the 12900K, and that increase happened within a single year on the same process node. Some actual numbers: 12900K @ 125 W = 24k, 13900K @ 125 W = 32k. At 65 W the 13900K was matching the 12900K @ 100 W. That's an absolutely insane efficiency increase, yet no one mentioned it; they were all focused on the "remove all power limits, amp limits, temp limits and let's hit the blender button".

At the end of the day, as I've said multiple times, the reality is that Intel is more efficient in ST and MT workloads in the majority of segments, whether you want to compare at ISO power or out of the box. Is that because they have a better microarchitecture? Yes and no. Yes in the sense that they incorporated E-cores, allowing them to offer a lot more cores per segment. Whether they have a better one in the sense of P-cores vs Zen 4 cores, I'm not sure.
 

bit_user

The problem is the E-cores are more efficient at clocks you wouldn't use on a desktop CPU, i.e. below 3 GHz.
First, why not? And second, why would you only use the E-cores in their most efficient range? All you need to do is keep them in a range that's at least as efficient as how hard you're driving the P-cores. For a scalable workload, that's a definite win.

It's basically a variation on the theme of being more efficient to use more cores than fewer. It still applies, even if some of the cores are less powerful. The shape of those graphs should tell you why.
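The shape argument can be sketched numerically. Under a fixed power budget, adding cores lets each one run at a lower clock and voltage, so total throughput on a scalable workload goes up. This is a toy V/f model with made-up constants, not a simulation of any real chip:

```python
# Sketch: fixed power budget, scalable workload. More cores at lower clocks
# beat fewer cores at high clocks, because power grows superlinearly with
# frequency. Constants are illustrative, not real silicon data.

def core_power(f_ghz):
    v = 0.7 + 0.12 * f_ghz          # simplistic V/f curve
    return 3.0 * v * v * f_ghz      # P ~ C * V^2 * f

def throughput(n_cores, budget_w):
    # find the highest per-core clock that still fits the budget
    f = 0.1
    while n_cores * core_power(f + 0.01) <= budget_w:
        f += 0.01
    return n_cores * f              # perf ~ cores * clock

for n in (4, 8, 16):
    print(f"{n:2d} cores @ 100 W -> relative perf {throughput(n, 100):.1f}")
```

Doubling the core count never doubles throughput here (each core runs slower), but total perf per watt keeps improving, which is the "more cores, lower clocks" win in miniature.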

Also there is a huge delta between integer and vector workloads
No, not huge. When Anandtech tested the i9-12900K, they found a single E-core was 64.5% as fast at integer workloads as a P-core and 54.1% as fast at floating point.

I'm fairly confident that people who actually participate in tech forums and read these kinds of reviews surely know that changing their power limits is a thing, just like XMP is a thing.
I think the typical reader of CPU reviews is likely to look at some of the gaming benchmarks and the conclusion. They don't read the whole article, they don't pore over all of the benchmarks, and more often than not they probably buy a prebuilt machine.

And yes, there are even people in these forums who don't know they have to enable XMP. I've seen it, myself.

But at the end of the day, what most people do is kinda irrelevant.
Uh, not to Intel or AMD. Or Toms. Or those people, for that matter.

The graph compares efficiency in relation to clock speeds, which is frankly a little bit useless. Clock speeds are really useless; it should be comparing efficiency in relation to performance.
If you bothered to read the article, you'd have seen they did that, too!

[Chart: efficiency vs. performance, Golden Cove and Gracemont]


That graph looks deceptive, though. If you don't think too hard, you might glance at it and fail to see the point of E-cores.

A P-core can offer the same performance as an E-core at much lower frequency. 8 P-cores would always win at ISO power against 8 E-cores,
But you don't have to allocate the same power to each. In fact, the right way to allocate power is on the basis of efficiency!

Again, your own reductive thinking is working against you.

it won't even be close, even at integer workloads. Especially considering how highly clocked the E-cores are on ADL and RPL, they are way outside their efficiency range anyway.
They don't have to be in their peak efficiency range. They just have to be more efficient than putting more power into the P-cores, which the graph in my previous post showed was guaranteed at 4.5 GHz - and the threshold shifts even lower when you aren't running the E-cores at full tilt.
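One way to picture "allocate power on the basis of efficiency": give each extra watt to whichever core type converts it into more performance. The perf(power) curves below are hypothetical stand-ins with made-up exponents; only the method is the point.

```python
# Sketch: greedy marginal allocation of a shared power budget between two
# core types. The exponents are made up; only the method is the point.

def p_core_perf(w):
    return 16.0 * w ** 0.5          # high peak perf, steep diminishing returns

def e_core_perf(w):
    return 11.0 * w ** 0.6          # lower peak, cheaper gains at low power

def allocate(budget_w, step=0.1):
    wp = we = 0.0
    while wp + we + step <= budget_w:
        gain_p = p_core_perf(wp + step) - p_core_perf(wp)
        gain_e = e_core_perf(we + step) - e_core_perf(we)
        if gain_p >= gain_e:
            wp += step              # this watt helps the P-cores more
        else:
            we += step
    return wp, we

wp, we = allocate(65.0)
print(f"P-cores: {wp:.1f} W, E-cores: {we:.1f} W")
```

With these curves the budget splits between the two types rather than going all to one side, which is the mathematical reason a P+E mix can beat either type alone at ISO power.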

See, those kinds of cherry-picks are what bothers me; the internet is full of them. When people say "the 14900K is inefficient blablabla" they are clearly talking about stock, out-of-the-box operation, and I haven't seen ANYBODY asking them if they're talking about the microarchitecture level. But when it comes to Intel smashing it with the T lineup, of course we have to make the distinction to protect our beloved AMD, protector of the poor. Come on man...
I have to make the distinction, because AMD doesn't offer a 35 W product. So, when you compare a 35 W CPU with a 65 W one, the 65 W part is at a natural disadvantage. It's one thing if a person is doing this comparison because they want to figure out which to buy. It's something different if someone says the Intel i9-14900T is more efficient, and then uses that one datapoint to make sweeping generalizations about Intel's efficiency.

The 13900K was 50 to 80% more efficient than the 12900K, and that increase happened within a single year on the same process node. Some actual numbers: 12900K @ 125 W = 24k, 13900K @ 125 W = 32k. At 65 W the 13900K was matching the 12900K @ 100 W.
A big part of that had to do with doubling the number of E-cores.

Intel is more efficient at ... MT workloads in the majority of segments whether you want to compare at ISO power or out of the box.
That's not what I'm seeing. Comparing CPUs of similar price and performance, Zen 4 tends to be more efficient at MT workloads.
 

TheHerald

First, why not? And second, why would you only use the E-cores in their most efficient range? All you need to do is keep them in a range that's at least as efficient as how hard you're driving the P-cores. For a scalable workload, that's a definite win.
Because with how Intel CPUs work right now (a single voltage rail), you have to clock them high, since they are going to follow the voltage curve of the P-cores regardless. Which is something I've mentioned as a problem as far back as Alder Lake.
I think the typical reader of CPU reviews is likely to look at some of the gaming benchmarks and the conclusion. They don't read the whole article, they don't pore over all of the benchmarks, and more often than not they probably buy a prebuilt machine.

And yes, there are even people in these forums who don't know they have to enable XMP. I've seen it, myself.
If that is the case, then reviewers should be testing the non-K and T CPUs, since these people don't need an unlocked CPU. Reviewers are doing them a disservice. But the need to show AMD as leading in efficiency doesn't allow that to happen.

Gnexus is a prime example of that. He supposedly wanted to show how inefficient Intel is at ISO power, but he only tested a single MT workload at ISO power - the 14700K was leading the entire chart in efficiency in that workload, but then he proceeded to downplay it by claiming that it loses too much performance. Like, who cares? It's still faster than the CPU you compared it to, while also being more efficient. What the actual heck??

That graph looks deceptive, though. If you don't think too hard, you might glance at it and fail to see the point of E-cores.
Every graph can be deceptive if you don't understand what you are looking at. The unlimited-power graphs are also deceptive, since people don't understand (or pretend not to understand) that performance doesn't scale linearly with power; the fact that the 14900K uses twice the power for 10% more performance doesn't mean it's inefficient.

But you don't have to allocate the same power to each. In fact, the right way to allocate power is on the basis of efficiency!

Again, your own reductive thinking is working against you.
That's the problem: with the current design you do, as I explained previously. Cache, P-cores and E-cores all use one rail. Which is disappointing for people like me who try to fine-tune their CPU for specific workloads.

I have to make the distinction, because AMD doesn't offer a 35 W product. So, when you compare a 35 W CPU with a 65 W one, the 65 W part is at a natural disadvantage. It's one thing if a person is doing this comparison because they want to figure out which to buy. It's something different if someone says the Intel i9-14900T is more efficient, and then uses that one datapoint to make sweeping generalizations about Intel's efficiency.
AMD doesn't offer a power- and amp-unlimited CPU either, but no one made that distinction. The internet is clearly biased against Intel and it shows. When you are comparing a CPU that can pull 350 W vs one that is locked at 230 W, clearly the first one has a huge disadvantage. That didn't stop reviewers from concluding how terribly inefficient the first one is.
A big part of that had to do with doubling the number of E-cores.

Indeed, but does it matter? MT performance is MT performance no matter where it comes from.

That's not what I'm seeing. Comparing CPUs of similar price and performance, Zen 4 tends to be more efficient at MT workloads.
Then you are blind. The 13700K / 13700 / 13700T cost the same as the 7700X; whether you compare at ISO power or stock out of the box, the 7700X has no hope. The 13600K / 13600 / 13600T cost the same as the 7600X. Again, hopeless. The 14700K / 14700 / 14700T cost the same as the 7800X3D. Not a chance. The 13900 / 13900K / 13900T cost less than the 7950X; still not a chance. Going by naming schemes and MSRP, though, the 13900K / 13900 / 13900T compares to the 7900X, which it also beats easily.


The only comparison that sees AMD winning is if you ignore the stock configuration (which you said you are against) and test the 7950X at ISO power against the 13th-gen i9.
 

35below0

The problem is the E-cores are more efficient at clocks you wouldn't use on a desktop CPU, i.e. below 3 GHz.
Desktop CPUs spend much of their time below 3 GHz, jumping up and down as needed. And you know this.
Are you telling me people know how to tune fan curves in the BIOS but don't know how to change their power limits?
Yes.
If I told you I bought X CPU because it's the most efficient CPU with this power limit you can't then post a graph at stock showing it isn't.
If you're happy doing that. But a CPU is not locked to a power limit, and very few people would want to impose a limit on a CPU. You'd have to really know what you're doing and why.
For 99.99999999999999999999999999999999999% just dropping the CPU in is enough. It can auto-boost on its own and do whatever magic it does.

More importantly, pointing out that efficiency is graded on a curve and your X CPU is efficient at various points, but inefficient at others, is not itself arguing in bad faith or in circles.
"why did you get the 14900k, it draws 400 watts in games". Which is what's going on in forums a LOT.
The 14900K is known as a super hot CPU because headline after headline said so, and because testers wanted to see how much power they could get through it.
But i think @bit_user was arguing the opposite. I really don't know what you're saying here.
8 P-cores would always win at ISO power against 8 E-cores; it won't even be close, even at integer workloads. Especially considering how highly clocked the E-cores are on ADL and RPL, they are way outside their efficiency range anyway. 8 P-cores at 65 W score 16-17k in CB R23; 8 E-cores score half of that. They are really not comparable.
What is this?
Look at the 13900K: the CPU literally smashed it out of the park in terms of efficiency, yet everyone was calling it a space heater. The 13900K was 50 to 80% more efficient than the 12900K, and that increase happened within a single year on the same process node. Some actual numbers: 12900K @ 125 W = 24k, 13900K @ 125 W = 32k. At 65 W the 13900K was matching the 12900K @ 100 W. That's an absolutely insane efficiency increase, yet no one mentioned it; they were all focused on the "remove all power limits, amp limits, temp limits and let's hit the blender button".
I agree. You are being very clownish about it though. You take it personally.
If that is the case, then reviewers should be testing the non-K and T CPUs, since these people don't need an unlocked CPU
This is a very bad assumption. You are ignoring initial MSRP, and assuming it will not change, and assuming total availability forever. The specs of a CPU don't change, but everything else does. If it didn't, then everyone could just do the math, or read a review that did it already, and proceed to buy the exact CPU that suits them.

Let's say for the sake of argument that right now that ideal CPU is the 13600. Not the K, just the 13600.
Nuts. Can't find it.
And when you could, it cost more.

Even if on specs alone, it was the best CPU, paying for it is a different story.
This happens a lot with Alder/Raptor K CPUs.

In addition, Ks and non-Ks don't perform the same. So it's not all down to Ks being for people who need an unlocked CPU. That's outdated thinking.
The internet is clearly fully biased against Intel and it shows.
?
Indeed, but does it matter? MT performance is MT performance no matter where it comes from.
Only in that you were dismissing E-cores as crap.


Then you are blind. The 13700k / 13700 / 13700t cost the same as the 7700x, whether you compare at ISO or stock out of the box the 7700x has no hope.

The 13600k / 13600 / 13600t cost the same as 7600x. Again, hopeless.
The 13600K has 8 extra E-cores and 8 extra threads. Its TDP is 20 W higher.
It ought to be winning this fight; it's better.

But they don't cost the same. 7600X is almost half the price, and the difference grows when shopping outside the USA.
14700k / 14700 / 14700t cost the same as the 7800x 3d. Not a chance. 13900 / 13900k / 13900t cost less than the 7950x, still not a chance. Going by naming schemes and MSRP though the 13900k / 13900 / 13900t compares to the 7900x, which it also beats easily.
In all of these, you are arguing that Intel is better value if the CPUs are similarly priced. Which they are not. AMD is cutting the price and increasing the cache to crawl up the gaming benchmarks and save some dignity.

The argument started about efficiency, and then efficiency at forced voltages. This is about value.

Ultimately you are trying to argue that intel is really really good. I think people know...

Even the poor saps who don't know how to undertwat their feltchside wattage on the overcomplicator helix-bound snatchgrabfumbeler in deltas, not synods... good grief, even them.

Reason they buy Ryzens is money
And this is again value and nothing to do with efficiency. They buy them because they can go a tier higher for the same money, at least as far as gaming is concerned.
YouTube, Netflix, web, etc. is fine either way (AMD/Intel).

I don't really care whether you're right or wrong. You do make some wrong assumptions, and seem to insist that your own data is the template everyone should follow. You can't ignore different needs and interests, nor can you dismiss them.
As for efficiency, if you want to get back to that, please do. It's getting harder and harder to follow, that's all.
 

TheHerald

Desktop CPUs spend much of their time below 3 GHz, jumping up and down as needed. And you know this.
At idle. Not when active.

If you're happy doing that. But a CPU is not locked to a power limit, and very few people would want to impose a limit on a CPU. You'd have to really know what you're doing and why.
For 99.99999999999999999999999999999999999% just dropping the CPU in is enough. It can auto-boost on its own and do whatever magic it does.
I'd argue the same about fans. Yet ALL fan reviews test at ISO noise levels. ALL of them.
This is a very bad assumption. You are ignoring initial MSRP, and assuming it will not change, and assuming total availability forever. The specs of a CPU don't change, but everything else does. If it didn't, then everyone could just do the math, or read a review that did it already, and proceed to buy the exact CPU that suits them.

Let's say for the sake of argument that right now that ideal CPU is the 13600. Not the K, just the 13600.
Nuts. Can't find it.
And when you could, it cost more.

Even if on specs alone, it was the best CPU, paying for it is a different story.
This happens a lot with Alder/Raptor K CPUs.
When you want to compare technologies, names and MSRP are important. There is a reason why AMD basically straight-up copied Intel's naming scheme: they wanted to make it painfully obvious which of their CPUs competes against which one from Intel. The R5 7600X is named and priced against the i5-13600K. It can't be more apparent that AMD wanted them to go against each other. Intel won that segment and AMD was forced to drop prices. So as a buyer, yes - obviously you care about current prices. But if you are comparing technologies, current prices are irrelevant. If Intel decides to drop the 14900KS to $99, does that mean I'm going to compare it to a 4-core AMD CPU? Of course not.

Only in that you were dismissing E-cores as crap.
I'm not. I'm stating facts. E-cores aren't good in performance per watt or in straight-up performance against a P-core. They are good because they are space-efficient. If you take die space out of the equation, obviously E-cores are crap; I'd take a 16 P-core CPU over a 16 E-core CPU. But that can't be an option, since the die space required is completely different.

Even the poor saps who don't know how to undervolt or power-limit their CPUs... good grief, even them.

The reason they buy Ryzens is money.
And this is again value and nothing to do with efficiency. They buy them because they can go a tier higher for the same money, at least as far as gaming is concerned.
Youtube, Neflix, web, etc. is fine either way (AMD/Intel).

I don't really care that you're right or wrong. You do make some wrong assumptions, and seem to insist that your own data is the template everyone should follow. You can't ignore different needs and interests, nor can you dismiss them.
As for efficiency, if you want to get back to that, please do. It's getting harder and harder to follow, that's all.
I don't care why people buy what they buy. This conversation started with ST efficiency. Intel wins that easily. Then the guy brought up only stock comparisons as valid. Great, then Intel wins MT efficiency too in that case. I'm not sure why we keep going for so many posts over something that is so blatantly obvious.
 

35below0

Commendable
Jan 3, 2024
1,148
513
1,590
At idle. Not when active
Not when idle. When <3 GHz is enough for the job.

You dodged the question. Why are you ignoring the full range of efficiency? Is it because you only care about one zone? Or one point?
When you want to compare technologies, names and MSRP are important. There is a reason why AMD basically straight up copied Intel's naming scheme. They wanted to make it painfully obvious which of their CPUs competes against which one from Intel.
Not what i said, not what is being discussed.
The R5 600x is named and priced against the i5 600k. Can't be more apparent that AMD wanted them to go against each other.
AMD marketing is not the subject.
Intel won that segment and AMD was forced to drop prices.
Not sure anything was won. You yourself argue AMD only tried to make it look as if the products were equal.
Also, AMD had lower prices. I don't know for sure, but they usually come in under intel.
Also, also, not the subject.
So as a buyer, yes - obviously you care about current prices. But if you are comparing technologies, current prices are irrelevant. If Intel decides to drop the 14900KS to $99, does that mean I'm going to compare it to a 4-core AMD CPU? Of course not.
You are comparing unequal technologies based on marketing naming schemes?
Before you see this as a personal attack, read the question again, and answer it honestly.
I'm not. Im stating facts. Ecores aren't good in performance / watt or in straight up performance against a P core.
They are what made the difference in 13600K beating the 7600X handily. So, are they good or aren't they?
On their own they are not as good as P cores, but they don't work in isolation. They never were meant to work in isolation.
I don't care why people buy what they buy. This conversation started with ST efficiency. Intel wins that easily. Then the guy brought up only stock comparisons as valid. Great, then Intel wins MT efficiency too in that case. I'm not sure why we keep going for so many posts over something that is so blatantly obvious.
But you actually posted that the internet is biased against Intel.

When it comes to intel vs AMD, intel wins some and AMD wins some. To make sweeping decisions about winning easily requires evidence that doesn't exist. So you can't produce it when asked. So you call it blatantly obvious, and turn your head the other way when other data is shown.
That's why it keeps going around.

Again, if you're happy setting lower limits on your i9s and getting the right kind of performance out of them, so be it. It may be useful.
But you're as good as saying that AMD need not even exist because it has nothing to offer.
 

TheHerald

Upstanding
Feb 15, 2024
263
64
260
This is relevant for... Whatever is being discussed now:

View: https://www.youtube.com/watch?v=bHILyzooR58


Agree to disagree, I'd say?

Regards.
Yeah, that's the kind of trash comparison youtubers make for the clicks; it's well known that trashing Intel / Nvidia and favoring AMD gets you more views (lots of content creators have said this). Who in their right mind would buy a 14900k just for games and run it stock on top of that? That's like comparing the 13600k to the 7950x only in games and going "it's faster, cheaper, more efficient, blah blah". Oh really?
 

TheHerald

Not when idle. When <3 GHz is enough for the job.

You dodged the question. Why are you ignoring the full range of efficiency? Is it because you only care about one zone? Or one point?
I'm not ignoring it, but 3 W per core (that's where E-cores are more efficient) would put the whole 8-E-core cluster below 30 W. Desktop CPUs usually aren't running at that low a power.

AMD marketing is not the subject.
How else are we going to compare, man? Two CPUs come out at the same time, with the same name and the same price. Aren't they direct competitors?
You are comparing unequal technologies based on marketing naming schemes?
Before you see this as a personal attack, read the question again, and answer it honestly.
As I've said above, same time, same name, same price.

They are what made the difference in 13600K beating the 7600X handily. So, are they good or aren't they?
On their own they are not as good as P cores, but they don't work in isolation. They never were meant to work in isolation.
They are good in terms of performance / die space.
But you actually posted that the internet is biased against Intel.

Of course it is. You can see it on Reddit, you can see it on Twitter; even content creators have said this. A LOT of them, even ones that mostly make AMD content. Daniel Owen specifically said that he can't make the content he wants to make, because unless it's praising AMD and pooping on everyone else, it's not getting clicks. Same with techuk etc.; there are plenty of people that have to do the above or not get views.

When it comes to intel vs AMD, intel wins some and AMD wins some. To make sweeping decisions about winning easily requires evidence that doesn't exist. So you can't produce it when asked. So you call it blatantly obvious, and turn your head the other way when other data is shown.
That's why it keeps going around.

Again, if you're happy setting lower limits on your i9s and getting the right kind of performance out of them, so be it. It may be useful.
But you're as good as saying that AMD need not even exist because it has nothing to offer.
No, that's not what I said. I said that with the parameters set by bit_user (stock out of the box), Intel is more efficient in every segment in every workload, due to the T lineup being locked to 35 W even on kamehameha motherboards.
 
Yeah, that's the kind of trash comparison youtubers make for the clicks; it's well known that trashing Intel / Nvidia and favoring AMD gets you more views (lots of content creators have said this).
They're comparing the top of the respective gaming stacks, where the X3D will always win the perf/efficiency curves no matter how you tune the CPU (APO might shift this a bit, but it's still of limited use). Now, one could rightfully point out that they're not using the 7950X3D, but that's because AMD fumbled the implementation, so it doesn't work as well as the 7800X3D. Their testing is accurate, it's a valid way to test, and they emphasize they're just talking about gaming.
Who in their right mind would buy a 14900k just for games and run it stock on top of that?
I hate to break it to you, but there's a lot of people who buy what they think is the best and want it to just work. This isn't a decade ago where most people dropping bank on CPUs were enthusiasts who learned the ins and outs of their hardware.
That's like comparing the 13600k to the 7950x only in games and going "it's faster, cheaper, more efficient, blah blah". Oh really?
It's really not, unless you literally only look at the dollar amount of the CPU and ignore the context of the gaming performance and where they sit in their respective SKU stacks.
 

TheHerald

They're comparing the top of the respective gaming stacks, where the X3D will always win the perf/efficiency curves no matter how you tune the CPU (APO might shift this a bit, but it's still of limited use). Now, one could rightfully point out that they're not using the 7950X3D, but that's because AMD fumbled the implementation, so it doesn't work as well as the 7800X3D. Their testing is accurate, it's a valid way to test, and they emphasize they're just talking about gaming.

I hate to break it to you, but there's a lot of people who buy what they think is the best and want it to just work. This isn't a decade ago where most people dropping bank on CPUs were enthusiasts who learned the ins and outs of their hardware.

It's really not, unless you literally only look at the dollar amount of the CPU and ignore the context of the gaming performance and where they sit in their respective SKU stacks.
Dude, the 7700k, the 8700k, the 9700k, the 9900k, the 10900k: all of them beat or were on par with AMD's more expensive CPUs in gaming. Most of those CPUs (with the exception of the 8700k) were just mediocre to bad. Suddenly we treat the 5800X3D and the 7800X3D like the second coming.

The same techtuber never bothered to make the same comparison, a 9700k vs a 3950x ONLY for gaming, comparing price to performance. That never happened; yet now he makes a video every other week comparing a 7800X3D to the 14900k, like they are even in the same ballpark. Jesus god.

You'd be surprised how low you can tune the 14900k's power: 20 to 50 W for most games, 90 W for the heaviest game in existence at 1080p. Yes, without losing performance. Turning HT off on its own drops power draw by 50-60 W while increasing performance.
 
Yeah, that's the kind of trash comparison youtubers make for the clicks; it's well known that trashing Intel / Nvidia and favoring AMD gets you more views (lots of content creators have said this). Who in their right mind would buy a 14900k just for games and run it stock on top of that? That's like comparing the 13600k to the 7950x only in games and going "it's faster, cheaper, more efficient, blah blah". Oh really?
Because that's how Intel themselves have positioned their CPUs to gamers?

Also, the better binned CPUs from Intel are indeed the best gaming CPUs due to how their caching segmentation works.

I have to say, at this point, I'm not sure if you're trolling or missing the forest for the trees.

Regards.
 

TheHerald

Because that's how Intel themselves have positioned their CPUs to gamers?

Also, the better binned CPUs from Intel are indeed the best gaming CPUs due to how their caching segmentation works.

I have to say, at this point, I'm not sure if you're trolling or missing the forest for the trees.

Regards.
How did intel position themselves? What do you mean?

On AMD's website the 7950x is marketed as the fastest for gamers.

Yes, no company will tell you to just buy the cheaper product because it's just as good.
 
How did intel position themselves? What do you mean?

On AMD's website the 7950x is marketed as the fastest for gamers.

Yes, no company will tell you to just buy the cheaper product because it's just as good.
View: https://www.youtube.com/watch?v=Cbyl4q3QFYA


And

View: https://www.youtube.com/watch?v=RQ6k-cQ94Rc


Those two are what I'm talking about. I'm sure there's other outlets that have found much the same in their testing.

As for the Intel marketing:

https://www.intel.com/content/www/us/en/gaming/resources/gaming-cpu.html

They don't touch on the cache, and I can't find anything in their marketing material using the i9s as their "gaming kings", but I'm pretty sure there are some places you can trust that have those slides available. This is what I think should be somewhat believable:

https://www.reddit.com/r/intel/comm..._intels_marketing_seems_like_theyre/#lightbox

Regards.
 
  • Like
Reactions: bit_user

TheHerald

View: https://www.youtube.com/watch?v=Cbyl4q3QFYA


And

View: https://www.youtube.com/watch?v=RQ6k-cQ94Rc


Those two are what I'm talking about. I'm sure there's other outlets that have found much the same in their testing.

As for the Intel marketing:

https://www.intel.com/content/www/us/en/gaming/resources/gaming-cpu.html

They don't touch on the cache, and I can't find anything in their marketing material using the i9s as their "gaming kings", but I'm pretty sure there are some places you can trust that have those slides available. This is what I think should be somewhat believable:

https://www.reddit.com/r/intel/comm..._intels_marketing_seems_like_theyre/#lightbox

Regards.
So similar to AMD...?


As I've said, AMD marketed the 7950x as the best for gamers when we all know that both the 7950x and the 14900k are just not the smartest choice for gaming unless you already have a 4090 and free money to spend.

What I'm saying is, the 14900k is so much faster in everything else compared to the 7800x 3d that comparing them in terms of value in just gaming is beyond silly.
 

bit_user

Polypheme
Ambassador
Because with how Intel cpus work right now (single rail voltage) you have to clock them high, since they are going to follow the voltage curve of P cores regardless. Which is something I've mentioned as a problem as far back as alderlake
That's an interesting claim. Do you have a source to back up the claim that all cores run at the same voltage?

Every graph can be deceptive if you don't understand what you are looking at. The unlimited-power graphs are also deceptive, since people don't understand, or pretend not to understand, that performance doesn't scale linearly with power; therefore the fact that the 14900k uses twice the power for 10% more performance doesn't mean it's inefficient.
The approximate area ratio of E-cores to P-cores is 4:1; roughly four E-cores fit in the die area of one P-core. If you increase the number of cores, you divide the power or thermal budget across more of them, which forces each core to clock lower. Those lower clocks put them in a better efficiency window. So you sacrifice a little performance on each core, but that's more than compensated by having more of them, each operating more efficiently.
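That trade-off can be sketched with a toy power model. This is not measured data: the voltage/frequency curve, the constant, and the 125 W budget below are all invented for illustration, and throughput is simply assumed to be proportional to total core-GHz. The point is only that because per-core power grows roughly as f·V², splitting a fixed budget across more, slower cores yields more total throughput.

```python
def core_power(f_ghz, k=1.0, v0=0.7, dv=0.15):
    """Power of one core at f GHz under an assumed linear V/f curve."""
    v = v0 + dv * f_ghz        # assumed: voltage rises linearly with clocks
    return k * f_ghz * v * v   # dynamic power scales roughly as f * V^2

def freq_at_budget(per_core_watts):
    """Highest frequency (GHz) whose toy-model power fits the budget."""
    f = 0.0
    while core_power(f + 0.01) <= per_core_watts:
        f += 0.01
    return f

budget = 125.0  # assumed total package power budget, watts
for n_cores in (8, 16, 24):
    f = freq_at_budget(budget / n_cores)
    print(f"{n_cores} cores: ~{f:.2f} GHz each, relative throughput ~{n_cores * f:.1f}")
```

Under these made-up numbers, 24 slower cores beat 16, which beat 8, even though each individual core is clocked well down; that's the mechanism being described, not a claim about any specific SKU.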

That's the problem: with the current design you do, as explained previously. Cache, P-cores, and E-cores all use one rail. Which is disappointing for people like me who try to fine-tune their CPU for specific workloads.
In a heavily-threaded workload, I think it doesn't really matter so much.

Then you are blind. The 13700k / 13700 / 13700t cost the same as the 7700x; whether you compare at ISO or stock out of the box, the 7700x has no hope.
You're comparing across different market segments, again, which is how you reach these silly conclusions. You should compare i7-13700K vs. R7 7700X, i7-13700 vs. R7 7700, and AMD doesn't play in the 35 W space with any retail desktop CPUs, AFAIK. Actually, I see the Ryzen 7 8700GE listed at 35 W, but the "E" signifies that it's intended as a specialty embedded product.

The only comparison that sees AMD winning is if you ignore stock configuration (which you said you are against) and you test the 7950x at ISO power against the 13th gen i9.
I already showed you the R9 7950X beating the i9-14900K at roughly ISO performance in this post:

You make far too much of a 1.8% performance discrepancy. Go ahead and take a few W off of the i9-14900K so it's exactly ISO-performance and you'll still see the Ryzen beating it.
 

TheHerald

That's an interesting claim. Do you have a source to back up the claim that all cores run at the same voltage?
I have the CPUs in question; there is only a single rail. The cache and the E/P cores all share one rail. The cache and the P cores have their own voltage curves as well; I'm not sure about the E-cores. For example, if you set the P cores to 1.2 V and then clock the cache to 4.7 GHz, since the cache's voltage curve calls for 1.3 V at 4.7 GHz, the P-core (and E-core) voltage will ignore the 1.2 V you set and run at 1.3 V regardless.
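The single-rail behavior described here reduces to a max operation, which a tiny sketch makes concrete. The domain names and every voltage/frequency figure below are invented for illustration (they are not real Raptor Lake V/f data); the only claim taken from the post is that a shared rail must satisfy the hungriest domain.

```python
# domain -> (clock in GHz, volts that clock demands); numbers are made up
vf_demand = {
    "p_cores":    (5.5, 1.20),   # user-tuned P-core voltage
    "ring_cache": (4.7, 1.30),   # cache curve demands more at 4.7 GHz
    "e_cores":    (4.3, 1.15),
}

def rail_voltage(demands):
    """A single shared rail must run at the highest per-domain demand."""
    return max(volts for (_ghz, volts) in demands.values())

# The 1.2 V P-core setting is overridden by the cache's 1.3 V demand:
print(rail_voltage(vf_demand))
```

With separate rails, each domain could hold its own voltage instead; that's the fine-tuning the post says the current design rules out.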

Before even the release of the 13900k, I was one of the few people complaining about E-cores being clocked high and what an impact that would have on gaming power draw. I knew beforehand, but the thing is, I was expecting Intel to separate the rails on 13th gen. They didn't.
I already showed you the R9 7950X beating the i9-14900K at roughly ISO performance in this post:

You make far too much of a 1.8% performance discrepancy. Go ahead and take a few W off of the i9-14900K so it's exactly ISO-performance and you'll still see the Ryzen beating it.
?? Haven't I already agreed that the 7950x is the only product that beats Intel in efficiency? Why are you repeating it? That's again if you ignore stock configuration, since at stock the 13900t says hello.

You're comparing across different market segments, again, which is how you reach these silly conclusions. You should compare i7-13700K vs. R7 7700X, i7-13700 vs. R7 7700, and AMD doesn't play in the 35 W space with any retail desktop CPUs, AFAIK. Actually, I see the Ryzen 7 8700GE listed at 35 W, but the "E" signifies that it's intended as a specialty embedded product.
I'm comparing at ISO wattage. Both the i7 and the i5 (700k / 600k) are miles ahead of their competition in efficiency. And so is the 13900k, since it actually came to compete against the R9 7900x (hence the name and the price). The 7950x was released with an MSRP of $699. So yeah, the 7950x was the most efficient but also the most expensive CPU of the bunch; if you just compare against price competitors at ISO wattage, Intel was winning in efficiency in every single segment they were competing in.
 

bit_user

Before even the release of the 13900k, I was one of the few people complaining about E-cores being clocked high and what an impact that would have on gaming power draw.
Maybe one reason the E-cores clock high is that Alder Lake's ring bus is affected by their frequency. One of the benefits of disabling the E-cores is supposedly that it lets the ring bus clock higher. I think Raptor Lake was supposed to do something about that, but I'm foggy on the details of exactly what.

I knew beforehand, but the thing is, I was expecting Intel to separate the rails on 13th gen. They didn't.
Yeah, it seems like there were several improvements that didn't make the cut for Raptor Lake, and it's a real shame they didn't just keep working on them for Gen 14, but then I know that was meant to be Meteor Lake-S and not another Raptor Lake.

?? Haven't I already agreed that the 7950x is the only product that beats Intel in efficiency? Why are you repeating it? That's again if you ignore stock configuration, since at stock the 13900t says hello.
My link shows it beating in stock config.

if you just compare against price competitors
I don't. I look at multiple factors when considering how products fall into different market segments. Price, power, and performance are the main ones. As discussed, you read too much into launch pricing, rather than current street pricing. The latter is a much better guide.

I think you're just looking at which AMD models Intel beats, and then working backwards from that answer to find some justification for the specific matchup or test parameters.
 

TheHerald

Maybe one reason the E-cores clock high is that Alder Lake's ring bus is affected by their frequency. One of the benefits of disabling the E-cores is supposedly that it lets the ring bus clock higher. I think Raptor Lake was supposed to do something about that, but I'm foggy on the details of exactly what.
On 12th gen, yes: the cache can clock up to 4.7 GHz with E-cores off. It can barely exceed 4 GHz with them on. That's no longer the case on 13th and 14th.
My link shows it beating in stock config.
You have a link where the 7950x beats the 13900t in efficiency? Can you link it again, please?
I don't. I look at multiple factors when considering how products fall into different market segments. Price, power, and performance are the main ones. As discussed, you read too much into launch pricing, rather than current street pricing. The latter is a much better guide.

I think you're just looking at which AMD models Intel beats, and then working backwards from that answer to find some justification for the specific matchup or test parameters.
A company can reduce prices because they ended up losing to the competition, their sales didn't reach their targets, etc. That doesn't change what they were competing against.

For example, the 10850k / 10900k ended up costing as much as the 5600x. I can't then argue that Intel beat AMD in efficiency, because the actual competitor was the 5900x.
 

bit_user

You have a link where the 7950x beats the 13900t in efficiency? Can you link it again, please?
Again, those are in different market segments. You're comparing trucks with tractors. They're mismatched on all three components I listed: price, performance, and power.

You cherry-pick, in your comparisons. For efficiency, you want to use a 35W T-version, but then for performance you want to use the K-version. This is not an honest attempt to see how the architectures compare, but rather some kind of campaign to misinform.

A company can reduce prices because they ended up losing to the competition, their sales didn't reach their targets, etc. That doesn't change what they were competing against.
No, I would argue that current street prices best reflect actual market segments and consumer choice.
 

TheHerald

Again, those are in different market segments. You're comparing trucks with tractors. They're mismatched on all three components I listed: price, performance, and power.

You cherry-pick, in your comparisons. For efficiency, you want to use a 35W T-version, but then for performance you want to use the K-version. This is not an honest attempt to see how the architectures compare, but rather some kind of campaign to misinform.
No, not cherry-picking at all. I'm totally against stock out-of-the-box comparisons; you are the one who insists on them. My position is that efficiency should be measured at ISO wattage, in which case the 7950x has a slight advantage. Your position is that it should be done stock, in which case the 14900t has a huge lead.
No, I would argue that current street prices best reflect actual market segments and consumer choice.
Consumer choice, of course. If you are about to buy a CPU, current pricing is the only price that matters. But we are comparing technology here. Like, if AMD dropped the 7900 XTX price to $200, would you argue that AMD is leading in RT performance because it beats the 4060? Of course not; at least I hope you wouldn't.
 

bit_user

Polypheme
Ambassador
Your position is that it should be done stock, in which case the 14900t has a huge lead.
No, my position is that you shouldn't compare products competing in different market segments. Nobody who's considering the purchase of an R9 7950X is weighing it against an i9-14900T. The alternative they would be considering is an i9-14900K.

I've said this like a dozen times, so far. If you continue to mischaracterize my position, I will have to conclude that you're trolling.

Consumer choice, of course. If you are about to buy a CPU, current pricing is the only price that matters. But we are comparing technology here.
If you really want to compare technology, then microbenchmarks are what you want. Those are what I've posted a couple of times already, from Chips & Cheese. Frustratingly, I don't see any comparable data from them on Raptor Lake or Ryzen 7000.

As I've also said, numerous times, truly characterizing architectural efficiency requires looking at the perf/W curve, not the single-datapoint comparisons you favor. The perf/W curve data I have is what I've posted from Anandtech and the plot I made from ComputerBase.de. They both paint the Ryzen 7900X and 7950X in a favorable light, efficiency-wise. Neither dataset is perfect, but they're both good enough to be insightful.
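Why a single datapoint can't settle the argument is easy to show numerically. The two power sweeps below are invented for illustration (they are not benchmark results for any real CPU): because efficiency is a curve, two chips' perf/W curves can cross, and the "winner" depends entirely on which power limit you sample.

```python
# watts -> benchmark score, two hypothetical chips (numbers are made up)
sweep_a = {65: 900, 105: 1150, 170: 1300, 253: 1380}
sweep_b = {65: 820, 105: 1180, 170: 1420, 253: 1500}

def perf_per_watt(sweep):
    """Efficiency at each sampled power limit."""
    return {w: score / w for w, score in sweep.items()}

eff_a, eff_b = perf_per_watt(sweep_a), perf_per_watt(sweep_b)
for watts in sweep_a:
    leader = "A" if eff_a[watts] > eff_b[watts] else "B"
    print(f"{watts:>3} W: A={eff_a[watts]:.2f}  B={eff_b[watts]:.2f} pts/W -> {leader} leads")
```

In this fabricated example, chip A leads at 65 W and chip B leads at 170 W and above, so a stock-only or single-ISO-point comparison would crown a different chip depending on where it sampled; only the full curve tells the whole story.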

However, for consumers who are simply trying to make a purchasing decision, the side-by-side comparisons that are relevant should compare products in the same market segment and with the most typical settings (one of which should be stock). If you want to include the extreme gamerzzz' config in the comparison of top-end CPUs, like TechPowerUp tends to do, I think that's okay. However, one big problem with including tuned configurations is the silicon lottery, especially if reviewers aren't even buying their own test hardware but instead rely on hardware provided by the manufacturers.

Product reviews best serve their audience when they characterize the product in the way that you'd experience it. So, if they're testing a cherry-picked system provided to them by Intel + ASUS with max-performance settings, it wouldn't optimally serve readers, because the performance they measure is something the typical buyer is unlikely to achieve.
 

35below0

Consumer choice, of course. If you are about to buy a CPU, current pricing is the only price that matters. But we are comparing technology here. Like, if AMD dropped the 7900 XTX price to $200, would you argue that AMD is leading in RT performance because it beats the 4060? Of course not; at least I hope you wouldn't.
You've done this before. You are not willing to compare like for like: T models to K models, the cheaper and inferior AMD CPU to a more expensive Intel one, just because of AMD's naming convention. And pretending it's even.

You aren't comparing technology, not in good faith.