News Intel offers new guidance on 13th and 14th Gen CPU instability — but no definitive fix yet


bit_user

TheHerald said:
Man, are you serious? The 7950X at stock pulls 89 watts in gaming. It's already slower than the 14900K pulling 80 watts, so no amount of playing around with the power limits will make it more efficient.
As you know, the perf/W curve is not linear. Due to its spiky and lightly-threaded nature, gaming can turn up even more nonlinearities than compute tasks. For instance, let's consider this case:

[Chart: gaming performance at various power limits, i9-13900K vs. 7950X]


We see that restricting power has no significant impact on either CPU, until you hit 35 W. At that point, it cuts into the frequency budget of the i9-13900K but not the 7950X.

Also, Zen 4 has a narrower efficiency window, where cutting off the top few % of frequency can yield greater power savings than on Raptor Cove, which has a wider frequency window and a more linear perf/W curve. So, it can be difficult to guess just how performance would be impacted by scaling back on power.
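If it helps to see why power limits behave that way, here's a toy sketch in Python (the numbers and the cube-root scaling are invented for illustration, not taken from that chart): performance stays flat until the limit dips below what the chip actually wants to draw, and only then starts cutting into frequency:

```python
# Toy model of performance vs. package power limit. All numbers are invented;
# "knee_w" is the point where the limit starts clipping the frequency budget.
def perf_at_limit(power_limit_w: float, knee_w: float = 35.0,
                  full_perf: float = 100.0) -> float:
    if power_limit_w >= knee_w:
        return full_perf  # limit sits above demand: no impact at all
    # Below the knee, assume power ~ f^3 (a rough V/f-scaling approximation),
    # so frequency (and perf) falls with the cube root of the power budget.
    return full_perf * (power_limit_w / knee_w) ** (1 / 3)

for limit_w in (125, 88, 65, 45, 35, 25):
    print(f"{limit_w:>3} W limit -> {perf_at_limit(limit_w):5.1f}% of full perf")
```

Same shape as the chart: flat, flat, flat, then a sudden cliff once the limit actually binds.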

Again, I think it's telling that you seem to be afraid of having more data. Benchmarking is a data-driven exercise. You say it's one of your hobbies, so I find it surprising that you seem to place so little value on testing hypotheses and quality data.
 

TheHerald

bit_user said:
As you know, the perf/W curve is not linear. Due to its spiky and lightly-threaded nature, gaming can turn up even more nonlinearities than compute tasks. For instance, let's consider this case:

[Chart: gaming performance at various power limits, i9-13900K vs. 7950X]

We see that restricting power has no significant impact on either CPU, until you hit 35 W. At that point, it cuts into the frequency budget of the i9-13900K but not the 7950X.

Also, Zen 4 has a narrower efficiency window, where cutting off the top few % of frequency can yield greater power savings than on Raptor Cove, which has a wider frequency window and a more linear perf/W curve. So, it can be difficult to guess just how performance would be impacted by scaling back on power.

Again, I think it's telling that you seem to be afraid of having more data. Benchmarking is a data-driven exercise. You say it's one of your hobbies, so I find it surprising that you seem to place so little value on testing hypotheses and quality data.
But there is no power draw on that graph, just power limit. You don't even know if the results are GPU-bound or not. Is it the same review that had the 7950X consuming over its TDP? What is even the point of that?

What you don't get is that since the 7950X is SLOWER while consuming more power, cutting down the power to the same 80 W that the 14900K draws will still mean it's SLOWER and therefore less efficient. Even if it loses 0% performance from the power limit, that will still be the case.

I'm not afraid of anything; I'm just stating the obvious. The data we have is already enough to conclude what's happening. The 7950X is already slower while consuming more power. Cutting the power even further won't make it any faster than the 14900K, so it won't be more efficient either.

The single-CCD 7700X and 7600X are more efficient than the 7950X at any power level you wanna test, and since those two are less efficient than the 14900K, it's a done deal. Why we're even arguing, I don't understand.
 

bit_user

TheHerald said:
But there is no power draw on that graph, just power limit. You don't even know if the results are GPU-bound or not. Is it the same review that had the 7950X consuming over its TDP? What is even the point of that?
Even if you ignore the 7950X data in that graph, just looking at the i9-13900K data should tell you that adjusting power limits doesn't necessarily behave how you'd expect.

TheHerald said:
What you don't get is that since the 7950X is SLOWER while consuming more power, cutting down the power to the same 80 W that the 14900K draws will still mean it's SLOWER and therefore less efficient. Even if it loses 0% performance from the power limit, that will still be the case.
But you said the i9-14900K would be faster and more efficient when the two are run at any power. We don't have the data to support that.

For instance, let's look at what happens with the R9 7900X vs R9 7900. The X version only performs 3.5% better:
[Chart: relative gaming performance, 1920x1080]

However, the X version requires 34.4% more power to do that:
[Chart: gaming power consumption]

As a result, basically cutting the 7900X down to 65 W (88 W PPT) - i.e., the 7900 - results in a whopping 24.7% better fps/W:

[Chart: gaming efficiency, fps/W]
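If you want to check the arithmetic behind those deltas, it's a two-liner (note: TPU's 24.7% comes from their own measured fps/W chart, so this ratio math lands in the same ballpark rather than matching it exactly):

```python
# Sanity check on the quoted deltas: the 7900X is 3.5% faster than the 7900,
# but draws 34.4% more power to get there.
perf_ratio = 1.035    # 7900X gaming performance relative to the 7900
power_ratio = 1.344   # 7900X gaming power draw relative to the 7900

eff_gain = power_ratio / perf_ratio - 1.0  # the 7900's fps/W advantage
print(f"7900 fps/W advantage: ~{eff_gain:.1%}")  # ~29.9% by this arithmetic
```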


TheHerald said:
I'm not afraid of anything; I'm just stating the obvious. The data we have is already enough to conclude what's happening. The 7950X is already slower while consuming more power. Cutting the power even further won't make it any faster than the 14900K, so it won't be more efficient either.
That doesn't make it true across the entire range.

TheHerald said:
The single-CCD 7700X and 7600X are more efficient than the 7950X at any power level you wanna test, and since those two are less efficient than the 14900K, it's a done deal. Why we're even arguing, I don't understand.
The above data on the 7900, which is also multi-CCD, shows that big gaming efficiency gains are possible even with multiple CCDs.

There are plenty of flattering things you can say about Raptor Lake. It's just that when you make these sweeping statements (which generally aren't well-supported by the data), we tend to run into disagreements.
 

TheHerald

bit_user said:
But you said the i9-14900K would be faster and more efficient when the two are run at any power. We don't have the data to support that.
If you don't think we have the data to support it, then you really can't read the graphs, I'm sorry. Let me try once again.

Since the 14900K is already faster while pulling LESS power, restricting the power on the 7950X will NOT make it faster than the 14900K, and therefore it will still be slower and less efficient. That's really just common sense. In order for that NOT to be the case, the 7950X would need to gain performance while you are limiting its power, which is simply impossible. Yes?
bit_user said:
For instance, let's look at what happens with the R9 7900X vs R9 7900. The X version only performs 3.5% better:
It doesn't matter what happens with the 7900. It really doesn't. You just don't get why, and I don't get why you don't get it. Obviously, lowering the power limit will make any CPU more efficient.

bit_user said:
The above data on the 7900, which is also multi-CCD, shows that big gaming efficiency gains are possible even with multiple CCDs.

There are plenty of flattering things you can say about Raptor Lake. It's just that when you make these sweeping statements (which generally aren't well-supported by the data), we tend to run into disagreements.
Nobody said big gaming efficiency gains aren't possible with multiple CCDs. I'm saying that the 14900K will be more efficient regardless. It's really not complicated.

There aren't any unflattering things you can say about Raptor Lake, except the out-of-the-box settings of the K lineup, I guess. Everything else is a big W. ST efficiency? Excellent. MT efficiency? Astounding. Gaming efficiency? To die for. Maximum performance when you don't care about efficiency? Whooping.
 

bit_user

TheHerald said:
If you don't think we have the data to support it, then you really can't read the graphs, I'm sorry. Let me try once again.
You're not listening.

TheHerald said:
Since the 14900K is already faster while pulling LESS power, restricting the power on the 7950X will NOT make it faster than the 14900K, and therefore it will still be slower and less efficient.
That's only true when you drop the 7950X to match the power of the i9-14900K @ 95 W TDP. What do you suppose happens when you drop them both to a 65 W TDP? Because when the 7900X got dropped to 65 W, it only lost 3.4% performance. When the i9-14900K got dropped from a TDP of 95 W to 65 W, it lost 9.8% performance, putting it at 225.4 fps. If the 7950X only lost 3.4% from 235.5 fps, that would put it at 227.5 fps, which is faster than the i9-14900K at 65 W!

Now, I'm not saying that would definitely happen, and I'm also not saying it would use less power than the i9-14900K's 48 W at that TDP, but it's easy to see how it could at least be faster!
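For anyone who wants to verify the hypothetical, the math is just this (the 3.4% figure is an assumption borrowed from the 7900X -> 7900 data, not a measurement of the 7950X):

```python
# Back-of-envelope for the hypothetical above, using only the quoted figures.
fps_7950x_stock = 235.5   # 7950X gaming fps (quoted above)
assumed_loss = 0.034      # ASSUMPTION: the same 3.4% hit the 7900X took at 65 W
fps_14900k_65w = 225.4    # i9-14900K after its measured 9.8% drop to 65 W

fps_7950x_65w = fps_7950x_stock * (1 - assumed_loss)
print(f"7950X @ 65 W (hypothetical): {fps_7950x_65w:.1f} fps")   # ~227.5
print(f"i9-14900K @ 65 W (measured): {fps_14900k_65w:.1f} fps")  # 225.4
```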

TheHerald said:
There aren't any unflattering things you can say about Raptor Lake, except the out-of-the-box settings of the K lineup, I guess. Everything else is a big W. ST efficiency? Excellent.
You call it "excellent", when the best it can do is 6th place?? It even gets beat by some Alder Lake models!

[Chart: single-threaded efficiency]


TheHerald said:
MT efficiency? Astounding.
How is it "astounding", when you have to kneecap its performance just to get its MT efficiency above the 7000X3D models??

[Chart: multi-threaded efficiency]


TheHerald said:
Gaming efficiency? To die for. Maximum performance when you don't care about efficiency?
Again, you just conveniently act like the 7800X3D doesn't exist!

[Chart: gaming efficiency, fps/W]


Do you know what fps the i9-14900K gets at 35 W? 151.4 fps. Do you know what the 7800X3D gets? 266.3 fps, which is 75.9% higher! You'd have to suffer 43.2% lower fps just to beat the 7800X3D on gaming efficiency. That's not "to die for" - it's just plain "to die", which you'd probably be doing a lot more of at so many fewer fps!
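Since percentages like these are easy to mangle, here's the check straight from the two quoted fps figures (the tiny gap on the second one comes from the fps values themselves being rounded):

```python
# Double-checking the percentages above from the two quoted fps figures.
fps_14900k_35w = 151.4   # i9-14900K at a 35 W limit
fps_7800x3d = 266.3      # 7800X3D

print(f"7800X3D advantage: {fps_7800x3d / fps_14900k_35w - 1:.1%}")  # 75.9%
print(f"i9-14900K deficit: {1 - fps_14900k_35w / fps_7800x3d:.1%}")  # ~43.1%
```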
[Chart: average gaming fps, 1920x1080]

...is what just happened to your precious. Again, you just can't resist the overreach, and it inevitably leads to falling flat on your face.

You had a good point about what the i9-14900K achieved at 95 W. I think that would've been a good place to end this exchange and go out on a high note, but that's just me.
: )
 

TheHerald

bit_user said:
You're not listening.

That's only true when you drop the 7950X to match the power of the i9-14900K @ 95 W TDP. What do you suppose happens when you drop them both to a 65 W TDP? Because when the 7900X got dropped to 65 W, it only lost 3.4% performance. When the i9-14900K got dropped from a TDP of 95 W to 65 W, it lost 9.8% performance, putting it at 225.4 fps. If the 7950X only lost 3.4% from 235.5 fps, that would put it at 227.5 fps, which is faster than the i9-14900K at 65 W!

Now, I'm not saying that would definitely happen, and I'm also not saying it would use less power than the i9-14900K's 48 W at that TDP, but it's easy to see how it could at least be faster!
But you already have data on the SINGLE-CCD CHIPS, man. They are less efficient than the 14900K; there is no way the 7950X will be more efficient than the single-CCD chips in gaming.

bit_user said:
You call it "excellent", when the best it can do is 6th place?? It even gets beat by some Alder Lake models!
It beats everything above it in performance even at 35 W. If you can't see how that's not just excellent but some kind of black freaking magic, it's just your bias. Its competitor is sitting like 50 places below it.

bit_user said:
How is it "astounding", when you have to kneecap its performance just to get its MT efficiency above the 7000X3D models??
Kneecap how? It's still insanely fast at 125 watts.

bit_user said:
Again, you just conveniently act like the 7800X3D doesn't exist!
It doesn't. For anyone looking for a 14900K, the 7800X3D literally doesn't exist. It's so far behind in both MT and ST performance (and efficiency) that nobody cares. The 7950X3D is a much more realistic alternative. I'm talking as an owner of an actual 14900K; the 7800X3D wasn't even on the radar for me. The 7950X3D was a lot more tempting, and I'd probably buy it if it didn't require a motherboard swap. The 7800X3D would be a downgrade even from my 3-year-old 12900K.
bit_user said:
...is what just happened to your precious. Again, you just can't resist the overreach, and it inevitably leads to falling flat on your face.

You had a good point about what the i9-14900K achieved at 95 W. I think that would've been a good place to end this exchange and go out on a high note, but that's just me.
: )
Don't act like a pigeon, proclaiming yourself the victor and pooping all over the board. Bro, please.
 

bit_user

TheHerald said:
But you already have data on the SINGLE-CCD CHIPS, man. They are less efficient than the 14900K; there is no way the 7950X will be more efficient than the single-CCD chips in gaming.
Again: you're not listening. You said it would be more efficient and faster at all power levels. I just showed a plausible argument where it would at least lose on the speed part.

TheHerald said:
It beats everything above it in performance even at 35 W.
Oh, so now you finally see the light! I kept telling you that you can't compare efficiency across product segments, but now that it supports your narrative you're suddenly all about that life!

...and no, it does not. I already showed where the 7800X3D beats it on both gaming efficiency and performance!

TheHerald said:
Kneecap how? It's still insanely fast at 125 watts.
At 125 W, a 7950X3D is already 14.3% faster on Cinebench MT, while still beating it on efficiency.
[Chart: Cinebench MT scores]

TheHerald said:
It doesn't. For anyone looking for a 14900K, the 7800X3D literally doesn't exist. It's so far behind in both MT and ST performance
Again, you suddenly see the light! Now you seem to get it - that you can't compare efficiency across product segments. I'm sure that, in the very next thread, you'll have completely forgotten this point and will be back to your old tactics.

...but, for legions of gamers, you're sadly mistaken. I think many of them do indeed consider both the i9-14900K and 7800X3D as viable options for gaming... unless you're admitting the i9 is simply the worse choice for that.

TheHerald said:
Don't act like a pigeon, proclaiming yourself the victor and pooping all over the board. Bro, please.
Oh, so Mr. "Whooping" suddenly decides to act all high and mighty?

I didn't poop all over anything. I just pointed out that you made a good point about that 95 W gaming performance and would be well-advised to learn how to take a win when you have one. As I've said many times before, what always seems to cause these debates to flare up is when you make broad, sweeping statements not supported by the data (if not also the facts). Take the advice or leave it, but you can definitely leave the insults out.
 

TheHerald

bit_user said:
Again: you're not listening. You said it would be more efficient and faster at all power levels. I just showed a plausible argument where it would at least lose on the speed part.
Bruh, you realize that - at ISO power - if a product is more efficient it also HAS to be faster as well, and vice versa. Right?

bit_user said:
Oh, so now you finally see the light! I kept telling you that you can't compare efficiency across product segments, but now that it supports your narrative you're suddenly all about that life!

...and no, it does not. I already showed where the 7800X3D beats it on both gaming efficiency and performance!
What? We were talking about ST in this one; stop jumping around, man.

bit_user said:
Again, you suddenly see the light! Now you seem to get it - that you can't compare efficiency across product segments. I'm sure that, in the very next thread, you'll have completely forgotten this point and will be back to your old tactics.
What are you talking about? Of course you can compare across product segments.
bit_user said:
...but, for legions of gamers, you're sadly mistaken. I think many of them do indeed consider both the i9-14900K and 7800X3D as viable options for gaming... unless you're admitting the i9 is simply the worse choice for that.
Of course the i9 is simply the worse choice. Any high-end CPU is the worse choice for gaming, lol. For gaming-only purposes, the cheaper CPU is always the better choice, unless you go uber cheap and buy a CPU with short legs. Especially since most gamers do not have a 4090 GPU, I'd argue even the 7800X3D for the majority of its lifetime was a terrible choice for gaming. Not as bad as the 14900K, but still bad. Like caps lock BAD. The 7600 or the 13600K are the go-to for games, and now that the price has dropped majorly on the 7800X3D, that's also a decent option.

bit_user said:
Oh, so Mr. "Whooping" suddenly decides to act all high and mighty?

I didn't poop all over anything. I just pointed out that you made a good point about that 95 W gaming performance and would be well-advised to learn how to take a win when you have one. As I've said many times before, what always seems to cause these debates to flare up is when you make broad, sweeping statements not supported by the data (if not also the facts). Take the advice or leave it, but you can definitely leave the insults out.
There is nothing wrong with broad, generalized statements. That's what a better product is. It's better in a broad, generalized way. Yes, if you nitpick here and there it might lose some, but in the general picture, the 14900K (or the KS, even better) is the GOAT CPU this gen. It wins more than it loses across a broad range of applications and games, in both performance and efficiency. It's called the veil of ignorance - which CPU would you buy to end up with the fastest one if you didn't know exactly what you'd be doing with it? The 14900K just takes the cake on that one.
 

bit_user

TheHerald said:
Bruh, you realize that - at ISO power - if a product is more efficient it also HAS to be faster as well, and vice versa. Right?
Yes, I was talking about equal TDPs. I spelled it out quite clearly, in my example of scaling down the 7950X along the same lines as the 7900X -> 7900. If you aren't reading my full posts, then there's no way we can have a productive discussion.

Not to say you're not right about efficiency. You can reformulate my hypothetical 7950X scaling and it should work out that way, assuming what I said actually holds. Of course, without the actual data, we'll never know for sure.
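To spell out why I hedged on the efficiency half: equal TDP settings are not equal measured draw, so the hypothetical settles the speed question but not the efficiency one. A quick sketch (the 227.5 fps figure is my earlier hypothetical, not a measurement):

```python
# ISO-power vs. ISO-TDP: fps/W needs measured draw, not the limit setting.
fps_14900k, draw_14900k_w = 225.4, 48.0  # quoted: measured draw at a 65 W TDP
fps_7950x_hypo = 227.5    # hypothetical 65 W figure from my earlier post
draw_7950x_w = None       # unknown without actually testing it

print(f"i9-14900K: {fps_14900k / draw_14900k_w:.2f} fps/W")  # ~4.70
if draw_7950x_w is None:
    print(f"7950X: {fps_7950x_hypo:.1f} fps (hypothetical), but its fps/W "
          "can't be computed until someone measures its draw at that limit")
```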

TheHerald said:
What? We were talking about ST in this one; stop jumping around, man.
Maybe you need to be clearer about which one you're talking about, because you have a tendency to make very broad statements, and it's not always clear when you're speaking broadly or narrowly.

TheHerald said:
Of course the i9 is simply the worse choice. Any high-end CPU is the worse choice for gaming, lol.
Not until the X3D CPUs came along. Prior to that, the most expensive mainstream desktop CPU was always the fastest gaming CPU.

TheHerald said:
For gaming-only purposes, the cheaper CPU is always the better choice, unless you go uber cheap and buy a CPU with short legs. Especially since most gamers do not have a 4090 GPU,
Until now, you haven't been talking about perf/$. Concerning just power and efficiency, this is already interminable.
 