News Intel offers new guidance on 13th and 14th Gen CPU instability — but no definitive fix yet

Status
Not open for further replies.

TheHerald

Upstanding
Feb 15, 2024
409
99
260
Then you really, really, really don't care about producing numbers of actual consequence to real users. Then, I'm left to ask why bother? The only thing left to fight over would be Internet Points, which are even more worthless than they sound.
What's a real user? I think I'm one of them, no?

I mean, you yourself sat down and made an efficiency curve because you realize it's more important than just "pressing the Cinebench button".
 

TheHerald

And do you routinely run your CPU at settings that come as close as possible to matching the performance of a competing CPU, which you didn't buy?
In a way, yes. I'm running my CPU capped at a specific power limit - and I'd use that same power limit with any CPU I'd have.
 

bit_user

Polypheme
Ambassador
In a way, yes. I'm running my CPU capped at a specific power limit - and I'd use that same power limit with any CPU i'd have.
But you're using that power limit because you want to limit any CPU to that power limit, not because you want to match it up to the power that would be used by a different CPU.

More importantly, that doesn't address your insistence on iso-performance testing. Again, please show me who is buying one CPU instead of its competitor, but tuning it to try and make it exactly match the performance of that competitor!
 
But you're using that power limit because you want to limit any CPU to that power limit, not because you want to match it up to the power that would be used by a different CPU.

More importantly, that doesn't address your insistence on iso-performance testing. Again, please show me who is buying one CPU instead of its competitor, but tuning it to try and make it exactly match the performance of that competitor!
Everybody that buys an OEM or simple/normal prebuilt system....
You have told us yourself many, many times that you always get systems from your work that are limited by the cooling/system itself.
You are not doing it yourself, but the CPU is still limited to 150W or whatever the cooling can handle/was spec'd for.
 

bit_user

You have told us yourself many many times that you always get systems from your work that are limited by the cooling/system itself.
The Dell compact desktops we use actually implement Intel's stock power limits for the CPUs they have (i9-12900: PL1=65W, PL2=202W). Their cooling setup is adequate to prevent thermal throttling under those limits.

The only way I got it to thermally throttle is when I used a utility to poke the MSRs to increase the power limits. Dell's BIOS provides no way to modify them.
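As an aside on how such limits are stored: the package limits live in MSR_PKG_POWER_LIMIT (0x610), with fields expressed in units reported by MSR_RAPL_POWER_UNIT (0x606). The sketch below encodes the i9-12900's stock PL1/PL2 pair; the bit layout follows the Intel SDM, while the 1/8 W power unit and the enable bits reflect a typical configuration (assumptions, not a reading from this machine):

```python
# Illustrative sketch: packing Intel package power limits (PL1/PL2) into the
# MSR_PKG_POWER_LIMIT (0x610) layout described in the Intel SDM.
# The 1/8 W power unit is the value commonly reported by
# MSR_RAPL_POWER_UNIT (0x606) and is assumed here.

POWER_UNIT_W = 1 / 8  # typical RAPL power unit (1 / 2**3 W) -- assumption

def encode_pkg_power_limit(pl1_watts: float, pl2_watts: float) -> int:
    """Pack PL1/PL2 (in watts) into the 64-bit MSR value, enable bits set."""
    pl1_field = int(pl1_watts / POWER_UNIT_W) & 0x7FFF   # bits 14:0
    pl2_field = int(pl2_watts / POWER_UNIT_W) & 0x7FFF   # bits 46:32
    value = pl1_field | (1 << 15)            # PL1 limit + PL1 enable (bit 15)
    value |= (pl2_field << 32) | (1 << 47)   # PL2 limit + PL2 enable (bit 47)
    return value

def decode_pl1_watts(msr_value: int) -> float:
    return (msr_value & 0x7FFF) * POWER_UNIT_W

def decode_pl2_watts(msr_value: int) -> float:
    return ((msr_value >> 32) & 0x7FFF) * POWER_UNIT_W

# The i9-12900 limits mentioned above: PL1=65 W, PL2=202 W
msr = encode_pkg_power_limit(65, 202)
print(decode_pl1_watts(msr), decode_pl2_watts(msr))  # 65.0 202.0
```

On Linux, a value like this is what a tool such as `wrmsr` from msr-tools would write to register 0x610 (root required), which matches the "poke the MSRs" approach above when the BIOS exposes nothing.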
 

TheHerald

But you're using that power limit because you want to limit any CPU to that power limit, not because you want to match it up to the power that would be used by a different CPU.

More importantly, that doesn't address your insistence on iso-performance testing. Again, please show me who is buying one CPU instead of its competitor, but tuning it to try and make it exactly match the performance of that competitor!
I didn't suggest iso performance but iso power. I think a lot of people do, and a lot more would if they realized it's something that can be done in a second. That's the whole reason there is a 7900X and a 7900: most want a lower-power CPU, but they don't know you can do that in seconds.

Yes, I want to use my CPU at a specific power, so I'd like to see which is the fastest CPU at that power on my budget. That's why, for example, I'd buy a 14700K over a 7800X3D any day of the week: it's just way faster.

The question is: who, among those who actually know they can change the power limit (even from within Windows), cares about out-of-the-box efficiency? I'd argue nobody, except the usual fans of a specific company, cough cough.
 

bit_user

I didn't suggest iso performance but iso power.
Yes you have. You were previously talking about closing a 1.8% performance gap between R9 7950X and i9-14900K, on one benchmark, by boosting the AMD CPU to try and make it perform the same as the i9.

I think a lot of people do,
But at what power? These perf/W functions are curves, not straight lines. So, if you're going to pick some common power limit, how do you know which is the right one? Better test several. Oh, but then you'd have full perf/W curves, like I've been saying!
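A sweep like that is easy to reduce to a curve. The sketch below uses made-up (watts, score) samples, purely to illustrate why a single shared wattage can't summarize the whole curve:

```python
# Hypothetical sketch: turning a power-limit sweep into a perf/W curve.
# The (watts, score) samples are invented placeholders, not review data.
samples = [(65, 20000), (125, 30000), (200, 36000), (253, 38000)]

curve = [(w, s, s / w) for (w, s) in samples]  # (watts, score, points-per-watt)
for watts, score, eff in curve:
    print(f"{watts:>4} W  {score:>6} pts  {eff:7.1f} pts/W")

# Efficiency falls monotonically as the limit rises on this made-up curve,
# which is why picking one shared wattage is arbitrary: two CPUs' rankings
# can flip depending on where you sample.
best = max(curve, key=lambda t: t[2])
print("most efficient sampled point:", best[0], "W")
```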

the whole reason there is a 7900x and a 7900, cause most want a lower power cpu but they don't know you can do that in seconds.
I think there's a good chance it's because not all of the compute dies would bin high enough to qualify for use in X-series CPUs.

The question is, who that actually knows they can change the power, even from within windows, cares about out of the box efficiency? I'd argue nobody, except the usual fans of a specific company cough cough
Okay, so now you're arguing for testing them at out-of-box defaults, like I've also been saying!
 

TheHerald

Yes you have. You were previously talking about closing a 1.8% performance gap between R9 7950X and i9-14900K, on one benchmark, by boosting the AMD CPU to try and make it perform the same as the i9.
Because those two are actually very close in performance, you can match them. Usually that's not the case: Intel CPUs are much faster than AMD ones, so you can't really match them in performance, not unless you are going to push thousands of watts into them.

But at what power? These perf/W functions are curves, not straight lines. So, if you're going to pick some common power limit, how do you know which is the right one? Better test several. Oh, but then you'd have full perf/W curves, like I've been saying!
Doesn't matter. The most efficient at 50w is still going to be a chart topper at any sane power draw. Take the 14700k vs the 7800x 3d, is there really any point in the curve that the 7800x 3d is more efficient?
Okay, so now you're arguing for testing them at out-of-box defaults, like I've also been saying!
No, I'm not. I'm asking who cares about out of the box when they know they can change the out-of-the-box behavior?
 

bit_user

Because those 2 are actually very close in performance, you can match them.
It's still not what people actually do!

Also, the fact that they're already close in performance means you don't need to try and exactly match them! You're just mad that the AMD CPU beats it on efficiency and looking for a way to take that away from it.

Usually it's not the case, Intel CPUs are much faster than amd ones so you can't really match them in performance
LOL, oh lame joke of an argument, again. Only with absurd matchups that don't match the reality of what market segments these CPUs fall into.

Doesn't matter. The most efficient at 50w is still going to be a chart topper at any sane power draw.
LOL, pretending as though this data doesn't exist.

[Attached image: cj1qY3F.png (ComputerBase efficiency curves)]


Sure, the i9-13900K was most efficient at 50W, but didn't top the chart much after that!

Take the 14700k vs the 7800x 3d, is there really any point in the curve that the 7800x 3d is more efficient?
There you go again, with a joke matchup. You're comparing a more expensive CPU with way more cores/threads to one that's older, cheaper, and more narrowly-targeted at gaming.

No, im not. Im asking who cares about out of the box when they know they can change the out of the box behavior?
Most people do one of two things. Either they take the out-of-the-box defaults, or they do some sane tuning to maximize the performance of their setup. They never tune one CPU to match the performance or the power profile of another CPU.

Your whole approach is designed to fool people not smart enough or not invested enough to see through your smoke screen. If it's not, then you should see the logic in my argument and accept the fact that you're wrong about this.
 

TheHerald

It's still not what people actually do!

Also, the fact that they're already close in performance means you don't need to try and exactly match them! You're just mad that the AMD CPU beats it on efficiency and looking for a way to take that away from it.
Not at all. I've said multiple times that the 7950x beats the 13/14900k in MT efficiency. Why would I be mad about it, lol.

My argument was that it doesn't make sense to claim the 14900K uses 50% more power for 5% more performance, because in reality the 7950X will also need a LOT more power to be 5% faster. That's what the argument was all about.
LOL, oh lame joke of an argument, again. Only with absurd matchups that don't match the reality of what market segments these CPUs fall into.

Is it? The 7800X3D was price-matched with the 13700K and the 14700K for the majority of its existence. Even right now, as we speak, it's at its lowest price ever and it's still more expensive than a 13700 / 13700KF / 13700F.

The 7700x is still price matched with the 13600k etc.
LOL, pretending as though this data doesn't exist.
[Attached image: cj1qY3F.png (ComputerBase efficiency curves)]

Sure, the i9-13900K was most efficient at 50W, but didn't top the chart much after that!
What do you mean, man? It's still the 2nd most efficient desktop CPU in existence. Maybe I'm using the phrase wrong; isn't a chart-topper something that retains a high ranking, but not necessarily the highest?

There you go again, with a joke matchup. You're comparing a more expensive CPU with way more cores/threads to one that's older, cheaper, and more narrowly-targeted at gaming.
No, not really. The 7800X3D for the majority of its existence was between 380 and 450€. It was only in the last 3 months that it dropped below 380.

The 14700 and all its versions (F, K, KF) were also at that price point. So it's only in the last 3 months that the X3D is cheaper, but it's still on par with or more expensive than the 13700 and all its versions.
Most people do one of two things. Either they take the out-of-the-box defaults, or they do some sane tuning to maximize the performance of their setup. They never tune one CPU to match the performance or the power profile of another CPU.

Your whole approach is designed to fool people not smart enough or not invested enough to see through your smoke screen. If it's not, then you should see the logic in my argument and accept the fact that you're wrong about this.
Maximize performance - you mean overclocking it? Well, those people don't care about efficiency then, so who cares? They are not part of the efficiency discussion. I'm talking about people that actually CARE about efficiency. Those people either don't know they can change the power limits, or they do know, in which case out of the box is meaningless to them.
 

bit_user

What my argument was is that it doesn't make sense to claim the 14900k uses 50% more power for 5% more performance cause in reality the 7950x will also need a LOT more power to be 5% faster. That's what the argument was all about.
What makes sense to claim is what will actually happen, in the real world!! You can argue whether it makes sense for the CPU to be configured like that, but that's a separate discussion from a fair portrayal of what behavior people should anticipate (unless they adjust accordingly).

You can't have it both ways. You can't have the i9-14900K's peak performance numbers and then measure its efficiency in some different configuration, with lower peak performance numbers but better efficiency. Unless you present a full set of data for both configurations, that is.
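That constraint is easy to state with numbers. The figures below are invented placeholders, only to show why a performance number from one configuration can't be paired with an efficiency number from another:

```python
# Made-up numbers illustrating the "same configuration" point: each config
# must contribute its own (performance, efficiency) pair.
configs = {
    "stock (253 W)": {"score": 38000, "watts": 253},
    "capped (125 W)": {"score": 31000, "watts": 125},
}

for name, c in configs.items():
    eff = c["score"] / c["watts"]
    print(f"{name}: {c['score']} pts, {eff:.0f} pts/W")

# Mixing them (stock score over capped wattage) describes a configuration
# that doesn't exist, and flatters the CPU:
bogus = configs["stock (253 W)"]["score"] / configs["capped (125 W)"]["watts"]
real_stock = configs["stock (253 W)"]["score"] / configs["stock (253 W)"]["watts"]
print(f"bogus pairing: {bogus:.0f} pts/W vs actual stock: {real_stock:.0f} pts/W")
```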

Is it? The 7800x 3d was price matched with the 13700k and the 14700k for the majority of it's existence.
Nope. Check PcPartPicker's price history for these CPUs. The i7-14700K has consistently been more expensive than the R7 7800X3D by about $50 or so. Compared with the i7-13700K, it's been a little more neck-and-neck.

Even right now, as we speak, it's at it's lowest price ever and it's still more expensive than a 13700 / 13700kf / 13700f.
That's only because the i7-13700's pricing has recently plummeted. That said, it's still a weird comparison since those CPUs aren't as good at gaming. The 7800X3D is more of a specialty product and everyone knows this.

The 7700x is still price matched with the 13600k etc.
Not "still", but yes it's currently a little cheaper. If you set the history window out to a year or more, you can see that's because the i5-13600K has also dropped a lot, this calendar year. It spent most of last year at or above $300.

Maybe im using the phrase wrong, isn't a chart topper something that retains high ranking but not necessarily the highest?
IMO, "topper" means one who sits on top. There is only one top. If you mean "high-ranking", then say what you mean.

No, not really. The 7800x 3d for the majority of it's existence was between 380 and 450€. It was only the last 3 months it dropped below 380.

The 14700 and all it's version (f, k,kf ) were also at that price point. So it's only the last 3 months the 3d is cheaper, but it's still on par or more expensive than the 13700 and all it's version.
I can't speak to European pricing, but I've already said what's happened with the $USD prices, not to mention that the matchup is weird for other reasons.

I'm talking about people that actually CARE about efficiency. These people either don't know they can change the power limits - or they do know in which case out of the box is meaningless to them.
Fine, but then pick a point where each CPU starts to level off. That's what most efficiency-minded users would do, if they do any tuning at all. The "knee" in that curve is a lot lower for the Zen 4 CPUs, as you can see from the ComputerBase and Anandtech data (even accepting the flaws in the latter). To arbitrarily pick a given wattage to compare them at is still an artificial comparison.
 
What makes sense to claim is what will actually happen, in the real world!! You can argue whether it makes sense for the CPU to be configured like that, but that's a separate discussion than a fair portrayal of what behavior people should anticipate occurring (unless they adjust, accordingly).

You can't have it both ways. You can't have the i9-14900K's peak performance numbers and then measure its efficiency in some different configuration, with lower peak performance numbers but better efficiency. Unless you present a full set of data for both configurations, that is.
Get over yourself, the number of people that will buy a hand-crafted system that will allow a 14900K to run at the power levels shown in reviews is tending towards ZERO.
It's the 1% that are super-enthusiasts, and to then push the agenda that those people have no idea how to change settings in the BIOS, so they will be running out-of-the-box, is hilarity itself.

Also, there are plenty of reviews that show default (253W) and limitless 330+W to have like a 2-3% difference in performance....
Look at the review that you post all the time in the form of that graph.
253W vs 315W = 1% difference in performance, and that's in full hardcore multithreading apps.
https://www.computerbase.de/2022-10...bschnitt_leistung_in_apps_bei_reduzierter_tdp
 

bit_user

Get over yourself,
Same to you.

the amount of people that will buy a hand crafted system that will allow a 14900k to run at the power levels shown in reviews is tending towards ZERO. It's the 1% that are super enthusiasts and to then push the agenda that those people have no idea how to change settings in the bios so that they will be using out-of-the-box is hilarity itself.
If you actually read what I wrote, then you wouldn't be saying this. I didn't say the testing had to be done with only out-of-the-box defaults. Among other things, I said you could compare using sane settings, like real people would actually use, but that whatever settings are used to measure performance should also be used to measure efficiency (and vice versa)!

Also there are plenty of reviews that show default (253W ) and limitless 330+W to have like 2-3% difference in performance....
Look at the review that you post all the time in form of that graph.
253W vs 315W = 1% difference in performance, and that's in full hardcore multithreading apps.
https://www.computerbase.de/2022-10...bschnitt_leistung_in_apps_bei_reduzierter_tdp
It depends on the app, of course. Cinebench R23 is 4%.
 
I said you could compare using sane settings, like real people would actually use,
But then you are only testing how dumb people are and not the efficiency of the CPU....
Reviews should show people results at the sweet spot of the efficiency curve for every CPU they test, plus whatever other point they want in addition to that, so that people can make informed decisions, like you always say you like.

Also, sane settings would be the TDP each company tells you, not the absolute highest limit they allow.
And at the 14900K's TDP of 125W, the 7950X uses 40% more power to get like 5% more performance than the 14900K....
That would be sane settings that real people would use, not extreme settings that enthusiasts and benchmark chasers would use.
https://www.techpowerup.com/review/...ke-tested-at-power-limits-down-to-35-w/8.html
[Attached image: D5TipA9.jpg (TechPowerUp power-limit scaling chart)]
 

bit_user

But then you are only testing how dumb people are and not the efficiency of the CPU....
The CPU has no absolute efficiency. Its efficiency is based on how it's configured and used.

Reviews should be showing people results at the sweet spot of the efficiency curve for every and any CPU they test, and whatever else point they want in addition to that, so that people can make informed decisions, like you always say you like.
If you want to include a data point for such a sweet spot, then that configuration also needs to be included in all of the performance tests, so that people would understand the performance tradeoff of configuring it like that.

Also sane settings would be the TDP each company tells you and not the absolute highest limit they allow you.
Unless the motherboard defaults to the absolute highest limit, in which case that should be one of the configurations tested.

And at TDP of 125W for the 14900k the 7950x uses 40% more power to get like 5% more performance than the 14900k ....
Are you talking about PL1=PL2=125W or PL1=125W; PL2=253W?

Unlike what Toms & ComputerBase do, that TechPowerUp "Application Average" mashes together single-threaded and multi-threaded apps. I think that's misleading. I'd want to know how it behaves on heavily-threaded apps and lightly threaded ones, separately. Otherwise, when I fire up a rendering job or a big software compilation job, I might be in for an unpleasant surprise.

It's like how cars have their fuel efficiency rated for city and highway driving. If you just gave one overall number that encompassed both, it would presume a person does some particular blend. People who do a lot more highway driving could end up buying the wrong car, if it turns out not to have nearly as much range as they were led to believe.
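The fuel-economy analogy can be made concrete with illustrative numbers (the 18/30 mpg figures and the 55/45 blend weight are assumptions, not EPA data). Note that the right blend is a harmonic mean, because fuel per mile, not miles per gallon, is what adds:

```python
# Illustrative numbers for the city/highway analogy: a single blended rating
# bakes in one particular driving mix (55% city here, an assumed weight) and
# misleads anyone whose mix differs.
city_mpg, highway_mpg = 18.0, 30.0

def combined_mpg(city_share: float) -> float:
    # Fuel per mile adds linearly, so the blend is a harmonic mean of the
    # two ratings, weighted by the share of miles in each mode.
    return 1.0 / (city_share / city_mpg + (1.0 - city_share) / highway_mpg)

label_rating = combined_mpg(0.55)   # the single published number
highway_heavy = combined_mpg(0.10)  # a mostly-highway driver's reality
print(f"label: {label_rating:.1f} mpg, highway-heavy driver: {highway_heavy:.1f} mpg")
```

The same logic applies to a muddled single-plus-multi-threaded "Application Average": it is only accurate for one assumed workload mix.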
 

TheHerald

Unlike what Toms & ComputerBase do, that TechPowerUp "Application Average" mashes together single-threaded and multi-threaded apps. I think that's misleading. I'd want to know how it behaves on heavily-threaded apps and lightly threaded ones, separately. Otherwise, when I fire up a rendering job or a big software compilation job, I might be in for an unpleasant surprise.
They have that as well

[Attached image: cinebench-multi.png (TechPowerUp Cinebench multi-thread chart)]

[Attached image: efficiency-multithread.png (TechPowerUp multi-thread efficiency chart)]



And it's kinda telling: in the only segment where AMD has a lead in efficiency (7950X / 7950X3D), it turns out it's really not that big. It defies any rhyme or reason when people claim AMD is more efficient, like what the hell. The whole reason I'm avoiding Ryzen desktop parts is their atrocious efficiency, yet here we are with people claiming otherwise.
 
And it's kinda telling, the only segment AMD has a lead in efficiency (7950x / 7950x 3d) - it turns out it's really not that big. It really defies any rhyme or reason when people claim AMD is more efficient, like what the hell. The whole reason im avoiding Ryzen desktop parts is their atrocious efficiency, yet here we are people claiming otherwise.
The vast majority of testing is done without power limits, which skews people's view. Before tweaking, even at stock settings, AMD tends to have an absolute efficiency lead; it's just not very big, and it varies by workload. The 200W 14900K results are a pretty good indicator of what can be done with current Intel CPUs if one is willing to spend some time figuring out what works for specific use cases.

When it comes to overall gaming efficiency though AMD generally has Intel beat, and the X3D parts absolutely do unless you start disabling parts of the Intel CPU (or accept sub par performance for the sake of efficiency).
Unlike what Toms & ComputerBase do, that TechPowerUp "Application Average" mashes together single-threaded and multi-threaded apps. I think that's misleading. I'd want to know how it behaves on heavily-threaded apps and lightly threaded ones, separately. Otherwise, when I fire up a rendering job or a big software compilation job, I might be in for an unpleasant surprise.
TPU does individual application graphing they just don't put it in comparison graphs:
[Attached image: power-per-application.png (TechPowerUp per-application power chart)]

https://www.techpowerup.com/review/intel-core-i9-14900k/22.html
 

TheHerald

The vast majority of testing is done without power limits which skews people's view. Before tweaking even at stock settings AMD tends to have an absolute efficiency lead it's just not very big, and varies by workload. The 200W 14900K results are a pretty good indicator of what can be done with current Intel CPUs if one is willing to spend some time to figure out what works for specific use cases.

When it comes to overall gaming efficiency though AMD generally has Intel beat, and the X3D parts absolutely do unless you start disabling parts of the Intel CPU (or accept sub par performance for the sake of efficiency).

TPU does individual application graphing they just don't put it in comparison graphs:
[Attached image: power-per-application.png (TechPowerUp per-application power chart)]

https://www.techpowerup.com/review/intel-core-i9-14900k/22.html
I agree that the X3Ds are more efficient in gaming, but there are some caveats here.

1) It only applies to the X3D chips. The non-X3D chips are not, at least not at (yeah, I'm going to play that drum again) iso-wattage testing. In fact, I'm fully confident that Intel chips are going to be both faster and more efficient in gaming than Ryzen chips (the non-X3D parts).

2) The difference in gaming efficiency is just not something that could ever register on any radar. I personally fully admit that a 14900K, if left completely unchecked, will draw a ***load of power in games. But when we are talking about 4K and 4090 PCs, the end result will be a less-than-10% power draw difference. A PC + monitor with a 4090 will draw what, 400W for the GPU and another 150W for the rest of the PC and the monitor, so that's 700W from the wall after PSU losses. So okay, the 14900K will draw 100W at 4K vs 50W for the 7950X3D, so 700 vs 750 is just not that big a deal imo. And again, that's with a stock, out-of-the-box 14900K, which is not the proper way to run it if your interest is purely gaming.
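For what it's worth, that wall-power arithmetic can be checked directly. The 92% PSU efficiency below is an assumed figure (the GPU and rest-of-system wattages come from the post itself); note that the PSU term cancels out of the percentage difference:

```python
# Back-of-envelope version of the wall-power argument above. PSU efficiency
# (92%) is an assumption; GPU/rest-of-system numbers come from the post.
GPU_W, REST_W, PSU_EFF = 400, 150, 0.92

def wall_watts(cpu_w: float) -> float:
    # DC load divided by PSU efficiency gives AC draw at the wall.
    return (GPU_W + REST_W + cpu_w) / PSU_EFF

intel = wall_watts(100)  # 14900K while gaming (the post's estimate)
amd = wall_watts(50)     # 7950X3D while gaming (the post's estimate)
print(f"{intel:.0f} W vs {amd:.0f} W at the wall: "
      f"{(intel / amd - 1) * 100:.1f}% higher total draw")
```

Under these assumed numbers the whole-system gap works out to roughly 8%, consistent with the "less than 10%" claim above.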
 

bit_user

They have that as well

[Attached image: cinebench-multi.png (TechPowerUp Cinebench multi-thread chart)]

[Attached image: efficiency-multithread.png (TechPowerUp multi-thread efficiency chart)]
I know, but another beef I have with TechPowerUp is that they give you no multithreaded and single-threaded averages (unlike Toms & ComputerBase). The reason people benchmark multiple different programs is that they don't all correlate perfectly.

So, we're put in this awkward situation of taking a mixed, muddled single + multi-threaded average, or taking just single-program single- and multi-threaded data. It really wouldn't be hard for them to separate out their averages like Toms and ComputerBase do.

And it's kinda telling, AMD has a lead in efficiency (7950x / 7950x 3d) - it turns out it's really not that big.
Oh, but it is. You already forgot what I've been saying, which is that it's silly to compare across product segments, which is exactly what that chart does. It puts the R9 7950X3D next to the i3-13400F. That would be like having a fuel economy chart that mixes together big 4x4 pickup trucks with small economy cars. Nobody in the market for one would consider the other, because the way you use them is fundamentally different as are the reasons for buying them.

Not only that, but it shows Intel CPUs running at various customized settings against AMD running at stock.

It really defies any rhyme or reason when people claim AMD is more efficient,
They tend to be efficient within their respective product segments and especially when you put a leash on them (i.e. the 65 W or X3D versions). The one defying rhyme and reason is you, with the bad jokes that are the product matchups you're putting forward.
 

TheHerald

Oh, but it is. You already forgot what I've been saying, which is that it's silly to compare across product segments, which is exactly what that chart does. It puts the R9 7950X3D next to the i3-13400F. That would be like having a fuel economy chart that mixes together big 4x4 pickup trucks with small economy cars. Nobody in the market for one would consider the other, because the way you use them is fundamentally different as are the reasons for buying them.
What 13400F are you talking about? The chart has the 14900K at 125W being as efficient as the 7950X3D, and at 200W beating it in performance. If you extrapolate the data, the 14900K at around 160W should be matching the performance of the 7950X3D, which is running at 140W. That's not a big difference.
Not only that, but it shows Intel CPUs running at various customized settings against AMD running at stock.
How does that change anything? What would running AMD cpus at various customized settings achieve?
They tend to be efficient within their respective product segments and especially when you put a leash on them (i.e. the 65 W or X3D versions). The one defying rhyme and reason is you, with the bad jokes that are the product matchups you're putting forward.
But they aren't, not even at 65W. Low loads, single-threaded, idle (that's what you'd probably care about most with 65W 8-core parts) show incredible inefficiency. I mean, they are basically losing, by a big 40-50% margin, to Alder Lake parts (the 13400 is Alder Lake-based, btw).
 

bit_user

The vast majority of testing is done without power limits which skews people's view. Before tweaking even at stock settings AMD tends to have an absolute efficiency lead it's just not very big, and varies by workload. The 200W 14900K results are a pretty good indicator of what can be done with current Intel CPUs if one is willing to spend some time to figure out what works for specific use cases.
Efficiency @ 200 W still isn't looking that great for it. To get good efficiency out of it, you have to dial it all the way back to 125 W, but that comes at a 19.2% performance penalty, putting it closer to the 7900X than the 7950X. I'm not sure people are going to do that, when they could just get one of those CPUs.

TPU does individual application graphing they just don't put it in comparison graphs:
[Attached image: power-per-application.png (TechPowerUp per-application power chart)]

https://www.techpowerup.com/review/intel-core-i9-14900k/22.html
Yes, and that shows exactly what I'm talking about. You have this odd mix of lightly-threaded and heavily-threaded programs muddled together in their overall average.
 

TheHerald

Efficiency @ 200 W still isn't looking that great for it. To get good efficiency out of it, you have to dial it all the way back to 125 W, but that comes at a 19.2% performance penalty and closer to the 7900X than the 7950X. I'm not sure people are going to do that, when they could just get one of those CPUs.
Yeah - that's called double standards. The 7950X will be consuming 80% more power at that point, so wth are you even talking about, man? You are saying efficiency doesn't look great at 200W, but it's MORE efficient than the 7950X at that point.

Come on now....let's try not to be biased, right?
 

bit_user

What 13400f are you talking about? The chart has the 14900k at 125w being as efficient as the 7950x 3d and at 200w beating it in performance.
We've just been through this. You can't cherry-pick a performance measurement in one configuration and an efficiency measurement in another! For each configuration (125W, 200W, etc.), you have to take performance and efficiency together!

Another bad thing that graph does is to compare tuned Intel CPUs against stock AMD. Anyone who's going to tweak around with their Intel CPU would also certainly tweak with their AMD CPU, if they got one! So, how does it make any sense to compare against only the competition at stock!

That's the beauty of this data from ComputerBase. It lets you see how each CPU performs, according to comparable tuning.

cj1qY3F.png


How does that change anything? What would running AMD cpus at various customized settings achieve?
The above ComputerBase data answers that quite nicely.

But they aren't, not even at 65w. Low loads, single threaded, idle
Ah, so you're back to basing sweeping statements on specific cases.

The efficiency picture is mixed. That's all you can really say.

(that's what youd probably care about most with 65w 8core parts) shows incredible inefficiency. I mean they are basically losing - by a big 40-50% margin to alderlake parts (13400 is alderlake based btw).
First, that's one of your classic bad-joke matchups, since even the R5 7600X outperforms the i5-13400F on CineBench MT.

Second, the i5-13400 and i5-13400F are actually mixed. There are product codes based both on Alder Lake (C0) and Raptor Lake (B0) dies, as you can see from the Ordering and Compliance information tabs on their Ark pages. I can't link directly to those tabs; you'll have to click them in the list at the left side of the page.

If someone doesn't know/say which one they tested, we can't really make any assumptions.
 

bit_user

Yeah - that's called double standards. The 7950x will be consuming 80% more power at that point, so wth are you even talking about man?
Oh, you want to talk about double-standards? How about not comparing Intel CPU in modified configuration against AMD in stock! Again, if someone is going to buy a CPU and tune it for better efficiency, why wouldn't they base that decision on a comparison of both CPUs running with modified settings, like ComputerBase did?
 