News Core Ultra 9 285K is slower than Core i9-14900K in gaming, according to leaked Intel slide — Arrow Lake consumes less power, though


TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
I have a hunch that the main way they improved efficiency was just by failing to scale frequencies as much as they'd hoped, due to a thermal bottleneck from that small CPU tile, not unlike what AMD has been dealing with since Zen 4 and especially in Zen 5.
Just to prove that my opinion does not flip-flop (unlike other people's): if the only way ARL is more efficient than RPL is by being restricted to lower power, then I'm declaring it now: it's not more efficient. If the 285K limited to 150 W isn't faster than a 14900K limited to 150 W, then it's not more efficient, period. Lowering power limits is just an obfuscation method to pretend you have more efficient CPUs than you actually do.

I'm saying this because this is the main disagreement we've been having in this forum whenever I say Intel has much more efficient chips than AMD. My method of comparison won't change just because Intel might flop with ARL.

Regarding gaming performance, gains come from memory tuning, not overclocking the cores. The 14900K is faster than the 7800X3D (and so is the 7950X3D) when the user knows what the heck he is doing, but I don't see the 285K being faster than the 9800X3D.
 

bit_user

Titan
Ambassador
Just to prove that my opinion does not flip-flop (unlike other people's): if the only way ARL is more efficient than RPL is by being restricted to lower power, then I'm declaring it now: it's not more efficient.
Well, I adhere to a strict definition of efficiency, which is defined by perf/W (or joules per defined workload). It's not contradictory to say something like: CPU B is both slower and more efficient than CPU A. At this point, the consumer must decide which tradeoff better matches their needs & priorities.
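That definition can be put into a quick sketch (all scores and wattages below are made up for illustration, not measurements of any real CPU): a part can post a lower benchmark score yet still come out ahead on performance per watt.

```python
# Hypothetical benchmark scores and package power draws, purely illustrative.

def efficiency(score, watts):
    """Performance per watt for a fixed, defined workload."""
    return score / watts

cpu_a = {"score": 40_000, "watts": 250}  # faster but power-hungry
cpu_b = {"score": 35_000, "watts": 150}  # slower but frugal

# CPU B is slower in absolute terms...
assert cpu_b["score"] < cpu_a["score"]
# ...yet more efficient: ~233 points/W vs. 160 points/W.
assert efficiency(**cpu_b) > efficiency(**cpu_a)
```

Both assertions hold at once, which is the whole point: "slower" and "more efficient" are independent axes, and the buyer picks the tradeoff.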

If the 285K limited to 150 W isn't faster than a 14900K limited to 150 W, then it's not more efficient, period.
Why pick a point measurement, though? And without defining a workload, either.

If you're going to do point to point comparisons, then it shouldn't be at an arbitrary threshold. It should be stock settings, peak overclock, or some other configuration that people would realistically use, along the lines of how TechPowerUp tests.
 

TheHerald

Well, I adhere to a strict definition of efficiency, which is defined by perf/W (or joules per defined workload). It's not contradictory to say something like: CPU B is both slower and more efficient than CPU A. At this point, the consumer must decide which tradeoff better matches their needs & priorities.
I disagree. I think it's incredibly misleading. An obvious example is PC fans. When tested at 100%, the T30 is one of the worst fans in terms of efficiency (performance/noise) ever made. They are freaking horrible. Since the dB scale is logarithmic, the T30 emits as much as 20 times the noise for around 30% more performance compared to your average fan. Looking at a chart like that, nobody who cares about efficiency should ever buy them. Yet in reality, they are the most efficient fan, since at the same noise level they perform the fastest.

Why don't we apply the same methodology to CPUs?

Why pick a point measurement, though? And without defining a workload, either.

If you're going to do point to point comparisons, then it shouldn't be at an arbitrary threshold. It should be stock settings, peak overclock, or some other configuration that people would realistically use, along the lines of how TechPowerUp tests.
The 150 W was just an example, but usually when we are talking about big CPUs with lots of cores, the one that is more efficient in that 125-180 W range will be more efficient at most reasonable power limits. If the 14900K is more efficient than the 285K at 150 W, it will most likely be more efficient at 125 W and 100 W as well, or at least on par. The change in power draw per core between 150 W and 100 W is just too small to make any meaningful difference.

I find stock settings completely useless for efficiency comparisons, mainly because someone who actually cares about efficiency would be willing to change those stock power limits. If he is not willing, he doesn't care about efficiency, in which case the whole efficiency discussion is moot.

Again, going back to the fan example: if you told me noise efficiency is your number-one priority and that's why you bought X fans, because they are the most noise-efficient out of the box, I'd be bleeding from the ears, and I'd suggest you return them, get a couple of T30s, and limit their RPM. It just doesn't make sense to me to care about efficiency but only at the stock power limit (or RPM).
 
It still shows the inversion between the 9700X and 7700X when both are tested with the same updates. However, I'll bet the 9700X was still running at 65 W. They don't mention PBO, and this was before AMD officially blessed running the 9700X at 105 W.
Yeah, I'm still hoping someone does a test-suite run with the 105 W profile to see if that makes any difference, since AFAIK all of the original PBO testing everyone did was at maximum.
 

YSCCC

Commendable
Dec 10, 2022
569
462
1,260
Just to prove that my opinion does not flip-flop (unlike other people's): if the only way ARL is more efficient than RPL is by being restricted to lower power, then I'm declaring it now: it's not more efficient. If the 285K limited to 150 W isn't faster than a 14900K limited to 150 W, then it's not more efficient, period. Lowering power limits is just an obfuscation method to pretend you have more efficient CPUs than you actually do.
That is a stupid way to compare efficiency. You compare the product's efficiency, not a nitpicked point where a specific processor hits its efficiency sweet spot. What matters is how the product is designed to be used by most people, i.e. the stock form, where most users pay for the performance they wanted; comparing efficiency there is meaningful. Nitpicking your specific point is beyond flip-flopping or moot. It is as stupid as calling a Prius inefficient compared to a Porsche because at 200 km/h the Prius drinks more gas, which is a situation 99% of people won't use it for.

Regarding gaming performance, gains come from memory tuning, not overclocking the cores. The 14900K is faster than the 7800X3D (and so is the 7950X3D) when the user knows what the heck he is doing, but I don't see the 285K being faster than the 9800X3D.
Specify and demonstrate this (not in a nitpicked single game; demonstrate it across the board). Cache is always much faster than RAM, and some tests use DDR5-7200 for the 14900K vs. DDR5-6000 for the 7800X3D; no matter how you tune, you cannot make RAM run faster than the extra cache.
 

YSCCC

I have a hunch that the main way they improved efficiency was just by failing to scale frequencies as much as they'd hoped, due to a thermal bottleneck from that small CPU tile, not unlike what AMD has been dealing with since Zen 4 and especially in Zen 5.

People who use direct-die waterblocks on Arrow Lake will probably unleash some meaningful gains that put it past the reach of the 9800X3D, but now we're talking about like 0.01% of the market, if that.
Sorry if I misunderstand the meaning, as English is not my first language. I think you mean they went the efficiency route because they couldn't scale frequency as high as they wanted to make the performance gains?

If I understand correctly, that could be the reason for the power draw vs. performance figures it seems to be having. It feels like déjà vu from the P4 era, when they hoped to take NetBurst to 10 GHz, hit the hard wall of physics, and then went the higher-core-count route, which continues to this day.

FWIW, it feels like for both Intel and AMD, and also Apple and Nvidia, the physics of shrinking nodes and the workarounds of FinFET or gate-all-around are very close to the physical limits of the materials, and further performance gains will be really difficult to achieve without ruining things.
 

YSCCC

Well, looking back at recent leaked rumors, the efficiency argument may or may not be important, considering that Intel has a 295 W profile for ARL.

https://www.tomshardware.com/pc-com...mance-and-extreme-profiles-for-next-gen-chips

Intel has to show a performance lead in gaming at similar power draw to the Ryzen 9000 series.
That will be concerning for quite a few people... although they kept the PL1, a PL2 close to the 14900KS's makes me hope they won't go the Boeing route.
 

TheHerald

That is a stupid way to compare efficiency. You compare the product's efficiency, not a nitpicked point where a specific processor hits its efficiency sweet spot. What matters is how the product is designed to be used by most people, i.e. the stock form, where most users pay for the performance they wanted; comparing efficiency there is meaningful. Nitpicking your specific point is beyond flip-flopping or moot. It is as stupid as calling a Prius inefficient compared to a Porsche because at 200 km/h the Prius drinks more gas, which is a situation 99% of people won't use it for.
Why would I care about how other people use their PCs? That doesn't make sense. Most people might be running their T30 fans at 100% RPM; does that mean I shouldn't buy them? You are not making sense.

Specify and demonstrate this (not in a nitpicked single game; demonstrate it across the board). Cache is always much faster than RAM, and some tests use DDR5-7200 for the 14900K vs. DDR5-6000 for the 7800X3D; no matter how you tune, you cannot make RAM run faster than the extra cache.
That's using XMP, and XMP is slow.
 

bit_user

I disagree. I think it's incredibly misleading. An obvious example is PC fans. When tested at 100%, the T30 is one of the worst fans in terms of efficiency (performance/noise) ever made. They are freaking horrible. Since the dB scale is logarithmic, the T30 emits as much as 20 times the noise for around 30% more performance compared to your average fan. Looking at a chart like that, nobody who cares about efficiency should ever buy them. Yet in reality, they are the most efficient fan, since at the same noise level they perform the fastest.

Why don't we apply the same methodology to CPUs?
I'm not entirely sure whether you're agreeing with me or not. For comparing CPU architectures, I've always said it's more informative to look at perf/W curves than point measurements. But if you're going to look at point measurements, use the points that people actually run at, rather than artificial ones. As for comparing specific products, using those real-world configurations as comparison points is the way to go.

Regarding your claim of "20 times more noisy": sound is measured on a logarithmic scale precisely because that aligns with human perception. By comparing on a linear scale, you give a misleading impression of how much noisier a person would actually perceive them to be. The correct way to compare loudness is to subtract on the dB scale and report the difference in dB.
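To put rough numbers on that (a minimal sketch using the standard dB-to-power-ratio conversion and the common rule of thumb that perceived loudness roughly doubles every 10 dB; the 13 dB delta is a made-up figure, not a T30 measurement):

```python
def power_ratio(delta_db):
    """Sound power ratio implied by a level difference in dB: 10^(dB/10)."""
    return 10 ** (delta_db / 10)

def loudness_ratio(delta_db):
    """Rough perceived-loudness ratio: ~2x per 10 dB (psychoacoustic rule of thumb)."""
    return 2 ** (delta_db / 10)

delta = 13  # hypothetical level difference between two fans, in dB
print(round(power_ratio(delta)))        # 20 -> ~20x the sound power...
print(round(loudness_ratio(delta), 1))  # 2.5 -> ...but only ~2.5x as loud to the ear
```

So a "20 times" figure computed on the linear power scale corresponds to a far smaller perceived difference, which is exactly why the comparison should stay in dB.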
 

bit_user

Well, looking back at recent leaked rumors, the efficiency argument may or may not be important considering the fact that Intel has a 295W profile for ARL.
That is probably going to require a serious water-cooling setup, not least because of the hotspot problem they will face from such a small compute tile. All the Intel fanboys who heaped hatred and scorn on AMD for its thermal bottlenecking in Zen 4 and beyond should get ready for a taste of their own medicine.
 

bit_user

Sorry if I misunderstand the meaning, as English is not my first language. I think you mean they went the efficiency route because they couldn't scale frequency as high as they wanted to make the performance gains?
What I meant is that I expect Arrow Lake will face a thermal bottleneck/hotspot problem limiting its clock speeds. That, in turn, will constrain it to run at lower power levels that are more efficient. The main reason Raptor Lake gets into trouble with efficiency is that it's allowed to clock too high. Power usage scales nonlinearly with frequency, so it ends up running very inefficiently. If they had capped it at lower power limits, it wouldn't have such a bad reputation.
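That nonlinear scaling can be sketched with a toy model (all constants below are illustrative assumptions, not Raptor Lake measurements): dynamic power goes roughly as C·V²·f, and since voltage must also rise near the top of the frequency curve, power there grows roughly with the cube of frequency.

```python
def dynamic_power(freq_ghz, base_freq=4.0, base_power=65.0, exponent=3.0):
    """Toy model: power ~ f^3 once voltage has to scale with frequency.
    base_freq/base_power are made-up anchor values, not measured data."""
    return base_power * (freq_ghz / base_freq) ** exponent

for f in (4.0, 5.0, 6.0):
    print(f"{f} GHz -> ~{dynamic_power(f):.0f} W")
# The last stretch of frequency is the expensive part: 5 GHz costs ~2x the
# power of 4 GHz, and 6 GHz ~3.4x, which is why a modest clock cap saves so much.
```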

If I understand correctly, that could be the reason for the power draw vs. performance figures it seems to be having. It feels like déjà vu from the P4 era, when they hoped to take NetBurst to 10 GHz, hit the hard wall of physics, and then went the higher-core-count route, which continues to this day.
It's not the same as Pentium 4, because Intel clearly understands they can't keep scaling frequency and that's why they focused so much on IPC improvements in both Lion Cove and Skymont.

FWIW, it feels like for both Intel and AMD, and also Apple and Nvidia, the physics of shrinking nodes and the workarounds of FinFET or gate-all-around are very close to the physical limits of the materials, and further performance gains will be really difficult to achieve without ruining things.
I think thermal density is becoming a more pressing problem than gate density. It's an easier problem for GPUs, because they can always increase performance by adding more ALUs. CPUs have a thorny problem that IPC increases are very costly in terms of gates.
 

YSCCC

What I meant is that I expect Arrow Lake will face a thermal bottleneck/hotspot problem limiting its clock speeds. That, in turn, will constrain it to run at lower power levels that are more efficient. The main reason Raptor Lake gets into trouble with efficiency is that it's allowed to clock too high. Power usage scales nonlinearly with frequency, so it ends up running very inefficiently. If they had capped it at lower power limits, it wouldn't have such a bad reputation.


It's not the same as Pentium 4, because Intel clearly understands they can't keep scaling frequency and that's why they focused so much on IPC improvements in both Lion Cove and Skymont.


I think thermal density is becoming a more pressing problem than gate density. It's an easier problem for GPUs, because they can always increase performance by adding more ALUs. CPUs have a thorny problem that IPC increases are very costly in terms of gates.
IMO the problem is also on the longevity side: as the wires get ever thinner, electromigration becomes more of an issue, though of course thermal density will likely be hit first.

Why would I care about how other people use their PCs? That doesn't make sense. Most people might be running their T30 fans at 100% RPM; does that mean I shouldn't buy them? You are not making sense.

It is because your claim about efficiency is complete nonsense. Here, in a forum, we are discussing the GENERAL bloody product, i.e. what is specified by the vendor, not how some random individual will use it in some weird way. Nobody should care about how you use your PC; that's why discussions and reviews should be based on the vendor-designed spec, nothing more, nothing less. Discussing the stock product's pros and cons is fair discussion when factual data is given; how you decide to use or purchase it is your decision. And you derail the thread once again toward your pet topics, which mean nothing to everyone else.

e.g. it doesn't matter that Bulldozer was a crap generation of CPUs; that doesn't contradict Ivy Bridge being a much better CPU than Bulldozer. If someone has a use case, or the FX badge makes him feel better, or he just has the money for it, that is completely fine. But if someone claims that because the FX-8350 beats the i7-3770K in a POV-Ray test, the FX-8350 is the faster CPU, then that is BS in general.

That's using XMP, and XMP is slow.
XMP is factory-guaranteed to work, at least most of the time, unless you have very bad luck or buy a crazy-high-frequency kit that needs a top-1% IMC. Other "tuning" (AKA overclocking) varies greatly from one chip to another, and it is beyond stupid to compare, for example, a stock 7800X3D to a cherry-picked 14900K with a cherry-picked RAM kit, tuned very carefully, maybe come out faster, and then say the 14900K is faster in gaming (which there is no proof to even be true).
 