News Core Ultra 9 285K is slower than Core i9-14900K in gaming, according to leaked Intel slide — Arrow Lake consumes less power, though

Stesmi

Reputable
Sep 1, 2021
31
34
4,560
Article said:
... consistently faster in content creation applications, including PugetBench, Blender, Cinebench 2024, and POV-Ray.
I know, I know, I may be nitpicking, but how on earth is Cinebench 2024 a content creation application, unless you mean a tool used by benchmarkers to make... content?
 
  • Like
Reactions: rtoaht

philipemaciel

Distinguished
Aug 23, 2011
62
12
18,635
"The Core Ultra 9 285K-based machine consumes around 447W"

I remember when the 220W FX-9590 was released and all the due and fair criticism it received.

How is an abhorrence like this (never mind the 14900K being worse) even seeing the light of day?
 
  • Like
Reactions: iLoveThe80s

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
"The Core Ultra 9 285K-based machine consumes around 447W"

I remember when the 220W FX-9590 was released and all the due and fair criticism it received.

How is an abhorrence like this (never mind the 14900K being worse) even seeing the light of day?
I know people can never miss a chance to dunk on Intel, but the article is talking about gaming, and in gaming the majority of the power draw is the GPU. Assuming he is testing with a high-end GPU, 300+ of those 447W is the GPU itself.
 
Mar 12, 2024
16
27
40
We'll see after some driver optimization; most new products take a bit to reach their full potential. I'd be curious to see the die size comparison as well; I'd bet 14th gen is larger.
 

abufrejoval

Reputable
Jun 19, 2020
584
424
5,260
I know, I know, I may be nitpicking, but how on earth is Cinebench 2024 a content creation application, unless you mean a tool used by benchmarkers to make... content?
It does tend to get lost that Cinebench isn't actually the product Maxon is earning money with, but started mostly as a tool to evaluate hardware to use with their content creation software.

To my knowledge they used CPU rendering for the longest time there, to match the quality expectations of their clients.

But now that Maxon (and Cinebench) seems to support high quality rendering also via GPUs, actually using a strong CPU to do Maxon based content creation would be a bad idea.

In GPU rendering via Cinebench 2024, even an RTX 4060 seems to beat my beefiest Ryzen 7950X3D's CPU score, and that machine has an RTX 4090, which might even put rather big EPYCs to shame.

Nobody in his right mind should therefore actually continue to use CPU rendering for Maxon content creation, just as Handbrake is a very niche tool in video content conversion dominated by ASICs doing dozens of streams in real time: both just happen to be readily available to testers, not (or no longer) useful as such.

Publishers make money from creating attention for vendor products that may have very little real-life advantages over previous gen products.

So a car that now has 325km/h top speed vs 312km/h in the previous generation gets a lot of attention, even if the best you can actually hope to achieve is 27km/h in your daily commuter pileups.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
It does tend to get lost that Cinebench isn't actually the product Maxon is earning money with, but started mostly as a tool to evaluate hardware to use with their content creation software.

To my knowledge they used CPU rendering for the longest time there, to match the quality expectations of their clients.

But now that Maxon (and Cinebench) seems to support high quality rendering also via GPUs, actually using a strong CPU to do Maxon based content creation would be a bad idea.

In GPU rendering via Cinebench 2024, even an RTX 4060 seems to beat my beefiest Ryzen 7950X3D's CPU score, and that machine has an RTX 4090, which might even put rather big EPYCs to shame.

Nobody in his right mind should therefore actually continue to use CPU rendering for Maxon content creation, just as Handbrake is a very niche tool in video content conversion dominated by ASICs doing dozens of streams in real time: both just happen to be readily available to testers, not (or no longer) useful as such.

Publishers make money from creating attention for vendor products that may have very little real-life advantages over previous gen products.

So a car that now has 325km/h top speed vs 312km/h in the previous generation gets a lot of attention, even if the best you can actually hope to achieve is 27km/h in your daily commuter pileups.
My 4090 is about 23 times faster than my 12900k, but that's not the point of Cinebench. It is used to see the maximum performance of a CPU. Testing something that only uses 2 or 4 cores might lead you to believe that a 7600x is as fast as a 7950x completely missing the fact that the 7950x can run 3 times as many of those workloads with 0 slowdown.
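To make that throughput argument concrete, here is a minimal Python toy (not tied to any particular CPU; the job size, job count, and worker counts are arbitrary) that runs the same CPU-bound "render" job in parallel with different worker counts:

```python
# Toy throughput test: run the same CPU-bound job with different worker counts.
# Illustrative sketch only -- job size and worker counts are made up, not a real benchmark.
import time
from multiprocessing import Pool

def render_tile(_):
    # Stand-in for one "render" job: pure CPU work.
    total = 0
    for i in range(2_000_000):
        total += i * i
    return total

def run(jobs, workers):
    start = time.perf_counter()
    with Pool(processes=workers) as pool:
        pool.map(render_tile, range(jobs))
    return time.perf_counter() - start

if __name__ == "__main__":
    for workers in (2, 4, 8, 16):
        elapsed = run(jobs=32, workers=workers)
        print(f"{workers:>2} workers: {elapsed:.2f}s for 32 jobs")
```

The total time keeps dropping as long as the worker count stays within the physical core count, which is the kind of scaling an all-core run like Cinebench is meant to expose and a 2-4 thread test hides.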
 

Stesmi

Reputable
Sep 1, 2021
31
34
4,560
It does tend to get lost that Cinebench isn't actually the product Maxon is earning money with, but started mostly as a tool to evaluate hardware to use with their content creation software.

To my knowledge they used CPU rendering for the longest time there, to match the quality expectations of their clients.
Oh yeah, for sure. It really wasn't that long ago that CPU rendering was the only thing used. Or, to me, having used 68000 for raytracing, it's not ... such... a long... time ago. Real3D I think it was called.
But now that Maxon (and Cinebench) seems to support high quality rendering also via GPUs, actually using a strong CPU to do Maxon based content creation would be a bad idea.

In GPU rendering via Cinebench 2024, even an RTX 4060 seems to beat my beefiest Ryzen 7950X3D's CPU score, and that machine has an RTX 4090, which might even put rather big EPYCs to shame.

Nobody in his right mind should therefore actually continue to use CPU rendering for Maxon content creation, just as Handbrake is a very niche tool in video content conversion dominated by ASICs doing dozens of streams in real time: both just happen to be readily available to testers, not (or no longer) useful as such.
Yeah, the only place it makes sense is if you want to use some option that your ASIC / hardware encoder doesn't support. But then again, I'm sure offloading a video encode to the GPU's compute cores (not the hardware video encoder) might be faster than pure CPU; it's just not done, as the dedicated encoder is faster still, even though it may not produce higher quality per bitrate.
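For anyone curious, a quick way to see that speed gap for yourself is to time a software encode against the dedicated encoder. A minimal sketch, assuming ffmpeg with NVENC support is on PATH and using a hypothetical input.mp4 (encoder settings are just examples, not recommendations):

```python
# Rough timing comparison: CPU encode (libx264) vs. dedicated hardware encoder (NVENC).
# Assumes ffmpeg built with NVENC support on PATH and a local input.mp4; adjust as needed.
import subprocess
import time

def encode(codec_args, outfile):
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mp4", *codec_args, "-an", outfile],
        check=True,
        capture_output=True,  # keep ffmpeg's console output out of the way
    )
    return time.perf_counter() - start

cpu_time = encode(["-c:v", "libx264", "-preset", "medium", "-crf", "23"], "out_cpu.mp4")
gpu_time = encode(["-c:v", "h264_nvenc", "-b:v", "8M"], "out_gpu.mp4")
print(f"libx264: {cpu_time:.1f}s  h264_nvenc: {gpu_time:.1f}s")
```

The NVENC run typically finishes much faster, which is the point above: the dedicated block wins on speed even where the software encoder can still win on quality per bitrate.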
Publishers make money from creating attention for vendor products that may have very little real-life advantages over previous gen products.

So a car that now has 325km/h top speed vs 312km/h in the previous generation gets a lot of attention, even if the best you can actually hope to achieve is 27km/h in your daily commuter pileups.
Yeah, also halo-cars. "Oh look at that car with a twin turbo, supercharged V12!" "I'll go buy the one with the 3 cylinder that looks sort of the same." And guess what? It works. Please don't take the example as a real vehicle.
 

abufrejoval

Reputable
Jun 19, 2020
584
424
5,260
My 4090 is about 23 times faster than my 12900k, but that's not the point of Cinebench. It is used to see the maximum performance of a CPU. Testing something that only uses 2 or 4 cores might lead you to believe that a 7600x is as fast as a 7950x completely missing the fact that the 7950x can run 3 times as many of those workloads with 0 slowdown.
The point of Cinebench is to evaluate hardware for Maxon work. That's what it is designed and maintained for, while Maxon may also see it as a nice marketing tool.

The point of reviewers using Cinebench is to compare CPUs, ...somehow.

I'd argue that the latter transitions to abuse, when you argue that a faster CPU will help you create content faster or better. Clearly you might be better off with a GPU today, perhaps even with one of those iGPUs these SoCs have, once those are supported by your content creation tools.

I completely understand the dilemma reviewers find themselves in; I just wish they'd occasionally reflect on whether the standard text blocks they've been using for the last ten years, recommending ever more powerful CPU cores for "things like content creation", need to be adapted these days.

It's gotten to the point where it's no longer informational and bordering on a lie. And not everyone has been in the business long enough to understand what they actually want to imply: newbies might take them at face value!

These days nearly any use case that used to take lots of CPUs to solve gets bespoke hardware, even neural nets, when I'd prefer using that real-estate on a laptop for something useful like V-cache.
 
Last edited:
  • Like
Reactions: NinoPino

bit_user

Titan
Ambassador
I know, I know, I may be nitpicking, but how on earth is Cinebench 2024 a content creation application, unless you mean a tool used by benchmarkers to make... content?
Cinebench is a benchmark tool designed to characterize how fast rendering in Cinema 4D will run. That's its original purpose. Someone doing software rendering on their PC will pay close attention to it and to Blender benchmarks, because those should be predictive of what kind of rendering performance they'll experience.

But now that Maxon (and Cinebench) seems to support high quality rendering also via GPUs, actually using a strong CPU to do Maxon based content creation would be a bad idea.
I've read people claiming they still use CPUs for rendering large scenes that won't fit in the amount of memory available on consumer GPUs. I'm not sure how big an issue this is specifically for Cinema 4D.

Nobody in his right mind should therefore actually continue to use CPU rendering for Maxon content creation, just as Handbrake is a very niche tool in video content conversion dominated by ASICs doing dozens of streams in real time: both just happen to be readily available to testers, not (or no longer) useful as such.
For the longest time, it was said that you needed software video encoders, if you wanted the best possible quality.
 

bit_user

Titan
Ambassador
Intel is probably now thinking they should've slowed down Raptor Lake even more, with their final mitigation for the degradation problem.
; )

On a more serious note, this is one of the main theories I had for why Intel would do Bartlett Lake. Plus, I never believed that BS story about how it was intended for some communications vertical, especially given that it's socketed. And, when a full lineup of the Bartlett Lake family leaked, a couple months ago, it finally put that sorry excuse to bed.

There are only two good reasons for Intel to do it: 1) Arrow Lake is too weak in key use cases (e.g. gaming) and 2) Arrow Lake & its platform will be priced unattractively to certain budget markets.
 
Last edited:

YSCCC

Commendable
Dec 10, 2022
569
462
1,260
Intel is probably now thinking they should've slowed down Raptor Lake even more, with their final mitigation for the degradation problem.
; )

On a more serious note, this is one of the main theories I had for why Intel would do Bartlett Lake. Plus, I never believed that BS story about how it was intended for some communications vertical, especially given that it's socketed. And, when a full lineup of the Bartlett Lake family leaked, a couple months ago, it finally put that sorry excuse to bed.

There are only two good reasons for Intel to do it: 1) Arrow Lake is too weak in key use cases (e.g. gaming) and 2) Arrow Lake & its platform will be priced unattractively to certain budget markets.
TBF I just updated to the Gigabyte 0x12B BIOS and, with every setting retained (with quite a bit of LLC and offset undervolting), performance was more or less retained, with some 0.5-1% loss. But if that official statement is true, Intel likely wishes 14th gen never existed: pushing too far with default power and killing their reputation for reliability is likely far worse for their brand image than letting 13th gen run at slower frequencies and last two more years.

Back to the news topic: I kinda appreciate the efficiency gain, but that gain, paired with lower or similar gaming performance versus 14th gen, likely means Lisa is ROFL with the Ryzen 9000s and the upcoming X3D.
 
  • Like
Reactions: bit_user

halfcharlie

Prominent
Dec 21, 2022
26
9
535
Only matters to the rich who upgrade every cycle for no reason. I, as one of the vast majority, am upgrading from an older gen (11th gen, to be precise), so it doesn't matter; it will still be a huge upgrade. I'll just be waiting for the Ultra 9 vs X3D comparisons when both are released.
 
  • Like
Reactions: bit_user
May 26, 2024
7
2
15
Not too surprising. The 14900K is monolithic, while the 285K uses tiles/chiplets. I assume, like the previous "Core Ultra" series (Meteor Lake & Lunar Lake), that the P-cores and I/O are on separate chiplets.

Intel's "Foveros" technology bonds chiplets to a silicon "base tile", meaning faster & lower-power communication between chiplets compared to just connecting them via PCB traces (which is what AMD does for desktop Ryzen). But there's still going to be some latency penalty. e.g. Anandtech's review of the Ultra 155H found it took about 40% longer to read data from RAM than the (monolithic) 14600K.

Secondly, if currently-available information is accurate, Arrow Lake's P-cores (as with Lunar Lake) won't support hyperthreading. Intel claims that removing hyperthreading improves single-core performance and efficiency by about 15%, while the E-cores can handle heavily-threaded tasks. It's easy to imagine this being generally true, but untrue for some rare/implausible workloads. Such as 1080p gaming with a high-end GPU.
 

YSCCC

Commendable
Dec 10, 2022
569
462
1,260
Only matters to the rich who upgrade every cycle for no reason. I, as one of the vast majority, am upgrading from an older gen (11th gen, to be precise), so it doesn't matter; it will still be a huge upgrade. I'll just be waiting for the Ultra 9 vs X3D comparisons when both are released.
Partially true IMO. Yes, for most users who hold on and only upgrade after 4-5 gens or more it will be a huge upgrade anyway, but being basically stagnant in performance, even with a great power improvement, likely means you lose to the competition. IIRC the Ryzen 9000 is already faster than the 14900K, and X3D, if they didn't mess up big, would mean a huge lead in gaming performance (the 7800X3D is still a lot faster than the 14900K or 7950X, say). TBF I didn't personally expect it to be slower than the 14900K in some/most games.
 
  • Like
Reactions: bit_user
I know people can never miss a chance to dunk on Intel, but the article is talking about gaming, and in gaming the majority of the power draw is the GPU. Assuming he is testing with a high-end GPU, 300+ of those 447W is the GPU itself.

At 1080p, the GPU load would be low as well, even with ray tracing enabled. But I think they would not do that here, as they don't want any GPU bottlenecks when comparing peak CPU performance. So at 450W, the CPU could be pulling as high as 150W, depending on the game.

Intel is probably now thinking they should've slowed down Raptor Lake even more, with their final mitigation for the degradation problem.
; )

On a more serious note, this is one of the main theories I had for why Intel would do Bartlett Lake. Plus, I never believed that BS story about how it was intended for some communications vertical, especially given that it's socketed. And, when a full lineup of the Bartlett Lake family leaked, a couple months ago, it finally put that sorry excuse to bed.

There are only two good reasons for Intel to do it: 1) Arrow Lake is too weak in key use cases (e.g. gaming) and 2) Arrow Lake & its platform will be priced unattractively to certain budget markets.

I don't know if this would make sense if Arrow Lake matched Bartlett Lake in single-core performance.

The only reason I see this being viable for Intel: Bartlett Lake is fabbed by Intel, so lower production cost. But they risk Arrow Lake sales if they price Bartlett Lake similar to 15th-gen CPUs. With Bartlett Lake's core count, it would make more sense to price it above Arrow Lake, especially when AMD is focusing very little on the non-Pro Threadripper SKUs. But I don't think they can match Threadripper's performance.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
Partially true IMO. Yes, for most users who hold on and only upgrade after 4-5 gens or more it will be a huge upgrade anyway, but being basically stagnant in performance, even with a great power improvement, likely means you lose to the competition. IIRC the Ryzen 9000 is already faster than the 14900K, and X3D, if they didn't mess up big, would mean a huge lead in gaming performance (the 7800X3D is still a lot faster than the 14900K or 7950X, say). TBF I didn't personally expect it to be slower than the 14900K in some/most games.
The ryzen 9000 is definitely not faster than 14900k. Only one of the released zen 5 is faster than the 14900k. Heck in the entire lineup 14th gen is on average faster (not by a small margin) compared to zen 5. For example the 9700x is sandwiched in price right between the 13700k and the 14700k, and it is massively slower than both. It is even slower than the much cheaper 13 and 14600k. The 9600x is more expensive than the 13600k and is also massively slower. The 9900x is price matching the 14900k and you guessed it, it's also slower.


All prices are from pcpartpicker.

At 1080p, the GPU load would be low as well, even with ray tracing enabled. But I think they would not do that here, as they don't want any GPU bottlenecks when comparing peak CPU performance. So at 450W, the CPU could be pulling as high as 150W, depending on the game.
Depends on the game. Even at 1080p, a 4090 doesn't drop below 200-220 watts. With RT enabled, even at 1080p it's hitting ~400W. I can test it for you and upload a video of, e.g., Cyberpunk.

EG1. I just tested CP2077 with RT (not PT). At 1080p native, a stock 4090 draws between 380 and 410 watts.
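If anyone wants to reproduce that kind of measurement, here is a minimal sketch that samples GPU board power once per second via nvidia-smi while a game runs (assumes an NVIDIA card with nvidia-smi on PATH; the one-minute sample window is arbitrary):

```python
# Sample GPU board power draw once per second using nvidia-smi and report average/peak.
# Assumes an NVIDIA GPU with nvidia-smi available on PATH.
import subprocess
import time

samples = []
for _ in range(60):  # ~1 minute of sampling; adjust to taste
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    samples.append(float(out.splitlines()[0]))  # first GPU only
    time.sleep(1)

print(f"avg: {sum(samples)/len(samples):.0f} W  peak: {max(samples):.0f} W")
```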
 
Yup, the 200 to 250W is realistic. And like I said, for CPU performance comparisons, they wouldn't want any bottlenecks. So no RT is what I am assuming.

250W GPU, 150W CPU and 50W for other components sounds about right for the 450W power draw mentioned above...
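Just to make that arithmetic explicit, here is a tiny sketch; the per-component figures are the rough guesses from this thread, not measurements:

```python
# Back-of-the-envelope split of the reported ~447 W figure.
# Component numbers are the rough guesses discussed above, not measurements.
budget = {"GPU": 250, "CPU": 150, "rest (board, RAM, fans, PSU loss)": 50}
total = sum(budget.values())
print(f"estimated total: {total} W vs. reported ~447 W")

# Or, going the other way, treat the CPU as the residual of the reported figure.
cpu_estimate = 447 - budget["GPU"] - budget["rest (board, RAM, fans, PSU loss)"]
print(f"implied CPU draw: ~{cpu_estimate} W")
```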
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
Yup, the 200 to 250W is realistic. And like I said, for CPU performance comparisons, they wouldn't want any bottlenecks. So no RT is what I am assuming.

250W GPU, 150W CPU and 50W for other components sounds about right for the 450W power draw mentioned above...
Yeah, but my point is we don't have much data to go on here. Do we even know which GPU he is using? Or even the settings? Also, the slides show some games where Arrow Lake supposedly draws 170W less than RPL, which I'd argue is impossible. I don't care how efficient ARL is, or how efficient Intel thinks it is, but you don't go from 200W to 35W; there is no freaking way.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
As per the review here, the stock gaming power draw for a bunch of games is tabulated here:

https://www.techpowerup.com/review/intel-core-i9-14900ks/22.html

And I guess with the power unlocked, it will go to 250+W.

Arrow Lake consuming ~100W is not far-fetched.

But I agree with you, we don't have all the info.
That's the 14900KS; the K is at 160W average. From my experience, completely stock with no power limits, the lowest I've seen in games was ~90W and the highest was ~200W, and that's at 1080p with a 4090, completely CPU-bound. I don't think you can make it draw more than 200W, in which case the supposed 170W reduction looks like they messed up something in their graph. There is no way ARL is dropping to 30W, lol.
 

YSCCC

Commendable
Dec 10, 2022
569
462
1,260
The ryzen 9000 is definitely not faster than 14900k. Only one of the released zen 5 is faster than the 14900k. Heck in the entire lineup 14th gen is on average faster (not by a small margin) compared to zen 5. For example the 9700x is sandwiched in price right between the 13700k and the 14700k, and it is massively slower than both. It is even slower than the much cheaper 13 and 14600k. The 9600x is more expensive than the 13600k and is also massively slower. The 9900x is price matching the 14900k and you guessed it, it's also slower.


All prices are from pcpartpicker.
We have only the top ARL and the top 14900K as a comparison here, so it is logical to compare only against the 9950X in gaming and not the lower tiers; that's how flagship SKUs go. And remember, in gaming the 7800X3D smokes all of those, so with the coming Zen 5 parts, Lisa is smiling big. The other comparisons based on price are irrelevant, especially now that prices have dropped a lot due to EOL and the RPL disaster, chief.
 
  • Like
Reactions: bit_user

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
We have only the top ARL and the top 14900K as a comparison here, so it is logical to compare only against the 9950X in gaming and not the lower tiers; that's how flagship SKUs go. And remember, in gaming the 7800X3D smokes all of those, so with the coming Zen 5 parts, Lisa is smiling big. The other comparisons based on price are irrelevant, especially now that prices have dropped a lot due to EOL and the RPL disaster, chief.
You said Ryzen 9000 is already faster than the 14900K; obviously you weren't talking about gaming. In which case it's irrelevant: most of the 14th gen product stack is much, much faster than the Zen 5 product stack, and even if ARL gets a performance decrease it will still remain much faster than Ryzen 9000.

The 7800X3D is irrelevant; it's never stealing sales from i9s or Ryzen 9s. Nobody is choosing between an 8-core, megaslow, cheap chip and a 9950X. Come on now, the performance discrepancy between these chips is huge. If you are looking at anything like a 9950X, 7950X, 7950X3D, 14900K, etc., the 7800X3D might as well be an i3. Doesn't matter. It might steal sales from an i5, sure, but i9s / R9s etc. are way outside its league for it to even be a contender.
 

bit_user

Titan
Ambassador
But there's still going to be some latency penalty. e.g. Anandtech's review of the Ultra 155H found it took about 40% longer to read data from RAM than the (monolithic) 14600K.
LPDDR5 has an intrinsic latency penalty that's probably more substantial than the penalty they face from putting the memory controller on another die.

Secondly, if currently-available information is accurate, Arrow Lake's P-cores (as with Lunar Lake) won't support hyperthreading.
Lunar Lake is out and they definitely don't support hyperthreading. The only question about the accuracy of Intel's information is whether all the supporting silicon structures for HT were truly eliminated.

Intel claims that removing hyperthreading improves single-core performance and efficiency by about 15%,
LOL, if that were true, Intel never would've brought it back in Nehalem and AMD never would've added it to Zen. No, what they said was that eliminating it improved perf/W by 15% in single-thread use cases:



Source: https://www.tomshardware.com/pc-com...pc-gain-for-e-cores-16-ipc-gain-for-p-cores/2

It's easy to imagine this being generally true, but untrue for some rare/implausible workloads. Such as 1080p gaming with a high-end GPU.
Gaming and web browsing are some of the key client workloads, not rare/implausible. And I know you're saying that pairing a high-end CPU with a high-end GPU @ 1080p is the rare/implausible part, but that's only a trick used to try and tease out the more CPU-bottlenecked parts of the rendering pipeline, so that we can get a sense of how the CPU will hold up when running future games at more typical resolutions.
 
  • Like
Reactions: thestryker

bit_user

Titan
Ambassador
The ryzen 9000 is definitely not faster than 14900k. Only one of the released zen 5 is faster than the 14900k.
That's the point. 9950X is the top end Zen 5 CPU, i9-14900K is the top end Raptor Lake. It's a fair comparison.

Heck in the entire lineup 14th gen is on average faster (not by a small margin) compared to zen 5. ... [9700X] is even slower than the much cheaper 13 and 14600k.
Not if we're talking about gaming, which is the subject of this article.

The 9900x is price matching the 14900k and you guessed it, it's also slower.
The 9900X price will come down, but the i9-14900K is probably about as cheap as it'll get. For someone buying a CPU today, the latter represents a better value. If we're looking down the road, then i9-14900K will have more to fear from the 9950X.
 
  • Like
Reactions: thestryker