News Core Ultra 9 285K is slower than Core i9-14900K in gaming, according to leaked Intel slide — Arrow Lake consumes less power, though

Stesmi

Reputable
Sep 1, 2021
31
34
4,560
Article said:
... consistently faster in content creation applications, including PugetBench, Blender, Cinebench 2024, and POV-Ray.

I know, I know, I may be nitpicking, but how on earth is Cinebench 2024 a content creation application, unless you mean a tool used by benchmarkers to make... content?
 
Reactions: rtoaht

philipemaciel

Distinguished
Aug 23, 2011
61
8
18,635
"The Core Ultra 9 285K-based machine consumes around 447W"

I remember when the 220W FX-9590 was released and all the due and fair criticism it received.

How is an abhorrence like this (never mind the 14900K being worse) even seeing the light of day?
 
Reactions: iLoveThe80s

TheHerald

Notable
Feb 15, 2024
1,123
320
1,060
"The Core Ultra 9 285K-based machine consumes around 447W"

I remember when the 220W FX-9590 was released and all the due and fair criticism it received.

How an abhorrence like this (nevermind the 14900K being worse) is even seeing the light of the day.
I know people can never miss the chance to dunk on Intel, but the article is talking about gaming, and in gaming the majority of the power draw is the GPU. Assuming he is testing with a high-end GPU, 300+ of those 447 W is the GPU itself.
 
Mar 12, 2024
15
22
15
We'll see after some driver optimization; most new products take a bit to reach their full potential. I'd be curious to see a die size comparison as well; I'd bet 14th gen is larger.
 

abufrejoval

Reputable
Jun 19, 2020
524
371
5,260
Stesmi said:
I know, I know, I may be nitpicking, but how on earth is Cinebench 2024 a content creation application, unless you mean a tool used by benchmarkers to make... content?
It tends to get lost that Cinebench isn't actually the product Maxon earns its money with; it started mostly as a tool to evaluate hardware for use with their content creation software.

To my knowledge they used CPU rendering there for the longest time, to match the quality expectations of their clients.

But now that Maxon (and Cinebench) seems to support high-quality rendering via GPUs as well, actually using a strong CPU for Maxon-based content creation would be a bad idea.

In GPU rendering via Cinebench 2024, even an RTX 4060 seems to beat my beefiest Ryzen 7950X3D, and that machine has an RTX 4090, which might put even rather big EPYCs to shame.

Nobody in their right mind should therefore keep using CPU rendering for Maxon content creation, just as Handbrake is a very niche tool in a video conversion space dominated by ASICs doing dozens of streams in real time: both just happen to be readily available to testers, not (or no longer) representative of how the work actually gets done.

Publishers make money from creating attention for vendor products that may have very little real-life advantage over previous-gen products.

So a car that now has a 325 km/h top speed vs. 312 km/h in the previous generation gets a lot of attention, even if the best you can actually hope to achieve is 27 km/h in your daily commuter pileups.
 
Reactions: bit_user and Stesmi

TheHerald

Notable
Feb 15, 2024
1,123
320
1,060
abufrejoval said:
It tends to get lost that Cinebench isn't actually the product Maxon earns its money with; it started mostly as a tool to evaluate hardware for use with their content creation software.

To my knowledge they used CPU rendering there for the longest time, to match the quality expectations of their clients.

But now that Maxon (and Cinebench) seems to support high-quality rendering via GPUs as well, actually using a strong CPU for Maxon-based content creation would be a bad idea.

In GPU rendering via Cinebench 2024, even an RTX 4060 seems to beat my beefiest Ryzen 7950X3D, and that machine has an RTX 4090, which might put even rather big EPYCs to shame.

Nobody in their right mind should therefore keep using CPU rendering for Maxon content creation, just as Handbrake is a very niche tool in a video conversion space dominated by ASICs doing dozens of streams in real time: both just happen to be readily available to testers, not (or no longer) representative of how the work actually gets done.

Publishers make money from creating attention for vendor products that may have very little real-life advantage over previous-gen products.

So a car that now has a 325 km/h top speed vs. 312 km/h in the previous generation gets a lot of attention, even if the best you can actually hope to achieve is 27 km/h in your daily commuter pileups.
My 4090 is about 23 times faster than my 12900K, but that's not the point of Cinebench. It is used to see the maximum performance of a CPU. Testing something that only uses 2 or 4 cores might lead you to believe that a 7600X is as fast as a 7950X, completely missing the fact that the 7950X can run three times as many of those workloads with zero slowdown.
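As a rough illustration of that last point, here's a minimal sketch in plain Python (nothing to do with Cinebench itself; render_tile is a made-up CPU-bound job standing in for the real workload): a test that only keeps a few workers busy hides the difference between a 6-core and a 16-core part, while filling every core shows the throughput gap.

Code:
import time
from multiprocessing import Pool, cpu_count

def render_tile(seed: int) -> float:
    """Hypothetical CPU-bound job standing in for one rendering task."""
    acc = 0.0
    for i in range(2_000_000):
        acc += (seed * i) % 7
    return acc

def throughput(n_jobs: int, n_workers: int) -> float:
    """Jobs completed per second when n_workers processes run them."""
    start = time.perf_counter()
    with Pool(processes=n_workers) as pool:
        pool.map(render_tile, range(n_jobs))
    return n_jobs / (time.perf_counter() - start)

if __name__ == "__main__":
    jobs = 32
    # Per-job speed is the same either way; total throughput is not.
    print("4 workers: %5.2f jobs/s" % throughput(jobs, 4))
    print("all cores: %5.2f jobs/s" % throughput(jobs, cpu_count()))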
 

Stesmi

Reputable
Sep 1, 2021
31
34
4,560
abufrejoval said:
It tends to get lost that Cinebench isn't actually the product Maxon earns its money with; it started mostly as a tool to evaluate hardware for use with their content creation software.

To my knowledge they used CPU rendering there for the longest time, to match the quality expectations of their clients.
Oh yeah, for sure. It really wasn't that long ago that CPU rendering was the only thing used. Or, to me, having done raytracing on a 68000, it's not... such... a long... time ago. Real3D, I think it was called.
abufrejoval said:
But now that Maxon (and Cinebench) seems to support high-quality rendering via GPUs as well, actually using a strong CPU for Maxon-based content creation would be a bad idea.

In GPU rendering via Cinebench 2024, even an RTX 4060 seems to beat my beefiest Ryzen 7950X3D, and that machine has an RTX 4090, which might put even rather big EPYCs to shame.

Nobody in their right mind should therefore keep using CPU rendering for Maxon content creation, just as Handbrake is a very niche tool in a video conversion space dominated by ASICs doing dozens of streams in real time: both just happen to be readily available to testers, not (or no longer) representative of how the work actually gets done.
Yeah, the only place it makes sense is if you want some option that your ASIC / hardware encoder doesn't support. But then again, I'm sure offloading a video encode to the GPU's compute cores (not the hardware video encoder) might be faster than pure CPU; it's just not done, as the dedicated encoder is faster, even though it may not produce higher quality per bitrate. (A rough sketch of how you'd time that comparison follows at the end of this post.)
abufrejoval said:
Publishers make money from creating attention for vendor products that may have very little real-life advantage over previous-gen products.

So a car that now has a 325 km/h top speed vs. 312 km/h in the previous generation gets a lot of attention, even if the best you can actually hope to achieve is 27 km/h in your daily commuter pileups.
Yeah, also halo cars. "Oh, look at that car with a twin-turbo, supercharged V12!" "I'll go buy the one with the 3-cylinder that looks sort of the same." And guess what? It works. Please don't take the example as a real vehicle.
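For the encoder comparison mentioned above, here's a rough sketch of how one might time a software (CPU) encode against the GPU's dedicated hardware encoder via ffmpeg. Everything here is an assumption to adapt: ffmpeg on the PATH with libx264 and h264_nvenc available, input.mp4 as a stand-in test clip, and placeholder quality settings.

Code:
import subprocess
import time

def timed_encode(codec_args, out_file, src="input.mp4"):
    """Run one ffmpeg encode (audio dropped) and return wall-clock seconds."""
    cmd = ["ffmpeg", "-y", "-hide_banner", "-loglevel", "error",
           "-i", src, *codec_args, "-an", out_file]
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Software encode: slow, but usually the best quality per bitrate.
    t_cpu = timed_encode(["-c:v", "libx264", "-preset", "slow", "-crf", "20"],
                         "out_x264.mp4")
    # NVIDIA hardware encoder: much faster, quality per bitrate typically
    # a bit behind a slow x264 preset.
    t_gpu = timed_encode(["-c:v", "h264_nvenc", "-cq", "20"], "out_nvenc.mp4")
    print(f"libx264:    {t_cpu:.1f} s")
    print(f"h264_nvenc: {t_gpu:.1f} s")

Speed is only half the comparison; quality per bitrate would then be judged separately (e.g. with VMAF), which is where the software encode tends to claw back ground.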
 

abufrejoval

Reputable
Jun 19, 2020
524
371
5,260
TheHerald said:
My 4090 is about 23 times faster than my 12900K, but that's not the point of Cinebench. It is used to see the maximum performance of a CPU. Testing something that only uses 2 or 4 cores might lead you to believe that a 7600X is as fast as a 7950X, completely missing the fact that the 7950X can run three times as many of those workloads with zero slowdown.
The point of Cinebench is to evaluate hardware for Maxon work. That's what it is designed and maintained for, while Maxon may also see it as a nice marketing tool.

The point of reviewers using Cinebench is to compare CPUs, ...somehow.

I'd argue that the latter crosses into abuse when you claim that a faster CPU will help you create content faster or better. Clearly you might be better off with a GPU today, perhaps even with one of those iGPUs these SoCs have, once those are supported by your content creation tools.

I completely understand the dilemma reviewers find themselves in; I just wish they'd occasionally reflect on whether the standard text blocks they've been using for the last ten years, recommending ever more powerful CPU cores for "things like content creation", need to be updated these days.

It's gotten to the point where it's no longer informative and is bordering on a lie. And not everyone has been in the business long enough to understand what they actually mean to imply: newbies might take them at face value!

These days nearly any use case that used to take lots of CPU cores to solve gets bespoke hardware, even neural nets, when I'd prefer using that real estate on a laptop for something useful like V-Cache.
 

bit_user

Titan
Ambassador
Stesmi said:
I know, I know, I may be nitpicking, but how on earth is Cinebench 2024 a content creation application, unless you mean a tool used by benchmarkers to make... content?
Cinebench is a benchmark tool designed to characterize how fast rendering in Cinema 4D will run. That's its original purpose. Someone doing software rendering on their PC will pay close attention to it and to Blender benchmarks, because those should be predictive of the kind of rendering performance they'll experience.

abufrejoval said:
But now that Maxon (and Cinebench) seems to support high-quality rendering via GPUs as well, actually using a strong CPU for Maxon-based content creation would be a bad idea.
I've read people claiming they still use CPUs for rendering large scenes that won't fit in the amount of memory available on consumer GPUs. I'm not sure how big an issue this is specifically for Cinema 4D.

abufrejoval said:
Nobody in their right mind should therefore keep using CPU rendering for Maxon content creation, just as Handbrake is a very niche tool in a video conversion space dominated by ASICs doing dozens of streams in real time: both just happen to be readily available to testers, not (or no longer) representative of how the work actually gets done.
For the longest time, it was said that you needed software video encoders if you wanted the best possible quality.