Review Intel Core Ultra 9 285K Review: Intel Throws a Lateral with Arrow Lake


TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
No. You made up your mind and are just posing.

Will you try to convince me not to wait for the 9950X3D and get the 285K instead?

Regards.
I will. If you're not in any hurry to upgrade because your PC is still holding up, you should absolutely, without a shadow of a doubt, wait for the 9950X3D. Not only is there a big chance it will be as good as the 285K in non-gaming workloads, it will almost certainly be better in games. And if for whatever reason it turns out to be a flop, you'll still have the 285K as an option, and who knows, it might get some scheduler updates and perform well in games too.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
Cool, more claims without data, and you're also on your standard "all reviewers are wrong" nonsense.
Sorry, too many replies, I got lost. You want data showing that TLOU runs better on the 7950X3D with both CCDs enabled than with only one? I can send you a PM with videos from a guy who's tested it. With only one CCD (so basically simulating a 7800X3D) it was slower than my 12900K; with both CCDs it was around 10% ahead of my 12900K.
 
After reading the almost 150 comments in here, all I see is a lot of "Yes, it's good" or "No, it sux." Even... "the worstest evar!"
Without a lot to back it up.

Building a new PC in the next couple of months, with the Ultra 7 265K at the top of my short list.

Convince me why this is a bad idea or a good idea....

Conditions:
1. I don't really care about the lack of an upgrade path for this socket. By the time I need a new/better CPU, I'm changing the whole thing anyway.

2. Not gaming. CAD/photo/video/programming.

3. Probably paired with a 4070 variant and 64GB RAM.


Convince me.
2) I'd suggest trying to find benchmarks that match your specific workloads. PC World does more business/workstation-type testing than many others.

3) If memory performance matters to your applications, 48GB is the way to go until single-rank 32GB DIMMs are available (they are coming). I think CUDIMM manufacturing has thrown the industry off a bit, so it might not be until next year.

Otherwise, if there isn't something performance- or efficiency-wise driving your decision, look at the platforms. ARL has built-in TB4 and PCIe 5.0 x4 without losing the PCIe 4.0 x4, so it effectively has four more lanes to use than ADL/RPL. Z890 is also superior to X870E with regard to usable PCIe due to the USB4 mandate, though some X670E configurations can match it (at the cost of USB4).

I'll probably be getting a 245K myself, as Z890 is the best overall client platform to date.
 
  • Like
Reactions: helper800

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
2) I'd suggest trying to find benchmarks that match your specific workloads. PC World does more business/workstation-type testing than many others.

3) If memory performance matters to your applications, 48GB is the way to go until single-rank 32GB DIMMs are available (they are coming). I think CUDIMM manufacturing has thrown the industry off a bit, so it might not be until next year.

Otherwise, if there isn't something performance- or efficiency-wise driving your decision, look at the platforms. ARL has built-in TB4 and PCIe 5.0 x4 without losing the PCIe 4.0 x4, so it effectively has four more lanes to use than ADL/RPL. Z890 is also superior to X870E with regard to usable PCIe due to the USB4 mandate, though some X670E configurations can match it (at the cost of USB4).

I'll probably be getting a 245K myself, as Z890 is the best overall client platform to date.
Igor's Lab usually tests AutoCAD as well; some decent results from the new Intel chips.

[Igor's Lab charts: AutoCAD 3D, AutoCAD 2D, AutoCAD power draw/efficiency, AutoCAD disk]
 

JamesJones44

Reputable
Jan 22, 2021
851
779
5,760
I'd have to see another graph from them.

That shows ALL 3 of them as "Ultra 9..."
285k
265k
245k

Small detail, but...details.
Igor did a nice job with productivity as TheHerald pointed out. It includes all 3 of the K variants.

 
  • Like
Reactions: helper800

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
I'd have to see another graph from them.

That shows ALL 3 of them as "Ultra 9..."
285k
265k
245k

Small detail, but...details.
Well, it's a typo; someone also mentioned that in their comments.

 

AndrewJacksonZA

Distinguished
Aug 11, 2011
596
106
19,160
2. Not gaming. CAD/photo/video/programming.

Convince me.
Dude, get the i9 or whatever it's now called, and then underclock/undervolt it by a few percent. You get the core count increase plus what I expect to be a big gain in power savings.

From my observations over the years, it appears that Intel has been, and is, consistently ahead - even if ever so slightly - when it comes to coding, specifically Microsoft products like Visual Studio for responsiveness and SQL Server for sheer grunt (see the EightKB video on YouTube from, I think, the 2023 conference on why CPU architecture matters for various SQL Server use cases). However, I would hold off on the purchase until the 9950X3D launches, in case what you're doing benefits from the larger cache.

I can't speak to CAD/photo/video. I would hope that they have advanced and use the magical and mythical AVX-512 by now. :-)
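If anyone wants to see what their own chip actually exposes, here's a quick sketch using the third-party py-cpuinfo package (assuming it's installed; the flag names are whatever the library reports):

```python
# List whatever AVX-512 feature flags the CPU reports (needs: pip install py-cpuinfo).
from cpuinfo import get_cpu_info

flags = get_cpu_info().get("flags", [])
avx512_flags = sorted(f for f in flags if f.startswith("avx512"))
print("AVX-512 extensions reported:", ", ".join(avx512_flags) or "none")
```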
 
Sorry, but no. For gaming in general, PCIe bandwidth simply isn't a primary factor. The reason is simple: the GPU may have anywhere from ~250 to over 1000 GB/s of internal VRAM bandwidth. It's over an order of magnitude faster than the PCIe interface bandwidth.

There are certain workloads where PCIe bandwidth matters, as you note with it potentially boosting AI performance. But even that doesn't often use PCIe that much, unless you're doing multi-GPU workloads. For a single GPU? Even PCIe 3.0 x16 will generally suffice (just not on Intel Arc, which has some problematic design constraints and drivers that want PCIe 4.0).

And of course this doesn't account for things like PCIe x8 and x4 interfaces instead of the full x16, which further restrict the bandwidth. PCIe 2.0 x16 would probably cause a modest (maybe 5~10 percent?) loss in gaming performance on a fast modern GPU. PCIe 3.0 x8 and PCIe 4.0 x4 have the same bandwidth (basically) as PCIe 2.0 x16 and thus are in the same boat. There's a good reason why it's only the budget GPUs that get saddled with an x8 link connection while maintaining the x16 physical connector.
I think the 4090 was the first time there really was any measurable loss going from PCIe 4.0 x16 to 3.0 x16. That's also a card where, even if you might not notice the loss in everything, if you're paying that kind of money you probably don't want any loss at all.

The biggest advantage I see for video cards moving to PCIe 5.0 is the benefit to consumers, because client platforms are stingy on lanes. With ARL, Intel finally allows some native x4 bifurcation (AMD already does), so a motherboard with two PCIe 5.0 slots could run a PCIe 5.0 video card at x8 and lose zero performance while gaining a second slot that can run at x8 or x4/x4 (or, in the case of the Z890 Unify-X, you could use the second PCIe 5.0 M.2 and have an x4 slot).

That flexibility is why I think PCIe 5.0 for video cards has importance even if it's not important for the video card itself.
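To put rough numbers on the bandwidth comparison above, a quick back-of-the-envelope sketch (the per-lane figures are the usual approximate values after encoding overhead):

```python
# Approximate usable PCIe throughput per lane in GB/s (after encoding overhead).
per_lane_gb_s = {"2.0": 0.5, "3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

configs = [("2.0", 16), ("3.0", 8), ("4.0", 4), ("4.0", 16), ("5.0", 16)]
for gen, lanes in configs:
    print(f"PCIe {gen} x{lanes}: ~{per_lane_gb_s[gen] * lanes:.1f} GB/s")

# Compare with roughly 1000 GB/s of VRAM bandwidth on a high-end GPU: the
# slot is more than an order of magnitude slower, which is the point above.
```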
 

YSCCC

Commendable
Dec 10, 2022
566
460
1,260
Igor did a nice job with productivity as TheHerald pointed out. It includes all 3 of the K variants.

Quickly skimming various reviews online, it seems like this gen has issues with stability or BSOD loops, things don't work properly, and the scores jump around a lot from reviewer to reviewer; some even have big run-to-run discrepancies. And maybe binning is varying a lot more than it used to.

For R23 MT alone I have seen results with the Intel default profile ranging from just over 40k to something like 45k, all claiming to use the Kingston CUDIMM 8200; I just can't explain that discrepancy. If it's 45k, good, but if some of the weaker bins being sold out there only do 40-41k… that's just a more efficient 14900K, dear god.

And for production, IIRC it's quite slow in the Photoshop benchmark, and video/photo-intensive jobs (wedding and event photo + video, for example) use PS quite a lot. So it looks like a good choice for rendering, but for PS… go use something else.

As for the "gaming performance isn't relevant because games are bottlenecked by the GPU" argument:
Don't forget that for years GPUs have also been doing most of that rendering and photo/video encoding work. Try disabling GPU support in your video editing software and your minutes-long encode will become closer to an hour. For the past 10 years the usual advice for building such rigs has been: get plenty of RAM (slower speed is fine, but lots of it) and a higher-end NVIDIA card for the CUDA cores… CPU-wise, if you're on a tighter budget, something mid-to-high-end like an i7 or a Ryzen 7900X would be way more than enough.
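If you want to see that gap on your own machine, here's a rough sketch timing a CPU-only export against an NVENC export with ffmpeg (it assumes an ffmpeg build with NVENC support and a clip named sample.mp4, both just placeholders here):

```python
# Rough timing comparison: CPU-only x264 vs NVIDIA NVENC hardware encode.
# "sample.mp4" is a placeholder input clip; the outputs are throwaway files.
import subprocess, time

def encode(codec, outfile):
    start = time.time()
    subprocess.run(
        ["ffmpeg", "-y", "-i", "sample.mp4", "-c:v", codec, outfile],
        check=True, capture_output=True,
    )
    return time.time() - start

print(f"libx264 (CPU only): {encode('libx264', 'out_cpu.mp4'):.1f} s")
print(f"h264_nvenc (GPU):   {encode('h264_nvenc', 'out_gpu.mp4'):.1f} s")
```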
 

YSCCC

Commendable
Dec 10, 2022
566
460
1,260
Why does this article only show 1080P gaming benchmarks instead of 1440P and 4K?
Some reviewers also test at 720p. Not that anyone will game at that resolution, but it highlights how CPU performance differs once you remove the GPU bottleneck. So if you keep the platform for one or two generations of GPU (which will likely still be PCIe 5.0 x16), it shows how much the CPU will or won't become the bottleneck, which is relevant to those who plan to upgrade parts rather than the whole rig at once.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
Quickly skimming various reviews online, it seems like this gen has issues with stability or BSOD loops, things don't work properly, and the scores jump around a lot from reviewer to reviewer; some even have big run-to-run discrepancies. And maybe binning is varying a lot more than it used to.

For R23 MT alone I have seen results with the Intel default profile ranging from just over 40k to something like 45k, all claiming to use the Kingston CUDIMM 8200; I just can't explain that discrepancy. If it's 45k, good, but if some of the weaker bins being sold out there only do 40-41k… that's just a more efficient 14900K, dear god.

And for production, IIRC it's quite slow in the Photoshop benchmark, and video/photo-intensive jobs (wedding and event photo + video, for example) use PS quite a lot. So it looks like a good choice for rendering, but for PS… go use something else.

As for the "gaming performance isn't relevant because games are bottlenecked by the GPU" argument:
Don't forget that for years GPUs have also been doing most of that rendering and photo/video encoding work. Try disabling GPU support in your video editing software and your minutes-long encode will become closer to an hour. For the past 10 years the usual advice for building such rigs has been: get plenty of RAM (slower speed is fine, but lots of it) and a higher-end NVIDIA card for the CUDA cores… CPU-wise, if you're on a tighter budget, something mid-to-high-end like an i7 or a Ryzen 7900X would be way more than enough.
The discrepancy between reviews is because the Balanced power profile doesn't work correctly: it drops the cache to 400 MHz at idle and doesn't boost it back up under load. A review testing with High Performance instead would naturally get bigger numbers. OS bug, expecting a fix.
 

YSCCC

Commendable
Dec 10, 2022
566
460
1,260
The discrepancy between reviews is because the Balanced power profile doesn't work correctly: it drops the cache to 400 MHz at idle and doesn't boost it back up under load. A review testing with High Performance instead would naturally get bigger numbers. OS bug, expecting a fix.
It's not as simple as that. This video details how they troubleshot it down to the power profile problem, and even after that the R23 score is still only around 40k. Run-to-run variance, background apps, etc. could be affecting the score to some degree, but a gap of some 5k should be something more major, maybe bin-variation related.

View: https://www.youtube.com/watch?v=6QoCCFXD0xc
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
It's not as simple as that. This video details how they troubleshot it down to the power profile problem, and even after that the R23 score is still only around 40k. Run-to-run variance, background apps, etc. could be affecting the score to some degree, but a gap of some 5k should be something more major, maybe bin-variation related.

View: https://www.youtube.com/watch?v=6QoCCFXD0xc
The CBR score should be in that ballpark. I don't think the 285K scores 45k at stock. Maybe with the power limits removed, but not normally.

Computerbase said they found no issue or bug in the two weeks they've been testing.
 

YSCCC

Commendable
Dec 10, 2022
566
460
1,260
The CBR score should be in that ballpark. I don't think the 285K scores 45k at stock. Maybe with the power limits removed, but not normally.

Computerbase said they found no issue or bug in the two weeks they've been testing.
If that's the case, it isn't even meaningfully faster than a 14900K… the only thing worth praising is the efficiency gain compared to RPL.

But when "Zen 5%" gets mocked, this is a similar if not worse launch, and worse still, they changed the branding. With a change like that you normally need a huge leap all around to get the wow effect, like smoking the old Core i lineup across the board to justify the Ultra branding and make it impressive. It would have been better to call it the i9-15900K or so, and save the rebranding for a real leap forward.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
If that's the case, it isn't even meaningfully faster than a 14900K… the only thing worth praising is the efficiency gain compared to RPL.

But when "Zen 5%" gets mocked, this is a similar if not worse launch, and worse still, they changed the branding. With a change like that you normally need a huge leap all around to get the wow effect, like smoking the old Core i lineup across the board to justify the Ultra branding and make it impressive. It would have been better to call it the i9-15900K or so, and save the rebranding for a real leap forward.
It's not decisively faster than the 14900K, at least not across the board, though there are some workloads where it is way faster than everything else out there, mostly science/math-type workloads.

The huge leap is efficiency, especially at lower power: according to the metrics I've seen, at 65W it's 50% faster than the 14900K / 9950X.
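To put that claim in perf-per-watt terms, a tiny sketch with placeholder scores (the numbers are made up purely to show the arithmetic, not measured results):

```python
# Purely hypothetical scores at a 65 W package power cap, just to show the math.
scores_at_65w = {"Core Ultra 9 285K": 30000, "Core i9-14900K": 20000}
cap_w = 65

for chip, score in scores_at_65w.items():
    print(f"{chip}: {score / cap_w:.0f} points per watt")

# With both chips held to the same 65 W, "50% faster" and "50% better
# perf-per-watt" are the same statement, since the divisor is identical.
```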
 

YSCCC

Commendable
Dec 10, 2022
566
460
1,260
It's not decisively faster than the 14900K, at least not across the board, though there are some workloads where it is way faster than everything else out there, mostly science/math-type workloads.

The huge leap is efficiency, especially at lower power: according to the metrics I've seen, at 65W it's 50% faster than the 14900K / 9950X.
Which is pointless… efficiency only matters when performance is close and the efficiency win is large enough; say, 13900K vs 7950X, where 253W vs 170W can be a decisive factor. But buying a top-of-the-line chip at $600 just to limit it to 65W? Who will do that? People buy it because they need the top-of-the-line performance in the first place; efficiency comes second.

If that's the only thing worth mentioning, they'd be better off naming it something other than ULTRA; call it ECO9 or so.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
Which is pointless… efficiency only matters when performance is close and the efficiency win is large enough; say, 13900K vs 7950X, where 253W vs 170W can be a decisive factor. But buying a top-of-the-line chip at $600 just to limit it to 65W? Who will do that? People buy it because they need the top-of-the-line performance in the first place; efficiency comes second.

If that's the only thing worth mentioning, they'd be better off naming it something other than ULTRA; call it ECO9 or so.
The 7950X isn't a 170W CPU, it draws 230W. 170W is the TDP, which is misleading.

But regardless, are you asking me why people buy efficient chips? Cheaper motherboard requirements, smaller case, smaller cooler, lower bills, smaller PSU; all of it adds up. Seeing that the 285K is overall more efficient than the 9950X and the 14900K, it looks like an exceptional product.
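For anyone wondering where the 230W figure comes from, a quick sketch of the usual AM5 relationship between rated TDP and the actual sustained package power limit (PPT):

```python
# On AM4/AM5, the sustained package power limit (PPT) is typically ~1.35x the
# rated TDP, which is where ~230 W comes from for a 170 W TDP part.
tdp_w = 170
ppt_w = tdp_w * 1.35
print(f"TDP {tdp_w} W -> PPT ~{ppt_w:.0f} W")  # TDP 170 W -> PPT ~230 W
```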
 
But when "Zen 5%" gets mocked, this is a similar if not worse launch, and worse still, they changed the branding.
The difference is that AMD did it to themselves by effectively lying to everyone with their marketing materials and initial review guide. Intel's marketing slides were pretty much spot on with what reviews saw. It was very obvious that ARL wasn't going to win any prizes in gaming or memory-latency-sensitive workloads.

I don't think Zen 5 is bad at all, but AMD made the launch awful. ARL is pretty much exactly as advertised, but everyone's playing the jump-on-Intel game. Neither one makes sense as an upgrade if you're already on the preceding generation (unless you're increasing core count or going from non-X3D to X3D), and that's fine.