Intel Core Ultra 9 285K Review: Intel Throws a Lateral with Arrow Lake

Page 3

bit_user

Titan
Ambassador
I don't get the doom and gloom. According to both tomshardware.com and TPU review,
It's right there in the Pros and Cons sections of both reviews:
  • Toms: "Generational regression in gaming performance"
  • TechPowerUp: "Gaming performance lower than expected, slower than Raptor Lake"

TPU goes on to say: "Some games and applications aren't currently performing well at all"

Basically, I think people are disappointed because it jumped ahead by two full nodes and features heavily updated P-cores and E-cores. And, for all that, what we got just doesn't seem like a lot.

My personal take is that the past few generations (Intel: Gen 12 & Gen 13; AMD: Zen 2, Zen 3, and Zen 4) were the aberration. Zen 5 and Lunar/Arrow Lake represent more of a return to trend.

That said, I do think the 9000X3D could potentially be rather exciting. I'm also eager to see what Zen 6 will deliver, after moving to a smaller process node, a new IO Die, and CUDIMM memory support. I'm not saying it'll be any bigger than the gains we got in Ryzen 9000, but I think AMD certainly has some room to grow into its newly reworked microarchitecture.

For Intel's part, Panther/Nova Lake will also be crucial in showing whether or not they have a future as a cutting-edge fab. If Intel 18A can't deliver solid gains over TSMC N3B, then perhaps IFS could go the way of GlobalFoundries and just try to compete on older nodes.

I think the Nova/Panther generation also could be where we first see X86S, APX, and AVX10/256.

I can't wrap my head around how it is any worse than the 9950X.
The reviewers did you the favor of summarizing it quite simply and putting it where you will see it.
 

EzzyB

Proper
Jul 12, 2024
48
58
110
I really only pay attention to these things periodically. But are there just no benchmarks higher than 1080p at all?

One of the main points of building a new PC for me soon is to move up to 1440p. I know that as the resolution goes up the difference becomes smaller, but is it so small that you just don't worry about it now? Am I better off just sticking with the 9700K for another year and upgrading the GPU and monitor for now? (Obviously we're also going from PCIe 3.0 to 5.0 as well.)
 

irish_adam

Distinguished
Mar 30, 2010
236
64
18,760
I don't get the doom and gloom. According to both tomshardware.com and TPU review, the 285k is on par in both MT performance and efficiency with the 9950x (wins some, loses some), it beats it by 35% in st efficiency, it draws a lot less power in idle / semi idle / low power workloads (Autocad as tested by igors), offers identical gaming performance at way lower power draw.

I can't wrap my head around how it is any worse than the 9950X. It's better on every metric, including the iGPU.

The i5 and the i7 are obviously even more dominant, but that has always been the case anyway, since AMD's offerings are lacking core counts in that segment.
Well, the 9950X was hardly positively reviewed, was it? It also does actually beat the 285K in MT workloads, and low/idle power savings are hardly anything to write home about in a desktop environment. It does win on gaming power draw, but again, does that matter? You would only buy either of those CPUs if you mostly do productivity, which means that playing games for a few hours a day would only gain you something like $10 a year in power savings.
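(Rough numbers, purely to illustrate the ballpark: assuming, say, a 60 W difference at the wall while gaming, three hours a day, and $0.15/kWh, that's 60 W × 3 h × 365 ≈ 66 kWh, or about $10 a year.)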

Let's also not forget that every reviewer has complained of system instability, which is also playing a large part in the negative reviews; no one wants regular system crashes, especially if you run productivity workloads that can have long run times.

Though I think the main problem is that CPUs like the 12/13/14900K were seen as simply the best CPU you could get. They had prestige, and loads of gamers and PC builders used them because of that prestige; the number of users who actually use them for productivity is probably quite low. While X3D has knocked that view a little, Intel was still very much in the fight, and plenty still believed the 14900K was the better gaming CPU. This CPU torpedoes that view; it's going to be seen as a flop because it has lost that prestige. No one's going to be running benchmarks on this, and no one is going to be bragging about owning one of these.
 

JamesJones44

Reputable
Jan 22, 2021
858
797
5,760
It's the AVX support. Important for the server workspace, useless for us mere mortals. That's why you see differences in Phoronix and not in other reviews.
It's definitely not useless for consumer applications or games. Any type of repetitive work of sufficient size that can be done in a loop can benefit from AVX-512 (applying a mask over a large set of data, for example).

These types of workloads show up more in the enterprise/server space, but games have started to become complex enough to benefit from vectorized compute.
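Purely as an illustration of that "apply a mask over a large set of data" pattern (nothing from the review or this thread; the function and the clamp-to-zero operation are invented for the example), a minimal AVX-512 sketch in C could look like this:

```c
#include <immintrin.h>
#include <stddef.h>

/* Hypothetical example: clamp negative floats to zero, 16 lanes at a time.
 * The AVX-512 mask register (__mmask16) selects which lanes get overwritten,
 * which is the "apply a mask over a large set of data" pattern. */
void clamp_negatives_avx512(float *data, size_t n)
{
    const __m512 zero = _mm512_setzero_ps();
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 v = _mm512_loadu_ps(&data[i]);
        __mmask16 neg = _mm512_cmp_ps_mask(v, zero, _CMP_LT_OQ); /* lanes < 0 */
        v = _mm512_mask_blend_ps(neg, v, zero);                  /* zero only those lanes */
        _mm512_storeu_ps(&data[i], v);
    }
    for (; i < n; i++)   /* scalar tail for the leftover elements */
        if (data[i] < 0.0f) data[i] = 0.0f;
}
```

An auto-vectorizing compiler can often emit something similar from the plain scalar loop, which is one way AVX-512 can help consumer software without developers hand-writing intrinsics.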
 

TeamRed2024

Upstanding
Aug 12, 2024
200
131
260
Well, the 9950X was hardly positively reviewed, was it? It also does actually beat the 285K in MT workloads, and low/idle power savings are hardly anything to write home about in a desktop environment. It does win on gaming power draw, but again, does that matter? You would only buy either of those CPUs if you mostly do productivity, which means that playing games for a few hours a day would only gain you something like $10 a year in power savings.

Well said. I bought the 9950X for productivity with some side gaming... I'd rather have the video editing/encoding/rendering performance over a few fps in gaming that I probably wouldn't even notice with my 4090 running the show.

I would definitely notice trying to do all that productivity on a 7800x3D.

As for power draw... couldn't care less. :ROFLMAO:
 
  • Like
Reactions: bit_user

bit_user

Titan
Ambassador
I really only pay attention to these things periodically. But are there just no benchmarks higher than 1080p at all?

One of the main points of building a new PC for me soon is to move up to 1440p. I know that as the resolution goes up the difference becomes smaller, but is it so small that you just don't worry about it now? Am I better off just sticking with the 9700K for another year and upgrading the GPU and monitor for now? (Obviously we're also going from PCIe 3.0 to 5.0 as well.)
How much your current CPU would be a bottleneck depends a lot on what GPU you'd upgrade to and which games you play.
 

JamesJones44

Reputable
Jan 22, 2021
858
797
5,760
I'm afraid it's not the scheduler, though. A lot of games perform better with E-cores on (at least until 14th gen) because they put non-critical gaming tasks on those. But with the 285K, since HT is gone, core-heavy games might be offloading some primary threads on the E-cores, which basically kills the performance. Cyberpunk is a prime example, since it seems like it's one of those games whose performance has tanked. It's losing to my Alder Lake, for god's sake, lol.
I don't think the gaming results are due to HT being removed. HT doesn't guarantee 2 threads per core; it's advertised that way, but that's not exactly how it works. At a very basic level, during a CPU cycle, if the pipeline isn't completely filled, SMT/HT can try to fill the remaining space with instructions from the second thread that fit, keeping the pipeline busy for that cycle. That's not the same as executing two threads at full speed, unless both threads' instructions are small enough to fit within a single cycle. This is why SMT/HT is not a slam dunk for all threaded workloads.

Considering most games don't max out 8 cores, HT shouldn't be a major loss, and in many benchmarks I've seen out there, SMT/HT typically doesn't make much difference for gaming (less than 3% on average). Anandtech did a deep dive on it and for games, the benefit of SMT/HT was pretty much nonexistent.

https://www.anandtech.com/show/1626...-multithreading-on-zen-3-and-amd-ryzen-5000/5

In truth HT was really invented to help with Northwood's extremely long pipelines which could be expensive for a lot of small work items. HT gave Intel a way of combining that work in one cycle. It has stuck around because, on the whole, it wasn't a net negative, and in some non-gaming MT workloads it had a fair uplift. Now, with CPU attacks centered around resources shared within a CPU (which HT/SMT requires in order to work), the question has become whether it's still a net positive on the whole. I think it will take a couple of product cycles to find out the answer to that question.
 

bit_user

Titan
Ambassador
I don't think the gaming results are due to HT being removed. HT doesn't guarantee 2 threads per core; it's advertised that way, but that's not exactly how it works. At a very basic level, during a CPU cycle, if the pipeline isn't completely filled, SMT/HT can try to fill the remaining space with instructions from the second thread that fit, keeping the pipeline busy for that cycle.
The only way any of that can happen is if the OS even assigns 2 threads to run on a given P-core, in the first place.

As for @TheHerald's speculation that "core-heavy games might be offloading some primary threads on the E-cores", those threads should actually run faster there than they would have on a fully occupied Raptor Lake P-core. Intel said that the Alder/Raptor E-cores were faster than using HT to run a second thread on a P-core that already had a thread running on it.

This is why SMT/HT is not a slam dunk for all threaded workloads.
No, it pretty much always is a net win, unless the two threads are thrashing each other's cache working set. This mainly seems to be an issue limited to floating point workloads, for whatever reason. Perhaps because it's just easier for most FP-heavy code to achieve high enough backend utilization that SMT has little upside and can pretty much only do harm?

Anandtech did a deep dive on it and for games, the benefit of SMT/HT was pretty much nonexistent.

https://www.anandtech.com/show/1626...-multithreading-on-zen-3-and-amd-ryzen-5000/5
That was a while ago and the average CPU has more cores/threads now. Games are probably using more of those threads, as well.

Now, I'm not saying HT should be a big win for games, even now, because I think part of the problem is that games don't have good ways of telling the OS how best to schedule their various threads. Instead, they probably do lots of hacks around CPU affinity masks, but that's pretty non-ideal.
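To make that affinity-mask hack concrete, here's a hypothetical sketch in C using the Win32 API (the 0xFF mask and the assumption that logical processors 0-7 are the P-cores are invented for the example; a real engine would query the topology, e.g. via GetLogicalProcessorInformationEx, rather than hard-code it):

```c
#include <windows.h>

/* Hypothetical: pin the calling thread to logical processors 0-7,
 * assuming (for the sake of the example) that those are the P-cores.
 * This is the kind of brittle affinity hack being described: the
 * right mask differs across CPU generations and SKUs. */
void pin_thread_to_assumed_pcores(void)
{
    DWORD_PTR mask = 0xFF;  /* bits 0..7 -> logical processors 0..7 */
    SetThreadAffinityMask(GetCurrentThread(), mask);
}
```

The fragility is the point: hard-coded masks break when the core layout changes, which is why richer scheduling hints to the OS/Thread Director would be preferable.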

In truth HT was really invented to help with Northwood's extremely long pipelines which could be expensive for a lot of small work items.
SMT was around for a long time before Intel branded it "Hyper Threading" and put it in the Pentium 4. As a technique, it's not limited to solving only one class of problems - maybe frontend bottlenecks, maybe branch mispredicts, maybe cache misses... perhaps also others.

Today, GPUs are the heaviest users of SMT.
 
  • Like
Reactions: NinoPino
Just some random thoughts on the 285K and ARL in general:

It seems Intel hasn't fared any better than AMD when it comes to Windows being problematic. I'm curious if Intel will further build out APO to take more control as it seems like this would be wise.

DLVR is a double-edged sword, but something that I think was absolutely necessary. Roman (Der8auer) went over this in his review. For all-core workloads it's costing power, and ARL can actually run at quite a bit lower power for the same performance when bypassing it. Doing so would take away the per-core voltage controls, thus hurting lightly threaded efficiency.

Idle efficiency didn't take a hit like latency did with the redesign which is a good thing. While people may shrug off things like this I think it depends on usage. Personally speaking I turn off my computer at night and on again the next day when I'm going to use it. In between it does not get shut down or put to sleep so every bit of efficiency makes a difference. Now that Intel has a similar power usage curve to AMD that makes them a superior buy should idle enter into the usage equation.

There are no slam dunks this generation from either AMD or Intel, but I think the 9800X3D will likely be the go-to gaming-only CPU (even if I doubt the uplift over the 7800X3D would make it worth replacing). Z890 is a better platform connectivity-wise than X870E, but neither one is cheap, and X670E is a potential lower-cost option for AMD.

I do think the ARL CPUs are good, and if picking between them and Zen 5 they would be my choice (X3D isn't on the table unless the clocks are much higher this time), but they aren't exciting. They do seem to be having random issues, which they also had at the ADL launch (albeit clearly different issues), and only time will tell how many are resolvable vs. architectural.

We've hit a point where neither Intel nor AMD can claim clear victory, and I think overall that's a good thing.
 
Aug 26, 2024
23
13
15
So let's recap:

1. Intel finally converts to EUV lithography on Meteor Lake chips ($100 million per ASML tool). Intel is also pursuing high-NA ASML tools ($500 million per ASML tool).
2. Intel converts to GAA transistor design for Arrow Lake and retires FinFET (GAA being a very complicated design). Intel also bragging about backside power delivery and MAXMIM decoupling capacitors.
3. Gaming performance is worse than AMD FinFET?? You kidding me? FinFET to GAA transition should have been groundbreaking.

I want FPS and quick load time. They should have canceled this node (1278) and skipped right to 1280. Power savings benefit is a joke.
 

bit_user

Titan
Ambassador
Idle efficiency didn't take a hit like latency did with the redesign which is a good thing. While people may shrug off things like this I think it depends on usage. Personally speaking I turn off my computer at night and on again the next day when I'm going to use it. In between it does not get shut down or put to sleep so every bit of efficiency makes a difference.
AFAIK, desktop CPUs don't dynamically down-clock their DDR5 frequency, and I'm pretty sure DDR5 DIMMs idle at a few W each. PSUs are also pretty inefficient at such low utilization, so you'd want to go with a GaN-based model, for lowest idle power*. Oh, and be sure to enable ASPM on your SSD. But, the main power hog at idle is definitely going to be your dGPU (if you have one). For my main desktop PC, I just use the iGPU.

Now that Intel has a similar power usage curve to AMD that makes them a superior buy should idle enter into the usage equation.
Have you seen Intel vs. AMD system-wide idle power compared at the wall? If so, I'd be surprised if the difference between CPU package idle power amounts to a whole lot. Again, it depends on what else is in the box, but where CPU idle power is truly a make-or-break issue is in laptops.

We've hit a point where neither Intel nor AMD can claim clear victory, and I think overall that's a good thing.
It's worth keeping in mind that AMD is at a disadvantage on manufacturing node and DRAM speed. Also, packaging technology, I think. This makes it all the more surprising they're so close. It also leaves more gains on the table for AMD to utilize in Zen 6.

* Actually, I know GaN PSUs are very efficient at high utilization, but does anyone know how well they do at low utilization?
 
Last edited:
  • Like
Reactions: TeamRed2024

bit_user

Titan
Ambassador
So let's recap:

1. Intel finally converts to EUV lithography on Meteor Lake chips ($100 million per ASML tool). Intel is also pursuing high-NA ASML tools ($500 million per ASML tool).
2. Intel converts to GAA transistor design for Arrow Lake and retires FinFET (GAA being a very complicated design). Intel also bragging about backside power delivery and MAXMIM decoupling capacitors.
3. Gaming performance is worse than AMD FinFET?? You kidding me? FinFET to GAA transition should have been groundbreaking.
None of that is relevant here. Wait for Nova/Panther Lake, on the Intel 18A node.

I want FPS and quick load time. They should have canceled this node (1278) and skipped right to 1280. Power savings benefit is a joke.
They did cancel 20A. Like Lunar Lake, this is using TSMC N3B.
 

JamesJones44

Reputable
Jan 22, 2021
858
797
5,760
No, it pretty much always is a net win, unless the two threads are thrashing each other's cache working set. This mainly seems to be an issue limited to floating point workloads, for whatever reason. Perhaps because it's just easier for most FP-heavy code to achieve high enough backend utilization that SMT has little upside and can pretty much only do harm?
It just depends on the workload. FP-heavy for sure, but it's not always a win, just a win most of the time. Phoronix has a bunch of tests and it's a mixed bag; overall it's typically a net win, but I wouldn't call it "always" unless you're comparing only the aggregate score, and I don't think that's representative. A gamer isn't going to see it that way for most games.


It's still a net win by ~15% overall, but I was just trying to illustrate that it's workload dependent and not a guarantee.


That was a while ago and the average CPU has more cores/threads now. Games are probably using more of those threads, as well.

Still seems to be the case for many games in a recent techpowerup review.

https://www.techpowerup.com/review/amd-ryzen-9-9700x-performance-smt-disabled/15.html

The gaming case with the 285K is curious: given the synthetic benchmarks, it seems like it should at least match the 9950X in most cases. Why it doesn't is a curiosity for sure, but I don't think it's HT-related.
 
  • Like
Reactions: bit_user

JamesJones44

Reputable
Jan 22, 2021
858
797
5,760
On the gaming side, the one thing that sticks out from the slides is that Intel tweaked their Thread Director. I wonder if those tweaks are causing some threads to get scheduled on E-cores instead of P-cores. I wonder if it will be a case where games (or even the OS) will have to be updated to address changes in Thread Director in order to get performance up near RPL.

Given Intel has said nothing about the gaming performance dips, I'm probably wrong, but it was a thought on why gaming performance seems so far off.
 

TeamRed2024

Upstanding
Aug 12, 2024
200
131
260
Have you seen Intel vs. AMD system-wide idle power compared at the wall? If so, I'd be surprised if the difference between CPU package idle power amounts to a whole lot. Again, it depends on what else is in the box, but where CPU idle power is truly a make-or-break issue is in laptops.
100%.

I absolutely do not care about the power usage of my desktop.

The laptop though? I care a little more... but so far it hasn't been an issue. 1 hour of gaming earlier (Diablo 4) took my battery from 100 to 40%. It's nothing I can't live with.

This is exactly what TPU started doing in their reviews and it consistently shows Zen4/5 using 20W+ more than Intel.

So about the same as 2 LED bulbs?
 
  • Like
Reactions: bit_user
AFAIK, desktop CPUs don't dynamically down-clock their DDR5 frequency, and I'm pretty sure DDR5 DIMMs idle at a few W each. PSUs are also pretty inefficient at such low utilization, so you'd want to go with a GaN-based model, for lowest idle power*. Oh, and be sure to enable ASPM on your SSD. But, the main power hog at idle is definitely going to be your dGPU (if you have one). For my main desktop PC, I just use the iGPU.


Have you seen Intel vs. AMD system-wide idle power compared at the wall? If so, I'd be surprised if the difference between CPU package idle power amounts to a whole lot. Again, it depends on what else is in the box, but where CPU idle power is truly a make-or-break issue is in laptops.


It's worth keeping in mind that AMD is at a disadvantage on manufacturing node and DRAM speed. Also, packaging technology, I think. This makes it all the more surprising they're so close. It also leaves more gains on the table for AMD to utilize in Zen 6.

* Actually, I know GaN PSUs are very efficient at high utilization, but does anyone know how well they do at low utilization?
AMD uses an active interposer and it sucks a lot of power at full load, but also at IDLE. Anandtech had the measurements for that in their old 3950X or 5950X reviews. Keep in mind the mesh they use has not changed since first introduced. Minor tweaks at best, but mainly the same. So, improving the overall IO and CPU dies, but still keeping the piggy package mesh.

https://www.anandtech.com/show/1621...e-review-5950x-5900x-5800x-and-5700x-tested/8

And a very good read on what AMD was thinking back then (I guess?): https://www.anandtech.com/show/16930/does-an-amd-chiplet-have-a-core-count-limit

The mesh power consumption was amplified a lot in EPYC packaging. I remember seeing measurements where it uses around 150W of the total power budget, which is a bit over half for some models?

All this to say that AMD is still willing to pay a price on the low-power side (inefficiency) to keep the cost-to-performance balance of its current packaging. Intel, again, is using a far more expensive packaging solution which, all in all, makes it about 20W more efficient in the best-case scenario :D

Regards.
 

TeamRed2024

Upstanding
Aug 12, 2024
200
131
260
AMD uses an active interposer and it sucks a lot of power at full load, but also at IDLE.

You're not wrong. My 9950X idles in the 50-60°C range, and that's with an AIO. It doesn't really heat up that much under load, though... into the 70s is the highest I've seen.

Stress testing has pushed it to 90-95°C, but that's something I rarely do anymore.

Nothing I can't live with... the performance is good.
 
  • Like
Reactions: bit_user

JayNor

Honorable
May 31, 2019
458
103
10,860
"The new chips come with 24 lanes of PCIe 5.0 support, with..."

... so, we'll all be surprised when Battlemage cards arrive with 16 lanes of PCIe 5.0? The gaming reviews will need to be rewritten.

The doubled data rate of PCIe 5.0 should benefit AI processing even more.
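(For reference: PCIe 5.0 runs at 32 GT/s per lane, so an x16 link is roughly 63 GB/s in each direction, double the ~32 GB/s of PCIe 4.0 x16.)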
 
Sep 12, 2024
12
4
15
If it's insane, Ryzen is super insane by being even more efficient.
Both are built on TSMC wafers, so there shouldn't be much difference. Intel benefits a little because it's on TSMC 3nm technology, while AMD is on 4nm. Idle = Intel wins; full load = AMD and Intel are on par.
 

bit_user

Titan
Ambassador
This is exactly what TPU started doing in their reviews and it consistently shows Zen4/5 using 20W+ more than Intel.
20W sounds like a lot, but not so much when it's the difference between 58W and 78W.

[Image: power-idle.png — system idle power chart]


Source: https://www.techpowerup.com/review/intel-core-ultra-9-285k/24.html