Review Intel Core Ultra 9 285K Review: Intel Throws a Lateral with Arrow Lake

Page 7

Peksha

Prominent
Sep 2, 2023
44
31
560
Because I'll see a 0% improvement in my gaming going for the 7800X3D, since my GPU is a huge bottleneck anyway, but I'll notice that the 285K is up to 250% faster in other workloads.
Only until your "not so fast" video card requires DLSS/FSR + frame generation to be enabled, which renders at about 50% of the original resolution.
It's especially surprising to read this when almost every game released in the last three months has required aggressive upscaling with frame generation enabled by default, just to avoid bringing even a 4090 to its knees.
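For context, a rough sketch of what those upscaler presets mean for internal render resolution; the per-axis scale factors below are the commonly cited DLSS/FSR defaults, not figures taken from this thread:

```python
# Approximate internal render resolutions at common DLSS/FSR presets.
# Scale factors are the commonly cited per-axis defaults; exact values
# can vary by game and upscaler version.
PRESETS = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.50,      # the "about 50%" case mentioned above (25% of the pixels)
    "Ultra Performance": 1 / 3,
}

def internal_resolution(width: int, height: int, scale: float) -> tuple[int, int]:
    """Return the approximate internal render resolution for a given per-axis scale."""
    return round(width * scale), round(height * scale)

for name, scale in PRESETS.items():
    w, h = internal_resolution(3840, 2160, scale)  # 4K output
    print(f"4K output, {name}: ~{w}x{h} internal")
```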
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
It would be prudent to mention that you game on a 4k monitor at max settings.
I can drop to 1440p and the GPU will still largely be the main limiting factor. Of course it depends on the game; if I play Valorant or CS:GO, my CPU is the bottleneck. But then again, I'm getting around 600 fps in Valorant with this outdated CPU, so I'm not sure I need more.
 
  • Like
Reactions: helper800

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
The entire original post on this subject was about gaming efficiency, for which neither the 285K nor the 9950X wins in any gaming category, so I have no idea what point is being made on that front.
No, the entire original post was about high-end, fast chips that can also play games decently. The 285K does that while being by far the most efficient.
 

rluker5

Distinguished
Jun 23, 2014
901
574
19,760
This gen seems to be aimed at OEM desktops and servers, not so much at gamers.
These chips aren't the best for servers, but it looks like AMD's advantages there are quickly fading away.

Perhaps in the future Intel will come out with a gamer CPU with the memory controller on the compute tile, or they will add extra cache again.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
So the whole argument is about saving $6 a year when it will cost $1000 to upgrade a system to a 285K... 👌
Yep, that's exactly the argument. It gets boring talking to AMD fans, man. I've been drowned in tears over Intel's inefficiency for the last two years, and suddenly it's just $6 after all. Good to know.
 
  • Like
Reactions: rluker5
Yep, that's exactly the argument. It gets boring talking to AMD fans, man. I've been drowned in tears over Intel's inefficiency for the last two years, and suddenly it's just $6 after all. Good to know.
That's because the difference now is 6 dollars, but before, when you were getting lambasted about AMD efficiency, it was more like 15-30 dollars depending on what your kWh rate is.
 
https://www.youtube.com/watch?v=4wdQpVcL_a4


Understanding the root cause confirms the assertion, I'd say.

Regards.
But it doesn't, because the 7950X3D is still slower than the 7800X3D in everyone's benchmarks. If everything were actually working properly, that wouldn't be the case, since its boost clock is higher. This might be because the two CCDs cause random issues in general, but it doesn't change the fact that the 7800X3D provides higher performance across the board.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
But it doesn't, because the 7950X3D is still slower than the 7800X3D in everyone's benchmarks. If everything were actually working properly, that wouldn't be the case, since its boost clock is higher. This might be because the two CCDs cause random issues in general, but it doesn't change the fact that the 7800X3D provides higher performance across the board.
I think it's more that reviewers don't bother or don't know. In the overclocking community, where people know what the heck they're doing, I've never seen the 7800X3D be faster than the 7950X3D.
 
Not sure if you are serious or just joking honestly. Please tell me you are joking 😁
Maybe you do a lot of CPU MT tasks for work; a 12900K at full stock load vs. a 5950X at full stock load is ~300 W vs. ~180 W. (~120 W × 10 hours × 365 days) / 1000 = 438 kWh; 438 kWh × $0.43 per kWh = $188.34 per year, which works out to about $15.70 a month in extra electricity costs.
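As a quick sanity check, here's that arithmetic as a short script; the ~120 W delta, the 10 hours/day of full load, and the $0.43/kWh rate are the assumptions from the figures above, not measured values:

```python
# Electricity-cost delta for a sustained extra CPU power draw.
# Inputs mirror the figures quoted above; adjust for your own usage and rate.
extra_watts = 300 - 180        # ~120 W more at full MT load (12900K vs. 5950X, stock)
hours_per_day = 10             # assumed full-load hours per day
price_per_kwh = 0.43           # assumed electricity price in dollars per kWh

kwh_per_year = extra_watts * hours_per_day * 365 / 1000
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.0f} kWh/year -> ${cost_per_year:.2f}/year "
      f"(~${cost_per_year / 12:.1f}/month)")
# -> 438 kWh/year -> $188.34/year (~$15.7/month)
```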

In all seriousness, energy efficiency matters, but the idea that I cling to it as a crutch for AMD is nonsense.

[Image: efficiency-multithread.png]

Source for the above graph.
 
I think it's more that reviewers don't bother or don't know. In the overclocking community, where people know what the heck they're doing, I've never seen the 7800X3D be faster than the 7950X3D.
Show some data to back that up, then, rather than making arbitrary claims. Let's say Alan Wake 2, since that's a title where dual-CCD chips seem to underperform, period.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
Maybe you do a lot of CPU MT tasks for work; a 12900K at full stock load vs. a 5950X at full stock load is ~300 W vs. ~180 W. (~120 W × 10 hours × 365 days) / 1000 = 438 kWh; 438 kWh × $0.43 per kWh = $188.34 per year, which works out to about $15.70 a month in extra electricity costs.

In all seriousness, energy efficiency matters, but the idea that I cling to it as a crutch for AMD is nonsense.

[Image: efficiency-multithread.png]

Source for the above graph.
I'm not running the 12900K at 300 watts, but how is the above relevant to the discussion?
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
Show some data to back that up, then, rather than making arbitrary claims. Let's say Alan Wake 2, since that's a title where dual-CCD chips seem to underperform, period.
I have some tests on TLOU and Cyberpunk; nobody really tests Alan Wake. I can ask, but I don't think anyone will care. Is it CPU intensive? I finished the game and, as usual, the GPU was pegged the whole time.

But in TLOU and CP2077 the R9 is always at least 10% ahead of the R7. The only 7800X3D I've seen actually being fast in these games is a guy using a chiller with 8200 MT/s RAM and FSB overclocking, running 5.4 GHz all-core. That was even faster than my tuned 12900K, which isn't common.
 
I have some tests on TLOU and Cyberpunk; nobody really tests Alan Wake. I can ask, but I don't think anyone will care. Is it CPU intensive? I finished the game and, as usual, the GPU was pegged the whole time.

But in TLOU and CP2077 the R9 is always at least 10% ahead of the R7. The only 7800X3D I've seen actually being fast in these games is a guy using a chiller with 8200 MT/s RAM and FSB overclocking, running 5.4 GHz all-core. That was even faster than my tuned 12900K, which isn't common.
AW2 depends on the area, because like CP2077 it's very graphically demanding but also has some very CPU-heavy areas.

TLOU is one of the only titles that seems to mostly work right. CP2077 depends on whether or not the built-in benchmark is being used.

There's also no way a 7950X3D with stock behavior is going to be 10% faster in anything, since the clocks are only about 5% apart. You can also see across various reviewers' benchmarks when the wrong CCD is still being used. This can be fixed manually, of course, but then you're into manual-tweaking territory, which will always net better results on any CPU.

The bottom line is that unless a reviewer is reusing a Windows install from a CPU other than a dual-CCD X3D part, the results are valid. Something having the potential to be better with user intervention is not the same as it just being better.
 

logainofhades

Titan
Moderator
I am sticking with my 12700K for the time being, but if I were in the market to build a gaming rig, I would be waiting on the 9800X3D. I wouldn't even consider Arrow Lake. What I play favors X3D chips by a huge margin. Arrow Lake is about as exciting to me as the 7700K was vs. the 6700K.
 
  • Like
Reactions: YSCCC and helper800

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
AW2 depends on the area, because like CP2077 it's very graphically demanding but also has some very CPU-heavy areas.

TLOU is one of the only titles that seems to mostly work right. CP2077 depends on whether or not the built-in benchmark is being used.

There's also no way a 7950X3D with stock behavior is going to be 10% faster in anything, since the clocks are only about 5% apart. You can also see across various reviewers' benchmarks when the wrong CCD is still being used. This can be fixed manually, of course, but then you're into manual-tweaking territory, which will always net better results on any CPU.

The bottom line is that unless a reviewer is reusing a Windows install from a CPU other than a dual-CCD X3D part, the results are valid. Something having the potential to be better with user intervention is not the same as it just being better.
Well, if you manually make sure the AMD chipset driver and Xbox Game Bar scheduling are working properly, is that considered "user intervention"?

I've literally seen the 7950X3D be more than 10% faster in, e.g., TLOU, but both CCDs were in use; the game scales with cores.
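For what it's worth, the kind of "user intervention" being debated here usually amounts to forcing a game onto the V-cache CCD when the driver/Game Bar path misfires. A minimal sketch of doing that by hand with psutil; the process name and core list are illustrative assumptions (on a 7950X3D the V-cache CCD is normally CCD0, i.e. logical CPUs 0-15 with SMT, but verify on your own system), and it may need to run elevated:

```python
# Pin a running game to the V-cache CCD as a manual alternative to the
# AMD chipset driver + Xbox Game Bar scheduling path. Illustrative only.
import psutil

GAME_EXE = "tlou-i.exe"          # hypothetical process name; adjust to your game
VCACHE_CORES = list(range(16))   # assumed: CCD0 = logical CPUs 0-15 on a 7950X3D

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == GAME_EXE:
        proc.cpu_affinity(VCACHE_CORES)   # restrict the process to CCD0
        print(f"Pinned PID {proc.pid} to cores 0-15")
```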
 
After reading the almost 150 comments in here, all I see is a lot of "Yes, it's good" or "No, it sux". Even... 'the worstest evar!'
Without a lot to back it up.

Building a new PC in the next couple of months, the Ultra 7 265k at the top of my short list.

Convince me why this is a bad idea or a good idea....

Conditions:
1. I don't really care about no upgrade path for this socket. By the time I need a new/better CPU, I'm changing the whole thing anyway.

2. Not gaming. CAD/photo/video/programming.

3. Probably paired with a 4070 variant and 64GB RAM.


Convince me.
No. You made up your mind and are just posing.

Will you try to convince me not to wait for the 9950X3D and get the 285K instead?

Regards.
 
  • Like
Reactions: Elusive Ruse

logainofhades

Titan
Moderator
After reading the almost 150 comments in here, all I see is a lot of "Yes, it's good" or "No, it sux". Even... 'the worstest evar!'
Without a lot to back it up.

Building a new PC in the next couple of months, the Ultra 7 265k at the top of my short list.

Convince me why this is a bad idea or a good idea....

Conditions:
1. I don't really care about no upgrade path for this socket. By the time I need a new/better CPU, I'm changing the whole thing anyway.

2. Not gaming. CAD/photo/video/programming.

3. Probably paired with a 4070 variant and 64GB RAM.


Convince me.

I think it's a bad idea, as they aren't vastly superior to 12th gen, and you can get 12th gen a lot cheaper. A 12900K can be had for a good $125 less and can be paired with cheaper DDR4 as well.
 

USAFRet

Titan
Moderator
No. You made up your mind and are just posing.

Will you try to convince me not to wait for the 9950X3D and get the 285K instead?

Regards.
No, I have NOT, in any way, made up my mind.

An AMD 9-something was at the top of my short list, and still is.

I am agnostic as far as Intel vs AMD.
Currently:
My main system - AMD
Primary laptop - Intel
HTPC - AMD

I don't care.....