News Intel's Arrow Lake fix doesn't 'fix' overall gaming performance or match the company's bad marketing claims - Core Ultra 200S still trails AMD and...

So, Arrow Lake still manages to pull around 150 fps on average in the tested games, which sounds like a "smooth enough" gaming experience to me.
Yes, no other CPU even remotely touches the king of gaming (AMD's 9800X3D), but even with it being roughly 33% faster, how exactly does the gaming experience differ between 150 fps (Ultra 285K) and 200 fps (9800X3D)? I would say there is no tangible difference in real-life gaming at all!
The issue here is that you're not getting the performance Intel itself said you would. These performance figures don't exist in a vacuum, either: you're still paying for this lower-than-expected performance, and a competitor offers higher performance for roughly the same cost. So why would you choose the inferior product?

It makes sense if you already have an LGA 1851 Intel motherboard, but if you're actually shopping for a board/CPU right now, articles like this should steer you away from Intel's Arrow Lake.

Sufficient gaming fps takes a back seat to getting the expected performance you paid for. And high-end CPUs are often paired with high-end GPUs, so seeing Arrow Lake leave fps on the table must be aggravating for early adopters.
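To put the 150-vs-200 fps debate above in concrete terms, the gap works out to under 2 ms of frame time per frame. A quick back-of-the-envelope sketch (the fps figures are the rounded ones quoted above):

```python
# Convert average fps to per-frame render time in milliseconds.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

t_285k = frame_time_ms(150)  # Ultra 285K at ~150 fps average
t_x3d = frame_time_ms(200)   # 9800X3D at ~200 fps average

print(f"285K:  {t_285k:.2f} ms/frame")    # 6.67 ms
print(f"X3D:   {t_x3d:.2f} ms/frame")     # 5.00 ms
print(f"delta: {t_285k - t_x3d:.2f} ms")  # 1.67 ms per frame
```

Whether a ~1.7 ms per-frame difference is tangible is exactly what the two sides here disagree on; averages also say nothing about stutter, which the lows capture.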
 
You can call out the "least common denominator of the people" thing as well.

The fact that you need to attack my account instead of the argument doesn't reflect well on your claimed role.

I am not denying Intel's responsibility for the Arrow Lake benchmarks; it's just that Tom's should not publish such an incomplete analysis and lay the blame specifically on Intel's marketing team.
I think the Tom's article is fine; it covers what is important to the consumer, namely: with something as simple as a mobo swap, all stock settings, the new "fix" isn't much of a fix and Arrow Lake is still lagging behind a lot. How on earth do you expect Tom's to dig into the 100+ settings of a mobo, and even more combinations of Windows settings and drivers, to figure out what went wrong compared to Intel's marketing numbers? Intel didn't even publish their own hardware platform and settings. It's Intel's job to investigate the real-world discrepancy, not reviewers'.

And more importantly, people read reviews to aid their purchase decisions. For some long-term Intel users, it will be important to see these "post-fix" reviews, to learn what they can expect from real-world stock mobo defaults before making a decision, not a month later, when THG may or may not have figured out why that's the case.
 
Before declaring yourself wholeheartedly anti-Intel, can someone tell me how I would know whether this article was tested with the updated BIOS (0x114 microcode / CSME) on both boards? Where do I find the BIOS version that was tested?

If you guys are saying that Intel's claim is wrong, this article serves no better purpose.
 
I think the Tom's article is fine; it covers what is important to the consumer, namely: with something as simple as a mobo swap, all stock settings, the new "fix" isn't much of a fix and Arrow Lake is still lagging behind a lot. How on earth do you expect Tom's to dig into the 100+ settings of a mobo, and even more combinations of Windows settings and drivers, to figure out what went wrong compared to Intel's marketing numbers? Intel didn't even publish their own hardware platform and settings. It's Intel's job to investigate the real-world discrepancy, not reviewers'.

And more importantly, people read reviews to aid their purchase decisions. For some long-term Intel users, it will be important to see these "post-fix" reviews, to learn what they can expect from real-world stock mobo defaults before making a decision, not a month later, when THG may or may not have figured out why that's the case.
Well, just point me to the BIOS version that was tested here: two boards, four data points.
ASUS ROG Maximus Z890 Hero
MSI MEG Z890 Ace
 
It makes sense if you already have an LGA 1851 Intel motherboard, but if you're actually shopping for a board/CPU right now, articles like this should steer you away from Intel's Arrow Lake.

IMHO you need to put that in price perspective too. Here in Italy we don't have the giant US hardware market.
Judging by this chart, if one can't (and won't) afford an X3D chip and wants to avoid 13th/14th-gen Intel due to the microcode problems, a real gaming advantage over the 245K would be (I guess) a 9700X, with its +13 fps in 1% lows and +20 fps on average at 1080p on an RTX 4090, which few can afford as well. But these are the prices here:

- Intel 245K - 340€ (Amazon.it), 311€ (Amazon.de)
- Intel 245KF - 299€ (Both Amazon.it and .de)
- AMD 9700X - 426€ (Amazon.it), 441€ (last seen price on Amazon.de)
- AMD 9600X - 335€ (Amazon.it), +10FPS on average

About motherboards, I know that the ITX and ASUS ones are not to be considered the standard, but:

- B860-I - 296€ (Amazon.it), 236€ (Amazon.es)
- B850-I - 359€
- B650E-I - 291€

Another option would be the budget-friendly 7600 and a Gigabyte/ASRock B650 board, but that won't give a huge leap over the 245K.
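Using the prices and average-fps deltas quoted above, the cost per extra frame works out roughly as below. This is an illustrative sketch, not a full value analysis: it takes the cheapest listed price per chip and treats the 245KF as the baseline, an assumption, since the fps deltas were quoted against the 245K.

```python
# Hypothetical euro-per-frame comparison using the cheapest listed price
# for each chip and the average-fps deltas vs the 245K quoted above.
chips = {
    # name: (price_eur, avg_fps_delta_vs_245K)
    "245KF": (299, 0),
    "9600X": (335, 10),
    "9700X": (426, 20),
}

base_price = chips["245KF"][0]
for name, (price, delta) in chips.items():
    if delta:
        premium = price - base_price
        print(f"{name}: +{delta} fps avg for {premium} EUR extra "
              f"(~{premium / delta:.1f} EUR per extra fps)")
```

At these local prices the 9600X's premium per extra frame is much smaller than the 9700X's, which supports the point that the value picture looks different outside the US market.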
 
Well, just point me to the BIOS version that was tested here: two boards, four data points.
ASUS ROG Maximus Z890 Hero
MSI MEG Z890 Ace
From what was written in the review, I believe they tested the latest microcode-equipped BIOS, and with only two data points, one being bad is bad enough; it's not like we should have to hunt for a profile that works, a default profile that really works across the board is a good thing. And I've yet to see any reviews on YouTube or elsewhere showing any remarkable fix to Arrow Lake performance with all these latest BIOS and Windows versions. Heck, my undervolted 14900K (a bad bin) can get within 10% of the 285K in productivity workloads and beat it in gaming... and that is already considered a kind of bad product, even without the degradation issue, due to its power draw. It's too bad for Intel this gen as well, sadly.
 
From what was written in the review, I believe they tested the latest microcode-equipped BIOS
The point is that they're not providing necessary information. If you're testing something that has very specific requirements and there have been several BIOS/ME releases, that information is important.

Example: The first publicly available BIOS for the MSI Z890 Ace which notes having a new enough ME version is dated 1/16 and this article was posted early 1/18. Now maybe that's the version which was tested, but without the article providing that information it's just a guess.

There's no excuse for not providing BIOS/ME version information for what is being tested.

I'd also like to have seen latency testing, but that's because I've seen several reports of these updates causing memory latency issues. So that is more of something I wish had been done, but didn't expect at all.
 
The point is that they're not providing necessary information. If you're testing something that has very specific requirements and there have been several BIOS/ME releases, that information is important.

Example: The first publicly available BIOS for the MSI Z890 Ace which notes having a new enough ME version is dated 1/16 and this article was posted early 1/18. Now maybe that's the version which was tested, but without the article providing that information it's just a guess.

There's no excuse for not providing BIOS/ME version information for what is being tested.

I'd also like to have seen latency testing, but that's because I've seen several reports of these updates causing memory latency issues. So that is more of something I wish had been done, but didn't expect at all.
Sure, the BIOS version info is a legitimate request for a complete review, yet for a brief, news-style day-one test of the fix and the irregularities they found, it is sufficient for what's most important: does the fix work with a simple install, and bam, the magic happens?

I personally read this as something like the Battlemage CPU-overhead issue or the 7800X3D burning news articles. Hopefully they are doing a more detailed, full review of the fix, or maybe Intel is promising reviewers more fixes to come, so the full review is on hold.
 
It isn't about 150-200 FPS being sufficient or insufficient.

For Intel to cover development/manufacturing costs, they have to persuade gamers/developers/etc. to throw older AMD/Intel CPUs in the garbage bin and pay Intel money for new CPUs. If the new CPUs don't generally perform better, then upgrading from older CPUs contradicts economic rationality.
I fully agree with that, and I said (in the very same post) that Intel deserves every bit of being called out. The main point I am trying to make is that people are arguing over 150 vs 200 fps while there is actually a much more important aspect: the whole package of general performance, efficiency, and price. So, if Arrow Lake is cheaper than the 9800X3D while offering similar general performance and gaming performance that is not tangibly worse, then it might of course be of interest to some buyers.
 
For non-gaming purposes, pretty much anything will do. I think, to make your point clearer, you need to qualify WHICH non-gaming tasks apply here, because for running Office 365 and a browser (probably 98% of non-gaming use), nobody needs an i9 or whatever Intel is calling it this gen.

Also consider that the gap between PC gaming and console gaming is just as wide as for many non-gaming tasks. They exist as separate markets for a reason.

As for the processors in question: they are fine. Bleeding-edge frame rates at 1080p low settings are only for comparisons where the CPU is exposed as the performance limiter, i.e., benchmarking. In real-world scenarios they will achieve mostly identical performance to both their immediate predecessors and AMD's offerings.

Remain brand agnostic, people; nobody needs that kind of stress in their life.
Studio work.
 
The problem with that, ever since the RPL degradation issue, is: who can guarantee such tweaks aren't going to do something suicidal? If someone discovered a fix like overclocking the ring bus that gets 15% more performance, and everyone followed the guide and somehow fried their chips, that would be another big issue. The same goes for undervolting: it takes considerable time to test the real stability limit, and when people buy an unlucky chip that essentially cannot take such tweaks, they can't get a refund. So for a journalist, I do think testing things at stock is much more meaningful.
The tweaks I made stayed well within the headroom of the chip. Not even getting close to the limits.
 
The tweaks I made stayed well within the headroom of the chip. Not even getting close to the limits.
Thing is, can you guarantee it works 100% on all Arrow Lake chips out there? Stock is stock for a reason. And when you throw in tweaks, the competition ARL is up against can also be tweaked/tuned/megatuned, whatever one would like to call it, which makes the comparison even more meaningless.
 
Yes and no. I feel Bulldozer was an overall regression, while Arrow Lake is generally a good step forward in power requirements and most non-gaming use cases. It falls flat when it comes to gaming, and not everyone buys a PC to game on. So yes, it regressed, but not as badly as AMD's Bulldozer/FX series. The big problem now is that the Zen 5 chips are performing pretty much in line with Intel's Arrow Lake, so while Intel's chips are good in some non-gaming use cases, that still does not make them attractive. If I used my system for photo/video editing and some gaming, I would go with a Ryzen CPU that does well across the board.

Bulldozer had two issues:

1. It was a completely different design, so Windows 7/8/10 couldn't get its OS head around what to do with the cores and treated a 6-core processor as a 3-core one. That was due to shared resources and core parking, something they tried to work on.
2. The shared FPU per module made its floating-point performance tank, and the 16 KB L1D cache, from what I could gather, was too small.

Makes me wonder what would happen if you put some of AMD's faster cache on one of these chips now, whether it would have helped in any way.
 
Thing is, can you guarantee it works 100% on all Arrow Lake chips out there? Stock is stock for a reason. And when you throw in tweaks, the competition ARL is up against can also be tweaked/tuned/megatuned, whatever one would like to call it, which makes the comparison even more meaningless.
No, of course not. Not everyone is going to make the changes; many are going to be happy with the chips exactly as they are and won't want to change anything. For those people it's fine to leave them as they are. I just wanted mine to be a little bit more robust.
 
No, of course not. Not everyone is going to make the changes; many are going to be happy with the chips exactly as they are and won't want to change anything. For those people it's fine to leave them as they are. I just wanted mine to be a little bit more robust.
Then there's actually no conflict of opinion here. I am fine with people reporting OC possibilities or tuning guides/samples, but it would be far too complicated to compare random OC'd samples against each other. So in reviews I usually just look at stock performance to see what the chip is capable of by default, maybe along with the state of bugs noted in the review.

But TBH, this gen Intel really doesn't seem to have any attractive point over AMD; even for production the lead is not significant in most cases.
And as someone who bought a 12700KF and drop-in replaced it with a 14900K for the same reason, doing production and gaming on the same machine: for me, gaming is what really benefited from the 14900K not bottlenecking in flight simming. For production, I can easily live with being around 10% slower in rendering without it really affecting my business. What takes up most of my time is the video and photo editing itself; for the transcoding or RAW-to-JPEG/PNG portion, I can just click process, go for lunch or dinner, and it's finished when I'm back. Even for large batches, it's not difficult to click and let it run overnight without it affecting my life. In gaming, by contrast, a stutter is a stutter.
 
Something is very wrong here. Arrow Lake can't be that bad. A good journalist should try to dig out what caused the discrepancy between Intel's labs and reviewers instead of blaming Intel's marketing.
Well, marketing is part of Intel. If the CPU can't be fixed to deliver what they promised, that should entitle every buyer to a partial or full refund. Easy win in a class-action lawsuit...

Then Intel could just come out and say, "Well, that's it, we can't fix it; if you want to buy it, be warned that this is what you'll get for your money," and then adjust the price to reflect the CPU's real value.

I'm not saying it's not a good product, but then charge the right price for it.
 
Then there's actually no conflict of opinion here. I am fine with people reporting OC possibilities or tuning guides/samples, but it would be far too complicated to compare random OC'd samples against each other. So in reviews I usually just look at stock performance to see what the chip is capable of by default, maybe along with the state of bugs noted in the review.

But TBH, this gen Intel really doesn't seem to have any attractive point over AMD; even for production the lead is not significant in most cases.
And as someone who bought a 12700KF and drop-in replaced it with a 14900K for the same reason, doing production and gaming on the same machine: for me, gaming is what really benefited from the 14900K not bottlenecking in flight simming. For production, I can easily live with being around 10% slower in rendering without it really affecting my business. What takes up most of my time is the video and photo editing itself; for the transcoding or RAW-to-JPEG/PNG portion, I can just click process, go for lunch or dinner, and it's finished when I'm back. Even for large batches, it's not difficult to click and let it run overnight without it affecting my life. In gaming, by contrast, a stutter is a stutter.
And a stutter is so insignificant it means nothing, and most of the time I don't even notice it. Lol
 
And a stutter is so insignificant it means nothing, and most of the time I don't even notice it. Lol
A stutter, AFAIK, is by definition noticeable; unnoticeable ones aren't called stutter, they're called FPS variation. When you look at the 1% or 0.1% lows, in some cases they would be quite annoying in a game. It doesn't matter that 99.9% of frames are above 200 FPS: if, say, a boss jumps out in an ARPG, or in flight simming a gust of wind hits during the landing phase and causes a blink down to 15 FPS or so, it affects your reaction at the critical moment. In simming, it goes from smooth and immersive to laggy. Being stutter-free is what matters most to gamers, and the X3D chips really shine in those areas.
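For anyone wondering how those "1% lows" are derived from a benchmark run, one common definition is the average fps over the slowest 1% of captured frames; capture tools differ in the exact method, so this Python sketch is only illustrative:

```python
# One common definition of the "1% low": the average fps computed over
# the slowest 1% of frames in a capture.
def one_percent_low(frame_times_ms):
    slowest = sorted(frame_times_ms, reverse=True)  # longest frames first
    n = max(1, len(slowest) // 100)                 # worst 1% of samples
    avg_ms = sum(slowest[:n]) / n
    return 1000.0 / avg_ms

# 990 smooth frames at 5 ms (200 fps) plus 10 hitches at 66.7 ms (~15 fps):
frames = [5.0] * 990 + [66.7] * 10
print(f"average fps: {1000 * len(frames) / sum(frames):.0f}")  # high overall
print(f"1% low fps:  {one_percent_low(frames):.0f}")           # exposes the hitches
```

With these made-up frame times the run still averages well above 170 fps, yet the 1% low collapses to roughly 15 fps, which is exactly the kind of momentary hitch described above that averages hide.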
 
A stutter, AFAIK, is by definition noticeable; unnoticeable ones aren't called stutter, they're called FPS variation. When you look at the 1% or 0.1% lows, in some cases they would be quite annoying in a game. It doesn't matter that 99.9% of frames are above 200 FPS: if, say, a boss jumps out in an ARPG, or in flight simming a gust of wind hits during the landing phase and causes a blink down to 15 FPS or so, it affects your reaction at the critical moment. In simming, it goes from smooth and immersive to laggy. Being stutter-free is what matters most to gamers, and the X3D chips really shine in those areas.
Oh noes! Then you have to do it again... Terrible. Lol, it's games, for heaven's sake, not surgery.
 
Massively happy that back in Nov '23 I switched to AMD for my first AMD build ever. I was on a 7-year-old Intel 5960X 8c/16t HEDT X99 platform. My AMD 7950X3D, Nvidia 4080, 64 GB of DDR5 CL30 memory, and 2x 4 TB Gen 5 NVMe drives are kicking ass. I had used Intel exclusively since the early '80s (outside my C64/Amiga 500). Even Win 11 23H2 is decent (runs well); no clue when 24H2 will be available on my system (I play a lot of games that use EasyAntiCheat) 😛 I was briefly tempted by a 14900K, but in the end I didn't want my new rig to be a space heater requiring its own nuclear reactor to power it, lol. My new rig doesn't even hit 500 W total power draw (Corsair HX1000i PSU, as seen in iCUE software monitoring).