News Intel's Arrow Lake fix doesn't 'fix' overall gaming performance or match the company's bad marketing claims - Core Ultra 200S still trails AMD and...

These clowns just can't do anything right. It's like Bulldozer only blue this time.

Guess it will take them 2-3 more years to get over it.

I was about to say this. Intel is actually having a Bulldozer moment; something is wrong with these CPUs.

The other issue is that they can't just throw power at the wall again and hope it performs better.

That, and I think the E-cores sharing resources with the P-cores could be just like Bulldozer's cores sharing the same resources.
 
My results were much different, but I also tweaked the interchip timing slightly. If you try doing that first, I think you will get much better results. The other point is that I don't know anyone with a PC who only uses it to play games; if that's all you're using your machine for, you're probably better off getting a console. For non-gaming purposes the Intel chip is much better than the AMD one, even without the interchip timings being tweaked.
The problem with that is, since the RPL degradation issue, who can guarantee such tweaks aren't going to do something suicidal? If someone discovered that a fix like overclocking the ring bus gets 15% more performance, and everyone went all-in and somehow fried their chip, that would be another big issue. The same goes for undervolting: it takes considerable time to test for the real stability limit, and people who bought an unlucky chip that essentially cannot do such tweaks can't get a refund. So for a journalist, I do think testing stuff at stock is much more meaningful.
 
Intel listed all of the test bed setup conditions on the slide deck for each of the claims. I'm not sure it was stated that Tom's testing examined identical original and updated testing with the same setups.
Agree, but when the conditions are largely the same, I question whether there is a problem with their testing or whether they are doing something people are not aware of to get those "magical" performance numbers. It seems the performance numbers Intel claims can only be reproduced by Intel, both before launch and after the fixes.
 
Just like AMD had a bummer with their FX series of processors over a decade ago, this is an FX moment for Intel, so they just have to suck it up & do better next round.
Yes and no. I feel Bulldozer was an overall regression, while Arrow Lake is generally a good step forward for power requirements and most non-gaming use cases. It falls flat when it comes to gaming, and not everyone buys a PC to game on. So yes, it regressed, but not as badly as AMD's Bulldozer/FX series. The big problem now is that the Zen 5 chips are performing pretty much in line with Intel's Arrow Lake, so while they are good in some non-gaming use cases, it still does not make their chips attractive. If I used my system for photo/video editing and for some gaming, I would go with a Ryzen CPU that does well across the board.
 
What's to dig out there, buddy?

They plopped it into the test bench, installed all the necessary updates, and that is what it showed. And that's how it should be tested.

You think average users who buy these CPUs are going to be tinkering with various obscure settings, trying to fix Intel's mess?

That's okay, just as we piled on AMD when they had their Bulldozer meme, so we do on Intel when they are in the same position getting destroyed.
I don't see why you think there is no more information to be dug out.
How about other motherboards?
Tinkering settings is not about fixing the mess, it's about teaching people how to get the most out of their hardware.

You may have your incentive to do what you are doing, but I am really curious about what makes the difference between Intel Labs' results and reviewers'.
 
I don't see why you think there is no more information to be dug out.
How about other motherboards?
Tinkering settings is not about fixing the mess, it's about teaching people how to get the most out of their hardware.

You may have your incentive to do what you are doing, but I am really curious about what makes the difference between Intel Labs' results and reviewers'.
For sure there is, but why do you think it's something that needs to be done in an article that specifically tests the changes plug-and-play, with the recommended update and Intel's default setup, the way any average person would do it?

"Teaching the people"? What kind of nonsense is that? You want to teach people to void their warranties and potentially brick their PCs, immediately or in the long run?

If there are better settings to be had as defaults, then it's Intel's job to ship them with their update, because, if nothing else, that would not risk voiding the warranty.
 
One thing I've never understood is why only a handful of reviewers actually test the officially supported maximum memory speed. I've never been able to get more than the officially supported memory speed to be stable without random crashes. So for me, Ryzen 9000 means 5600 MT/s UDIMM memory, but for Core Ultra 200 I would consider 6400 MT/s CUDIMM memory, even if it's twice the price. The one or two reviewers I found who test this way put Arrow Lake in a slightly more favorable light.
 
I also feel like the reviews don't really paint a full picture. The 285K is tested in gaming with an RTX 4090 running 1080p games. Not many gamers have an RTX 4090 and not many RTX 4090 owners play 1080p games. Usually games are GPU-bottlenecked. While I most often use my CPU for gaming, the thing I most often want it to be faster for is AV1 encoding. In that benchmark Tom's Hardware puts the 285K in close second behind the 9950X and TechPowerUp puts the 285K in first.
 
I also feel like the reviews don't really paint a full picture. The 285K is tested in gaming with an RTX 4090 running 1080p games. Not many gamers have an RTX 4090 and not many RTX 4090 owners play 1080p games. Usually games are GPU-bottlenecked. While I most often use my CPU for gaming, the thing I most often want it to be faster for is AV1 encoding. In that benchmark Tom's Hardware puts the 285K in close second behind the 9950X and TechPowerUp puts the 285K in first.
It is done so that GPU bottleneck is removed from the equation.
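To make that concrete, here is a toy Python model (illustrative numbers only, not measurements from the article): per frame, the CPU and GPU each need some amount of time, and the slower of the two sets the frame rate. Testing at 1080p with the fastest available GPU shrinks the GPU's share so the CPU differences become visible.

```python
# Toy model: each frame takes max(cpu_time, gpu_time), so the slower
# stage is the bottleneck and determines the delivered frame rate.
def effective_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Delivered FPS when the CPU and GPU each cap the frame rate."""
    cpu_time = 1000.0 / cpu_fps   # ms the CPU needs per frame
    gpu_time = 1000.0 / gpu_fps   # ms the GPU needs per frame
    return 1000.0 / max(cpu_time, gpu_time)

# GPU-bound (e.g. 4K on a mid-range card): two very different CPUs look identical.
print(round(effective_fps(150, 90), 1))   # 90.0
print(round(effective_fps(200, 90), 1))   # 90.0
# CPU-bound (1080p with the fastest GPU): the CPU gap finally shows up.
print(round(effective_fps(150, 300), 1))  # 150.0
print(round(effective_fps(200, 300), 1))  # 200.0
```

This is exactly why a review with a 4090 at 1080p overstates the gap most buyers will see, while still being the right way to rank the CPUs themselves.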
 
For sure there is, but why do you think it's something that needs to be done in an article that specifically tests the changes plug-and-play, with the recommended update and Intel's default setup, the way any average person would do it?

"Teaching the people"? What kind of nonsense is that? You want to teach people to void their warranties and potentially brick their PCs, immediately or in the long run?

If there are better settings to be had as defaults, then it's Intel's job to ship them with their update, because, if nothing else, that would not risk voiding the warranty.
This article criticizes Intel’s marketing without fully verifying which part went wrong.

I’m puzzled by your definition of the "average person." How does Tom's Hardware become a site exclusively for the average reader? If your argument holds true, shouldn’t we avoid publishing all articles related to overclocking?

Ultimately, it’s up to consumers to decide what to buy and how to effectively use their hardware. You can also choose what you want to read.
 
This article criticizes Intel’s marketing without fully verifying which part went wrong.
Why should they not? Intel claims X, it does not reproduce under the basic standard conditions - that's Intel's issue there, not Tom's Hardware one.

It is Intel's job now to see what went wrong, just like AMD had something similar a few months ago that did not check out either, because they based their statements on an OS version with an administrative access level typically not unlocked for common users.

When statements are made that do not check out, the fault lies with the party making the statement, not with those who factcheck it.

I’m puzzled by your definition of the "average person." How does Tom's Hardware become a site exclusively for the average reader? If your argument holds true, shouldn’t we avoid publishing all articles related to overclocking?

Ultimately, it’s up to consumers to decide what to buy and how to effectively use their hardware. You can also choose what you want to read.
Because it is the least common denominator of the people visiting the site. Believe it or not, the vast majority of people reading these articles are normies who want to see what's what and where things stand as a whole.

Besides, it only makes sense to verify Intel's claims as is, without adding custom tweaks. If you want custom tweaks, there are indeed, as you mention, articles dedicated to that. This is not one of them.
 
Why should they not? Intel claims X, it does not reproduce under the basic standard conditions - that's Intel's issue there, not Tom's Hardware one.

It is Intel's job now to see what went wrong, just like AMD had something similar a few months ago that did not check out either, because they based their statements on an OS version with an administrative access level typically not unlocked for common users.

When statements are made that do not check out, the fault lies with the party making the statement, not with those who factcheck it.


Because it is the least common denominator of the people visiting the site. Believe it or not, the vast majority of people reading these articles are normies who want to see what's what and where things stand as a whole.

Besides, it only makes sense to verify Intel's claims as is, without adding custom tweaks. If you want custom tweaks, there are indeed, as you mention, articles dedicated to that. This is not one of them.
You have a preference for saying it's all Intel's responsibility. I just think that good journalism should reproduce the benchmarks using Intel Labs' settings and try to figure out what's wrong from there, not just lay blame on Intel's marketing. We can skip debating that.

I have no idea what you are talking about with "least common denominator of the people" while there is a clear market for overclocking articles.
 
You have a preference for saying it's all Intel's responsibility. I just think that good journalism should reproduce the benchmarks using Intel Labs' settings and try to figure out what's wrong from there, not just lay blame on Intel's marketing. We can skip debating that.

I have no idea what you are talking about with "least common denominator of the people" while there is a clear market for overclocking articles.
I have a preference of calling out BS.

In the very similar fashion, I call out every other marketing release that does not check out in reality.

So, this time it's Intel. Your six-post account with its weird Intel simping can handle this L until they reconcile whatever went wrong between their claims and reality, just like AMD did a few months ago.
 
I also feel like the reviews don't really paint a full picture. The 285K is tested in gaming with an RTX 4090 running 1080p games. Not many gamers have an RTX 4090 and not many RTX 4090 owners play 1080p games. Usually games are GPU-bottlenecked. While I most often use my CPU for gaming, the thing I most often want it to be faster for is AV1 encoding. In that benchmark Tom's Hardware puts the 285K in close second behind the 9950X and TechPowerUp puts the 285K in first.
This is an especially important point. I think it is fair to say that nearly all PC gamers don't have a 4090, or even a 4080, and will never front the money for one. So my question is whether the performance gap we are observing remains with, say, a 4070 Ti. This is another case where testing is great for demonstrating the ideal scenario but doesn't reveal the reality in the field. It's OK if a 4090 needs an X3D to shine, but does an X3D truly shine with more common GPUs? And if so, by how much compared to the 285? I think these are reasonable questions.
 
I have a preference of calling out BS.

In the very similar fashion, I call out every other marketing release that does not check out in reality.

So, this time it's Intel. Your six-post account with its weird Intel simping can handle this L until they reconcile whatever went wrong between their claims and reality, just like AMD did a few months ago.
You can call out the "least common denominator of the people" thing as well.

The fact that you need to attack my account instead of the argument doesn't reflect well on your claimed role.

I am not denying Intel's responsibility for the Arrow Lake benchmark; it's just that Tom's should not publish such an incomplete analysis and lay the blame specifically on the Intel marketing team.
 
So, the Arrow Lake still manages to pull some 150-ish fps on average in those tested games, which sounds like a "smooth enough" gaming experience to me.
Yes, no other CPU even remotely touches the king of gaming (the AMD 9800X3D), but even with it being roughly 25% faster, how exactly does the gaming experience differ between 150 fps (Core Ultra 285K) and 200 fps (9800X3D)? I would say there is no tangible difference in real-life gaming at all!

That's not to say that I am defending Intel here - definitely not! They deserve every bit of being called out for false marketing claims and unethical behaviours they have displayed in the past.

That said, the "gaming fps" numbers really need to be put into perspective, as literally every contemporary CPU above some 250-300 bucks provides enough performance to cope with virtually every AAA game.
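For reference, the frame-time arithmetic behind that 150 vs 200 fps comparison (using only the fps figures quoted above, no new measurements):

```python
# An fps figure is frames per second, so each frame takes 1000/fps milliseconds.
def frame_time_ms(fps: float) -> float:
    """Milliseconds spent on a single frame at the given frame rate."""
    return 1000.0 / fps

gap = frame_time_ms(150) - frame_time_ms(200)
print(f"150 fps = {frame_time_ms(150):.2f} ms/frame")  # 6.67 ms
print(f"200 fps = {frame_time_ms(200):.2f} ms/frame")  # 5.00 ms
print(f"gap per frame = {gap:.2f} ms")                 # 1.67 ms
```

A roughly 1.7 ms difference per frame is the kind of gap that only matters on high-refresh monitors, which is the point the post is making.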
 
This is an especially important point. I think it is fair to say that nearly all PC gamers don't have a 4090, or even a 4080, and will never front the money for one. So my question is whether the performance gap we are observing remains with, say, a 4070 Ti. This is another case where testing is great for demonstrating the ideal scenario but doesn't reveal the reality in the field. It's OK if a 4090 needs an X3D to shine, but does an X3D truly shine with more common GPUs? And if so, by how much compared to the 285? I think these are reasonable questions.
Yes, ironically the Intel B580 truly shines when coupled with the 9800X3D, taking advantage of the best gaming CPU on the market more than the 4060 does, for example. 😆
Now that AMD CPUs lead in gaming performance, the Intel fans are out in droves moving the goalposts; this is quite entertaining to watch.
 
You can call out the "least common denominator of the people" thing as well.

The fact that you need to attack my account instead of the argument doesn't reflect well on your claimed role.

I am not denying Intel's responsibility for the Arrow Lake benchmark; it's just that Tom's should not publish such an incomplete analysis and lay the blame specifically on the Intel marketing team.
It's a great piece actually; Paul tested the CPUs just like he always does. Your bias is showing, and it's better to stop posting than to throw wild accusations.
 
Yes, ironically the Intel B580 truly shines when coupled with the 9800X3D, taking advantage of the best gaming CPU on the market more than the 4060 does, for example. 😆
Now that AMD CPUs lead in gaming performance, the Intel fans are out in droves moving the goalposts; this is quite entertaining to watch.
The B580 tests are being called out for their idealistic but unrealistic design, as in: who owns a 9800X3D and a B580? So let's hold this article to the same standard. Can someone truly be a fan of a company, lol? I guess so, if you are gullible and like to get price-gouged.
 
The B580 tests are being called out for their idealistic but unrealistic design, as in: who owns a 9800X3D and a B580? So let's hold this article to the same standard. Can someone truly be a fan of a company, lol? I guess so, if you are gullible and like to get price-gouged.
You exhibit no understanding of benchmarking, how and why it is done, and why it is done that way. You also pivot whenever someone debunks your argument; it's safe to say you are not here to have a discussion but to insist on your POV. Regardless, this doesn't change the fact that Paul's testing was done in a fair manner. He has been doing a great job of testing CPUs for as long as I have been reading his pieces on Tom's Hardware, and his results almost always align with other independent reviewers'.
 
You exhibit no understanding of benchmarking, how and why it is done, and why it is done that way. You also pivot whenever someone debunks your argument; it's safe to say you are not here to have a discussion but to insist on your POV. Regardless, this doesn't change the fact that Paul's testing was done in a fair manner. He has been doing a great job of testing CPUs for as long as I have been reading his pieces on Tom's Hardware, and his results almost always align with other independent reviewers'.
The review setup is OK; I originally thought, and still think, that it is confusing with all of the processor comparisons. Simply put, all we need to know is whether Tom's results matched Intel's, in an accurate and concise write-up. Instead, we get flame bait meant to draw out a debate, starting with the title. Every company should be called out for misleading advertising, but that has nothing to do with another company being better or worse.
 
I mean, except for the times it has actually happened, right? Like the Windows threading improvement that made a 10% performance difference on Ryzen?

If such a CPU-only (unrelated to Windows drivers) 10% performance fix on AMD or Intel CPUs would be possible then it would also automatically translate to other operating systems such as Linux.

I haven't observed any kind of a major speed bump on my Zen5 CPU in Linux compared to the day when I first installed the CPU about half a year ago (despite BIOS updates).

Those performance issues that have been fixed over time were regular Windows performance bugs - nothing more. Many of those bugs weren't ArrowLake-specific nor Zen5-specific. Fixing those bugs benefited overall system performance on machines with CPUs different from ArrowLake/Zen5.

It is foremost Microsoft's responsibility to fix Windows performance issues and security issues - it is much less AMD/Intel's responsibility. The fact that Intel "had to" step in to fix Windows threading issues is an indication of Microsoft's incompetence.

Secondly, benchmarking websites such as Tom's Hardware, TechSpot, and others measure only very basic metrics (such as FPS and watts) and are thus incapable of providing any sort of deeper explanation of what is behind the performance changes observed after Windows OS updates. If you do want a solid explanation, you should be reading scientific articles (if such articles exist) or performing a deeper analysis with your own hands...
 
So, the Arrow Lake still manages to pull some 150-ish fps on average in those tested games, which sounds like a "smooth enough" gaming experience to me.
Yes, no other CPU even remotely touches the king of gaming (the AMD 9800X3D), but even with it being roughly 25% faster, how exactly does the gaming experience differ between 150 fps (Core Ultra 285K) and 200 fps (9800X3D)? I would say there is no tangible difference in real-life gaming at all!

It isn't about 150-200 FPS being sufficient or insufficient.

For Intel to cover development and manufacturing costs, they have to persuade gamers, developers, and others to throw older AMD/Intel CPUs in the garbage bin and pay Intel for new CPUs. If the new CPUs don't generally perform better, then upgrading an older CPU to a new one contradicts economic rationality.