News Intel Arrow Lake CPUs benchmarked on Z890 motherboards — Core Ultra 7 265KF up to 4% faster than Core i9-14900K in Geekbench 6

Good reporting, but can Aaron and all TH writers please stop using the tiresome phrase “grain of salt”? Forever and everywhere.
They can not and will not stop.

Assuming any of this is accurate, an ARL part that topped out at ~5.5 GHz beating a RPL part that tops out at 6 GHz in single-threaded doesn't seem too bad. I think it's fair to assume that whatever multithreaded uplift there is will be muted by the loss of HT for desktop parts.
The uplift for Skymont E-cores looks so large, it could offset the loss of HT when combined with a modest uplift for the P-cores.

Due to the salt grain of this probably being an engineering sample, the numbers can continue to go up.
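Quick back-of-the-envelope on that, assuming the leaked numbers are even roughly right (the ~4% Geekbench 6 single-thread lead and a ~5.5 GHz boost clock vs the 14900K's 6.0 GHz; all of it is leak-grade data, so treat the result accordingly):

```python
# Back-of-the-envelope implied per-clock gain, assuming the leaked figures hold.
arl_clock_ghz = 5.5   # rumored 265KF boost clock (engineering sample, unconfirmed)
rpl_clock_ghz = 6.0   # Core i9-14900K max boost clock
st_speedup = 1.04     # ~4% Geekbench 6 single-thread lead from the leak

# Single-thread performance ~ IPC x clock, so the implied per-clock uplift is:
ipc_gain = st_speedup * (rpl_clock_ghz / arl_clock_ghz) - 1
print(f"Implied per-clock uplift: {ipc_gain:.1%}")   # -> roughly 13.5%
```

A low-double-digit per-clock gain would line up with the "modest uplift for the P-cores" mentioned above.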
 
  • Like
Reactions: KyaraM and rtoaht
It's Geekbench, who cares? It has never really reflected real-world performance and is heavily... HEAVILY... ARM-favored. It just isn't a great way to tell anything about how this will actually perform anymore.

Just a thought, if you need to end an article with
"However, these results need to be taken with a grain of salt since these CPUs are possibly engineering or qualification samples that might not have the same clock speeds as their official production-ready counterpart"

Maybe you should consider not reporting on it, or at least leading with that caveat rather than putting it at the very end of the article. I'll wait until I see real-world performance numbers; I wish reporters would as well.
 
  • Like
Reactions: rtoaht and NinoPino
Looks promising. I have come to expect each new generation's i7 to beat the prior i9; it has been that way, I believe, ever since the i9 SKU landed on the scene (save for 11th-gen Rocket Lake). This is solid news.

Still not a fan of the new naming scheme, though; "Ultra 7" and more letters = bad.
"i" for Intel made sense. Marketers and their superlatives... 😡
 
  • Like
Reactions: KyaraM and rtoaht
Assuming any of this is accurate, an ARL part that topped out at ~5.5 GHz beating a RPL part that tops out at 6 GHz in single-threaded doesn't seem too bad. I think it's fair to assume that whatever multithreaded uplift there is will be muted by the loss of HT for desktop parts.
All while allegedly using 100 watts less power than Raptor Lake. All those previous rumors of Arrow Lake being 40% better than Raptor Lake were also a salt-lake-sized level of BS. Ah, but exotic LN2 cooling can make anything seem probable 🤪. Same goes for the Zen 5 % shyte show!
https://www.tomshardware.com/pc-com...ected-by-latest-raptor-lake-microcode-updates
 
All while allegedly using 100 watts less power than Raptor Lake. All those previous rumors of Arrow Lake being 40% better than Raptor Lake were also a salt-lake-sized level of BS. Ah, but exotic LN2 cooling can make anything seem probable 🤪. Same goes for the Zen 5 % shyte show!
https://www.tomshardware.com/pc-com...ected-by-latest-raptor-lake-microcode-updates
IIRC Raptor Lake is also rated as a 125W part... real-world power consumption will have to wait and see. More importantly, this time around reviewers will need to review the voltage and power behaviour as well, so as not to... well, you know the reason.
 
We have to wait for solid, repeatable power-consumption data. The goal with Arrow Lake was to show that Intel process tech could deliver a low-power chip. They also want to prove that they can deliver a chip with power/performance that's comparable to ARM/RISC-V. Intel supposedly has a long-term goal of winning Apple's PC business back as a foundry. They're also desperate to win back the commodity server-farm business from proprietary AWS and Google ARM chips.
Have you noticed that even though server farms and datacenters are growing at very rapid rates, the combined AMD/Intel datacenter revenue isn't up much? ARM/RISC-V implementations are the big worry for both companies. Datacenters don't just face huge costs for power; it has become a flat-out constraint in many areas. Lower power use is key, and it's what has been limiting both AMD and Intel in the datacenter.
If a Sierra Forest / Clearwater Forest (Xeon 6) part uses about the same power as your proprietary chip, why take on the costs and risks of a semiconductor design and manufacturing program?
Apple is looking at paying to rebuild 1/3 to 1/2 (or more) of its China-based supply chain in India, Vietnam, Indonesia, etc. Capital investment costs are going to have to be looked at more carefully for a number of years. Intel would be happy to help them with that....
Power consumption / performance is going to be the key.
 
  • Like
Reactions: KyaraM and Mattzun
The performance of these CPUs is NOT going to matter for most gamers
If you are gaming with a 7700X or a 13700K, you already max out a 4090 at 4K/Ultra and come close to maxing one out at 1440p/Ultra.

I'm pretty sure that gamers with a 4090 playing e-sports at 1080p low aren't even a consideration for either Arrow Lake or the 9000 series. If I'm being generous, that might be 0.1 percent of the CPU market.

AMD's mistake was in claiming that it was a great gaming chip, not in designing a chiplet that was great for the data center, Linux workstations, laptops, and OEM desktops.
 
  • Like
Reactions: baboma
>The performance of these CPUs is NOT going to matter for most gamers

True...but nobody wants to hear that. It's not exciting. It's like your mom saying junk food is bad for you. It's not until 30 years and 300 pounds later that you start thinking, yeah, maybe mom had a point.

>If you are gaming with a 7700X or a 13700K, you already max out a 4090 at 4K/Ultra and come close to maxing one out at 1440p/Ultra.

Yep. Gaming benchmarks--taken as gospel nowadays--test CPUs in a bubble, removing the GPU bottleneck from consideration by using a 4090 @ 1080p. In reality, the majority of peeps have a $300 GPU, some smaller proportion have a $500 one, and only a tiny percentage can afford a $1K one, let alone a 4090. And those with a "good" GPU would NOT be running at 1080p. So, 99% of these fine folks will almost always be bottlenecked by the GPU first, and whatever "gaming CPU" they waste their money on won't make one iota of difference.
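To put that in toy-model terms (the numbers below are made up purely for illustration, not benchmark results): delivered frame rate is roughly capped by whichever of the CPU or GPU is slower, so a CPU gap only shows up when the GPU has headroom to spare.

```python
# Toy model: delivered FPS is capped by whichever side is slower.
def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Frame rate each side could sustain on its own -> the lower one wins."""
    return min(cpu_fps, gpu_fps)

# Hypothetical numbers: a "gaming CPU" that's 10% faster in CPU-bound tests...
fast_cpu, regular_cpu = 220, 200      # fps each CPU could feed to a GPU
midrange_gpu = 90                     # fps a ~$300 GPU can render at the chosen settings
halo_gpu = 300                        # the 4090 @ 1080p review scenario

print(delivered_fps(fast_cpu, midrange_gpu))     # 90
print(delivered_fps(regular_cpu, midrange_gpu))  # 90 -> the 10% CPU gap vanishes
print(delivered_fps(fast_cpu, halo_gpu))         # 220
print(delivered_fps(regular_cpu, halo_gpu))      # 200 -> the gap only shows up here
```

Swap in whatever real numbers you like; the shape of the argument doesn't change.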

The whole notion of "gaming CPU" is a marketing construct, like the notion of gift buying for Xmas. Once it's done long enough, it becomes a given, and nobody questions it much. Or at all.


>AMD's mistake was in claiming that it was a great gaming chip, not in designing a chiplet that was great for the data center, Linux workstations, laptops, and OEM desktops.

People tend to think they are the center of gravity, that things revolve around their wants and needs. I doubt many grasp how small a niche the DIY desktop PC market is, relative to other markets that compete for the same resource.

Ryzen 9K's launch may be a PR black eye within that small niche (gaming PCs), but as long as the other non-gaming uses hold up, I doubt AMD would lose much sleep over it.
 
  • Like
Reactions: KyaraM
The performance of these CPUs is NOT going to matter for most gamers
If you are gaming with a 7700X or a 13700K, you already max out a 4090 at 4K/Ultra and come close to maxing one out at 1440p/Ultra.
That's completely game-dependent, and a faster CPU will absolutely net you a better frame-rate floor. Adding in things like ray tracing can increase the CPU load despite the heavy GPU usage (like Spider-Man Remastered at 4K, where the 7800X3D is ~4.7% faster than a 7700X without RT but ~13.6% faster with it). So while, generally speaking, video cards matter more for performance, the CPU can still add a decent uplift as well.
 
They can not and will not stop.


The uplift for Skymont E-cores looks so large, it could offset the loss of HT when combined with a modest uplift for the P-cores.

Due to the salt grain of this probably being an engineering sample, the numbers can continue to go up.
The uplift in E-core performance is mostly irrelevant for workloads such as gaming. It is only useful for very heavily threaded workflows. And as an end user, if there is no meaningful performance improvement, there is little reason to upgrade. I do think the power consumption will be lower, but again, it may not be a meaningful upgrade. The only saving grace for Intel is that most people won't want to buy Raptor Lake due to the performance loss from the many microcode updates and potential issues. So if anyone is sticking with Intel, this looks like the better choice.
 
>So while, generally speaking, video cards matter more for performance, the CPU can still add a decent uplift as well.

Perhaps a HW site should test that assertion--not with a GPU reserved for the 1%, but with one that most people can afford. I'd like to see just how much these CPUs differ, including the so-called "gaming CPUs," when paired with a 4060 running at 1080p. Wouldn't you?

Then again, the 3060 12GB is still Amazon's best-selling GPU, so maybe that's the better choice.

Perhaps @jarredwalton can take this up as one of his next GPU-compare projects.
He already did exactly that: https://www.tomshardware.com/pc-components/gpus/cpu-vs-gpu-upgrade-benchmarks-testing

The heavier the GPU load, the less important the CPU, but it still controls the lows, and if you're aiming for higher frame rates it becomes extremely important.
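For anyone wondering what "the lows" actually are in those charts: they're usually derived from the slowest frames in a run. A rough sketch of one common way a 1% low gets computed (the frame-time capture here is made up for illustration):

```python
# Rough sketch of how a "1% low" FPS figure is commonly derived from frame times.
def one_percent_low_fps(frame_times_ms):
    """Average FPS over the slowest 1% of frames (one common convention)."""
    worst = sorted(frame_times_ms, reverse=True)     # slowest frames first
    count = max(1, len(worst) // 100)                # the worst 1% of all frames
    avg_worst_ms = sum(worst[:count]) / count
    return 1000.0 / avg_worst_ms

# Made-up capture: mostly ~7 ms frames with a handful of ~25 ms CPU-side hitches.
frames = [7.0] * 990 + [25.0] * 10
avg_fps = 1000.0 / (sum(frames) / len(frames))
print(round(avg_fps))                      # ~139 average FPS, looks great
print(round(one_percent_low_fps(frames)))  # ~40 for the 1% low; the hitches dominate
```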
 
That article definitely needs more views.
I found it fascinating and amazingly useful for us 99-percenters, but it didn't get a lot of traffic.
I link it every chance I get for exactly that reason. CPU is very important for higher frame rate gaming and maintaining lows, but there are definitely diminishing returns as you go up in resolution.
 
>He already did exactly that: https://www.tomshardware.com/pc-components/gpus/cpu-vs-gpu-upgrade-benchmarks-testing

Well, no, he didn't.

The above tested top-tier CPUs & motherboards from different gens and paired them with second-tier (xx80) GPUs from different gens.

That's relevant only to upgraders (with those CPUs/GPUs), not to people building new systems and trying to determine whether "gaming CPUs" are all that when paired with a "regular" GPU most people would buy, not a top-tier anything.
Okay... so you can extrapolate rough equivalent performance out of all of those parts. They don't exist in a vacuum; there are modern parts with comparable performance, and in the comments I even cited the most obvious one. So no, it's not only relevant to upgraders; you just have to be willing to do a small amount of research to find equivalent-performing parts. That's the reason this article is so valuable: it gives you CPU/GPU scaling information across various performance levels and resolutions.
Is this becoming an argument? Are you trying to defend the status quo, that CPU+4090 testing is the only benchmark that matters? If so, let me bow out here. I don't care about disproving the status quo. I just want to see if the emperor has any clothes on.
Top generational tier graphics is the only way to minimize the graphics bottleneck while doing CPU testing. That's the reason every respectable outlet tests CPUs that way. It's also the reason they use top SKU CPUs for graphics testing to minimize the CPU bottleneck. This type of testing isn't meant to tell you what you need to buy, but rather what part has the most performance potential and by how much.
 
  • Like
Reactions: -Fran-
>Top generational tier graphics is the only way to minimize the graphics bottleneck while doing CPU testing.

That doesn't reflect the reality that--FOR GAMING--a graphics bottleneck will always exist. For the GPUs most people buy (read: not a 4090), the GPU bottleneck will obviate most differences between CPUs. Testing without that bottleneck means it's not a real-world test.

CPU X may be 10% better than CPU Y without GPU bottlenecking. But in the real world, where the GPU is the bottleneck, the difference may well be 0%, or 1 or 2%. That's an important distinction when budgeting which CPU to buy.
So you're defining "real world" by your own arbitrary view here, and for some reason you don't get that. Also, no, a graphics bottleneck won't always exist, because some people prioritize minimum frame rates or maximum frame rates and change settings accordingly. Games also aren't all the same, and even when the GPU is maxed out the CPU can have an impact on performance (see the 1440p D4 DXR results in the article I linked).
>That's the reason every respectable outlet tests CPUs that way.

Yes, the usual "experts know better than you" argument.
If you can't grasp why it's important to test CPUs while minimizing everything else that can impact performance, there's no hope here. Reviews are a snapshot of a moment in time that you can reference in the future, and if they don't do their best to isolate CPU performance, they'll be worthless down the road.
>Okay... so you can extrapolate rough equivalent performance out of all of those parts.

Why should I as a reader need to extrapolate how a 4060 compares against a 2080/3080/4080? Or figure out where a 7600/7700 would fit against an 8700K? Why not just test a 4060 against current CPUs?
If you're in the market for a $300 video card, why would you even be considering CPUs that cost more than the card if you're into gaming?

It's fine that you're incapable of seeing the value in something that isn't tailor-made to what you want, but for anyone who's willing to see how it may apply to their own situation, it's valuable.
 
The uplift in E-core performance is mostly irrelevant for workloads such as gaming. It is only useful for very heavily threaded workflows. And as an end user, if there is no meaningful performance improvement, there is little reason to upgrade. I do think the power consumption will be lower, but again, it may not be a meaningful upgrade. The only saving grace for Intel is that most people won't want to buy Raptor Lake due to the performance loss from the many microcode updates and potential issues. So if anyone is sticking with Intel, this looks like the better choice.
It's relevant to what I replied to: the claim that any multi-threading gain would be wiped out by the lack of hyperthreading.

If you already have a modern CPU (less than 5 years old), then yeah, you probably don't need an upgrade.

Where we would see a vast improvement is an E-core-only lineup like Alder Lake-N. However, the rumor mill has suggested that we will see an Intel N250 based on the same Gracemont cores, so it could be another 2 years before we see a Skymont stunner.
 
  • Like
Reactions: thestryker