News Snapdragon X Elite beats AMD and Intel flagship mobile CPUs in Geekbench 6 – Qualcomm's new laptop chip leads in single- and multi-core tests

Odd that the headline is 'Arm closes the gap' and talks about single-core speed while ignoring that the Apple M3's single-core speed leaves all of these in the dust... and Apple already demonstrated an ARM chip that has 'closed the gap'.

This is kind of old news now. The only interesting part is that we've possibly finally got another manufacturer of competitive ARM chips, rather than the implication that Qualcomm is the first.
 
I must be missing something but I don't understand how these chips can run Windows. Does Qualcomm have an x86 license?
Windows 11 has been running natively on ARM64 for years. A lot of applications ship native Windows ARM64 binaries, including e.g. Chrome and Visual Studio, and those that don't will likely follow soon.
 
These chips are an exciting first serious foray into Windows on Arm since Apple only cares about their walled garden. I wish Qualcomm would take the market seriously, but I'd be shocked if they were going with anything other than premium pricing. It'll also be interesting to see what MediaTek brings to the market since the deal MS had with Qualcomm is over.
After 5 years of reading about the whole Nuvia saga, I think I can wait until it reaches the hands of independent reviewers and undergoes a full array of tests. I think we all know Geekbench can be misleading, especially when comparing different CPU ISAs. At least the OS should be the same.
I don't think there's any question as to the performance these chips are capable of. The primary question is what the performance looks like at a given power draw. We certainly need retail units so we can see what the real world looks like.
BTW, what's the state of the ARM litigation?
Ironically, it doesn't matter, because Arm wasn't able to get an injunction. The trial date isn't until September, so Qualcomm will be onto future generations by the time it ends.
Speaking of litigation, I'll bet Apple is going to hit Qualcomm with a bunch of patent infringement claims, as soon as they get one of these machines into their labs and start analyzing it. There's no way the Nuvia team didn't reuse any of the stuff they patented while at Apple.
I'll bet Apple doesn't do a single thing when it comes out. The Nuvia team would have been extraordinarily stupid to reuse anything Apple had patented. Especially after the contentious way the founders left and the lawsuit that followed even though Apple gave up on it.
 
2) The code is transpiled (or binary translated) from x86 to AArch64 on the fly via a technology originally developed by Transitive, which was also used for the Power to x86 transition at the Fruity Cult.
Instead of conjecture, how about citing what they actually use?

@Flayed, support for doing this is built into Windows 11:

I haven't spent much time looking (Windows isn't my OS of choice, so it's not a natural area of curiosity for me), but I turned up this article which outlines some of the techniques and specific implementations they use:

32-bit x86

32-bit x86 compatibility has been present since the initial release of Windows on Arm64, providing a baseline level of backwards compatibility for the ecosystem as a whole.

CHPE (Compiled Hybrid Portable Executable) binaries are present to reduce emulation overhead, by having the OS libraries be native instead of having to go through the XtaJIT (x86-to-ARM64 JIT) dynamic binary translator. CHPE is a private ABI not exposed to the public, and it has been changed in the past. As such, third parties couldn't ship their own CHPE code in a supported way.

XtaJIT's historical roots are in the Virtual PC for PowerPC Mac code base, which originated at Connectix. Virtual PC for Mac was abandoned in the Mac platform's switch to Intel processors, with that JIT compiler living on in the Xbox 360 for compatibility with 1st-generation Xbox titles.

64-bit x86

The importance of 32-bit x86 decreased with time. 64-bit application compatibility is a requirement to reach mass market adoption.

Starting from Windows 11, backwards compatibility for x86_64 applications is provided through XtaJIT64 (an x86_64-to-ARM64 JIT). ARM64EC (ARM64 Emulation Compatible), an ABI compatible with x86_64 code, was developed; it allows mixing ARM64EC and x86_64 binary code in the same process. Unlike CHPE, this ABI is available to the public, with development tools shipping as part of Visual Studio.

Source: https://chipsandcheese.com/2022/03/13/state-of-windows-on-arm64-a-high-level-perspective/
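If you want to check which side of that translator a given process is on, Windows has a documented API for it: IsWow64Process2 reports both the process's architecture and the host's. A minimal sketch in C (the file name and build line are my own guesses, not from the article):

```c
/* Hedged sketch: detect x86/x64-on-ARM64 emulation from inside a process
 * via IsWow64Process2 (available since Windows 10 1709).
 * Build guess: cl /W4 detect.c */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    USHORT processMachine = IMAGE_FILE_MACHINE_UNKNOWN;
    USHORT nativeMachine  = IMAGE_FILE_MACHINE_UNKNOWN;

    if (!IsWow64Process2(GetCurrentProcess(), &processMachine, &nativeMachine)) {
        fprintf(stderr, "IsWow64Process2 failed: %lu\n", GetLastError());
        return 1;
    }

    /* processMachine stays IMAGE_FILE_MACHINE_UNKNOWN for native processes;
     * otherwise it names the emulated (guest) architecture. */
    if (nativeMachine == IMAGE_FILE_MACHINE_ARM64 &&
        processMachine != IMAGE_FILE_MACHINE_UNKNOWN)
        printf("Running under emulation on an ARM64 host.\n");
    else
        printf("Running natively (host machine 0x%04x).\n", nativeMachine);
    return 0;
}
```

An x86_64 build of this run on a Snapdragon X machine should take the emulation branch, while a native ARM64 or ARM64EC build should not.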
 
Any company with this level of perf per watt will obviously go after the most profitable segment eventually - servers
That's what Nuvia was founded to do. The founders said they left Apple, because Apple wasn't interested in the server market. So, they went off and started their own company to build server CPUs. Unfortunately for them, Qualcomm made an offer they couldn't refuse, but had mobile cores as their top priority.

No doubt Q is aiming at that in a year or two
All they've said is that they haven't ruled it out. Right now, phone SoCs are their core market, with laptops being the place they're trying to expand to next.

BTW, Qualcomm briefly had a server CPU, Centriq, which was rumored to have been a victim of the Broadcom hostile takeover attempt.

So, there's reason both to believe they're interested in this market and to doubt their commitment to it. I think a lot depends on how they do in the laptop market. If that goes well, then perhaps they'll look to expand. If not, they could switch strategies and refocus on the server market, but it's probably more likely they kill off yet another group of custom CPU core designers... for at least the 3rd time in company history?
 
Synthetic ≠ real world, but it's exciting nevertheless. Who would have thought they'd pop up in desktops/laptops like this?

The last one I could see potentially upsetting the market would be Nvidia, but they seem content with GPUs.

Intel and AMD finally getting some competition, which hopefully means more choice for us and cheaper prices.
 
"Nothing lasts forever" an Intel's executive words to a music executive when digital piracy was at its worst... looks like those words are coming back to haunt intel, the two big ones needs to invest in fighting off Snapdragon & possibly MediaTek... already lost to Apple (even if Apple is stagnating now, they've proved that they can be beat)
 
There's nothing special about the Apple chips. They trade power usage for die area.
This is a lie people spread mostly out of willful ignorance, at this point. It's debunked easily enough, but most people in this camp find it more comfortable to believe than the possibility that their status as PC Master Race is being threatened by a company whose products and users they've looked down upon for so long. The same people would often write me off as an Apple fanboy (which I'm not), as a way of avoiding having to come to terms with an unpleasant truth.

The technical prowess and innovations of Apple's cores are well-documented, backed up by actual patents Apple filed and was granted, as well as by extensive independent analysis.

Here's a good place to start:

When you're ready for the deep dive, here's the latest version of the big M1 explainer (volume 1):

He's published 5 volumes, so far, each covering different aspects.

It's relevant to Qualcomm's new CPU because much of the core design team responsible for Apple's CPU cores went to Nuvia (which Qualcomm bought).

The main problem with the Snapdragon X family is that anything that isn't compiled specifically for Windows on ARM will go through an expensive emulation layer.
Most people spend most of their time running a tiny number of programs. Productivity apps, web browsers, and video streaming, mainly. These are all natively-compiled.
 
AMD is already shipping samples to OEMs for prototype builds. This is going to be good competition (but features will be very important!), but they won't be faster than Zen 5 and probably not faster than Intel's 15th gen either.
IMO, the key test of how much potential the new Oryon cores have is really how they compare against x86 cores on the same process node. To that end, Qualcomm's first generation containing them is made on TSMC N4. So, that makes the Ryzen 8000 APUs and Meteor Lake the proper points of comparison.

Yes, they will eventually face competition from Intel and AMD, both on better nodes, but at least they're launching against CPUs with node-parity.

BTW, I think it should be close to 6 months before AMD and Intel launch anything new in the laptop segment. For most of this year, AMD's low/mid-range laptop solution will be Zen 4/4C-based and made on TSMC N4, while Intel's will be Meteor Lake, made on Intel 4.
 
I'll bet Apple doesn't do a single thing when it comes out. The Nuvia team would have been extraordinarily stupid to reuse anything Apple had patented. Especially after the contentious way the founders left and the lawsuit that followed even though Apple gave up on it.
Nah, these guys exhibited quite a degree of hubris. I wonder if they hadn't lost touch with reality, toiling away in the bowels of Apple for so long.

Furthermore, I don't believe they can build such a competitive CPU core without borrowing any of the tricks they patented while at Apple. Maybe they tried to shy away from some of the most easily detectable ones, but I'm not even sure about that.
 
There's nothing special about the Apple chips. They trade power usage for die area. Easy to do at Apple margins. The main problem with the Snapdragon X family is that anything that isn't compiled specifically for Windows on ARM will go through an expensive emulation layer.
Just how expensive that binary translation or transpiler layer is will be interesting to test and see.

First, keep in mind that x86 CPUs stopped executing x86 code directly a long time ago. Instead, they've been translating it on the fly into a proprietary, lower-level, RISC-ier ISA for many generations. Transmeta's Crusoe even made that layer user-replaceable.

When Transitive sold the SPARC-to-x86 version of QuickTransit, they argued that no SPARC CPU was faster than their emulation on x86. That may have caught Apple's attention: Apple licensed the technology for the Power-to-x86 transition and called it Rosetta, after the stone that enabled the breakthrough in reading Egyptian script.

And then there is WASM, which is trying to turn into a generic ISA for general purpose code.

We've seen some rather disastrous failures in this domain, e.g. x86 emulation on Itanium. And we've seen hardly any slowdown in busy loops after their initial translation, even in games, where the real brunt of the work is often borne by accelerators anyway.

While I agree Apple can't do magic, I'm pained to admit that their chips are perhaps somewhat special in their ruthless dedication to that specific Apple niche of getting maximum power out of thin-and-light designs, which makes no sense in many other form factors such as workstations, servers, or even gaming rigs. And I wouldn't mind using Apple silicon at sane prices without iNdenture to their ecosystem.

Just as I wouldn't mind using anyone else's silicon, unless it were just about replacing iNdenture with mAnumission to the big Fruity Cult emulator and ClosedAI sponsor Macrohard.

Microsoft would argue that their store is essential to avoid people accidentally buying or downloading the wrong (x86) binary for their systems, but they are really just after taxing the PC ecosystem like Apple: ¡no pasarán!
 
Instead of conjecture, how about citing what they actually use?

@Flayed, support for doing this is built into Windows 11:

Surely because conjecture is lazy and cheap.

I guess I am hoping that they use something like QuickTransit, because that would pretty much guarantee the lowest possible execution overhead. Anything less just seems unprofessional to me.

But given IBM's stance and weight on IP, Microsoft might have actually chosen one of the many lesser approaches to avoid paying for a license.

How much they emulate, how well they binary-translate the app itself, how big the translated code blocks are, etc., only testing might reveal, unless they fess up more details.

With my RP5 running Windows 11, I should be able to tell myself, if only I find the time to test...
 
Just how expensive that binary translation or transpiler layer is will be interesting to test and see.
Well, it's been out for more than a year. You can find some benchmarks, I'm sure. Probably the biggest limitation has been the selection of machines available, for those who want to kick the tires.

First, keep in mind that x86 CPUs stopped executing x86 code directly a long time ago. Instead, they've been translating it on the fly into a proprietary, lower-level, RISC-ier ISA for many generations. Transmeta's Crusoe even made that layer user-replaceable.
That's somewhat irrelevant, because you have no ability to feed these CPUs anything other than x86.

There's some inherent inefficiency when translating ISAs, because the instructions don't map one-to-one, and each ISA has idioms it executes efficiently that aren't so efficient after translation. Probably some of the more substantial examples are found among the vector instructions, like AVX/AVX2.
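To make that concrete, here's a hedged sketch (the function name is mine) of the same eight-float add written with AVX2 intrinsics and with NEON intrinsics. NEON registers top out at 128 bits, so a translator hitting the 256-bit form has to emit at least two 128-bit operations plus extra loads and stores; this shows the shape of the problem, not how XtaJIT64 actually lowers the instruction:

```c
#if defined(__AVX2__)
#include <immintrin.h>
/* x86: one 256-bit load per input, one 256-bit add, one store. */
void add8(const float *a, const float *b, float *out) {
    __m256 va = _mm256_loadu_ps(a);
    __m256 vb = _mm256_loadu_ps(b);
    _mm256_storeu_ps(out, _mm256_add_ps(va, vb));
}
#elif defined(__aarch64__)
#include <arm_neon.h>
/* ARM64: the same work takes two 128-bit adds (and twice the loads/stores),
 * because NEON has no 256-bit registers. */
void add8(const float *a, const float *b, float *out) {
    vst1q_f32(out,     vaddq_f32(vld1q_f32(a),     vld1q_f32(b)));
    vst1q_f32(out + 4, vaddq_f32(vld1q_f32(a + 4), vld1q_f32(b + 4)));
}
#endif
```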

And then there is WASM, which is trying to turn into a generic ISA for general purpose code.
My take is that it's more like another attempt at what Java Bytecode was trying to do. Its goal is just to look and work enough like the native ISA that it will translate (i.e. JIT-compile) directly enough to be efficient. However, if you know what ISA some WASM is going to be executed on, I'm sure you can generate WASM code that will run more efficiently on it.
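For what it's worth, you can already treat wasm32 as just another compiler target, which makes the "generic ISA" framing tangible. A hedged sketch, assuming a recent clang/wasm-ld (the flags are typical but may vary by version):

```c
/* Portable scalar kernel; nothing here is ISA-specific.
 * Hypothetical build to an ISA-neutral module:
 *   clang --target=wasm32 -O2 -nostdlib -Wl,--no-entry \
 *         -Wl,--export=dot dot.c -o dot.wasm
 * The host runtime then JIT-compiles dot.wasm to x86, ARM64, etc.,
 * much as a JVM does with Java bytecode. */
float dot(const float *a, const float *b, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}
```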

We've seen some rather disastrous failures in this domain, e.g. x86 emulation on Itanium.
That's about as good an example of an impedance mismatch as any. It's like trying to use a railroad locomotive to emulate an ATV.
 
What is up with these easily demonstrable lies? "AMD and Intel flagship mobile CPUs", then they show the 13900H and 8945HS.

13900H is from Q1'23, 14900HX is significantly faster and from Q1'24.

7945HX3D is from 07/27/2023, and afaik the fastest X3D mobile CPU released.
  • Snapdragon X Elite: 2,427 + 14,254
  • 14900HX: 2,809 + 16,513
  • 7945HX3D: 2,816 + 16,466
https://browser.geekbench.com/processors/intel-core-i9-14900hx
https://browser.geekbench.com/processors/amd-ryzen-9-7945hx3d

So, *loses. Snapdragon X Elite .. loses.
 
BTW, there is a lineup of CPUs, not 1 or 2 SKUs.
There is supposed to be an 80W model, but IDK which one it is. I also have no idea how to decipher Qualcomm product numbers.

Snapdragon X Elite
X1E84100
X1E80100
X1E78100 <- The Lenovo is using this one, presumably it is 23W
X1E76100

Snapdragon X Plus
X1P64100
X1P62100
X1P56100
X1P40100
 
I am not sure Geekbench is a good gauge to begin with. A lot of benchmarks just magnify certain benefits/features and give an inflated result, so actual performance often ends up merely fine. I have no doubt that ARM-based SoCs can do very well at low power, because Apple has proven that point. So I expect no less from Qualcomm here, since it's a bunch of seasoned folks working on this X Elite chip. However, I still feel that Windows will be the party pooper.
Yep, Geekbench 6, just like 5, is more memory-intensive than compute-intensive. My 12900K capped at 35 W scores this:

https://browser.geekbench.com/v6/cpu/5615978

With an average power draw of 15.8 watts. That's just because im running fast ddr5 memory.