Cool! I will wait for real-world tests of actual production software before believing a word of it.
> This is kind of old news now. The only interesting part is that we've possibly finally got another manufacturer of competitive ARM chips, rather than the implication that Qualcomm is the first.

First on Windows, an important distinction unless Apple decides to reinvigorate Boot Camp.
> Without software support, all this means nothing.

There's plenty of ARM software support out there. Microsoft's own .NET framework has supported producing both x86 and ARM binaries for almost three years now, and many other development languages have had support for even longer.
> I must be missing something, but I don't understand how these chips can run Windows. Does Qualcomm have an x86 license?

Windows 11 has been running natively on ARM64 for years. A lot of applications ship native Windows ARM64 binaries, including Chrome and Visual Studio, and those that don't will soon start.
> After 5 years of reading about the whole Nuvia saga, I think I can wait until it reaches the hands of independent reviewers and undergoes a full array of tests. I think we all know Geekbench can be misleading, especially when comparing different CPU ISAs. At least the OS should be the same.

I don't think there's any question about the performance these chips are capable of. The primary question is what the performance looks like at a specific power consumption. We certainly need retail units so we can see how they behave in the real world.
> BTW, what's the state of the Arm litigation?

Ironically, it doesn't matter, because Arm wasn't able to get an injunction. The trial date isn't until September, so they'll be onto future generations by the time it ends.
> Speaking of litigation, I'll bet Apple is going to hit Qualcomm with a bunch of patent-infringement claims as soon as they get one of these machines into their labs and start analyzing it. There's no way the Nuvia team didn't reuse any of the stuff they patented while at Apple.

I'll bet Apple doesn't do a single thing when it comes out. The Nuvia team would have been extraordinarily stupid to reuse anything Apple had patented, especially after the contentious way the founders left and the lawsuit that followed, even though Apple eventually gave up on it.
> 2) The code is transpiled (or binary-translated) from x86 to AArch64 on the fly via a technology originally developed by Transitive, which was also used for the PowerPC-to-x86 transition at the Fruity Cult.

Instead of conjecture, how about citing what they actually use?
> I'd like to see a thorough review with various benchmarks and battery-life comparisons for gaming, video watching, and office work.

Then wait until NotebookCheck posts their review of it. Here's their review of a Qualcomm-based laptop using a prior-generation CPU:
> Any company with this level of perf per watt will obviously go after the most profitable segment eventually: servers.

That's what Nuvia was founded to do. The founders said they left Apple because Apple wasn't interested in the server market, so they went off and started their own company to build server CPUs. Unfortunately for them, Qualcomm made an offer they couldn't refuse, but had mobile cores as its top priority.
> No doubt Qualcomm is aiming at that in a year or two.

All they've said is that they haven't ruled it out. Right now, phone SoCs are their core market, with laptops being the segment they're trying to expand into next.
> There's nothing special about the Apple chips. They trade power usage for die area.

This is a lie people spread mostly out of willful ignorance, at this point. It's easily debunked, but most people in this camp find it more comfortable to believe than the possibility that their status as PC Master Race is being threatened by a company whose products and users they've looked down upon for so long. The same people often write me off as an Apple fanboy (which I'm not) as a way of avoiding having to come to terms with an unpleasant truth.
> The main problem with the Snapdragon X family is that anything that isn't compiled specifically for Windows on ARM will go through an expensive emulation layer.

Most people spend most of their time running a tiny number of programs: productivity apps, web browsers, and video streaming, mainly. Those are all natively compiled.
> AMD is already shipping samples to OEMs for prototype builds. This is going to be good competition (but features will be very important!), but they won't be faster than Zen 5, and probably not faster than Intel's 15th gen either.

IMO, the key test of how much potential the new Oryon cores have is how they compare against x86 cores on the same process node. Qualcomm's first generation containing them is made on TSMC N4, which makes the Ryzen 8000 APUs and Meteor Lake the proper points of comparison.
> I'll bet Apple doesn't do a single thing when it comes out. The Nuvia team would have been extraordinarily stupid to reuse anything Apple had patented, especially after the contentious way the founders left and the lawsuit that followed, even though Apple gave up on it.

Nah, these guys exhibited quite a degree of hubris. I wonder whether they lost touch with reality, toiling away in the bowels of Apple for so long.
> There's nothing special about the Apple chips. They trade power usage for die area. That's easy to do at Apple margins. The main problem with the Snapdragon X family is that anything that isn't compiled specifically for Windows on ARM will go through an expensive emulation layer.

Just how expensive that binary-translation or transpiler layer is will be interesting to test and see.
> Instead of conjecture, how about citing what they actually use?

Surely because conjecture is lazy and cheap.
@Flayed, support for doing this is built into Windows 11:
> Just how expensive that binary-translation or transpiler layer is will be interesting to test and see.

Well, it's been out for more than a year, so you can find some benchmarks, I'm sure. Probably the biggest limitation has been the selection of machines available to those who want to kick the tires.
> First, keep in mind that x86 CPUs stopped executing x86 code directly a long time ago. Instead, they've been translating it on the fly into a proprietary, lower-level, RISC-ier internal ISA for many generations. Transmeta's Crusoe even made that layer user-replaceable.

That's somewhat irrelevant, because you have no ability to feed these CPUs anything other than x86.
> And then there is WASM, which is trying to turn into a generic ISA for general-purpose code.

My take is that it's more like another attempt at what Java bytecode was trying to do. Its goal is just to look and work enough like the native ISA that it will translate (i.e. JIT-compile) directly enough to be efficient. However, if you know which ISA some WASM is going to be executed on, I'm sure you can generate WASM code that will run more efficiently on it.
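The "portable bytecode, translated per host" idea the posts describe can be sketched with a toy stack machine. This is an illustrative analogy only, not real WASM or JVM semantics: the program is ISA-neutral, and each host supplies its own interpreter or translator.

```python
# Toy portable bytecode: a list of (opcode, args...) tuples.
# A real runtime would JIT-translate these ops to native code;
# this interpreter just shows the ISA-neutral contract.
def run(program, stack=None):
    stack = list(stack or [])
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown op {op!r}")
    return stack

# (2 + 3) * 4, expressed once, runnable on any host with a translator
prog = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]
```

The efficiency question in the post above is then how directly each bytecode op maps onto the host ISA's instructions.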
> We've seen some rather disastrous failures in this domain, e.g. x86 emulation on Itanium.

That's about as good an example of an impedance mismatch as any. It's like trying to use a railroad locomotive to emulate an ATV.
> I'm not sure Geekbench is a good gauge to begin with. A lot of benchmarks just magnify certain benefits/features and give an inflated result, and then actual performance is merely fine. I have no doubt that under low-power conditions an ARM-based SoC can do very well, because Apple has proven that point, so I expect no less from Qualcomm when it's a bunch of seasoned folks working on this X Elite chip. However, I still feel that Windows will be the party pooper.

Yep, Geekbench 6, just like 5, is more memory-intensive than compute-intensive. My 12900K capped at 35 W scores this:
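The memory-bound vs. compute-bound distinction drawn above can be sketched with two toy loops. This is only a shape-of-workload illustration, not a real benchmark, and Python's interpreter overhead mutes the hardware effect considerably:

```python
import random
import time

def compute_bound(n: int) -> float:
    """Pure ALU work on one register-resident value."""
    t0 = time.perf_counter()
    acc = 0
    for i in range(n):
        acc = (acc * 31 + i) & 0xFFFFFFFF
    return time.perf_counter() - t0

def memory_bound(n: int) -> float:
    """Dependent random loads: a pointer chase through a shuffled
    permutation, which defeats prefetching and stresses cache/DRAM
    latency rather than arithmetic."""
    idx = list(range(n))
    random.shuffle(idx)
    t0 = time.perf_counter()
    p = 0
    for _ in range(n):
        p = idx[p]
    return time.perf_counter() - t0
```

On a benchmark weighted toward the second shape, cache size and memory latency matter more than peak ALU throughput, which is the complaint about Geekbench in the post above.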