News Multiple Arm vendors are making chips for desktop PCs — Arm exec says Qualcomm Snapdragon won't be the only game in town

DavidMV

Commendable
Nov 18, 2021
27
50
1,610
This is good...

Not because the x86-64 ISA is bad... actually it is pretty darn good for high performance computing. Maybe it is slightly worse for power efficiency for mobile use because of the extra decoding, but that really isn't a huge deal.

This is good because AMD and Intel have a duopoly on x86-64 and no other companies are even allowed to make chips. Two companies are not enough for adequate competition. ARM64 will finally bring real competition back to Windows CPUs. I bet they take a good share of the laptop market in 5 years.
 
  • Like
Reactions: usertests
Mar 17, 2024
12
23
15
What is needed is a very good x64/x86 emulator on ARM. It is my understanding that Microsoft hasn't provided that yet (unlike Apple, which did its homework for the M1). Without that, I think adoption will be sluggish.
 
  • Like
Reactions: slightnitpick
This is good because AMD and Intel have a duopoly on x86-64 and no other companies are even allowed to make chips.
That's an old wives' tale... anybody is free to make an x86 CPU; the problem is that every piece of IP needed to make a USEFUL x86 CPU is protected and owned by either Intel or AMD. Without all the MMX, SSE, AVX and all of that stuff a CPU will be utter crap, and all of that belongs to the big players.
Two companies are not enough for adequate competition. ARM64 will finally bring real competition back to Windows CPUs. I bet they take a good share of the laptop market in 5 years.
Have you looked around you lately?! The market is being flooded by mini PCs and handhelds with x86 CPUs, a domain firmly in the hands of ARM up until a few years ago. There is no danger for x86, and ARM is doing very well too; there is no war between them.
 

slightnitpick

Upstanding
Nov 2, 2023
164
101
260
This is good because AMD and Intel have a duopoly on x86-64 and no other companies are even allowed to make chips. Two companies are not enough for adequate competition. ARM64 will finally bring real competition back to Windows CPUs. I bet they take a good share of the laptop market in 5 years.
VIA/Zhaoxin. Though as you imply, the problem is not a duopoly on a particular architecture, but a heretofore duopoly on competitive PC CPUs in general. Soon the issue will just be inertia or specialized needs.
 
  • Like
Reactions: bit_user

bit_user

Polypheme
Ambassador
This is good...

Not because the x86-64 ISA is bad... actually it is pretty darn good for high performance computing. Maybe it is slightly worse for power efficiency for mobile use because of the extra decoding, but that really isn't a huge deal.
Yes it's bad! If it weren't, Intel wouldn't be going to the trouble of undertaking the most massive overhaul to x86 since it went 64-bit.

This is good because AMD and Intel have a duopoly on x86-64 and no other companies are even allowed to make chips.
Well, Zhaoxin...
 

bit_user

Polypheme
Ambassador
What is needed is a very good x64/x86 emulator on ARM. It is my understanding that Microsoft didn't provide that yet
What MS dragged their feet on was 64-bit support. Win 10 could emulate 32-bit x86, but they only added support for 64-bit x86 binaries in Win 11. I think compatibility still isn't quite as good as Apple's, but we should know more when the launch reviews for the new Qualcomm laptops happen.
 

NeoMorpheus

Reputable
Jun 8, 2021
218
246
4,960
It's funny that I was obsessed with Intel StrongARM even before knowing the inner details of the architecture or the dirty and illegal Intel practices against AMD.

That said, it looks like both AMD and Intel will have to refresh their ARM licenses or pull off a miracle with x64.

Personally, I think they can pull off an x64 revolution by “abandoning” backwards compatibility, hence removing unnecessary legacy silicon from their current designs.
 
  • Like
Reactions: slightnitpick
Yes it's bad! If it weren't, Intel wouldn't be going to the trouble of undertaking the most massive overhaul to x86 since it went 64-bit.
"Extension" is even part of the link. An extension is not an overhaul; it's "we keep everything the same... oh, but we also added this now."

ARM adds new extensions with every version, and the bad thing about ARM is that they take out old extensions, leaving old software in the dust.
You paid thousands of dollars for a software suite?! Well, sucks for you, we moved on to Android xx that only runs on ARM CPU xx, so now you are screwed...
(It will be the same for Windows on ARM.)
Of course that plays into the hands of the big software companies that went on to turn all of their software into cloud-based subscriptions. Hail profit.
 

bit_user

Polypheme
Ambassador
Extension is even part of the link, an extension is not a overhaul, it's a we keep everything the same...oh, but also we added this now.

ARM adds new extensions with every version
That's a false comparison, if you actually look at what APX entails. Extensions are nearly always in the form of new instructions, but what APX does is to modify the way all of the existing instructions work.

I maintain what I said: it's the biggest revision to x86 since 64-bit.

what's the bad thing about ARM is that they take out old expansions leaving old software in the dust.
ARM makes 32-bit backward compatibility optional. For a lot of platforms using ARM today, there is very little 32-bit code out there, so removing hardware backward compatibility for it isn't a big deal.
 
  • Like
Reactions: slightnitpick
That's a false comparison, if you actually look at what APX entails. Extensions are nearly always in the form of new instructions, but what APX does is to modify the way all of the existing instructions work.

I maintain what I said: it's the biggest revision to x86 since 64-bit.
It keeps all of the old instructions just the way they are now; it just provides more registers for them, allowing modern compilers to work better and make the instructions work better, but the instructions are still all the same ones we've had since forever.
Any code made for APX will still run on any old CPU, just slower and less efficiently.
It's an extension, hence why they call it that. Nothing is being overhauled here.
Today, we are introducing the next major step in the evolution of Intel® architecture. Intel® Advanced Performance Extensions (Intel® APX) expands the entire x86 instruction set with access to more registers and adds various new features that improve general-purpose performance. The extensions are designed to provide efficient performance gains across a variety of workloads – without significantly increasing silicon area or power consumption of the core.

Intel® APX doubles the number of general-purpose registers (GPRs) from 16 to 32. This allows the compiler to keep more values in registers; as a result, APX-compiled code contains 10% fewer loads and more than 20% fewer stores than the same code compiled for an Intel® 64 baseline. Register accesses are not only faster, but they also consume significantly less dynamic power than complex load and store operations.
 

bit_user

Polypheme
Ambassador
It keeps all of the old instructions just the way they are now, they just provide more registers for them allowing modern compilers to work better and make the instructions work better, but the instructions are still all the same ones we have since forever.
You could say the same thing about x86-64, but in fact APX doesn't only increase the number of registers. It also turns every instruction into 3-operand instructions! And on top of all that, it adds new stack and predication instructions which, unlike many prior ISA extensions Intel added, affect mainstream workloads, including scalar integer as much or more than vector floating point.

Any code made for APX will still run on any old CPU just slower and less efficient.
No, this is dead wrong! A legacy core cannot execute an APX instruction stream!

It's an extension, hence why they call it that. Nothing is being overhauled here.
Call it what you want, but it's a clear admission that ARM and others have passed them by. APX is an attempt to make up for most of that gap, but there are things you can't fix about x86 without breaking it completely.
 
  • Like
Reactions: slightnitpick

slightnitpick

Upstanding
Nov 2, 2023
164
101
260
Personally, I think they can pull a x64 revolution by “abandoning” backwards compatibility, hence removing unnecessary legacy silicon in their current designs.
Best use case for binary translators. The actual processor design companies would want to make the absolute best translators that their detailed understanding of the inner workings of their processors and instructions sets allow them to. These would theoretically be better than any translator a third party such as Microsoft could come up with on their own. As well as being available for multiple operating systems.
 
You could say the same thing about x86-64, but in fact APX doesn't only increase the number of registers. It also turns every instruction into 3-operand instructions! And on top of all that, it adds new stack and predication instructions which, unlike many prior ISA extensions Intel added, affect mainstream workloads, including scalar integer as much or more than vector floating point.


No, this is dead wrong! A legacy core cannot execute an APX instruction stream!
Do you even read the things you post about or are you 100% driven by make believe?
You need to use new prefixes; if a dev wants to use legacy instructions with the new method, they will be able to by adding them.
This still passes through the compiler, and if the compiler is set to compile for an old CPU it will not include the optional things, so the same code, not the same compiled binary (although it might be, depending on how Intel handles that), will still work on older CPUs.

While every instruction can be made to run better by adding the prefixes to the code, the instructions themselves are not changed; the dev will have to use them differently (adding prefixes), which keeps all instructions intact.
Compiler enabling is straightforward – a new REX2 prefix provides uniform access to the new registers across the legacy integer instruction set. Intel® AVX instructions gain access via new bits defined in the existing EVEX prefix. In addition, legacy integer instructions now can also use EVEX to encode a dedicated destination register operand – turning them into three-operand instructions and reducing the need for extra register move instructions.

Call it what you want, but it's a clear admission that ARM and others have passed them by. APX is an attempt to make up for most of that gap, but there are things you can't fix about x86 without breaking it completely.
So Intel staying stagnant would be a clear admission of them being well ahead... don't you see how crazy your argument is?!
All CPU makers, ARM included, constantly search for and adopt things to make their chips faster and more efficient. Is ARM admitting defeat because they add new things?
 

slightnitpick

Upstanding
Nov 2, 2023
164
101
260
Do you even read the things you post about or are you 100% driven by make believe?
I think you're being ungenerous here. Everyone knows that any program written in a high-level language can be compiler-optimized to any sort of general instruction set with the right compiler. Bit_user was obviously referring to low-level code such as the compiled binary.

And sure, depending on the changes it would be possible to compile code with decision trees allowing it to run with full optimizations under multiple instruction sets. But programmers are unlikely to do that. It would be easier to just compile separate binaries and have an install script identify the instruction set. And even easier to not bother compiling separate binaries at all, and just let those on older processors miss out.
 

bit_user

Polypheme
Ambassador
Do you even read the things you post about or are you 100% driven by make believe?
Yes, I do read what I post about. And, by now, you know very well that I choose my words carefully.

The claim I was responding to was:

Any code made for APX will still run on any old CPU just slower and less efficient.

See where you said "any code"? That's a very strong statement and it's different than what you're arguing now.

You need to use new prefixes, if a dev wants to use legacy instructions using the new method they will be able to by adding them.
Yeah, but no. I've used explicit prefix bytes in assembly code to override 32-bit vs. 16-bit. This is not like that. It's not just a fixed opcode byte that you add, but rather one that you'd have to custom compute for each instruction, according to which registers it uses.

This still passes through the compiler and if the compiler is set to compile for an old CPU it will not include the optional things
Um, no. You can't take assembly code written for using 32 General Purpose Registers and 3-operand instructions, simply exclude the REX2 prefix byte and have it work properly with 16 GPRs + 2-operand instructions.

This also doesn't apply to any higher-level languages. In that case, you're 100% dependent on the compiler to generate code for the selected target. On the plus side, you're also not manually computing prefix bytes.

So intel staying stagnant is a clear admission of them being well ahead...don't you see how crazy your argument is?!
Crazy like a fox!
🦊

All CPU makers ARM included constantly search and adopt things to make them faster and more efficient, is ARM admitting defeat because they add new things?
ARM made the move to 32 GPRs back in 2011, when they announced AArch64 (the 64-bit version of their ISA). They already had 3-operand instructions. They haven't made any wholesale changes to their instruction set since then.

As I said, it's one thing to add new instructions, but another thing to retrofit an existing ISA with wholesale new capabilities. In the case of APX, it comes at the expense of instruction stream density. AArch64 instructions are all a nice 4 bytes in length. In the case of x86-64, I'm reading the maximum instruction length was 15 bytes, which the new prefix bytes presumably increases to 16 bytes.

That affects everything from how much room instructions burn in the cache hierarchy and how much memory bandwidth is chewed up fetching instructions, to how big and power-hungry the instruction decoders get and how far ahead they have to search for the next instruction. It's not nothing.
 
Last edited:
  • Like
Reactions: ikjadoon

Amdlova

Distinguished
This is so nice... Microsoft needs to set some rules...
Below this power level, it will not work on Windows.
Otherwise we'll have tons of crap machines everywhere :(
Time to find another profession.
Like the Atom days, with 2GB of RAM and 16GB of eMMC crap.
 
For ARM to make it in a Wintel/AMD world, it will need the big software houses to port over the business packages, and machines will need to be low cost but with the same performance. And once that happens, why only for Windows? It's no different to porting for Linux, maybe even easier, with no artificial hardware lockouts like we have with W11.
 

slightnitpick

Upstanding
Nov 2, 2023
164
101
260
For ARM to make it in a Wintel/AMD world, it will need the big software houses to port over the business packages, and machines will need to be low cost but with the same performance. And once that happens, why only for Windows? It's no different to porting for Linux, maybe even easier, with no artificial hardware lockouts like we have with W11.
What business packages are you thinking about for Linux, that already exist on Linux?

The Windows 11 hardware requirements shouldn't affect any third-party compiled code, should it? I thought it was just an OS requirement. And since Windows presumably keeps the same libraries for both x86 and ARM there should be effectively no impediments to recompiling for an ARM architecture.

The issue, of course, is that a company that doesn't already target Linux isn't going to suddenly want to target Linux. Compiling for another operating system is a completely separate matter than just compiling for another architecture.

Cloud-based business packages should work fine on any platform. I've got an instance of cloud-based MS Teams and Outlook going in a browser window on Linux right now.
 
  • Like
Reactions: bit_user
What business packages are you thinking about for Linux, that already exist on Linux?

The Windows 11 hardware requirements shouldn't affect any third-party compiled code, should it? I thought it was just an OS requirement. And since Windows presumably keeps the same libraries for both x86 and ARM there should be effectively no impediments to recompiling for an ARM architecture.

The issue, of course, is that a company that doesn't already target Linux isn't going to suddenly want to target Linux. Compiling for another operating system is a completely separate matter than just compiling for another architecture.

Cloud-based business packages should work fine on any platform. I've got an instance of cloud-based MS Teams and Outlook going in a browser window on Linux right now.
Off the top of my head, call center apps all seem to be Windows-reliant. Now that's not to say they couldn't be ported to Linux, but I doubt you'd get a publisher to invest in doing that.
 
  • Like
Reactions: slightnitpick