News Qualcomm goes where Apple won't, readies official Linux support for Snapdragon X Elite

ThomasKinsley

Prominent
Oct 4, 2023
300
297
560
I'm seeing a pattern jumping out here.

- SemiAccurate alleges Qualcomm chip runs atrociously on Windows. Qualcomm blames MS for poor early sample performance.
- User mod sees Windows 11 ported to Nintendo Switch. Can't run basic programs.
- Qualcomm officially supporting Linux.

It certainly appears that Qualcomm is hedging their bets by having a backup solution in case Windows doesn't work, but there is another issue here that I'm wondering about. I'm sure other members will be able to express this more accurately than I am able, but one of the unique problems on ARM/Mobile development (Android is the best example) is that it needs a custom ROM per unique hardware. There is no universal OS that fits all. Windows has previously gained its users by being easy to install on a wide variety of PCs regardless of the parts chosen. Is that no longer the case on ARM? Are we seeing that universality come to an end and does the OS need to be optimized per device?
 

JamesJones44

Reputable
Jan 22, 2021
754
687
5,760
Windows has previously gained its users by being easy to install on a wide variety of PCs regardless of the parts chosen. Is that no longer the case on ARM? Are we seeing that universality come to an end and does the OS need to be optimized per device?
In my opinion this is very much a Windows problem. Many Linux distros run well on both x86-64 and 32/64-bit ARM (as also seen in the Switch "hack"), and macOS ran on both x86-64 and ARM without issue during its transition. I think the Wintel mindset runs deep within Microsoft, and for some reason they either can't or aren't actually interested in seeing Windows on ARM succeed. Back in the Windows RT days these things were difficult, but today supporting ARM and x86-64 in a single product is fairly straightforward.
 
  • Like
Reactions: ThomasKinsley

HaninTH

Proper
Oct 3, 2023
111
73
160
I'm seeing a pattern jumping out here.

- SemiAccurate alleges Qualcomm chip runs atrociously on Windows. Qualcomm blames MS for poor early sample performance.
- User mod sees Windows 11 ported to Nintendo Switch. Can't run basic programs.
- Qualcomm officially supporting Linux.

It certainly appears that Qualcomm is hedging their bets by having a backup solution in case Windows doesn't work, but there is another issue here that I'm wondering about. I'm sure other members will be able to express this more accurately than I am able, but one of the unique problems on ARM/Mobile development (Android is the best example) is that it needs a custom ROM per unique hardware. There is no universal OS that fits all. Windows has previously gained its users by being easy to install on a wide variety of PCs regardless of the parts chosen. Is that no longer the case on ARM? Are we seeing that universality come to an end and does the OS need to be optimized per device?
Possibly, but since most reasonable PC builds today have far more storage than typical mobile devices, packing several "common hardware setup" ROMs into a master distribution shouldn't be an issue.

I know this can be a problem for some purists who prefer a minimalist approach to distributions, but until/unless they can get some universal components working, this might have to be the approach in the near term.

For me, much like my hatred of the CLI when we have vast resources to build comprehensive GUIs for every function, having a bigger distro to deal with is not much of a concern; even a 1 TB distro won't make a dent in my data storage systems, and games are pretty close to that range already.
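In practice this is roughly how generic ARM Linux images already work: one shared kernel and rootfs plus a small device tree blob (DTB) per supported board, with the bootloader picking the right one, so the per-board cost is kilobytes rather than a whole OS image. A minimal Python sketch of that selection step, with board names and file paths invented for illustration:

```python
# Sketch: one "fat" distribution image serving many boards.  The kernel and
# rootfs are shared; only a small device tree blob (DTB) differs per board,
# and the bootloader picks the one matching the hardware's "compatible"
# string.  Board names and paths below are invented for illustration.

DTB_FOR_BOARD = {
    "raspberrypi,5-model-b": "dtbs/bcm2712-rpi-5-b.dtb",
    "rockchip,rk3588-orangepi-5": "dtbs/rk3588-orangepi-5.dtb",
    "qcom,x1e80100-crd": "dtbs/x1e80100-crd.dtb",
}

def select_dtb(compatible: str) -> str:
    """Return the DTB this image ships for the detected board."""
    try:
        return DTB_FOR_BOARD[compatible]
    except KeyError:
        raise RuntimeError(f"image has no device tree for {compatible!r}")
```

Each DTB is typically a few dozen kilobytes, which is why shipping hundreds of them in one "master distribution" costs almost nothing in storage.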
 
  • Like
Reactions: ThomasKinsley

cknobman

Distinguished
May 2, 2006
1,164
312
19,660
This is why Snapdragon will be getting my money.
I'm done with Windows; it's ad-ridden spyware trash now.
Apple wants to be a dictator and makes products for people who don't ask questions and do what they're told.

Linux is the platform I will be moving forward with.
I only have a Windows laptop because my company forces me to do my job on it.

I haven't bought a new laptop, and have been waiting on purpose, because Intel and x86 in general are shit.
The only mobile platform I've bought is a Steam Deck, which runs on Linux.
I love that thing, but I'm looking forward to some of that ARM battery-life goodness.
 
  • Like
Reactions: bit_user

ezst036

Honorable
Oct 5, 2018
653
561
12,420
I'm seeing a pattern jumping out here.

- SemiAccurate alleges Qualcomm chip runs atrociously on Windows. Qualcomm blames MS for poor early sample performance.
- User mod sees Windows 11 ported to Nintendo Switch. Can't run basic programs.
- Qualcomm officially supporting Linux.

It certainly appears that Qualcomm is hedging their bets by having a backup solution in case Windows doesn't work, but there is another issue here that I'm wondering about. I'm sure other members will be able to express this more accurately than I am able, but one of the unique problems on ARM/Mobile development (Android is the best example) is that it needs a custom ROM per unique hardware. There is no universal OS that fits all. Windows has previously gained its users by being easy to install on a wide variety of PCs regardless of the parts chosen. Is that no longer the case on ARM? Are we seeing that universality come to an end and does the OS need to be optimized per device?

In my opinion this is very much a Windows problem. Many Linux distros run well on both x86-64 and 32/64-bit ARM (as also seen in the Switch "hack"), and macOS ran on both x86-64 and ARM without issue during its transition. I think the Wintel mindset runs deep within Microsoft, and for some reason they either can't or aren't actually interested in seeing Windows on ARM succeed. Back in the Windows RT days these things were difficult, but today supporting ARM and x86-64 in a single product is fairly straightforward.

To my understanding, you're actually both correct.

ARM isn't like x86 where there is much more commonality across CPUs and software can have a "give" to it and keep working. To that extent, no, AFAIK there can't be a "universal OS" in the ARM world. Each CPU requires CPU-specific bits to even begin.

And while this certainly is a Windows problem, in another sense it really isn't. What I mean is, only Microsoft has the Windows source code, so Qualcomm cannot really "contribute" the way Linux users understand contributing. Meaning: every ARM CPU that Windows will ever support will only be supported once it gains Microsoft's blessing. Qualcomm has to wait for Microsoft. With Linux, Qualcomm can just go in and get it done, and then, guess what, it's done. Microsoft is a gatekeeper; Linux has no equivalent gatekeeper.

ARM fragmentation will prove to be a big challenge to Windows, but Linux is designed quite well to handle it.

Hopefully someone more knowledgeable than me can clear this up.
 
  • Like
Reactions: ThomasKinsley

JamesJones44

Reputable
Jan 22, 2021
754
687
5,760
ARM isn't like x86 where there is much more commonality across CPUs and software can have a "give" to it and keep working. To that extent, no, AFAIK there can't be a "universal OS" in the ARM world. Each CPU requires CPU-specific bits to even begin.
The ISA for a given ARM version is common to all ARM-based processors. Almost all code for one ARM CPU can be shared and executed on another ARM-based processor that implements the same version of the ISA. Now, just like with Intel and AMD, there can be differences in performance handling based on the CPU architecture (Intel's hybrid E-cores, for example) which can require special handling, but this isn't common.

On the server side there are several different ARM processors that all work out of the box with Alpine Linux or Ubuntu (and other distros); there is no special handling for an Amazon Graviton vs. an Ampere Altra on Google Cloud/Azure, for example. Even among the various IoT/DIY boards, there are many different ARM-based CPUs, and Alpine or Ubuntu can still be installed with no problem CPU-wise. Drivers for the various I/O expansions are where trouble starts, not the CPUs.

This is the whole point of having a common ISA. The fragmentation in ARM is a perception; it's not a real issue.

Also, we are talking about supporting exactly one vendor for Windows at the moment: Qualcomm. Even in previous attempts, MS was only supporting one vendor.

I can't buy that it's not a Windows problem when you have a common ISA and several other OSes that support just about any ARM CPU ever made, and Windows can't support a single vendor.
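The "one build per ISA, not per vendor" point can be made concrete: a distro ships one binary per machine architecture as the kernel reports it, and every vendor's CPU of that architecture gets the same package. A small illustrative sketch (package names are invented):

```python
import platform

# One build per ISA, not per vendor: on Linux the kernel reports the same
# machine string ("aarch64") on Graviton, Ampere Altra and Snapdragon
# systems alike, so one package serves them all.  Package names below are
# invented for illustration.
BUILD_FOR_ARCH = {
    "x86_64": "myapp-2.1-linux-x86_64.tar.gz",    # Intel and AMD
    "aarch64": "myapp-2.1-linux-aarch64.tar.gz",  # every ARMv8-A vendor
}

def pick_build(machine: str = "") -> str:
    """Choose the distribution artifact for this (or a given) machine."""
    machine = machine or platform.machine()
    try:
        return BUILD_FOR_ARCH[machine]
    except KeyError:
        raise RuntimeError(f"no build for ISA {machine!r}")
```

This is why Alpine or Ubuntu need no Graviton-vs-Altra special case: the package index is keyed by ISA string, and the vendor never appears in it.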
 
The ISA for a given ARM version is common to all ARM-based processors. Almost all code for one ARM CPU can be shared and executed on another ARM-based processor that implements the same version of the ISA. Now, just like with Intel and AMD, there can be differences in performance handling based on the CPU architecture (Intel's hybrid E-cores, for example) which can require special handling, but this isn't common.
So it is in the x86 world. However, contrary to when Linux started, when there were 5-6 makers of x86 processors (Intel, AMD, Cyrix, TI, etc.) and half a dozen generations (386, 486, 586, PPro and some intermediaries), you now have three of them (Intel, AMD, Via) with few changes over the last decade or so.
You can still build a 486 binary and it will run on all of them, but you will lose a truckload of performance along the way. ARM is more fragmented, and each version is also less likely to work with the bunch of other hardware connected to it.
On the server side there are several different ARM processors that all work out of the box with Alpine Linux or Ubuntu (and other distros); there is no special handling for an Amazon Graviton vs. an Ampere Altra on Google Cloud/Azure, for example. Even among the various IoT/DIY boards, there are many different ARM-based CPUs, and Alpine or Ubuntu can still be installed with no problem CPU-wise. Drivers for the various I/O expansions are where trouble starts, not the CPUs.
As said above, you can build a one-size-fits-all binary, and it will run, but you lose performance. Also, the main problem with ARM is the firmware used and the extra hardware plugged into the SoC.
This is the whole point of having a common ISA. The fragmentation in ARM is a perception; it's not a real issue.
It is one, but it's neither new nor specific to ARM.
Also, we are talking about supporting exactly one vendor for Windows at the moment: Qualcomm. Even in previous attempts, MS was only supporting one vendor.
So they did with Alpha, and with x86-64, a.k.a. AMD64... but the latter ended up dominant.
I can't buy that it's not a Windows problem when you have a common ISA and several other OSes that support just about any ARM CPU ever made, and Windows can't support a single vendor.
Because "other OSes" is pretty much restricted to Linux (including Android), BSD and Minix, and all of these are open source.
As a matter of fact, the main problem isn't the CPU but build-specific driver blobs for things like controllers (audio, I/O, graphics, etc.). As soon as an open-source driver comes out for a piece of hardware, a bunch of homebrew OS versions spring up. On Windows: nope, nope, nope.
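One common middle ground for the "486 binary loses a truckload of performance" tradeoff mentioned above is runtime dispatch: ship one baseline-compatible binary that also carries faster code paths and picks one at startup based on detected CPU features. A toy Python sketch of the selection logic (feature names are invented; real libraries key off flags like AVX2 or NEON):

```python
# Toy illustration of runtime dispatch: one shipped binary carries both a
# baseline code path and a faster path that needs a newer ISA extension,
# and picks between them at startup.  The feature name "simd128" is
# invented for illustration.

def dot_baseline(a, b):
    """Runs on any CPU of the ISA family (the '486 binary' case)."""
    return sum(x * y for x, y in zip(a, b))

def dot_fast(a, b):
    """Stand-in for a vectorized kernel that needs a newer extension;
    same result, but faster in real life."""
    return sum(x * y for x, y in zip(a, b))

def select_dot(cpu_features: set):
    """Pick the best implementation the running CPU supports."""
    return dot_fast if "simd128" in cpu_features else dot_baseline
```

The one-size-fits-all binary still runs everywhere; it just stops leaving performance on the table where the hardware allows more.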
 
May 15, 2024
1
1
10
This is good news, but I'm surprised that the biggest issue with 'Linux on Qualcomm ARM laptops' is not mentioned: the firmware (on the Lenovo X13s, the only such laptop available so far) does not allow Linux to boot in EL2, so you can't run a hypervisor, and thus VMs, properly. The Qualcomm firmware boots the OS in EL1, which is fine for running a normal OS, but a hypervisor needs extra privilege; proper firmware boots in EL2 so the OS can decide. For Windows they implemented a proprietary security mechanism that lets hypervisor code talk back to the firmware to get the necessary privilege, but that's not available for Linux (or any other OS) to use.
So yes, Linux works great on the X13s as long as you don't need a VM. Note that the X13s uses the Snapdragon 8cx Gen 3, not the newer X Elite (X1E-84-100).
That's not a matter of Linux support in the kernel; it's a matter of the UEFI/firmware/'BIOS' implementation, and some fool at Qualcomm only cared about making Windows work. Lenovo and Linaro have not been able to get this fixed yet. I sincerely hope they (Qualcomm) are not going to put the same dumbness into newer laptops using the X Elite. You can have all the kernel support you like, but if it can't be used because of stupid firmware decisions, it doesn't help you much.
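The practical symptom of the EL1 boot described above is easy to check from userspace: when the kernel was handed hypervisor privilege at boot, it exposes /dev/kvm; when the firmware booted it in EL1, that node never appears no matter how good the kernel's support is. A minimal sketch of that check:

```python
import os

def kvm_available(node: str = "/dev/kvm") -> bool:
    """True if the kernel exposes KVM.  On arm64 this requires that the
    firmware handed the kernel hypervisor privilege (EL2) at boot; on a
    machine booted in EL1, like the X13s under its current firmware, the
    node never appears even though the kernel itself supports KVM."""
    return os.path.exists(node)
```

The same check is what tools like libvirt effectively perform before offering hardware-accelerated VMs.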
 
  • Like
Reactions: bit_user

syadnom

Distinguished
Oct 16, 2010
23
13
18,515
Why does it always seem like someone has to lash out at Apple for not supporting an operating system they have nothing to do with? "Oh, Apple won't contribute to Linux running on Apple Silicon!" It ruins the credibility of the article. Apple makes Apple products; they aren't into Linux. They don't run Linux. Nothing about Apple should suggest that they would do anything to make Linux run on Apple Silicon.

In other news, Apple isn't helping Microsoft port Windows to Apple hardware either. Obviously.

Windows on ARM is trash. There are very few machines you can even buy, and those are barely passable. Almost certainly the best machine is the Lenovo X13s, and it's pretty meh: partially from poor software support, but also it's just not fast, and if you do anything intensive you spend too much time hitting the CPU, which soaks up the battery, making it basically no better than an Intel model.

Don't get me wrong, I'm really looking forward to decent-performance ARM laptops. I would take Apple M1-level performance here. I run a MacBook Pro M2 Max 16 as my daily driver, primarily because of battery and CPU performance. I have no specific love for macOS (though I don't know how I'd live without iMessage with a keyboard...). I need to be untethered and not pack a power supply everywhere, and Apple Silicon with the M1 and now the M2 has been the only chip/system that has provided that.


I hope they get Linux and Wayland to be first-class software on the X Elite series, that it reaches M1/M2-level performance, AND that they put it in a nice box.
 

bit_user

Titan
Ambassador
It certainly appears that Qualcomm is hedging their bets by having a backup solution in case Windows doesn't work,
No. Very few people and organizations who want Windows as their first choice would accept Linux as a substitute.

I think their Linux support is mainly about technical users, many of whom prefer Linux. It's pretty shocking that enough people in the community wanted a first-class ARM laptop that they undertook all the trouble of getting Asahi as far as they have. Qualcomm is probably thinking it'd be easy to win over many of its backers and proponents.
 
  • Like
Reactions: JamesJones44

bit_user

Titan
Ambassador
Apple makes Apple products; they aren't into Linux. They don't run Linux. Nothing about Apple should suggest that they would do anything to make Linux run on Apple Silicon.
I think they actually do support Linux VMs inside of macOS, so you're not 100% correct. However, as far as the hardware natively supporting Linux goes, the main thing Apple did was allow unsigned OS images to boot. They didn't have to leave that door open, but they did.

In other news, Apple isn't helping Microsoft port Windows to Apple hardware either. Obviously.
Again, they support Windows VMs under macOS. That requires a little bit of cooperation, I'm pretty sure.
 

JamesJones44

Reputable
Jan 22, 2021
754
687
5,760
I think they actually do support Linux VMs inside of macOS, so you're not 100% correct. However, as far as the hardware natively supporting Linux goes, the main thing Apple did was allow unsigned OS images to boot. They didn't have to leave that door open, but they did.
100% this. Apple could have completely blocked Linux and Windows if they wanted to, but they haven't. I don't think anyone really expects Apple to provide full support for getting Linux or Windows to run on a Mac. I'm sure some want them to, but I don't think that's the expectation for most users.
 

abufrejoval

Reputable
Jun 19, 2020
480
323
5,060
My take so far: yes, I'll start putting pennies into the piggy bank to get one, but no, I won't buy unless I'm sure it will run the Linux of my choice properly (with all the power savings and device controls).

And since that doesn't rhyme with vendors constantly milking consumers on top of supposedly selling a product (which implies a full transfer of property, including sovereignty and management), count me very sceptical about this expectation being satisfied fully.

There are things akin to the Pluton chippery to worry about, and evidently virtualization (including nesting and per-machine encryption) might not come included.

And then there is the long-term support issue that tends to plague Qualcomm hardware in the Android domain: what's the use of Linux if you're tied to a kernel long out of support because of proprietary blobs and internal hardware interfaces?

x86 wasn't designed with proper, forward-looking hardware abstractions in mind like, say, the 360/370 architecture, now z/Arch. But because it was multi-vendor from the start, it achieved a quite incredible set of abstractions over time, which are what made it great (and complex).

These nouveau ARMs aren't in it for the long-term vision, just to grab as big a share as possible of the thinner, longer, faster market, so open standards, interoperability and multi-vendor support are more on the don'ts list than the to-do list.

Consumers and regulators will need to push them towards the proper choices.
 

abufrejoval

Reputable
Jun 19, 2020
480
323
5,060
In my opinion this is very much a Windows problem. Many Linux distros run well on both x86_x64 and ARM 32/64 (also seen in the Switch "hack"). macOS actually still runs on both x86_x64 without issue. I think the WinTel runs deep within Microsoft and for some reason they either can't or are not actually interested in seeing it succeed. Back in the RT days these things were difficult, but today supporting ARM and x86_x64 in a single product is fairly strait forward.
Over the last few months I've taken the plunge and got myself both a Raspberry Pi 5 8GB and an Orange Pi 5+ 32GB to play with ARM vs. x86. They are pretty similar in cost and performance to the Atom-based NUC-alikes I've also used for low-cost HCI clusters on oVirt/RHV and Proxmox.

And ThomasKinsley is quite correct in his estimation that hardware diversification is causing issues, and I'd say those issues are getting far bigger with the way the industry is moving.

We are used to thinking of ARM vs. x86 vs. RISC-V mostly as an ISA (instruction set architecture) issue. But that has almost become the least of our worries these days, now that people have stopped coding in assembler and even avoid issues like byte order. With open source, all that's needed is a recompile... or let's just say most of that work has been done already.

But nobody sells CPUs any more; everybody sells SoCs, where the parts that run the ISA may actually hold only a minority share of the surface area. And the relative importance of those non-CPU IP blocks is only getting bigger: GPUs, DPUs, VPUs and NPUs are pushed to overcome the scaling limits of general-purpose single-threaded code, and then there are ISA extensions, which push those xPU functionalities back into the processor cores. RISC-V was started years ago precisely to make that easier, but both x86 and ARM have been doing ISA extensions for decades now, to the point where the previous paragraph seems almost false.

Hardware dependencies require software to take care of them, and software gets somewhat easier to write when it has well-architected abstractions to build on, in hardware too. And of course what counts as well-architected changes over time and with the workloads being run: nothing designed in 1990, not even really forward-thinking supercomputers using Inmos Transputers, was designed with LLMs running on wafer-scale silicon in mind.

Coming back to the Pis: I can't take a single OS image and run it on both of them, neither Windows nor Linux.

Well, actually, yes, I can, to a degree. Using a per-device EFI bootloader to deliver some hardware abstraction, I can run the same image on both. But the best I can hope for in terms of functionality is CPU, RAM, storage and basic networking, typically via USB interfaces. None of the GPU, VPU, NPU or DPU parts of the SoCs work unless the OS includes modules or drivers for each variant.

And when it comes to the various ISA extensions, among which there are really interesting ones for hardware pointer validation, protection against return-oriented programming, etc. (like CHERI, present or coming to ARM), your OS is rather unlikely to use them until they are really common in existing hardware. On the application side it may be much easier to profit from ISA extensions, as long as they don't affect the size of the register file or other structures that need saving and restoring on process switches. Just managing that on x86 fills whole books.

In practical terms, I've been trying to run Proxmox on the OP5. Proxmox requires a Debian base that isn't Ubuntu, but full Mali GPU support only seems to be available on Ubuntu (and its proprietary Linux variants).

That might never change, because not all parts of the full Mali GPU software stack seem to be open source, and Rockchip itself doesn't think it worth spending time on a full Debian port.

I don't know about the NPU, even less whether it's something they licensed from elsewhere, but by the time it's fully supported the caravan may have long moved on to some new variable-precision IP blocks that deliver 3x the inference at the same wattage, or all that AI will have gone bust.

Writing these device drivers is hard work. It's just as expensive to do for Nvidia's newest and biggest GPU as it is for a Raspberry Pi's VPU, which is, if I understand it correctly, a DSP design morphed into a GPU.

In other words, the sales price doesn't dictate the effort, which makes the cost very hard to recover on a chip that sells for a couple of bucks and not in billions of units.

Having to do all that for both Windows and Linux won't do the economics any good. Some OSes have started to work with Linux-compatible drivers, like VMware. I'm not sure that's an approach Microsoft has chosen for Windows on ARM, and actually Dave Cutler might have put better abstractions into his VMS clone than Linus T. put into his clone of Unix: if I remember correctly, it's something Andrew Tanenbaum used to chide him about...
 

JamesJones44

Reputable
Jan 22, 2021
754
687
5,760
Over the last few months I've taken the plunge and got myself both a Raspberry Pi 5 8GB and an Orange Pi 5+ 32GB to play with ARM vs. x86. They are pretty similar in cost and performance to the Atom-based NUC-alikes I've also used for low-cost HCI clusters on oVirt/RHV and Proxmox.

And ThomasKinsley is quite correct in his estimation that hardware diversification is causing issues, and I'd say those issues are getting far bigger with the way the industry is moving.

We are used to thinking of ARM vs. x86 vs. RISC-V mostly as an ISA (instruction set architecture) issue. But that has almost become the least of our worries these days, now that people have stopped coding in assembler and even avoid issues like byte order. With open source, all that's needed is a recompile... or let's just say most of that work has been done already.

But nobody sells CPUs any more; everybody sells SoCs, where the parts that run the ISA may actually hold only a minority share of the surface area. And the relative importance of those non-CPU IP blocks is only getting bigger: GPUs, DPUs, VPUs and NPUs are pushed to overcome the scaling limits of general-purpose single-threaded code, and then there are ISA extensions, which push those xPU functionalities back into the processor cores. RISC-V was started years ago precisely to make that easier, but both x86 and ARM have been doing ISA extensions for decades now, to the point where the previous paragraph seems almost false.

Hardware dependencies require software to take care of them, and software gets somewhat easier to write when it has well-architected abstractions to build on, in hardware too. And of course what counts as well-architected changes over time and with the workloads being run: nothing designed in 1990, not even really forward-thinking supercomputers using Inmos Transputers, was designed with LLMs running on wafer-scale silicon in mind.

Coming back to the Pis: I can't take a single OS image and run it on both of them, neither Windows nor Linux.

Well, actually, yes, I can, to a degree. Using a per-device EFI bootloader to deliver some hardware abstraction, I can run the same image on both. But the best I can hope for in terms of functionality is CPU, RAM, storage and basic networking, typically via USB interfaces. None of the GPU, VPU, NPU or DPU parts of the SoCs work unless the OS includes modules or drivers for each variant.

And when it comes to the various ISA extensions, among which there are really interesting ones for hardware pointer validation, protection against return-oriented programming, etc. (like CHERI, present or coming to ARM), your OS is rather unlikely to use them until they are really common in existing hardware. On the application side it may be much easier to profit from ISA extensions, as long as they don't affect the size of the register file or other structures that need saving and restoring on process switches. Just managing that on x86 fills whole books.

In practical terms, I've been trying to run Proxmox on the OP5. Proxmox requires a Debian base that isn't Ubuntu, but full Mali GPU support only seems to be available on Ubuntu (and its proprietary Linux variants).

That might never change, because not all parts of the full Mali GPU software stack seem to be open source, and Rockchip itself doesn't think it worth spending time on a full Debian port.

I don't know about the NPU, even less whether it's something they licensed from elsewhere, but by the time it's fully supported the caravan may have long moved on to some new variable-precision IP blocks that deliver 3x the inference at the same wattage, or all that AI will have gone bust.

Writing these device drivers is hard work. It's just as expensive to do for Nvidia's newest and biggest GPU as it is for a Raspberry Pi's VPU, which is, if I understand it correctly, a DSP design morphed into a GPU.

In other words, the sales price doesn't dictate the effort, which makes the cost very hard to recover on a chip that sells for a couple of bucks and not in billions of units.

Having to do all that for both Windows and Linux won't do the economics any good. Some OSes have started to work with Linux-compatible drivers, like VMware. I'm not sure that's an approach Microsoft has chosen for Windows on ARM, and actually Dave Cutler might have put better abstractions into his VMS clone than Linus T. put into his clone of Unix: if I remember correctly, it's something Andrew Tanenbaum used to chide him about...
Everything on an SoC outside of the CPU is handled by drivers; the OS maker itself is not coding for that hardware. This is exactly why Microsoft does not need to build specific versions of Windows for Nvidia, Intel or AMD GPUs, why you can run various generations of Intel iGPUs on Windows versions that haven't been updated in years, and why they don't ship specific code for every SSD controller on the market.

Again, the OS only needs to be compiled against the CPU ISA. Everything else included in the SoC is handled by drivers from the CPU vendor.
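The split described here is literally how a generic kernel binds vendor drivers: the OS core is compiled once per ISA, and at boot it matches each discovered device's ID against tables the driver modules declare (as with PCI ID tables or device-tree "compatible" strings). A toy sketch of that matching, with all device IDs and module names invented for illustration:

```python
# Toy model of driver binding: the OS core is ISA-generic; each vendor
# driver declares which device IDs it handles, and the kernel matches the
# devices it discovers against those declarations at boot.  All IDs and
# module names below are invented.

DRIVER_TABLE = {
    "acme,gpu-v2": "acme_gpu.ko",
    "acme,npu-v1": "acme_npu.ko",
    "generic,nvme": "nvme.ko",
}

def bind_drivers(discovered: list) -> dict:
    """Match discovered devices to drivers.  Unmatched hardware simply
    sits inert, which is the 'boots fine but GPU/NPU are dead' situation
    described elsewhere in the thread."""
    return {dev: DRIVER_TABLE.get(dev, "<unbound: no driver>")
            for dev in discovered}
```

The unmatched case is exactly where the two sides of this thread meet: the kernel runs regardless, but each SoC block stays dark until some vendor (or volunteer) fills in its table entry.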
 

syadnom

Distinguished
Oct 16, 2010
23
13
18,515
I think they actually do support Linux VMs inside of macOS, so you're not 100% correct. However, as far as the hardware natively supporting Linux goes, the main thing Apple did was allow unsigned OS images to boot. They didn't have to leave that door open, but they did.


Again, they support Windows VMs under macOS. That requires a little bit of cooperation, I'm pretty sure.
Apple does no such thing. A completely separate company called Parallels does, but that's not Apple. Apple has its own hypervisor kit, but it doesn't 'support' anything except macOS. It can be used to run other things as well, but in no way does Apple support any of this.
 
  • Like
Reactions: bit_user

bit_user

Titan
Ambassador
Apple does no such thing. A completely separate company called Parallels does, but that's not Apple. Apple has its own hypervisor kit, but it doesn't 'support' anything except macOS. It can be used to run other things as well, but in no way does Apple support any of this.
Thanks for the clarification. Anything about Apple is completely alien to me.
 

abufrejoval

Reputable
Jun 19, 2020
480
323
5,060
Everything on an SoC outside of the CPU is handled by drivers; the OS maker itself is not coding for that hardware. This is exactly why Microsoft does not need to build specific versions of Windows for Nvidia, Intel or AMD GPUs, why you can run various generations of Intel iGPUs on Windows versions that haven't been updated in years, and why they don't ship specific code for every SSD controller on the market.

Again, the OS only needs to be compiled against the CPU ISA. Everything else included in the SoC is handled by drivers from the CPU vendor.
Mostly true on paper.

But the issue ThomasK and I were commenting on is that vendors like Rockchip won't write or maintain those drivers for more than perhaps an initial launch platform, because it's hugely expensive.

Microsoft did and does write some drivers that were judged critical for their OS's success on x86, like the original IDE and VGA code. But most of the new stuff has not just local intelligence but also the complexity that goes with it. So either the vendors do it, or it's not very likely to get done.

In some cases, like the Raspberry Pi, open-source authors might go ahead and do the work, even going as far as reverse engineering the hardware, but it's not a reliable general model, and those folks tend not to expend effort on Windows.

I've adapted GPU driver code for what was considered a very smart GPU forty years ago, the TMS34020, and it took me more than a year of full-time work.

Today's xPUs are so much more complex and may require dozens of man-years just to port from one OS to another, while the functional core is much bigger still and typically a multi-generational effort (of the xPUs, not the software engineers). When the OS design doesn't cause too much of an impedance mismatch the porting effort might be less, but that depends a lot on the driver domain. And when you look at the sheer number of components an SoC like the RK3588 or the BCM2712 packs onto a single chip, that's a lot of work.

With Mali iGPUs that work would have to be done by ARM; here I guess Qualcomm has to do it for Adreno; and I have no idea whether Broadcom or anyone else would ever do a Windows port for the Raspberry Pi's VideoCore VPUs. A recompile won't be enough.

Case in point: Orange Pi doesn't seem to maintain even the Mali drivers for all the distinct Linux variants they somehow support. If you want Vulkan, Android is your only current option; if you want OpenGL acceleration, it's limited to Ubuntu. Even in that case the effort is mostly recompiling for a different variant of the kernel, but just keeping track of all of these seems to exhaust their support team.

If you added Windows, BSD, Fuchsia or Redox...
 

JamesJones44

Reputable
Jan 22, 2021
754
687
5,760
Mostly true on paper.

But the issue ThomasK and I were commenting on is that vendors like Rockchip won't write or maintain those drivers for more than perhaps an initial launch platform, because it's hugely expensive.

Microsoft did and does write some drivers, that were judged critical for their OS success on x86 like the original IDE and VGA code. But most of the new stuff has not just local intelligence, but also the complexity that goes with it. So either the vendors do that, or it's not very likely to get done.

In some cases, like the Raspberry Pi, open-source authors might go ahead and do the work, even going as far as reverse engineering the hardware, but it's not a reliable general model, and those folks tend not to expend effort on Windows.

I've adapted GPU driver code for what was considered a very smart GPU forty years ago, the TMS34020, and it took me more than a year of full-time work.

Today's xPUs are so much more complex and may require dozens of man-years just to port from one OS to another, while the functional core is bigger still and typically a multi-generational effort (generations of xPUs, not of software engineers). When the OS design doesn't cause too much of an impedance mismatch, the porting effort might be less, but that depends a lot on the driver domain. And when you look at the sheer number of components an SoC like the RK3588 or the BCM2712 packs onto a single chip, that's a lot of work.

With Mali iGPUs, that work would have to be done by ARM; here I guess Qualcomm has to do it for Adreno. I have no idea if Broadcom or anyone else would ever do a Windows port for the Raspberry Pi's VideoCore VPUs. A recompile won't be enough.

Case in point: Orange Pi doesn't seem to maintain even the Mali drivers for all the distinct Linux variants they somehow support. If you want Vulkan, Android is your only current option; if you want OpenGL acceleration, it's limited to Ubuntu. And even in that case, where the effort is mostly recompiling for a different variant of the kernel, just keeping track of all of these seems to exhaust their support team.

If you added Windows, BSD, Fuchsia or Redox...
Yes, I agree this would be a bigger problem if Windows were planning on supporting thousands of different ARM variants not based on ARM reference designs. It would be very similar to what happened in the early days of Windows, when there were hundreds of graphics card makers; some supported their device drivers well, others didn't.

Even today, Windows has this issue in x64 land with bargain-basement desktops and laptops that have crappy motherboard drivers. It's really not that different.

In the context of this article and this discussion, however, most of that is not relevant. We are talking about one vendor, which will likely be investing time and money into making performant drivers. If we were talking about 100 different vendors, each with their own designs, I would agree with what is being said in general, but that has not been the case for MS/Windows, and it's not the case this time. It's still a single vendor trying to break into the space with an ARM-based product.
 
Last edited:

syadnom

Distinguished
Oct 16, 2010
23
13
18,515
Well, many operating systems actually are tied pretty tightly to their architecture. Windows pre-10 definitely was. Windows 11 ARM64 pre-Copilot leaves a LOT out to run on ARM. If you do some reading on what Copilot+ AI machines have, it's a rewritten kernel and much more, to free Windows of the legacy stuff holding it back. Windows wasn't just a recompile away; it was properly ported to ARM.

It's been this way with most things. macOS was a major adaptation from PowerPC to Intel, and then Intel to ARM, though Apple learned a lot of lessons.

Linux might be the lucky one that got very portable early on, so it tends to run on anything you can get a bootloader on.

~~

ARM is actually a very fixed architecture, so building for Snapdragon X really does open up a world of thousands of ARM variants on the ARMv9 instruction set. Apple's M4 is also ARMv9, while previous M-series chips are ARMv8.

I would say there's very little doubt that Windows Copilot AI will run in a VM on Apple's M4 hardware, and probably will run on M1-M3 hardware as well.

Most of the problems with the huge variety of Windows hardware and the mixed experiences stem from the amount of legacy built in. Starting semi-fresh cures a lot of that; plus, we're unlikely to see a big commodity market for desktop PCs anytime soon with anything more than a graphics card slot.

If Nvidia has anything to say about it, you'll probably buy a gaming computer like you buy a console. I.e., they're almost certainly going to be dropping some 'GeForce 5070 + ARM64' system-on-a-board models soon; they've all but said that's what they're doing.
 

bit_user

Titan
Ambassador
Well, many operating systems actually are tied pretty tightly to their architecture. Windows pre-10 definitely was.
Windows NT was definitely designed to be cross-platform. Maybe that capability atrophied and withered after too much time on x86, exclusively.

Windows 11 ARM64 pre-Copilot leaves a LOT out to run on ARM. If you do some reading on what Copilot+ AI machines have, it's a rewritten kernel and much more, to free Windows of the legacy stuff holding it back.
Feels like an excuse, to me. If true, I think they just seized on the opportunity to clean up legacy stuff they'd been wanting to clean up for years.

It's been this way with most things. macOS was a major adaptation from PowerPC to Intel, and then Intel to ARM, though Apple learned a lot of lessons.
I think most of this was just the natural evolution of the OS, punctuated by ports that served as convenient opportunities to reevaluate things and change parts that didn't work as planned or that had been outgrown.

Linux might be the lucky one that got very portable early on, so it tends to run on anything you can get a bootloader on.
For 25 years, Linux has maintained simultaneous support for multiple architectures. Everything it does has to be implemented in a way that won't break any of the dozens of CPU ISAs it supports.

Also, Linux has no stable APIs, internally. It's like Agile, where they just change whatever they need to, when they need to change it. So, they don't wait around for some big opportunity to redesign a whole bunch of pieces at once, because that chance never comes. Instead, they're continuously remaking it, piece by piece.

ARM is actually a very fixed architecture, so building for Snapdragon X really does open up a world of thousands of ARM variants on the ARMv9 instruction set.
This is far too simplistic. If it were true, then the Asahi project wouldn't have taken years to make Linux on M-series Macs barely usable.

I would say there's very little doubt that Windows Copilot AI will run in a VM on Apple's M4 hardware, and probably will run on M1-M3 hardware as well.
We'll have to disagree on this, then.

If Nvidia has anything to say about it, you'll probably buy a gaming computer like you buy a console. I.e., they're almost certainly going to be dropping some 'GeForce 5070 + ARM64' system-on-a-board models soon; they've all but said that's what they're doing.
I don't claim to know what their consumer/PC plans are, but I can show you what they've released since the Tegra X1 used by the Nintendo Switch:
 
Jul 22, 2024
1
0
10
I'm seeing a pattern jumping out here.

- SemiAccurate alleges Qualcomm chip runs atrociously on Windows. Qualcomm blames MS for poor early sample performance.
- User mod sees Windows 11 ported to Nintendo Switch. Can't run basic programs.
- Qualcomm officially supporting Linux.

It certainly appears that Qualcomm is hedging their bets by having a backup solution in case Windows doesn't work, but there is another issue here that I'm wondering about. I'm sure other members will be able to express this more accurately than I am able, but one of the unique problems on ARM/Mobile development (Android is the best example) is that it needs a custom ROM per unique hardware. There is no universal OS that fits all. Windows has previously gained its users by being easy to install on a wide variety of PCs regardless of the parts chosen. Is that no longer the case on ARM? Are we seeing that universality come to an end and does the OS need to be optimized per device?
That reputation exists only because mobile devices have never truly adopted a unified reference platform to standardize their core hardware and firmware layout around (i.e., the PC/"Wintel" spec). Hopefully these ARM laptops are able to adopt one, like their x86 predecessors long have.