News Intel optimizes slimmed-down X86S instruction set — revision 1.2 eliminates 16-bit and 32-bit features

DS426

Upstanding
May 15, 2024
254
189
360
x86 needs every advantage it can get to remain competitive with RISC. Moreover, the majority of purchasers of these future CPUs will want performance over legacy compatibility. Let virtualization and emulation take care of legacy compatibility needs.

This is a good move -- I'm all for it. As noted elsewhere, Intel's biggest competitor in the datacenter might be ARM, not AMD.

From Intel's perspective, Microsoft needs to sign on to it even more than AMD does; Intel might even have a short-term competitive advantage if AMD comes late to the party on it. Interestingly, this provides an opportunity for Intel to license x86(S) to AMD all over again.
 

Marlin1975

Distinguished
Dec 31, 2007
28
44
18,560
"get AMD’s support "

Not only AMD but Microsoft as well. If they don't get both on board, they're wasting their time and limited resources on a pointless divergence.
 
  • Like
Reactions: artk2219

setx

Distinguished
Dec 10, 2014
263
233
19,060
Intel did add a “32-bit compatibility mode,” but it’s unclear what exactly it does; we’ve reached out to Intel for comment.
Why do you try to write about things you don't understand at all?
"32-bit compatibility mode" was added by AMD in AMD64.
Most importantly, the new X86S ISA would remove native 32-bit support
No it won't. Only 32-bit Ring 0 is removed, not 32-bit user space.
followed by 64-bit operating systems like Windows 7
Is Win7 the oldest OS you know of? x86-64 has been supported since Windows Server 2003.
 

bit_user

Titan
Ambassador
The article said:
While simplifying and modernizing one of the world’s most used architectures could be a worthwhile goal, support for legacy hardware and software has been one of x86’s primary characteristics. Windows, primarily run on PCs equipped with x86 chips, has traditionally had legacy support, and the switch to X86S could break with that history.
They have to trade legacy support off against performance and power. If they don't, they risk being out-competed by ARM and RISC-V in the laptop and datacenter segments.

ARM has been pretty brutal about dropping legacy features from its newer architectures. Their current crop of ARMv9-A cores has dropped all support for 32-bit mode and other legacy features and extensions. This enabled them to ditch the uop cache, since their decoders are now simple enough for it not to provide a net benefit. Freeing up that die space enabled them to make the X4 cores' decoders 10-wide, up from the previous generation's 8-wide decoder.
 

bit_user

Titan
Ambassador
That's been the #1 selling point of x86.

That old arse software can run on modern hardware, even if you have to do some software nonsense to get it to work on a modern OS.
Can you really just install an old OS like Windows 95 on something like a Raptor Lake, bare metal? That's mainly the sort of stuff X86S is getting rid of, and it's pretty moot, as I personally doubt such a feat is even possible (if only due to lack of drivers).

As for just running 32-bit apps, that will continue to be possible.
 
Can you really just install an old OS like Windows 95 on something like a Raptor Lake, bare metal? That's mainly the sort of stuff X86S is getting rid of, and it's pretty moot, as I personally doubt such a feat is even possible (if only due to lack of drivers).

As for just running 32-bit apps, that will continue to be possible.
Actually, the main problem with running Win95 OSR 2 on x86-64-v3 is that you can't run that system with more than 512 MB of RAM installed. As for drivers, CSM and a PCI IDE card would handle storage, and VGA still works somewhat. If you get a crash caused by a CPU clocked too high, there's a semi-official patch from MS: Microsoft Dial-Up Networking 1.4...
But I get what you mean: who cares about 16-bit apps or native 32-bit support? This part could be removed from the core without 99% of the population batting an eye, as emulating a 100 MHz Pentium on a 6-core 64-bit CPU could probably be achieved in JavaScript... https://bellard.org/jslinux/
 

ezst036

Honorable
Oct 5, 2018
750
627
12,420
Looks like Intel learned nothing from the Itanium debacle.

There is a "here" here, but perhaps a little different than you suggest.

Intel chose Itanium over the prospect of creating an updated/extended version of x86 that was 64-bit, and now Intel is choosing the Jim Keller-designed 64-bit x86 exclusively while jettisoning every aspect of legacy x86. EDIT: Notwithstanding this "32-bit compatibility mode"

It is the right move on Intel's part in 2024; however, it's also finally a full embrace of the smackdown they received in 2003, when the Athlon 64 first hit the scene and rewrote the rules.

Itanium could've done more than it did, if only Intel had open sourced it. It's a shame really. They priced Itanium astronomically and killed it with the high cost.
 
Last edited:

ezst036

Honorable
Oct 5, 2018
750
627
12,420
I thought the whole point of x86 was backwards compatibility.

It is; there are a ton of corporations out there that are STILL reliant on their COBOL applications. A handful of them can adapt by using VMs/virtualization where it makes sense, but the rest will finally have to put their old applications out back and do the full rewrite that they've been reluctant to do for all these decades.

Losing those corporate COBOL users is not as big a threat overall as RISC is (be it ARM or RISC-V).

The world is now, finally, ready to move on and let go of x86, and that is in fact part of Intel's crisis.
 

Eximo

Titan
Ambassador
There are still x86 chips from non-Intel and non-AMD sources. PC/104 and the embedded market have been offering 'slow' chips, 150-250 MHz and the like, for a long time now. New old stock will be around for a long time as well; you can still pick up new Skylake embedded PCs that have been sitting on shelves.

There's always FPGA as well, if someone needs to keep something very specific going a decade or two from now.
 

bit_user

Titan
Ambassador
Itanium could've done more than it did, if only Intel had open sourced it. It's a shame really. They priced Itanium astronomically and killed it with the high cost.
Intel gave up on Itanium by like the second gen. If they were at all serious, they'd have added things like out-of-order execution (which was actually possible on EPIC) and SIMD extensions.

As I think you mean, Intel did make a critical misjudgement in patenting the heck out of IA64. It meant that anyone using it was locking themselves to a single CPU supplier (Intel). The success of ARM and RISC-V further proves this point, as they're both open ecosystems where you have a lot more than just two competitors.
 

Findecanor

Distinguished
Apr 7, 2015
326
229
19,060
I don't really see any benefit of X86S other than slightly faster boot-times. My Linux system still boots in seconds.

Actually, I think that Intel and AMD should instead bring back some segmentation features that were disabled in 64-bit mode.
Only the FS and GS segments remain, and they used to be bounds-checked and could be write-protected. The other segment-register address prefixes are unused in 64-bit mode.

This could help with running WebAssembly and with other compartmentalisation schemes, without needing kludges such as allocating 8 GB chunks of address space and using MPK or whatnot.
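For reference, this is roughly what the MPK-style "kludge" looks like on Linux today; a minimal sketch using the glibc pkey_* wrappers (assumes a CPU with Memory Protection Keys and glibc 2.27+; a real Wasm runtime's scheme is more involved):

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
    size_t len = 1 << 20;                       /* 1 MiB "sandbox" heap */
    void *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    int pkey = pkey_alloc(0, 0);                /* one of the 16 protection keys */
    if (pkey < 0) { perror("pkey_alloc"); return 1; }

    /* Tag the region with the key (a page-table attribute, set once). */
    if (pkey_mprotect(region, len, PROT_READ | PROT_WRITE, pkey) != 0) {
        perror("pkey_mprotect"); return 1;
    }

    pkey_set(pkey, PKEY_DISABLE_WRITE);  /* entering untrusted code: reads only      */
    /* ... run the compartmentalised code here ...                                   */
    pkey_set(pkey, 0);                   /* back in the trusted runtime: full access */

    pkey_free(pkey);
    munmap(region, len);
    return 0;
}
```

A process only gets 16 protection keys, which is part of why this feels more like a workaround than proper hardware bounds checking.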

[ARM's] current crop of ARMv9-A cores have dropped all support for 32-bit mode and other legacy features and extensions.
I don't think ARM and Intel can be compared in that way.
The A64 and A32 instruction sets are actually very different, whereas most valid x86 instructions are also valid x86-64 instructions.

Intel gave up on Itanium by like the second gen. If they were at all serious, they'd had added things like out-of-order (which was actually possible on EPIC) and SIMD extensions.
BTW. Itanium did actually have some SIMD instructions ... in general-purpose registers. They did not have the forethought to extend the register size over what MMX used to offer though.
 
Itanium could've done more than it did, if only Intel had open sourced it. It's a shame really. They priced Itanium astronomically and killed it with the high cost.
At the end of the day it was actually hubris that killed Itanium and IA64.

The development team behind it was pretty big and inexperienced, and it took too long to reach market. Intel didn't work with general-purpose software vendors leading up to the launch. They also assumed nobody would want to run anything in 32-bit mode, so emulation performance was atrocious.

Meanwhile, x86 was starting to penetrate the server market and AMD was developing x86-64. That was basically the one-two punch that ensured Itanium had no broad market future.
 
  • Like
Reactions: ezst036
It will be interesting to see if/when Intel sees this as a viable change to make. The fact that they've kept iterating indicates it's coming at some point for sure. There has also been some Linux development done, which reinforces that. I haven't spent the time to read through the entire paper, as it's outside my knowledge realm, but this doesn't seem like a major shift for the vast majority of end users.

Given the semi-open way Intel has been approaching it so far, major software vendors wouldn't really have any reason not to be prepared.
 

bit_user

Titan
Ambassador
I don't really see any benefit of X86S other than slightly faster boot-times. My Linux system still boots in seconds.
The benefit is that it simplifies the CPU cores by getting rid of a lot of circuitry that's currently cluttering up key parts of them. Also, by simplifying the ISA spec, you reduce the number of test cases that new CPUs must be run through.

Trust me: Intel wouldn't go to this trouble just to streamline boot times by a negligible amount.

Actually, I think that Intel and AMD should instead bring back some segmentation features that were disabled in 64-bit mode.
Only FS and GS segments remain and they used to be bounds-checked and could be write-protected. The other segment register address prefixes are unused in 64-bit mode.
I can't really comment, as I never programmed x86 assembly in the 64-bit era. Is there even a use case for programs to have lots of segments? If it really made sense, then you might expect they'd be adding more segment registers in APX, but I don't recall reading anything about it.

BTW, access control is usually done via the page table. It scales much better than segments (or, at least it did in the 32-bit era).

This could help with running WebAssembly and in other compartmentalisation schemes, without needing kludges such as allocating 8GB chunks of address space and using MPK or whatnot.
Address space is cheap, though. 8 GB is only 33 bits out of 64. The underlying memory is what matters, and simply allocating address space doesn't equate to that.
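
To make the distinction concrete, here's a minimal sketch (Linux/C, sizes picked arbitrarily) of the reserve-then-commit pattern: the 8 GiB PROT_NONE reservation costs address space but no RAM, and per-page mprotect() decides what's actually accessible:

```c
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
    size_t reserve = 8ULL << 30;            /* 8 GiB of address space  */
    size_t commit  = 64ULL << 20;           /* 64 MiB actually usable  */

    /* Reserve: nothing is backed by RAM yet; the whole range simply
     * faults on any access, which is what gives the cheap bounds check. */
    void *base = mmap(NULL, reserve, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Commit a window at the start; access control is per page, via the
     * page tables, not via segment limits. */
    if (mprotect(base, commit, PROT_READ | PROT_WRITE) != 0) {
        perror("mprotect"); return 1;
    }

    printf("reserved 8 GiB at %p, %zu MiB writable\n", base, commit >> 20);
    munmap(base, reserve);
    return 0;
}
```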

What's MPK?

BTW. Itanium did actually have some SIMD instructions ... in general-purpose registers. They did not have the forethought to extend the register size over what MMX used to offer though.
I was repeating what someone who worked on it said. I guess they were referring to the way SSE added support for fp32 and fp64 data types, whereas MMX was only operating on integers.

BTW, I found an IA-64 instruction set manual, here. It does confirm MMX-like functionality, as you say.
 

bit_user

Titan
Ambassador
It will be interesting to see if/when Intel sees this as a viable change to make.
My guess is Diamond Rapids. That's also when they're bumping the Family ID from 6 (which it's been since Core 2) to 19. Given how long they stayed on the same Family ID, breaking some backwards compatibility seems like exactly the kind of thing that would prompt them to finally bump it.
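
For anyone curious how that would show up in software, here's a minimal sketch (GCC/Clang on x86-64, using the compiler's <cpuid.h> helper) of how the reported family is decoded from CPUID leaf 1; it's effectively the same arithmetic behind the "cpu family" line in /proc/cpuinfo:

```c
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))       /* leaf 1: version info */
        return 1;

    unsigned family     = (eax >> 8)  & 0xF;           /* base Family ID     */
    unsigned ext_family = (eax >> 20) & 0xFF;          /* Extended Family ID */

    /* The extended field is only added in when the base family is 0xF,
     * which is why everything from Core 2 up to today still reports 6. */
    unsigned display_family = (family == 0xF) ? family + ext_family : family;

    printf("CPU family: %u\n", display_family);
    return 0;
}
```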
 
  • Like
Reactions: thestryker

TheSecondPower

Distinguished
Nov 6, 2013
133
124
18,760
The last time x86-S was in the news, it was said to remove support for 32-bit operating systems, but 32-bit applications would still work exactly the same way they do today. Did that change with this updated version of the proposal?
 
  • Like
Reactions: adamboy64