News Intel Ponders Transition to 64-Bit-Only x86S Architecture


bit_user

Polypheme
Ambassador
Yes, it can improve CPUs by forgoing the x86 legacy...but it also destroys decades of stuff that relies on it being that way.
The amount of time/money companies/devs would have to put into updating for only x64 is crazy high.
Although they're not talking about breaking 32-bit apps in this article, I would point out that you can run 32-bit apps in emulation*, with little performance impact. You have even been able to run them on ARM CPUs since Windows 10. Legacy apps are generally more than fast enough on newer CPUs, so I don't buy the argument that those apps can't withstand a 10%-30% performance hit, especially if doing so would make the entire CPU faster for all apps.

They would need a sizeable benefit from doing it for people to justify the change.
It's like we're all driving around in cars with a 5th wheel that hardly anybody needs or wants, just because a tiny minority still use it. Removing complexity from the instruction decoders would make them even faster and more efficient. If x86 is going to fend off RISC-V and ARM for a couple more generations, that's not an overhead that can be overlooked.

Change does "need" to happen, but x64 isn't the only choice in the long-term future.
So, for fear of simplifying x86, you're going to switch ISAs, entirely? How is that a better option?

* Most modern emulators are actually JIT translators.
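
To make that footnote concrete, here is a toy dispatch loop in C (purely illustrative, a made-up two-instruction ISA, not a real x86 emulator): an interpreter pays a fetch/decode/dispatch cost on every guest instruction, which is exactly the per-instruction overhead a JIT translator avoids by translating each block of guest code into host code once and then reusing it.

```c
/* Toy interpreter for a made-up ISA. The per-iteration switch is the
 * dispatch overhead; a JIT would emit native code for the whole program
 * once instead of re-deciding what to do on every instruction. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

enum opcode { OP_ADD, OP_SUB, OP_HALT };

typedef struct { enum opcode op; int32_t imm; } insn;

static int32_t interpret(const insn *code) {
    int32_t acc = 0;
    for (size_t pc = 0; ; pc++) {       /* fetch/decode/dispatch per insn */
        switch (code[pc].op) {
        case OP_ADD:  acc += code[pc].imm; break;
        case OP_SUB:  acc -= code[pc].imm; break;
        case OP_HALT: return acc;
        }
    }
}

int main(void) {
    insn prog[] = { {OP_ADD, 40}, {OP_ADD, 5}, {OP_SUB, 3}, {OP_HALT, 0} };
    printf("%d\n", interpret(prog));    /* prints 42 */
    return 0;
}
```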
 
As long as it doesn't affect the WoW64 translation layer on Windows, so that 32-bit applications continue to run seamlessly, the only issue I see is that this needs to be a coordinated effort by both Intel and AMD.
 

danny009

Reputable
Microsoft had an excellent virtual 16-bit PC and an excellent virtual 32-bit PC, but stopped supporting them.

Oracle supports Oracle VM VirtualBox, which is almost as good, but it lacks some details of 80486 PC support and has a very limited selection of virtual hardware components.

A good virtual PC application provides a GUI that allows you to select various PC components (CPU, HDD, keyboard, mouse, video monitor, video adapter card, BIOS, motherboard, memory, etc.) and build a virtual PC in the same way you would build a physical PC.

I completely agree with Intel dropping support for legacy/obsolete CPU technology, but I would like to see excellent virtual PC apps supporting 16-bit and 32-bit PC hardware technology. The virtual PCs should run any hardware drivers supported on IBM, Compaq, and Dell PCs in the 1980s and 1990s.

*******************************

It just occurred to me... generally, people involved with building virtual-PC software often poop out after they get much, but not all, of a PC's hardware components virtualized. This is an area where AI technology would be enormously helpful.
VMware is a better option than VirtualBox, in my opinion; VirtualBox is limited to 256 MB/128 MB max graphics memory, while VMware Workstation supports up to 8 GB of graphics memory.

However, some aggressive DRM schemes block playing games in VMs, if that's your thing.

I remember the Virtual PC software, used back in the day to install Windows 7 and Windows XP Mode; it was really useful and awesome.

Microsoft murdered that software and stopped supporting it; that's what Microsoft does these days. Windows Media Encoder and MSN Messenger were both awesome software back in the day, and all MS did to them was stop supporting them and remove the download links. Nothing good comes out of MS except Windows XP and Windows 7.
 

tamalero

Distinguished
Me not so much. One of the many reasons I run a PC is because of backward compatibility. I still run a ton of old programs, OSes (the big issue), and games. This would be a product I'd refuse to buy because it simply wouldn't meet my computing needs. I would prefer native compute over emulation or a VM any day of the week.
Can't current chips just run it in emulation at enough speed?
 

danny009

Reputable
People who are not CPU engineers or software engineers will never realize why this is such a great idea and how much time and energy it will save to ditch all that old crap. And nobody cares if you play old games; go play them on an old computer.

I cannot wait for the day when there's no 32-bit anything left.

Back in the day, IA64 didn't really stand a chance, but an idea like that could really be something now.

People don't seem to understand: by eliminating unused functions, you can eliminate tons of testing and validation, which means you can eliminate tons of cost.

What is the best part in a system? The answer is: no part. Which means you should eliminate as many parts as possible from the system to increase quality.

Make it as simple as possible for the highest quality. Dr. Deming teaches us that.
And nobody cares if you play old games; go play them on an old computer.

That hostile language of yours will get you nowhere.
 
This seems like a good way to simplify the design a bit without breaking legacy capability. The BIOS interactions won't matter at all, as any platform this is on will be full UEFI. Legacy operating system support would already likely be a problem on any full UEFI platform, so this isn't a big change. The vast majority of legacy systems should run easily in a VM, as the performance of modern CPUs is significantly higher than anything from the era in which they were developed.
 

MeeLee

Distinguished
I don't understand why 16- or 32-bit software can't be run on a 64-bit processor natively.

Most 16-bit CPUs were below 33 MHz; most 32-bit CPUs were below 2 GHz. Most current 64-bit CPUs are at 5 GHz and counting.
Even if 32-bit code were run on a 64-bit CPU through some software conversion layer, with a minor performance penalty, older 32-bit programs won't ever run slower than they did on older systems, even without hardware acceleration or extensions.
 
IMHO, this is why it still hasn't happened:
Yes, it can improve CPUs by forgoing the x86 legacy...but it also destroys decades of stuff that relies on it being that way.
The amount of time/money companies/devs would have to put into updating for only x64 is crazy high.
The only stuff that it "destroys" is system software: anything that was relying on the core behavior of these modes. This only affects OS and firmware developers, who are probably eager to dump parts of their code base that only exist to support these legacy modes.

For example, every x86 PC still boots up as if it were an 8086, then has to jump through a few hoops before it gets into the appropriate mode. We could just get rid of this, because what system software are you actually going to run in this mode?

They would need a sizeable benefit from doing it for people to justify the change.
I can think of several:
  • Potentially reduces the amount of silicon needed to support legacy modes
  • Potentially improves security by eliminating legacy features that basically have no real concept of security levels
  • Eliminates a lot of odd ways the CPU operates, although I'm not really qualified to know if any of these odd ways have any substantial performance benefits. But if people aren't using these things (because they're odd to use), why bother keeping them?

I don't understand why 16- or 32-bit software can't be run on a 64-bit processor natively.
They can; it's just that the operating system manages memory in ways that make it hard, if not impossible without a lot of effort, to make them compatible. The reason 64-bit Windows can't support 16-bit applications is that applications pass around "handles," unique ID numbers, to the OS and vice versa. You can see in Task Manager -> Performance -> CPU how many handles are currently referenced in the OS.

Originally, handles were limited to 16-bit numbers, so there could only be 65,536 unique handles. That might sound like a lot, but currently my system is sitting at around 84,000 handles, so I'd run out. Microsoft updated the handle value to 32 bits when it came to 64-bit Windows. So now you have a problem: 16-bit applications can only use 16-bit handles. Truncating the top 16 bits from a 32-bit handle won't work, because you'd be losing information.

Also, from what I can tell, "handles" aren't just a number; they may be a reference to a location in memory. 16-bit x86 applications can't address 32-bit references.

Most 16-bit CPUs were below 33 MHz; most 32-bit CPUs were below 2 GHz. Most current 64-bit CPUs are at 5 GHz and counting.
Even if 32-bit code were run on a 64-bit CPU through some software conversion layer, with a minor performance penalty, older 32-bit programs won't ever run slower than they did on older systems, even without hardware acceleration or extensions.
Clock speed has nothing to do with the performance of the application per se, unless the application heavily relied on the timing of specific processors (which is dumb).
 

bit_user

Polypheme
Ambassador
They can; it's just that the operating system manages memory in ways that make it hard, if not impossible without a lot of effort, to make them compatible. The reason 64-bit Windows can't support 16-bit applications is that applications pass around "handles," unique ID numbers, to the OS and vice versa. You can see in Task Manager -> Performance -> CPU how many handles are currently referenced in the OS.

Originally, handles were limited to 16-bit numbers, so there could only be 65,536 unique handles. That might sound like a lot, but currently my system is sitting at around 84,000 handles, so I'd run out. Microsoft updated the handle value to 32 bits when it came to 64-bit Windows. So now you have a problem: 16-bit applications can only use 16-bit handles. Truncating the top 16 bits from a 32-bit handle won't work, because you'd be losing information.
This is a solvable problem, though you're right that it takes effort for little benefit. The solution is simply to maintain a map for each 16-bit process that allocates a local 16-bit ID for each handle it uses; the Windows API layer would then use the map to translate this to the 32-bit value when the handle is passed to the kernel or other apps. Of course, I'm assuming Windows "sees" all of the handles going into and out of an app, which would be necessary for it to maintain the mapping. A sketch of the idea follows below.
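
A minimal sketch of that mapping in C (the type and function names are invented for illustration, not any real Windows API): the 16-bit app only ever sees small local IDs, and the API layer translates them to and from the real 32-bit kernel handles at the call boundary.

```c
/* Hypothetical per-process handle map: local 16-bit IDs on the app
 * side, real 32-bit kernel handles on the OS side. Naive bump
 * allocation with no ID reuse, just to show the translation step. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t real[65536]; /* local 16-bit ID -> 32-bit kernel handle */
    uint16_t next;        /* last local ID handed out; 0 is reserved */
} handle_map;

/* Allocate a local ID for a kernel handle; returns 0 when full. */
static uint16_t map_alloc(handle_map *m, uint32_t kernel_handle) {
    if (m->next == UINT16_MAX)
        return 0;                    /* out of local IDs */
    uint16_t local = ++m->next;
    m->real[local] = kernel_handle;
    return local;
}

/* Translate a local ID back to the real handle at the API boundary. */
static uint32_t map_lookup(const handle_map *m, uint16_t local) {
    return m->real[local];
}

int main(void) {
    static handle_map m;             /* zero-initialized */
    uint16_t h = map_alloc(&m, 0x0004A3F1u);
    printf("local %u -> kernel 0x%08X\n", (unsigned)h,
           (unsigned)map_lookup(&m, h));
    return 0;
}
```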

Also, from what I can tell, "handles" aren't just a number; they may be a reference to a location in memory. 16-bit x86 applications can't address 32-bit references.
I'd guess they probably refer to a table entry in kernel address space.

Clock speed has nothing to do with the performance of the application per se, unless the application heavily relied on the timing of specific processors (which is dumb).
I think @MeeLee 's point is something I was also trying to say: modern CPUs are so much faster than the CPUs those apps were written for that the relatively small performance impact of running them in (decent) emulators should be a non-issue.
 

SunMaster

Commendable
Depending on the real estate these instructions occupy on the CPU die, it might be what x64 needs to compete with ARM and perhaps even RISC-V in the future. I have no idea how large a percentage of the die it steals today, but I assume the implications of supporting the '80s and '90s are considerable.

I have exactly zero need for anything legacy. The software that actually needs this compatibility can be run in an emulator, which you can find everywhere today, just like for the old 8- and 16-bit home computers and CPUs of the past.

For Intel and AMD it could even be advantageous, for users too, to reach an agreement on how this should be done. After all, x64 will battle fierce competition in the years ahead. As I love the underdog, I cheer for RISC-V :)
 

bit_user

Polypheme
Ambassador
Depending on the real estate these instructions occupy on the CPU die, it might be what x64 needs to compete with ARM and perhaps even RISC-V in the future.
First, the proposal really just talks about removing a smattering of pretty much unused features, not actually disabling support for 32-bit code. The most consequential thing set to disappear is probably 16-bit addressing, which means you could only run 16-bit code in emulation of some kind.

Second, if you were to remove 32-bit app support, it's not the die area but the critical paths and complexity in a crucial part of the front end that would benefit. In ARMv9, ARM deprecated ARMv7 support for similar reasons.
 

SunMaster

Commendable
First, the proposal really just talks about removing a smattering of pretty much unused features, not actually disabling support for 32-bit code. The most consequential thing set to disappear is probably 16-bit addressing, which means you could only run 16-bit code in emulation of some kind.

Second, if you were to remove 32-bit app support, it's not the die area but the critical paths and complexity in a crucial part of the front end that would benefit. In ARMv9, ARM deprecated ARMv7 support for similar reasons.

The proposal does talk about a 64-bit-mode-only architecture, in both the blog (which I read) and the whitepaper (which I just glanced at).

The 8-, 16-, and 32-bit register references/uses are valid in 64-bit mode, but 32- and 16-bit addressing is not. But as I said, I have no idea how much of the die would be freed.
 
OK, some notes here. Intel is not, I say again, not removing 32-bit support from their CPUs. The way x86_64 works makes that fundamentally impossible, as it is just an extension of the basic 80386 ISA. In fact, Intel didn't even create the 64-bit extensions to x86; AMD did, and eventually cross-licensed them to Intel.

What Intel is doing is removing the legacy modes and instructions that make the CPU act like an 8086, complete with all the behaviors from that era. Stuff like the A20 gate in the keyboard controller that enabled access to the High Memory Area just above 1 MB; Intel removed that back with Haswell, as nobody is running bare DOS on these CPUs. Ultimately, this just changes how the BIOS would initialize and boot the CPU.

The mode table in the whitepaper should help people understand: Intel would be removing everything under legacy mode while keeping long mode, which includes both the 16- and 32-bit submodes. Those will always be around because of how the register file is laid out on x86: each 64-bit register also contains the corresponding 32-bit register, and those 32-bit registers also contain the 16-bit registers.
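
A little C union models that overlay (a conceptual sketch of the register layout only, not of how the silicon implements it; note that on real hardware a 32-bit write also zeroes the upper 32 bits of the 64-bit register, which this plain union doesn't capture):

```c
/* Little-endian overlay mimicking x86-64 register aliasing:
 * EAX/AX/AL/AH all live in the low bytes of RAX. */
#include <stdint.h>
#include <stdio.h>

typedef union {
    uint64_t rax;                   /* full 64-bit register */
    uint32_t eax;                   /* low 32 bits */
    uint16_t ax;                    /* low 16 bits */
    struct { uint8_t al, ah; };     /* low two bytes (C11 anonymous struct) */
} gpr;

int main(void) {
    gpr r = { .rax = 0x1122334455667788ULL };
    printf("eax=%08X ax=%04X al=%02X ah=%02X\n",
           (unsigned)r.eax, (unsigned)r.ax, (unsigned)r.al, (unsigned)r.ah);
    r.ax = 0xBEEF;                  /* writing AX touches only the low word */
    printf("rax=%016llX\n", (unsigned long long)r.rax); /* ...5566BEEF */
    return 0;
}
```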
 

bit_user

Polypheme
Ambassador
The proposal does talk about a 64-bit-mode-only architecture, in both the blog (which I read) and the whitepaper (which I just glanced at).
It's the mode that they're limiting to 64-bit. That's something only the kernel sees or uses, and it's separate from 32-bit addressing. The article says:

"Using the simplified segmentation model of 64-bit for segmentation support for 32-bit applications, matching what modern operating systems already use."
 

SunMaster

Commendable
It's the mode that they're limiting to 64-bit. That's something only the kernel sees or uses, and it's separate from 32-bit addressing. The article says:
"Using the simplified segmentation model of 64-bit for segmentation support for 32-bit applications, matching what modern operating systems already use."
Eh, yeah ;-) And 64-bit mode has addressing modes it supports, and stuff it doesn't.

https://hjlebbink.github.io/x86doc/html/MOV.html is an example of what can and cannot be done in 64-bit mode for the MOV instruction.

Either way, I don't think we disagree much.
 
The proposal does talk about a 64-bit-mode-only architecture, in both the blog (which I read) and the whitepaper (which I just glanced at).

The 8-, 16-, and 32-bit register references/uses are valid in 64-bit mode, but 32- and 16-bit addressing is not. But as I said, I have no idea how much of the die would be freed.

Almost none, as these CPUs haven't been natively x86 in a very long time. What would happen is that the CPU microcode would get simpler, and there would likely be a minor savings in the instruction branch predictor, as it would no longer have to deal with those early, weird operating modes.
Honestly, this is more about the BIOS firmware and how the system boots up, which anything using UEFI is already doing.


Eh, yeah ;-) And 64-bit mode has addressing modes it supports, and stuff it doesn't.

https://hjlebbink.github.io/x86doc/html/MOV.html is an example of what can and cannot be done in 64-bit mode for the MOV instruction.

Either way, I don't think we disagree much.

That would only matter from the kernel's point of view; from the user-mode application's perspective, it still works exactly the same. Once we are in protected mode, every application thinks it is the only application the CPU runs, and the CPU is just switching register state in and out whenever a task switch happens. A toy model of that idea is sketched below.
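
A toy model of that claim in C (purely illustrative; a real kernel does this in architecture-specific assembly, and the saved state is far larger): the OS keeps one register snapshot per task and swaps snapshots on a task switch, so each task behaves as if it owns the CPU.

```c
/* Per-task saved register state, swapped on a (pretend) task switch. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t rax, rbx, rsp, rip; } regs;
typedef struct { regs saved; const char *name; } task;

/* Save the outgoing task's registers, load the incoming task's. */
static void context_switch(regs *cpu, task *from, task *to) {
    from->saved = *cpu;
    *cpu = to->saved;
    printf("switch %s -> %s (rip now %#llx)\n",
           from->name, to->name, (unsigned long long)cpu->rip);
}

int main(void) {
    task a = { .saved = { .rip = 0x401000 }, .name = "A" };
    task b = { .saved = { .rip = 0x402000 }, .name = "B" };
    regs cpu = a.saved;              /* task A currently "running" */
    context_switch(&cpu, &a, &b);    /* now B runs with its own state */
    context_switch(&cpu, &b, &a);    /* and back to A, state restored */
    return 0;
}
```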
 

Amdlova

Distinguished
That's a dilemma I have... I keep dreaming that with this X99 system maybe someday I'll use Windows XP to play old games... but the day never comes. Whoever wants to keep 32-bit applications can just keep the old hardware. Whoever needs fast new tech goes to 64 bits and beyond. Intel and AMD need to have a reboot and remove what doesn't make sense anymore. 14th-gen CPUs with MMX, lol.
 

ikjadoon

Distinguished
If this is just a change in basically boot / initialization, why is Intel bothering with this anyway?

Seems like a lot of trouble to “fix” something that is hardly broken / not in any desperate need of repair. You need OS buy-in, AMD buy-in, support two different ISAs during the transition…for how much gain on any relevant metric?

Intel’s standards are sometimes like that: BTX, ATX12VO, and now this. The old DNA remains.
 

Findecanor

Distinguished
Personally, I lament the removal of segments from the architecture ... because I want some of it PUT BACK into 64-bit mode! Not for near/far addressing as in the old days, but for security:

There are schemes for compiling code so that the call stack is protected by putting it in its own protected address space.
That would be possible for an OS to do in 32-bit x86 mode using a separate stack segment, but that capability was removed from 64-bit mode by AMD.

In the last decade, there has been much work in the security research community on protecting the stack in 64-bit mode by other means, such as continuously randomising the location of the stack, or unorthodox uses of other hardware features such as MPK, CET, or virtualisation with the kernel bumped to hypervisor mode. But those methods are still hacks, and they often add considerable overhead. Better to have hardware support from the start, which already exists.
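
For the MPK route specifically, Linux does expose it directly (a minimal sketch; it needs an MPK-capable Intel CPU and a reasonably recent kernel/glibc, and the pkey calls simply fail where the feature is absent):

```c
/* Tag a page with a protection key whose write access is revoked:
 * reads keep working, stray writes take a fault. Illustrative only. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    p[0] = 'x';                                   /* writable so far */

    int pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);
    if (pkey < 0) { perror("pkey_alloc (no MPK?)"); return 1; }

    /* Bind the page to the key: the thread can read but not write it. */
    if (pkey_mprotect(p, 4096, PROT_READ | PROT_WRITE, pkey) != 0) {
        perror("pkey_mprotect");
        return 1;
    }
    printf("read still works: %c\n", p[0]);
    /* p[0] = 'y'; */                             /* would SIGSEGV now */
    return 0;
}
```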

Edit: 16-bit addressing is not available in 64-bit mode. I had read the documentation wrong, so I removed the section of my post about that.
 

bit_user

Polypheme
Ambassador
If this is just a change in basically boot / initialization, why is Intel bothering with this anyway?
Addressing modes are something the CPU needs to implement support for, in a performance-critical part of the core. I could believe there might be some slight efficiency improvements by not having to mux paths for 16-bit addressing and 32-bit mode.

Seems like a lot of trouble to “fix” something that is hardly broken / not in any desperate need of repair. You need OS buy-in,
According to them, the modes they're removing aren't even used by modern operating systems. The only buy-in would be from the UEFI firmware developers, and their response would likely be a big "thank you" for making their lives just a little bit easier.

AMD buy-in,
You don't strictly need AMD buy-in. It's up to AMD whether they want to follow.

support two different ISAs during the transition…
Not really. The motherboard firmware is what has to deal with the 16-bit -> 32-bit -> 64-bit mode transitions, and that's already CPU-specific.
 

bit_user

Polypheme
Ambassador
Personally, I lament the removal of segments from the architecture ... because I want some of it PUT BACK into 64-bit mode! Not for near/far addressing as in the old days, but for security: I think the stack should be kept in its own protected address space, accessible only through address modes with the %rsp register.
Oh, that'll never happen. It would break waaaaaaaaaaay too much software. That horse is already out of the barn. Currently, stack addresses are interchangeable with heap addresses. You can't just make them exclusive now.

To be honest, I don't think you can ever make them exclusive without breaking support for C, C++, and certain other C-like languages. Maybe you could, but then you'd have to package each pointer with some additional information to tell the recipient whether it's a stack or heap pointer. That would incur significant overhead. (See the sketch below.)
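
A sketch of what that "packaged" pointer might look like (entirely hypothetical, not any real ABI): every pointer doubles in size, and every consumer has to carry and check the tag, which is the overhead in question.

```c
/* Hypothetical "fat pointer" carrying a stack/heap tag alongside the
 * address. Every pass and dereference now involves the extra word. */
#include <stdio.h>
#include <stdlib.h>

enum region { REGION_STACK, REGION_HEAP };

typedef struct {
    void *addr;          /* the ordinary pointer */
    enum region where;   /* which address space it targets */
} fat_ptr;

static void describe(fat_ptr p) {
    printf("%p lives on the %s\n", p.addr,
           p.where == REGION_STACK ? "stack" : "heap");
}

int main(void) {
    int local = 42;
    int *heaped = malloc(sizeof *heaped);
    if (!heaped) return 1;
    *heaped = 42;
    describe((fat_ptr){ &local, REGION_STACK });
    describe((fat_ptr){ heaped, REGION_HEAP });
    free(heaped);
    return 0;
}
```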

In the last decade, there has been much work in the security research community on protecting the stack in 64-bit mode by other means, such as continuously randomising the location of the stack, or unorthodox uses of other hardware features such as MPK, CET
Shadow stack.
 
If this is just a change in basically boot / initialization, why is Intel bothering with this anyway?

Seems like a lot of trouble to “fix” something that is hardly broken / not in any desperate need of repair. You need OS buy-in, AMD buy-in, support two different ISAs during the transition…for how much gain on any relevant metric?

Intel’s standards are sometimes like that: BTX, ATX12VO, and now this. The old DNA remains.
Because x86 computers as you and I understand them* still boot as if they were an IBM 5150 PC. I believe what Intel wants to do is get away from this for a couple of reasons:
  • It simplifies the firmware and boot process. It could also simplify the OS code, depending on how much it still has to mess around with setting up the CPU mode, such as flipping a gate open to resolve the A20 line issue. Simplifying code should, in theory, lead to fewer bugs and other issues.
    • Overall, it also eases the development process for x86-based systems and the hardware itself. I'm sure AMD would also love to move away from this.
  • There's legacy hardware that, in order to support it, forced Intel (and possibly AMD) to implement workarounds, at least one of which led to a security problem. Legacy modes either have no concept of security (especially the initial mode x86 CPUs boot in) or are limited in what they can do.
    • One issue arose when Intel introduced the Advanced Programmable Interrupt Controller (APIC) to replace the original interrupt controller. But they didn't actually replace the original interrupt controller, because backwards compatibility is king in IBM PC compatible systems. The workaround to allow usage of the older interrupt controller caused a vulnerability known as the x86 Memory Sinkhole (it was fixed internally after Sandy Bridge CPUs).

* I say "x86 computers as you and I understand them" because there are x86 computers/systems that aren't IBM PC compatible: the PlayStation 4/5. You actually can't boot a bog-standard x86-64 build of Linux on at least the PS4, because the OS expects legacy hardware/features that the PS4 lacks. However, once you modify Linux not to look for those, it runs most x86 applications as if it were a typical PC (see https://www.youtube.com/watch?v=QMiubC6LdTA).
 