Mandark: And nobody cares if you play old games, go play them on an old computer.
No need!
Somebody has probably already made a 64-bit emulator and, if not, will do so soon. It kinda kills any whining.
Mandark: And nobody cares if you play old games, go play them on an old computer.
"Yes, it can improve CPUs by forgoing the x86... but also destroys decades of stuff that relies on it being that way."
Although they're not talking about breaking 32-bit apps in this article, I would point out that you can run 32-bit apps in emulation*, with little performance impact. You could even run them on ARM CPUs, since Windows 10. Legacy apps are generally more than fast enough on newer CPUs, so I don't buy the argument that those apps can't withstand a 10% - 30% performance hit, especially if doing so would make the entire CPU faster for all apps.
"The amount of time/money companies/devs would have to put into updating for only x64 is crazy high."
It's like we're all driving around in cars with a 5th wheel that hardly anybody needs or wants, just because a tiny minority still use it. Removing complexity from the instruction decoders would make them even faster and more efficient. If x86 is going to fend off RISC-V and ARM for a couple more generations, that's not an overhead that can be overlooked.
"They would need a sizeable benefit from doing it for people to justify the change."
"Change does 'need' to happen, but x64 isn't the only choice in the long-term future."
So, for fear of simplifying x86, you're going to switch ISAs entirely? How is that a better option?
VMware is a better option than VirtualBox, in my opinion: VirtualBox is limited to 256/128 MB of maximum graphics memory, while VMware Workstation supports up to 8 GB of graphics memory.
Microsoft had an excellent 16-bit virtual PC and an excellent 32-bit virtual PC - but stopped supporting them.
Oracle supports Oracle VM VirtualBox, which is almost as good - but it lacks some details of 80x86 PC support and has a very limited selection of virtual hardware components.
A good virtual PC application provides a GUI that lets you select various PC components (CPU, HDD, keyboard, mouse, video monitor, video adapter card, BIOS, motherboard, memory, etc.) and build a virtual PC in the same way you would build a physical PC.
I completely agree with Intel dropping support for legacy/obsolete CPU technology - but I would like to see the availability of excellent virtual PC apps supporting 16-bit and 32-bit PC hardware. The virtual PCs should run any hardware drivers supported on IBM, Compaq, and Dell PCs in the 1980s and 1990s.
*******************************
It just occurred to me... generally, people involved with building virtual PC software often poop out after they get much, but not all, of a PC's hardware components virtualized. This is an area where AI technology would be enormously helpful.
"Can't current chips just run emulation at enough speed?"
Me, not so much. One of the many reasons I run a PC is backward compatibility. I still run a ton of old programs, OSes (the big issue), and games. This would be a product I'd refuse to buy, because it simply wouldn't meet my computing needs. I would prefer native compute over emulation or a VM any day of the week.
"People who are not CPU engineers or software engineers, will never realize why this is such a great idea and how much time and energy it will save to ditch all that old crap. And nobody cares if you play old games, go play them on an old computer.
I cannot wait for the day when there’s no 32-bit anything left
Back in the day, IA64 didn't really stand a chance, but an idea like that could really be something now.
People don't seem to understand: by eliminating unused functions, you can eliminate tons of testing and validation, which means you can eliminate tons of cost.
What is the best part in a system? The answer is: no part. That means eliminating as many parts as possible from the system to increase quality.
Make it as simple as possible for the highest quality. Dr. Deming teaches us that.
"And nobody cares if you play old games, go play them on an old computer."
"And you can't run these apps in a VM running said OS?"
Prefer not. But I'd assume so.
"IMHO this is why it still hasn't happened."
The only stuff that it "destroys" is system software: anything that was relying on the core behavior of these modes. This only affects OS and firmware developers, who are probably eager to dump the parts of their code base that only exist to support these legacy modes.
"Yes, it can improve CPUs by forgoing the x86... but also destroys decades of stuff that relies on it being that way. The amount of time/money companies/devs would have to put into updating for only x64 is crazy high. They would need a sizeable benefit from doing it for people to justify the change."
I can think of several:
"I don't understand why 16- or 32-bit software can't be run on a 64-bit processor natively?"
They can; it's just that the operating system manages memory in ways that make it hard, if not impossible without a lot of effort, to stay compatible. The reason 64-bit Windows can't support 16-bit applications is that applications pass around "handles," unique ID numbers, to the OS and vice versa. You can see in Task Manager -> Performance -> CPU how many handles are currently referenced in the OS.
"Most 16-bit CPUs were below 33 MHz, and most 32-bit CPUs were below 2 GHz. Most current 64-bit CPUs are at 5 GHz and counting. Even if 32-bit code were run on a 64-bit CPU through some software conversion layer, with a minor performance penalty, older 32-bit programs won't ever run slower than they did on the older systems, even without hardware acceleration or extensions."
Clock speed has nothing to do with the performance of the application per se, unless the application heavily relied on the timing of specific processors (which is dumb).
"They can; it's just that the operating system manages memory in ways that make it hard, if not impossible without a lot of effort, to stay compatible. The reason 64-bit Windows can't support 16-bit applications is that applications pass around 'handles,' unique ID numbers, to the OS and vice versa. You can see in Task Manager -> Performance -> CPU how many handles are currently referenced in the OS. Originally, handles were limited to 16-bit numbers, so there could only be 65,536 unique handles. That might sound like a lot, but currently my system is sitting at around 84,000 handles, so I'd run out. Microsoft updated the handle value to 32-bit when it came to 64-bit Windows. So now you have a problem: 16-bit applications can only use 16-bit handles. Truncating the top 16 bits from a 32-bit handle won't work, because you'll be losing information."
This is a solvable problem, though you're right that it takes effort for little benefit. The solution is simply to maintain a map for each 16-bit process that allocates a local 16-bit ID for each handle it uses; the Windows API layer would then use the map to translate this to the 32-bit value when the handle is passed to the kernel or other apps. Of course, I'm assuming Windows "sees" all of the handles going into and out of an app, which would be necessary for it to maintain the mapping.
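To make that mapping concrete, here's a minimal sketch in C++ of the kind of per-process translation table described above. Everything here (names, structure) is hypothetical, not how Windows actually implements anything; it just shows that a 16-bit app could keep working with small local IDs while the kernel keeps its 32-bit handles:

```cpp
// Hypothetical per-process table translating between an app's 16-bit local
// handle IDs and the kernel's 32-bit handles. Purely illustrative.
#include <cstdint>
#include <optional>
#include <unordered_map>

class HandleMap16 {
public:
    // Hand the 16-bit app a small local ID for a 32-bit kernel handle,
    // reusing the existing ID if we've seen this handle before.
    std::optional<std::uint16_t> toLocal(std::uint32_t kernelHandle) {
        if (auto it = byKernel_.find(kernelHandle); it != byKernel_.end())
            return it->second;
        if (next_ == 0xFFFF)
            return std::nullopt;  // this one process exhausted its ~65k IDs
        std::uint16_t local = next_++;
        byKernel_[kernelHandle] = local;
        byLocal_[local] = kernelHandle;
        return local;
    }

    // Translate back when the app passes the ID into the API layer.
    std::optional<std::uint32_t> toKernel(std::uint16_t local) const {
        if (auto it = byLocal_.find(local); it != byLocal_.end())
            return it->second;
        return std::nullopt;  // the app handed us an ID we never issued
    }

private:
    std::uint16_t next_ = 1;  // reserve 0, which usually means "invalid"
    std::unordered_map<std::uint32_t, std::uint16_t> byKernel_;
    std::unordered_map<std::uint16_t, std::uint32_t> byLocal_;
};
```

A real version would also recycle IDs when handles are closed, but the point stands: each 16-bit process only ever needs to name the handles it actually touches, so the global 32-bit handle count never has to fit in 16 bits.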
"Also, from what I can tell, 'handles' aren't just a number; they may be a reference to a location in memory. 16-bit x86 applications can't address 32-bit references."
I'd guess they probably refer to a table entry, in kernel address space.
"Clock speed has nothing to do with the performance of the application per se, unless the application heavily relied on the timing of specific processors (which is dumb)."
I think @MeeLee 's point is something I was also trying to say: modern CPUs are so much faster than the CPUs for which those apps were written that the relatively small performance impact of running them in (decent) emulators should be a non-issue.
"Depending on the real estate these instructions occupy on the CPU die, it might be what x64 needs to compete with ARM and perhaps even RISC-V in the future."
First, the proposal really just talks about removing a smattering of pretty-much unused features - not actually disabling support for 32-bit code. The most consequential thing set to disappear is probably 16-bit addressing, which means you could only run 16-bit code in emulation of some kind.
Second, if you were to remove 32-bit app support, it's not the die area but the critical paths and complexity in a crucial part of the front-end that would benefit. In ARMv9, ARM deprecated ARMv7 support for similar reasons.
"The proposal does talk about a 64-bit-mode-only architecture, both the blog (which I read) and the whitepaper (which I just glanced at)."
It's the mode that they're limiting to 64-bit. That's something only the kernel sees or uses, and it's separate from 32-bit addressing. The article says:
"Using the simplified segmentation model of 64-bit for segmentation support for 32-bit applications, matching what modern operating systems already use."
"It's the mode that they're limiting to 64-bit. That's something only the kernel sees or uses, and separate from 32-bit addressing."
Eh yeah ;-) And 64-bit mode has addressing modes it supports - and stuff it doesn't.
The 8-, 16- and 32-bit register references are valid in 64-bit mode, but 32- and 16-bit addressing is not. But as I said, I have no idea how much of the die would be freed.
https://hjlebbink.github.io/x86doc/html/MOV.html is an example of what can and cannot be done in 64 bit mode for the mov instruction.
Either way, I don't think we disagree much.
"Eh yeah ;-) And 64-bit mode has addressing modes it supports - and stuff it doesn't. https://hjlebbink.github.io/x86doc/html/MOV.html is an example of what can and cannot be done in 64-bit mode for the mov instruction."
Not sure what you're referring to, since that table literally says every form of addressing is available in 64-bit mode.
"If this is just a change in basically boot / initialization, why is Intel bothering with this anyways?"
Addressing modes are something the CPU needs to implement support for, in a performance-critical part of the core. I could believe there might be some slight efficiency improvements by not having to mux paths for 16-bit addressing and 32-bit mode.
"Seems like a lot of trouble to “fix” something that is hardly broken / not in any desperate need of repair. You need OS buy-in,"
According to them, the modes they're removing aren't even used by modern operating systems. The only buy-in would be from the UEFI firmware developers, and their response would likely be a big "thank you" for making their lives just a little bit easier.
"AMD buy-in,"
You don't strictly need AMD buy-in. It's up to AMD whether they want to follow.
"support two different ISAs during the transition…"
Not really. The motherboard firmware is what has to deal with the 16-bit -> 32-bit -> 64-bit mode transitions, and that's already CPU-specific.
"Personally, I lament the removal of segments from the architecture ... because I want some of it PUT BACK into 64-bit mode! Not for near/far addressing as in the old days, but for security: I think the stack should be kept in its own protected address space, accessible only through address modes with the %rsp register."
Oh, that'll never happen. It would break waaaaaaaaaaay too much software. That horse is already out of the barn. Currently, stack addresses are interchangeable with heap addresses. You can't just make them exclusive now.
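To illustrate why that horse has bolted: in C and C++, stack and heap addresses flow through the same pointer types, so any callee may receive either, and it has no way to tell which. A trivial sketch in standard C++ (nothing hypothetical here beyond the toy function):

```cpp
#include <cstddef>
#include <cstdio>
#include <memory>

// The callee can't tell (and doesn't care) whether 'buf' is stack or heap:
void fill(char *buf, std::size_t n) {
    std::snprintf(buf, n, "hello");
}

int main() {
    char onStack[16];                            // lives on the stack
    auto onHeap = std::make_unique<char[]>(16);  // lives on the heap
    fill(onStack, sizeof onStack);               // same call works...
    fill(onHeap.get(), 16);                      // ...either way
    std::printf("%s %s\n", onStack, onHeap.get());
}
```

Putting %rsp-relative addresses in a separate protected address space would invalidate this pattern in virtually every compiled program, which is presumably why the research mentioned in the next post tries to protect the stack without changing the addressing model.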
"In the last decade, there has been much work in the security research community on protecting the stack in 64-bit mode by other means, such as continuously randomising the location of the stack, or unorthodox uses of other hardware features such as MPK and CET."
Shadow stack.
"If this is just a change in basically boot / initialization, why is Intel bothering with this anyways?"
Because x86 computers as you and I understand them to be* still boot as if they were an IBM 5150 PC. I believe what Intel wants to do is get away from this, for a couple of reasons:
Seems like a lot of trouble to “fix” something that is hardly broken / not in any desperate need of repair. You need OS buy-in, AMD buy-in, support two different ISAs during the transition… for how much gain on any relevant metric?
Intel’s standards are sometimes like that: BTX, ATX12VO, and now this. The old DNA remains.