News Intel optimizes slimmed-down X86S instruction set — revision 1.2 eliminates 16-bit and 32-bit features

The Snapdragon 8 Gen 3 has removed 32-bit support.
That's why I'll keep my 8 Gen 2 for a long time...
Intel has wanted to remove legacy x86 for years. Some day they will.
Some folks use Raptor Lake with Windows XP and a GTX 9xx-series card to keep old-school games alive.
I have a 13100T here; maybe I'll get an H-series chipset board and 4GB of DDR4 to build an old-school machine. The CPU can clock down to 500MHz without a problem. A two-watt CPU... lol
 
I honestly hope the x86 people cut the cord on 32-bit sooner rather than later.

Yes, it's great because of decades of backwards compatibility, but it's also severely limiting improvements.

Native support for 32-bit stuff will eventually die out, and the sooner the better. Same as with a bandaid:
Rip it off, let the system heal, and the people who can WILL make transition layers for those who need them.

The industry knows it can't keep the relics of the '90s alive forever. It just needs full x64 adoption, but since not one company is doing it, we keep dragging it out.
Cut the cord already.
Devs will do their job and make old stuff work regardless when they are forced to do so.
 
X86S needs to become an open ISA, with AMD, Microsoft, IBM, Chinese CPU makers, and many more on the steering group. I have long said we should be on a pure 64-bit CPU with open-source x86 emulation, and we should be looking at the next family of CPUs that removes all the problems x86/x64 has.
 
My guess is Diamond Rapids. That's also when they're bumping the Family ID from 6 (which it's been since Core 2) to 19. Given that they stayed on the same Family ID for so long, it seems like a good reason to bump it would be if you're breaking some backwards compatibility.
It just dawned on me that this means they're skipping Lion Cove for at least big Xeon.
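For anyone wondering where that Family ID comes from: it's the value software derives from CPUID leaf 1, and anything above 15 can only be expressed through the extended-family field. A minimal sketch of how the displayed family is computed, assuming GCC/Clang on x86 and their <cpuid.h> helper:

```c
#include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1 returns the version information in EAX. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;

    unsigned int family     = (eax >> 8)  & 0x0F;  /* base family, bits 11:8      */
    unsigned int ext_family = (eax >> 20) & 0xFF;  /* extended family, bits 27:20 */

    /* Per Intel's SDM, the extended field is added only when the base
       family is 0xF. Family 6 fits entirely in the base field; a family
       of 19 (0x13) would have to be encoded as base 0xF + extended 0x4,
       which naive code that reads only bits 11:8 would misreport. */
    unsigned int display_family =
        (family == 0x0F) ? family + ext_family : family;

    printf("Display family: %u\n", display_family);
    return 0;
}
```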
 

setx

The last time x86-S was in the news, it was said to remove support for 32-bit operating systems, but 32-bit applications would still work exactly the same way they do today. Did that change with this updated version of the proposal?
No, it's the same. You've got the wrong impression because the author himself doesn't understand what is going on.
 

Findecanor

At the end of the day it was actually hubris that killed Itanium and IA64.
"Something of a tragedy: the Itanium was Bob Rau's design, and he died before he had a chance to do it right. His original efforts wound up being taken over for commercial reasons and changed into a machine that was rather different than what he had originally intended and the result was the Itanium. While it was his machine in many ways, it did not reflect his vision."
-- Ivan Godard of Mill Computing (source)
 

ezst036

Intel gave up on Itanium by like the second gen. If they were at all serious, they'd have added things like out-of-order execution (which was actually possible on EPIC) and SIMD extensions.

As I think you mean, Intel did make a critical misjudgement in patenting the heck out of IA64. It meant that anyone using it was locking themselves to a single CPU supplier (Intel). The success of ARM and RISC-V further proves this point, as they're both open ecosystems where you have a lot more than just two competitors.
Agreed.

Had Itanium been open sourced on day 1, the CPU world today would look entirely different. By the second gen it was way, way too late.
 

Pierce2623

It is; there are a ton of corporations out there that are STILL reliant on their COBOL applications. A handful of them can adapt by using VMs/virtualization where it could make sense, but the rest will finally have to put their old applications out back and do the full re-write that they've been reluctant to do for all these decades.

Losing those corporate COBOL users is not as big of a threat overall as RISC is (be it ARM or RISC-V).

The world is now, finally, ready to move on and let go of x86 and that is in fact a part of Intel's crisis.
The world will simply use whichever ISA works the best. Right now x86 does things ARM can't, while it's also still perfectly possible to match ARM designs on efficiency and exceed their performance. Lunar Lake and Strix Point both turned out much better than Qualcomm's much vaunted Apple copy.
 

rtoaht

The last time x86-S was in the news, it was said to remove support for 32-bit operating systems, but 32-bit applications would still work exactly the same way they do today. Did that change with this updated version of the proposal?
You're right. Nothing changed from that. I wish the writers were more knowledgeable.
 

bit_user

X86S needs to become an open ISA,
If this ever happens, it'll follow what IBM did with OpenPOWER, which is to say they only opened it after its demise was virtually inevitable.

I have long said we should be on a pure 64-bit CPU with open-source x86 emulation, and we should be looking at the next family of CPUs that removes all the problems x86/x64 has.
The good news is that there are open source emulators of x86 that run on AArch64, RISC-V, and others.
 

usertests

For posterity I must write here that "No one will ever need more than 64 bit on a client system" just so we can come back and make fun of it in twenty years.
We kind of got that with AVX2 and AVX-512.

But we would really like to be in a situation where 4 PiB of RAM isn't enough.
 

bit_user

We kind of got that with AVX2 and AVX-512.
No, the word size isn't defined by the largest data format the processor can natively handle, or else you'd have to say that a 486 was an 80-bit CPU, due to the maximum-precision floating-point number format supported by its integrated FPU.

I'd say the word size of a CPU is defined by the largest size data type on which all core operations are supported. So, that's things like arithmetic, bit-wise ops, comparisons, and load/store. If you wanted to refine the definition further, the natural split to make would be talking about data vs. addressing. So, the old 8086/8088 supported 16-bit data types, but 20-bit addressing.

The concept of vector data types really muddles this, because your AVX2 vectors can be up to 256 bits in length, but you can't add them as two 256-bit numbers. The largest element size you can process in vector form is still just 64 bits.
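To make that concrete, here's a minimal sketch in C with AVX2 intrinsics (built with something like gcc -mavx2): the widest integer add AVX2 offers works on four independent 64-bit lanes, so a carry out of one lane never reaches the next. It is not a 256-bit addition.

```c
#include <immintrin.h>  /* AVX2 intrinsics */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Two 256-bit register values, viewed as four 64-bit lanes each.
       _mm256_set_epi64x() takes lanes from high to low, so the last
       argument is lane 0. */
    __m256i a = _mm256_set_epi64x(0, 0, 0, UINT64_MAX);  /* lane 0 = 2^64 - 1 */
    __m256i b = _mm256_set_epi64x(0, 0, 0, 1);

    /* The widest integer add AVX2 has: four independent 64-bit adds.
       Lane 0 wraps around to 0 and the carry is NOT propagated into
       lane 1, so this is not a 256-bit addition. */
    __m256i sum = _mm256_add_epi64(a, b);

    uint64_t lanes[4];
    _mm256_storeu_si256((__m256i *)lanes, sum);

    printf("lane0=%llu lane1=%llu\n",
           (unsigned long long)lanes[0], (unsigned long long)lanes[1]);
    /* Prints "lane0=0 lane1=0"; a true 256-bit add would give lane1=1. */
    return 0;
}
```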

But we would really like to be in a situation where 4 PiB of RAM isn't enough.
I wouldn't. Geez, imagine how bad web pages and internet ads would have to get for a couple dozen browser tabs to use that much physical memory!
; )
 

bit_user

The world will simply use whichever ISA works the best.
It's not based simply on technical merit, but also what's supported by the software people want to run. Different use cases prioritize performance vs. efficiency vs. hardware cost differently.

Lunar Lake and Strix Point both turned out much better than Qualcomm’s much vaunted Apple copy.
The picture is more complex than how you describe it. Lunar Lake has a distinct node advantage over the other two, so let's set that aside for now.

Even comparing just Strix Point vs. Snapdragon X, we face a question of OS support and native vs. emulation. Strix Point also adds SMT, which Snapdragon X lacks. Finally, Strix Point implements AVX-512 (although with the same limitations as Zen 4), while Snapdragon X is still an ARMv8-A design with 128-bit NEON as its only supported SIMD format. All of these factors complicate the business of trying to compare the microarchitectures on their core merits.

For Lunar Lake, Apple is truly the better point of comparison. It's on the same node, offers the same big/little configuration, and the M3 actually manages the same or better performance, by Intel's own admission.



The M4 has a slight node advantage, which should help put some distance between it and Lunar Lake.
 

truerock

There is a "here" here, but perhaps a little different than you suggest.

Intel chose Itanium over the prospect of creating an updated/extended version of x86 that was 64-bit, and now Intel is choosing the Jim-Keller-designed 64-bit x86 exclusively while jettisoning every aspect of legacy x86. EDIT: Notwithstanding this "32-bit compatibility mode"

It is the right move on Intel's part in 2024; however, it's finally a full embrace of the smackdown they received in 2003, when the Athlon 64 first hit the scene and rewrote the rules.

Itanium could've done more than it did, if only Intel had open sourced it. It's a shame really. They priced Itanium astronomically and killed it with the high cost.
The idea of Itanium was dreamed up by Intel and HP around 2000. The issue was that there were 32-bit servers that sold for $5,000 that were encroaching on IBM/HP/Sun 64-bit servers that sold for $500,000. It was very hard for IBM/HP/Sun to accept that inexpensive 32-bit servers were where the world was going and $500,000 servers were going to become niche/specialty products.

What Intel/HP overlooked was that a 64-bit processor was not that much more difficult to manufacture than a 32-bit processor (definitely not 100 times more difficult). The huge margins on 64-bit processors were obviously artificial and obviously dead going forward.

And what made it so incredibly obvious was that 32-bit numbers can only represent values up to about 4 billion.

It was a combination of greed and stupidity that created Itanium. That never bodes well for anything.
 

TheSecondPower


bit_user

I guess that means the design capabilities of Lunar Lake relative to the M3 are 5% better than I thought.
Not sure what you mean by this comment, not least because the M3 is on a worse node than you said it was. The article you linked shows N3 (i.e. N3B) as performing 10-15% better than N5. It also shows N3E performing 18% better than N5. If the M3 is made on N3B and not N3E, then it nullifies the purported node advantage you previously claimed.

Furthermore, if you're trying to do some sort of microarchitecture comparison, you'd do well to remember that the Apple M3 is still using just 128-bit NEON instructions. They've not yet embraced SVE/SVE2. This puts them at a disadvantage on vector throughput.

I'd love to see some SPECint data on the two. I expect it'd show that the M3 is indeed ahead on what best reflects their respective microarchitectural sophistication.
 
The last time x86-S was in the news, it was said to remove support for 32-bit operating systems, but 32-bit applications would still work exactly the same way they do today. Did that change with this updated version of the proposal?
It doesn't look like it - previous versions mentioned that the CPU wouldn't boot in anything other than 64-bit mode, but AFAIK that didn't mean you couldn't "downgrade" back to 32- or 16-bit mode after boot. This revision gets rid of that: it's 64-bit all the way, with 32-bit compatibility only in Ring 3.
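For the curious, that "32-bit compatibility in Ring 3" rests on a mechanism today's 64-bit OSes already use: whether user code executes as 64-bit or 32-bit is selected per code segment by the L and D bits of its GDT descriptor. A rough illustrative sketch, with the conventional flat-model descriptor values; the macro names here are mine, not from any spec:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative flat-model GDT code-segment descriptors for Ring 3, as a
   typical 64-bit OS defines them today. The sub-mode that user code runs
   in is chosen by the L bit (bit 53) and D bit (bit 54) of the descriptor. */
#define GDT_USER_CODE64 0x0020FA0000000000ULL  /* L=1, D=0: 64-bit mode        */
#define GDT_USER_CODE32 0x00CFFA000000FFFFULL  /* L=0, D=1: 32-bit compat mode */

static void describe(const char *name, uint64_t desc)
{
    int l_bit = (int)((desc >> 53) & 1);
    int d_bit = (int)((desc >> 54) & 1);
    printf("%s: L=%d D=%d -> Ring-3 code runs in %s\n", name, l_bit, d_bit,
           l_bit ? "64-bit mode" : "32-bit compatibility mode");
}

int main(void)
{
    describe("GDT_USER_CODE64", GDT_USER_CODE64);
    describe("GDT_USER_CODE32", GDT_USER_CODE32);
    return 0;
}
```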
 

RUSerious

No it won't. Only 32-bit Ring 0 is removed, not 32-bit user space.
Excellent point. Some drivers would be problematic; such is life.
They have to make tradeoffs vs. performance and power.
It's certainly on the ropes, which is why they have to do things like X86S and APX!
The front ends on modern x86-64 processors are eating more and more power - X86S would be a great way to reduce power at, or above, the same performance.

Gains for x86-64 CPUs are getting harder and harder to come by; X86S and APX can reduce the complexity of code streams through these architectural changes. Perhaps there would even be better optimizations available in compilers as well (obviously speculation).

Great points guys!