Intel Kills Itanium, Last Chip Will Ship in 2021

The biggest loss is that we never moved to a pure 64-bit uArch and are still tied down to the aging x86 design. While backward compatibility is nice to have for migration, it doesn't help us move to a pure 64-bit world. Many programs are still written in 32-bit and don't take advantage of the biggest thing 64-bit gave us: more than 4GB of total system memory.
 

joeblowsmynose

Distinguished


If the main feature of a 64-bit world is the ability to use more than 4GB of RAM (which I agree it is), and many programs still don't require more than 4GB, and if your program does need more than that, then programming it in 64-bit would be your choice (assuming disk caching can't be used due to performance restrictions), I am not sure where the loss really is ...

Would there be an advantage if we removed the 32-bit instruction sets from everything? If so, where might we find it?
 

richardvday

Honorable
That's not true.
The operating system vendors are almost unanimously killing off the 32-bit versions of their OSes anyway.
Just because an application is 32-bit does not limit the operating system to only 4GB; it is that 32-bit application that is limited in how much memory it can access.
Most applications do not need that much memory anyway, and the applications that do are written in 64-bit so they can access the memory they need.
I'm not saying we should stick to 32-bit by any means; the move to 64-bit needs to continue. By default, most companies are writing their applications in 64-bit now anyway.
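A quick illustration of that limit (a minimal C sketch of my own, not from any real program; the 5GB figure is just an example): a 32-bit build cannot even express an allocation above 4GB, no matter how much RAM the 64-bit OS underneath has.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch: why a 32-bit process tops out near 4GB even on a
   64-bit OS. size_t, the most malloc can be asked for, is only 32 bits
   wide in a 32-bit build. */
int main(void) {
    unsigned long long want = 5ULL << 30;  /* 5GB */
    if (want > SIZE_MAX) {
        printf("5GB exceeds this build's %zu-bit address space\n",
               sizeof(void *) * 8);
        return 0;
    }
    void *p = malloc((size_t)want);  /* in a 64-bit build this may succeed */
    printf("5GB allocation %s\n", p ? "succeeded" : "failed");
    free(p);
    return 0;
}
```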
 


The main advantage is a clean uArch that doesn't rely on an older one to function. x86-64 is built on top of x86, and even if all programs and OSes move to 64-bit we will still be on x86-64, still tied to an older standard.



You can still get 32-bit Windows 10 ISOs, but I was mainly talking about software. Windows 7 64-bit was smoother and faster than its 32-bit counterpart, much different from XP or Vista, where 64-bit was still a mess. But it always makes me wonder: if an OS were written purely for a 64-bit uArch, with no x86 32-bit code to pull us down, how much better would it be?
 

CerianK

Distinguished
A few thoughts:
1. Although there are some 64-bit programs that require access to more than 4GB of RAM, few of those need access to more than 4GB all at once. Most could be rewritten to use the blazing-fast paging of a modern SSD with no significant penalty, if done properly. Back when IA-64 came out, slow paging was a big impetus for expanding RAM addressing. Contrary to popular belief, almost all 64-bit-dependent operations can be broken down into 32-bit operations with minimal performance degradation (see the sketch after this list). Even 128-bit operations run extremely well on today's 64-bit CPUs, thanks to advances in both compilers and CPU micro-ops.
2. Although 64-bit OSes are getting better, there is still room for improvement. My colleagues are still amazed at how fast some of our old-and-tired 32-bit multi-threaded applications are when run under Windows 7's XP Mode. Microsoft only gave it one core to work with, and yet it is instantly responsive to most anything I click or throw at it (within reason). I wouldn't be surprised if one of the reasons its free license on 7 Pro wasn't passed on to Windows 8+ was that it was 'too good'. On the other hand, perhaps I'm the only one who has to wait 5 or 10 seconds for the Win 10 Start Menu to open on occasion, so having it open in 1/10th of a second in XP Mode sets an unreasonable expectation.
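To make point 1 concrete, here is the sketch promised above (illustrative code only; the type and function names are mine, not from any library): 64-bit addition built from nothing but 32-bit operations, the way a compiler targeting a 32-bit CPU lowers it.

```c
#include <inttypes.h>
#include <stdio.h>

/* Illustrative only: a 64-bit value held as two 32-bit halves. */
typedef struct { uint32_t lo, hi; } u64_pair;

/* 64-bit addition using only 32-bit arithmetic: add the low halves,
   detect the carry via the wrap-around, then add it into the high halves. */
static u64_pair add64_via32(u64_pair a, u64_pair b) {
    u64_pair r;
    r.lo = a.lo + b.lo;              /* may wrap modulo 2^32 */
    uint32_t carry = (r.lo < a.lo);  /* wrap-around implies a carry out */
    r.hi = a.hi + b.hi + carry;
    return r;
}

int main(void) {
    /* 0x00000001FFFFFFFF + 0x0000000000000001 = 0x0000000200000000 */
    u64_pair a = { 0xFFFFFFFFu, 1u }, b = { 1u, 0u };
    u64_pair s = add64_via32(a, b);
    printf("0x%08" PRIX32 "%08" PRIX32 "\n", s.hi, s.lo);
    return 0;
}
```

The same carry-chain trick extends to 128-bit math on a 64-bit CPU, which is part of why those operations run so well today.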
 

lsorense

Distinguished


We already had pure 64-bit architectures, like the Alpha. Of course Intel, along with incompetent management at Digital, Compaq, and HP, killed that off. It was certainly a better design than the Itanium. At least we got hyper-threading and point-to-point interconnects (like HyperTransport and QPI) out of the Alpha work, and Intel and AMD gained some good leftover engineers.

The end of the Itanium is a relief. About time. Good riddance. All it did was harm better architectures and divert resources from better things.
 

sergeyn

Distinguished


I have the impression that you have no idea what you are talking about.
 

Xajel

Distinguished


Backward compatibility has its cost. Even today you can't install Windows 98, for example, on a modern system; hell, some systems can't even start Windows XP or 2000. This is mainly due to the move from BIOS to UEFI, but even the CPUs are different now. Apple is leading the move in this regard: they stopped accepting 32-bit iOS apps a while ago, Google followed, and now Apple doesn't accept 32-bit macOS apps either. They're forcing developers to move away from 32-bit.

Both Intel and AMD slowly deprecate older instructions that are no longer required, then eventually remove them altogether. This has happened before with 16-bit and 32-bit features, starting from the OS (Windows) and then moving to the CPUs.
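That gradual path works because code already probes the CPU at runtime and falls back when a feature is absent. A minimal sketch for GCC/Clang on x86 (AVX2 is just an example feature; any other flag works the same way):

```c
#include <stdio.h>

/* Runtime feature detection: ask the CPU what it supports and pick a
   code path accordingly. This is what lets old instructions fade away
   without breaking shipped binaries. GCC/Clang builtins, x86 only. */
int main(void) {
    __builtin_cpu_init();  /* populate the feature flags */
    if (__builtin_cpu_supports("avx2"))
        puts("AVX2 present: take the vectorized path");
    else
        puts("AVX2 absent: take the plain x86-64 fallback");
    return 0;
}
```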

Any company can make a big move like this, a pure 64-bit CPU, but moving the whole world to it is very hard and costly, nearly impossible.
Itanium was good, but its performance wasn't enough to justify the move (in cost, time, and loss of backward compatibility). Its dependence on VLIW also made things harder compared to x86; x86 isn't just well known to developers, it's also easier to develop for and optimize, especially for the kind of software we're using.
Itanium ran x86/32-bit code essentially in emulation and suffered such a huge performance drop that it was slower than much, much less expensive x86 systems. While Itanium was first aimed at the enterprise market, that issue meant it would never find its way to the mainstream.

AMD64 paved the way not just for the mainstream market; it also prompted much of the enterprise/server market to move away from SPARC/POWER/Itanium to lower-cost, easier-to-develop-for x86 systems. Intel later adopted x86-64 with its own implementation (which it called Intel 64), and the x86 market, even in specialized segments, exploded.
 

lsorense

Distinguished


I would certainly hate the idea of going back to every application doing its own overlays for code and data paging, the way we used to back in the DOS days. That was part of why people started using 32-bit DOS extenders: to get flat memory maps, use more memory, and avoid that insanity.

And of course, constantly paging to an SSD would be a great way to kill its lifespan.
 

CerianK

Distinguished

A few more thoughts. As I am an assembly language programmer, my take on things might be a little different.

I was not suggesting going back to 32-bit, but hinting at why IA-64 got it wrong (i.e. pressure to move forward due to slow paging, without consideration for existing code libraries). Extended/expanded memory solutions were just a stop-gap measure that avoided the real issue with memory addressing. Aside from the memory-addressing issue, extending 32-bit code to emulate 64 bits or above is tedious (and for more advanced projects requires degrees in both mathematics and computer science).

Paging to an SSD is certainly off-limits for write-intensive operations, therefore not a viable solution for certain classes of programs.

I am pleased with the current architectural state in terms of bringing programming back to the masses, but I worry that there is not enough push in basic science education to address a potential future issue: a widespread need for quantum algorithm development (e.g. trying to learn a second spoken language after age 5-8 is difficult, though not impossible, but usually never as robust as having learned it much earlier).
 

CerianK

Distinguished


I think that is often the case with most of us.

For example, I spent weeks of additional time on a project due to the memory/speed constraints that were imposed, and eventually found a solution. Years later, I was doing some research and realized that I had re-invented sequency-ordered Hadamard matrices.
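For anyone curious what that means, here is a toy C sketch (my illustration here, not the original project code): build a Sylvester Hadamard matrix, then list its rows by the number of sign changes, which is the 'sequency' ordering.

```c
#include <stdio.h>

#define N 8  /* order of the matrix; must be a power of two */

/* Sylvester Hadamard entry: H[i][j] = (-1)^popcount(i & j). */
static int H[N][N];

/* Sequency = number of sign changes along a row. */
static int sign_changes(const int *row) {
    int c = 0;
    for (int j = 1; j < N; j++)
        if (row[j] != row[j - 1]) c++;
    return c;
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            H[i][j] = __builtin_parity(i & j) ? -1 : 1;  /* GCC/Clang builtin */

    /* Each sequency 0..N-1 occurs exactly once, so printing rows in
       order of their sign-change count yields the sequency ordering. */
    for (int s = 0; s < N; s++)
        for (int r = 0; r < N; r++)
            if (sign_changes(H[r]) == s) {
                for (int j = 0; j < N; j++)
                    printf("%2d ", H[r][j]);
                printf("  (sequency %d)\n", s);
            }
    return 0;
}
```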

Knowledge is fluid, and thus requires discussion to avoid wasting time re-inventing the wheel or chasing dead-end paths.
 

GoatGuy

Honorable
It's kind of sad, at least to me, who is old enough to have actually <b>used</b> the 8080 'original microcomputer' single-chip revolution, AND virtually every microprocessor that came after. I've had the blessing to write software for 2900 and 29000 "bit slice" chips, for the 6502, 6809, 6800, 68000, 680×0, SPARC, POWER, you name it. And even Itanium.

The thing the designers of Itanium came up with was the <b>super</b>scalar design, the VLIW or very-long-instruction-word format … that forced runtime optimizations onto the code compiler. This in turn allowed quite a bit of overlapping of code bits; the chip's ability to process multiple threads was more limited, but the ability of the compiler to produce thread code that was ridiculously efficient was equalled by none. Hence why "threading" was more a function of rubber-stamping the basic Itanium core multiple times. There's your threading.

However, it is also the case that both AMD and Intel (and others) have REALLY optimized the x?? runtime architecture to the point where radically improving the execution speed of a given segment of code … at a particular clock rate … doesn't have a whole lot of room left. Oh sure, there IS DEFINITELY room left: AMD and Intel both predict a future with even higher IPC and more cores per chip.

Sure.

But it's kind of getting architecturally boring: not much of what is being done matters, apart from fueling a huge industry of endlessly breathless architecture pundits making videos espousing the fine-point differences between Intel, AMD, and ARM … not many real-world programmers actually make programming decisions — you know, CODING decisions of process flow — based on those refined nuances of architecture.

Just saying,<br><b>Goat</b>Guy