News: Former Intel CPU architect details how internal x86-64 efforts were suppressed prior to AMD64's success

Seems like the logical path would have been to release an x86-64 chip as a bridge alongside a pure 64-bit Itanium, instead of trying to force people to be purely 32-bit or purely 64-bit.
Itanium could execute x86 programs through a translation layer, but because of that its performance was worse than the previous generation of x86 CPUs. Since nearly all programs were x86, people didn't want to buy it, and so developers didn't build programs for Itanium, making it a flop.
AMD was not invited to the Itanium effort and licensing seemed unlikely, so they made their own solution.
With x86-64 (AMD64), AMD added 64-bit instructions on top of the 32-bit instruction set, meaning the processors remained 100% natively compatible with x86. People were buying the AMD Athlon 64 simply because it was more powerful than the Athlon, even though Windows for AMD64 had not yet been published and no applications were using 64-bit instructions.
It has taken decades (starting from 2003) for 64-bit applications to take root; in Windows, the "Program Files (x86)" folder containing 32-bit applications is only now getting smaller.
 
AMD was not invited to the Itanium effort and licensing seemed unlikely,

Here is the meat of it. Remember, AMD made x86 chips on an old license (and a few court cases). It started way back when Intel couldn't make nearly enough 8088 processors to satisfy the market in the '80s.

Intel believed that Itanium, breaking away from the x86 mold, would break AMD's license and thus its ability to compete going forward. The same license, BTW, had/has a technology-sharing agreement. So Intel essentially got AMD64 for nothing.
 
Unsurprising that the x86 side of the company saw the writing on the wall, and that was likely due to how far behind schedule Itanium was.

While it didn't work out for Intel, this was probably the best route for client computing. Intel having to use AMD's x86-64 implementation led to the long-term licensing agreement between the two companies.
I saw Intel scrap numerous projects precisely because they were dead set on making them Intel-exclusive or some other hindrance for everyone else.
It shows the current problems are not a new development. They just finally caught up with them.
 
I was working at Intel (on the BX chipset, for example) during the PPro and P II period. I was personally present at the meetings held regarding the arrival of the 64-bit architecture, and most particularly the meetings between Phoenix and Intel surrounding the development of a new BIOS and the operating-system goals for these projects.

At the time, Phoenix had two teams present at the table: their 32-bit group, which essentially "ran the show," and the newly minted 64-bit group, which wielded far less power.

In those meetings I personally attended around that time, the 32-bit group wanted the 64-bit group to operate entirely separately from them. The 32-bit group wanted their 32-bit BIOS to not be affected by whatever the 64-bit group was up to (the 64-bit group was clearly younger and more junior in skills), and as a consequence, the 32-bit group's proposal was that the machine would boot either 32-bit or 64-bit. They declared that there would be no joint support in the BIOS: the two paths would be entirely independent of each other, and one or the other would be selected at boot.

There were some serious implications that took some of my heart and enthusiasm away, and I was personally both shocked as well as quite vocal at these meetings. I had no impact.
 
OMG... year 2000...

A typical 32-bit HP Windows server cost maybe $5,000
A comparable 64-bit HP-UX server cost $50,000

Gee, I wonder why Intel and HP worked so hard to kill 64-bit Windows servers?
 
In the Quora post he says nothing about compatibility; he says:



To me this says an Intel version of x86-64, not AMD64.

That's my point. How did they develop a compatible version so quickly if they already had their own? And why not try to force their own, as Intel had won in the past when developing new x86 extensions?
 
The article said:
If he "didn't stop yammering about the need to go 64-bits in x86", he'd be fired on the spot
LOL, sounds like one of my old bosses. Okay, he never threatened to fire me, but he'd ignore or dismiss a lot of my ideas without ever explaining why. The joke was on him, though - I kept my job at that place a lot longer than he did. I even got a promotion, after he was gone.

The article said:
Intel could have beaten AMD to the x86-64 punch if the former wasn't dead-set on the x64-only Itanium line of CPUs.
This was quite obvious to the general computing public, at the time. Even before AMD64.

The article said:
In the face of repeated firing threats, the knowledgeable Colwell left the gates for 64-bit in Pentium 4, but fused off the functionality to make the inevitable return to x86 easier for Intel when the time came.
Given how hot the P4 ran, and how much trouble it had achieving clock speeds that made it competitive, I'd say this was a bad call. Even worse, that extra die area wasted money. Depending on just how bad these impacts were, I'd say it could've been a fireable offense.
 
To me this says an Intel version of x86-64, not AMD64.
According to Wikipedia:

"AMD originally announced AMD64 in 1999 with a full specification available in August 2000."

Given that the first Pentium 4 launched in November 2000, it does seem like Willamette would've launched too soon to have it. Wikipedia claims:

"The first Pentium 4-branded processor to implement 64-bit was the Prescott (90 nm) (February 2004), but this feature was not enabled."

With Prescott, the pipeline length jumped from about 20 stages to 31. That longer pipeline largely offset the benefits of Prescott's doubled L2 cache and higher clock speeds. So, depending on how much of that was due to the disabled 64-bit support, I'd say he could've made a very bad call, indeed.
 
I remember reading how the Pentium 4 had a lot of things cut; it even had an L3 cache at some point in the design phase.
That there was a point where work was done on addressing more memory isn't surprising either.

Intel employs, and has employed, a lot of very talented engineers who have been held back by financial realities.
Some of those calls were understandable, like not including L3 on the Pentium 4, as it would have ballooned the die size and increased the manufacturing cost of an already large chip for its time.
Some not so much, like not offering 64-bit memory addressing, or not doing something with their ARM license as they were failing to get a grip on mobile.
 
Chris Harper states "Itanium landed with a thud in the market despite being among the first to the 64-bit punch". This is grossly inaccurate and reveals profound ignorance of computer architectures and of the computer market. Intel was actually the LAST computer company to develop a 64-bit architecture, and did so primarily because by the mid-1990s it became obvious that 64-bit architectures were a de facto requirement for competing in the server market. Among the earliest 64-bit computers were the CDC STAR in 1974 and the Cray-1 in 1975 (CDC's 1969 7600 had used 60-bit words), and all the subsequent Cray and CDC systems were 64-bit. Sun, MIPS, HP, IBM, DEC and FPS all developed 64-bit architectures and shipped actual products by 1995. The MIPS architecture in particular was perceived as a major threat to Intel because it was licensed to and widely adopted by major computer system vendors such as Pyramid, SGI and Tandem. Against this landscape Itanium was very late to the market and had deep flaws. AMD actually saved Intel's ass.
 
Intel was actually the LAST computer company to develop a 64-bit architecture, and did so primarily because by the mid-1990s it became obvious that 64-bit architectures were a de facto requirement for competing in the server market. Among the earliest 64-bit computers were the CDC STAR in 1974 and the Cray-1 in 1975 (CDC's 1969 7600 had used 60-bit words), and all the subsequent Cray and CDC systems were 64-bit. Sun, MIPS, HP, IBM, DEC and FPS all developed 64-bit architectures and shipped actual products by 1995.
Yeah, I knew the RISC CPUs had all gone 64-bit much earlier, but I didn't know just how early the big supercomputers made the jump.

We should also note that, recognizing it was running behind, Intel introduced a stop-gap measure called PAE (Physical Address Extension), which extended the physical address range of 32-bit CPUs to 64 GiB. However, each process could still only see 4 GiB of virtual address space at a time. I think it saw very little use outside of kernel space. Maybe a few database services used it, but that was probably about it.
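
For the curious: on Windows, the way a 32-bit service tapped PAE memory was the AWE (Address Windowing Extensions) API. Here's a minimal sketch of the pattern, not any particular product's code (error handling trimmed; it assumes the account holds the "Lock pages in memory" privilege):

```c
/* Sketch: a 32-bit process using AWE to reach physical RAM beyond its
 * 4 GiB virtual space. Allocate physical pages, then map subsets of them
 * into a small 32-bit virtual "window" on demand. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    ULONG_PTR numPages = 1024;                 /* physical pages to grab */
    ULONG_PTR pfns[1024];                      /* page frame numbers returned here */
    SIZE_T windowBytes = numPages * si.dwPageSize;

    /* Reserve the virtual window that physical pages get swapped into. */
    void *window = VirtualAlloc(NULL, windowBytes,
                                MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);
    if (!window)
        return 1;

    /* Allocate physical pages; with PAE these may live above 4 GiB. */
    if (!AllocateUserPhysicalPages(GetCurrentProcess(), &numPages, pfns))
        return 1;

    /* Map them into the window; remapping different page sets over the
     * same window is the "page flipping" trick 32-bit servers relied on. */
    if (!MapUserPhysicalPages(window, numPages, pfns))
        return 1;

    ((char *)window)[0] = 42;                  /* touch the mapped memory */
    printf("mapped %lu pages at %p\n", (unsigned long)numPages, window);

    MapUserPhysicalPages(window, numPages, NULL);  /* unmap the window */
    FreeUserPhysicalPages(GetCurrentProcess(), &numPages, pfns);
    return 0;
}
```

The remap call is the whole trick: the process keeps one small virtual window and swaps different physical page sets through it, which is why only things like database buffer pools ever bothered.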

Funny enough, I have an old Pentium M laptop which did not have PAE. At one point, there were special instructions for installing Linux, because the 32-bit build of the distro had defaulted to using it.

BTW, this past April Fools' Day, ChipsAndCheese took a look at the CDC 6600, which implemented support for 60-bit scalars and 18-bit addressing back in 1964. They claim it ran at 10 MHz, which PCs only matched some 20 years later!

Also, it was designed by the legendary Seymour Cray, before he founded his own company.
 
LOL. I was there too, and that's not at all what happened.

Very few computers of that era had more than 4 GB of RAM, and frankly 64-bit doesn't necessarily buy you much other than addressability of 4+ GB of memory without page-mapping tricks. It was literally something that nobody needed. But hey, people all felt that 64 was definitely better than 32, so after rolling our eyes a lot, we added it to the architecture. And then we all sat around for years waiting for the OSes and apps to actually make use of it.

It'd have been better for everyone if that software work had been jointly developed over a period of time, as it was actually needed.

Now we have all sorts of tricks happening, like kernels being compiled as a 32-bit image to save the storage cost of 64-bit addresses, which would otherwise take up too much space on disk and too large a footprint in RAM. And everyone freaks out over it not being 64-bit, even though that doesn't matter even a little bit.
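
To put a rough number on that footprint argument, here's a toy snippet; the sizes in the comments are what typical ILP32 (gcc -m32) versus LP64 (gcc -m64) builds would print, which is my assumption rather than anything measured from an actual kernel image:

```c
/* Pointer-heavy structures nearly double in size when pointers go from
 * 4 to 8 bytes, which is the disk/RAM overhead being described. */
#include <stdio.h>

struct node {
    struct node *next;   /* 4 bytes on ILP32, 8 on LP64 */
    struct node *prev;   /* ditto */
    int          value;  /* 4 bytes either way */
};

int main(void)
{
    /* Typically prints 12 when built 32-bit, 24 when built 64-bit
     * (the int gets padded out to keep 8-byte pointer alignment). */
    printf("sizeof(void *)      = %zu\n", sizeof(void *));
    printf("sizeof(struct node) = %zu\n", sizeof(struct node));
    return 0;
}
```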
 
Very few computers of that era had more than 4 GB of RAM, and frankly 64-bit doesn't necessarily buy you much other than addressability of 4+ GB of memory without page-mapping tricks. It was literally something that nobody needed.
Back in November 2002, my desktop at work was a Northwood Pentium 4 with 512 MiB of RAM. So, within an order of magnitude, and I'm not sure its RAM slots were even fully populated. IIRC, Windows limited userspace to either 2 GB or 3 GB, because the kernel claimed the top of the 4 GB virtual address space and the top bits of a pointer helped distinguish kernel from userspace addresses (not sure if this was for the sake of security, debugging, or exactly what).
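
If anyone wants to see that split first-hand on a 32-bit Windows box, a quick sketch; the boundary values in the comment are the usual documented ones, quoted from memory:

```c
/* Prints the user-mode address range. On 32-bit Windows the maximum is
 * about 0x7FFEFFFF by default (2 GB user space), or about 0xBFFEFFFF
 * when booted with /3GB and the EXE is linked /LARGEADDRESSAWARE. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    printf("user space: %p .. %p\n",
           si.lpMinimumApplicationAddress,
           si.lpMaximumApplicationAddress);
    return 0;
}
```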

64-bit addressing was a big deal for some server apps and HPC, where > 4 GB of memory was not so outlandish and programs could actually benefit from having a 64-bit address space.

Within the first couple of years of its launch, I heard the first inroads made by Itanium were people running server-side Java apps, who used it mainly for its large-memory support.
 
Here is the meat of it. Remember, AMD made x86 chips on an old license (and a few court cases). It started way back when Intel couldn't make nearly enough 8088 processors to satisfy the market in the '80s.

Intel believed that Itanium, breaking away from the x86 mold, would break AMD's license and thus its ability to compete going forward. The same license, BTW, had/has a technology-sharing agreement. So Intel essentially got AMD64 for nothing.

To understand the reasons for this, people need to understand the origin of the home PC market, OEM whitebox systems, and why the IBM PC standard absolutely dominated everyone and everything.

Kinda longish story; BLUF (bottom line up front) is that Intel was attempting some Dr. Evil shenanigans during that era.

During those early years it was very common for companies to release systems where damn near every component, interconnect, or standard was proprietary or custom. Hardware and software for manufacturer A would not work for manufacturer B, and so forth. IBM wanted to make a small desktop computer for both business and home, and had decided to ensure every component was made by two or more suppliers so that no one hardware company could ruin the platform. The memory chips, ISA bus, and pretty much everything else in the IBM PC platform used what we now call an "open architecture" and "off the shelf" parts. This was to keep costs down and prevent component monopolies from happening. The result was that they forced Intel to license a smaller chip company called AMD to make early 8088/8086 CPUs, and later 80286 CPUs. For software, IBM wrote the BIOS, but in such a way that anyone could write software for the system; they contracted Microsoft to provide some of the original software that systems shipped with.

That entire platform was extremely successful due to its crazy flexibility. The only control IBM had was that they owned the BIOS code used for boot. Of course, eventually someone was able to clean-room reverse engineer it and make an "IBM PC Compatible" BIOS, a label we still see printed somewhere on most PC components. Once that happened, the floodgates opened and people could "build" an "IBM PC Compatible" computer by just purchasing the different components from the OEMs and putting them together in a system that didn't have any IBM logos on it.

So what does this have to do with Intel during Itanium? Well, that original IBM PC platform had become mega popular, so popular that it set the standard across the entire industry, and this was a massive windfall for the primary CPU provider for that platform, Intel. Unfortunately, they had to deal with another manufacturer who could make cheaper CPUs that consumers and integrators could buy instead. This put a price ceiling on how much Intel could charge for the critical "must have" component of the open IBM PC platform. Intel sued to prevent AMD from making an 80386-compatible CPU but lost, and kept losing, because the original forced licensing agreement said AMD had rights to the x86/x87 ISA and not just to cloning older Intel CPUs. Then two other x86 providers showed up: IDT and Cyrix were able to create x86-compatible chips by reverse engineering the Intel CPUs. That is four competitors in the x86 market, which to Intel was three competitors too many. And since Intel had lost in their legal attempts to monopolize the IBM PC platform, they decided to just build another platform that was mostly compatible but one that they could control. Thus Itanium was made: a whole new CPU that, while able to emulate x86, was absolutely not x86, and instead ran natively in the now-common 64-bit mode.

Except the market really didn't like Intel doing that; everyone wanted to keep all their software running at full speed, and AMD's Athlon was very capable. Itanium was only adopted seriously by HP for their server/workstation line, and while Microsoft continued making OSes for the platform, it never gained any sort of consumer following. It's no surprise that Intel would not allow 64-bit extensions to be added to the x86 line while they were trying to push Itanium onto the market.
 
LOL. I was there too, and that's not at all what happened.

Very few computers of that era had more than 4 GB of RAM, and frankly 64-bit doesn't necessarily buy you much other than addressability of 4+ GB of memory without page-mapping tricks. It was literally something that nobody needed. But hey, people all felt that 64 was definitely better than 32, so after rolling our eyes a lot, we added it to the architecture. And then we all sat around for years waiting for the OSes and apps to actually make use of it.

It'd have been better for everyone if that software work had been jointly developed over a period of time, as it was actually needed.

Now we have all sorts of tricks happening, like kernels being compiled as a 32-bit image to save the storage cost of 64-bit addresses, which would otherwise take up too much space on disk and too large a footprint in RAM. And everyone freaks out over it not being 64-bit, even though that doesn't matter even a little bit.

64-bit was useful in the mainframe and higher-end server space because it allowed for a flat NUMA layout. Now, you are correct that having more than 4 GB on a single CPU was a bit much during the '90s, but those systems didn't use a single CPU. The way most of them worked was you had a refrigerator-sized system with different cards loaded horizontally. Each card would support two to four CPU sockets along with its accompanying memory modules. If each card had two CPUs with 1 GB of memory each, and you had eight cards, that is 16 GB worth of total system memory divided into 16 physical ranges. Trying to use page flipping would be a nightmare, so instead the core system would just lay them all out in a linear fashion and could then route each memory access request to where it needed to go based purely on the address.
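
Just to illustrate how simple that linear layout makes things, a toy model using the hypothetical numbers above (eight boards, two CPUs with 1 GiB each per board), not any specific machine:

```c
/* With a flat 64-bit-addressable physical space, finding the home board
 * for an address is plain integer division, no page-flipping needed. */
#include <stdint.h>
#include <stdio.h>

#define GIB            (1ULL << 30)
#define MEM_PER_BOARD  (2 * GIB)   /* two CPUs x 1 GiB each */
#define NUM_BOARDS     8           /* 16 GiB total, 16 ranges */

static unsigned home_board(uint64_t paddr)
{
    return (unsigned)(paddr / MEM_PER_BOARD);
}

int main(void)
{
    uint64_t addr = 5 * GIB + 12345;   /* some address in the flat space */
    printf("physical 0x%llx -> board %u of %u\n",
           (unsigned long long)addr, home_board(addr), NUM_BOARDS);
    return 0;
}
```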

You are correct that nobody at home needed that kind of stuff, but businesses absolutely did. Of course, this all changed in the mid-2000s when the Athlon 64 landed and Microsoft released Windows XP x64 (essentially a Server 2003 x64 desktop edition). It was rocky at first, but that platform was ridiculously stable; I had it back then and absolutely loved it.
 
... people could "build" an "IBM PC Compatible" computer by just purchasing the different components from the OEMs and putting them together in a system that didn't have any IBM logos on it.

... That is four competitors in the x86 market, which to Intel was three competitors too many. And since Intel had lost in their legal attempts to monopolize the IBM PC platform, they decided to just build another platform that was mostly compatible but one that they could control. Thus Itanium was made: a whole new CPU ...
Nice summary.

I just wanted to point out an interesting symmetry between what IBM tried to do with the PS/2 and what Intel tried to do with Itanium. The PS/2 introduced some proprietary formats, perhaps most notably the MCA (Micro Channel Architecture) bus. It suffered a fate similar to Itanium's: it died out and was succeeded by the open-standard PCI bus (in between, there was also VLB, the VESA Local Bus).
 
That entire platform was extremely successful due to its crazy flexibility. The only control IBM had was that they owned the BIOS code used for boot. Of course, eventually someone was able to clean-room reverse engineer it and make an "IBM PC Compatible" BIOS, a label we still see printed somewhere on most PC components. Once that happened, the floodgates opened and people could "build" an "IBM PC Compatible" computer by just purchasing the different components from the OEMs and putting them together in a system that didn't have any IBM logos on it.

So what does this have to do with Intel during Itanium? Well, that original IBM PC platform had become mega popular, so popular that it set the standard across the entire industry, and this was a massive windfall for the primary CPU provider for that platform, Intel. Unfortunately, they had to deal with another manufacturer who could make cheaper CPUs that consumers and integrators could buy instead. This put a price ceiling on how much Intel could charge for the critical "must have" component of the open IBM PC platform. Intel sued to prevent AMD from making an 80386-compatible CPU but lost, and kept losing, because the original forced licensing agreement said AMD had rights to the x86/x87 ISA and not just to cloning older Intel CPUs. Then two other x86 providers showed up: IDT and Cyrix were able to create x86-compatible chips by reverse engineering the Intel CPUs. That is four competitors in the x86 market, which to Intel was three competitors too many. And since Intel had lost in their legal attempts to monopolize the IBM PC platform, they decided to just build another platform that was mostly compatible but one that they could control. Thus Itanium was made: a whole new CPU that, while able to emulate x86, was absolutely not x86, and instead ran natively in the now-common 64-bit mode.
Not accurate. The fact that other manufacturers were allowed to build 8088 processors was certainly down to a supply agreement with IBM. But the reason the IBM PC was so successful was pretty simple: they were IBM. They were the absolute monolith of business data processing. Putting the IBM name on one of these newfangled small computers signaled the business world that these were now serious machines. Businesses didn't take the likes of Apple, Texas Instruments, Commodore, or Amiga (for crying out loud) seriously. And do not think, even for a second, that business did not drive the personal computing revolution. Consumer PCs were nothing at the time and really didn't start to gain enough buyers to be taken seriously for another decade and a half. It doesn't matter what you think of those other companies or how their technologies stacked up. It only mattered what General Motors, or Coca-Cola, or Sears thought. And they thought IBM. And they thought IBM enough to make the typewriter extinct in less than a decade.

Then the clone wars began. IBM being IBM thought, after the first sales figures came in, that maybe these little things would sell. And if only we can get those suckers to pay several thousand dollars a copy... yeah. Apple tried to make a play as well, introducing the Apple Lisa to get in on the action with a huge buy of slick advertising. THAT sucker, as depicted in the TV ad, cost $14,000. If you're going to make the typewriter extinct in less than a decade, you simply can't pay that much. It took Phoenix Technologies six months to make an absolutely, positively, will-totally-stand-up-in-court copy of the IBM BIOS, and remember, that was the ONLY intellectual property that IBM owned. Both IBM and Apple priced themselves out of the market early.

This AMD fanboi idea that AMD was there leading the way for cheaper gaming PCs back in the '80s is ridiculously stupid. They were quite happy to ride Intel's coattails and rake in the $$, and literally no one even knew who they were, other than that they owned a fab that could make the 8088.
Except the market really didn't like Intel doing that; everyone wanted to keep all their software running at full speed, and AMD's Athlon was very capable. Itanium was only adopted seriously by HP for their server/workstation line, and while Microsoft continued making OSes for the platform, it never gained any sort of consumer following. It's no surprise that Intel would not allow 64-bit extensions to be added to the x86 line while they were trying to push Itanium onto the market.
Because they believed (we don't know for sure; there was never any legal challenge) that, since the chip didn't include the original x86 instruction set, it would finally end the technology-sharing agreement from the '80s. They could make 64-bit a clean break. When AMD added AMD64 to those same x86 chips, AMD actually couldn't stop Intel from using it; the agreements went both ways. So, we all use AMD64 today no matter what brand we buy.

By the way, this is absolutely, hilariously similar to what IBM tried to do as well. They did design a new bus for the PS/2. It was even objectively superior to the original and totally owned by IBM (they would even license it to other manufacturers). But by then the clone makers had enough clout to design their own close-enough bus (EISA). Thus IBM lost the opportunity to collect a royalty on every PC made, forever (like Microsoft effectively had), and eventually sold their PC business to Lenovo.