News Pat Gelsinger's initials are etched into every 386 processor ever made — Intel CEO literally made his mark as a key CPU designer

What a shmuck! That explains it!

I never really liked the 80386.

It was too late!

It was too early!

It was incomplete!

And it was boring!

It was too late, because I had just spent all my money on an 80286 and there was just no way I could replace it with the "real thing" a year later. Soon after it came out, my local computer dealer rang the doorbell and wanted to deliver a Compaq Deskpro 386, the very first machine to ship with that CPU. Saying no has never been harder, because as it turned out my dad had ordered that machine for his business and they had mixed us up...

I sooo much wanted to say "yes, leave it here with me"...

It was too early, because I had spent a year writing a bit of software that would take my GEM applications written in Turbo Pascal and basically relink and modify them so that they ran in 80286 protected mode, returning to real mode every time a GEM drawing routine or DOS call needed to be made: it allowed "vast" amounts of data to be held in RAM (a whopping 1.5MB via an Intel AboveBoard) for the mapping-visualization software I was writing.
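For the curious, the hard half of that dance was the way back: the 80286 could enter protected mode with a single instruction, but returning to real mode required a CPU reset (typically via the keyboard controller) and a stashed resume vector. Here's a minimal C sketch of the resulting call-wrapping pattern; `enter_protected_mode()` and `return_to_real_mode()` are hypothetical stand-ins for the actual mode-switch assembly, and the whole thing is just a model of the flow, not real 286 code:

```c
#include <stdio.h>

/* Hypothetical stand-ins for the mode-switch code. On a real 80286
 * switching up is a single LMSW instruction, while switching back
 * requires a CPU reset (e.g. via the keyboard controller) and a
 * resume vector stashed for the BIOS to find after the reset. */
static void enter_protected_mode(void) { puts("  [LMSW: now in protected mode]"); }
static void return_to_real_mode(void)  { puts("  [reset + resume: back in real mode]"); }

/* A real-mode service such as a DOS INT 21h call or a GEM VDI routine. */
static void real_mode_service(const char *name) { printf("  real-mode call: %s\n", name); }

/* Every DOS/GEM entry point gets wrapped like this, so the application
 * itself can live in protected mode with more than 640KB of data. */
static void wrapped_call(const char *service) {
    return_to_real_mode();
    real_mode_service(service);
    enter_protected_mode();
}

int main(void) {
    enter_protected_mode();          /* application starts up */
    wrapped_call("GEM v_pline");     /* draw something */
    wrapped_call("DOS int 21h/3Fh"); /* read a file */
    return 0;
}
```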

With a true 32-bit machine out, that was obviously no longer going to be attractive, even if it took ages for 32-bit OSes with real graphics to appear. Even Unix was all text back then: I ran Microport System V.2 on my 80286, while my dad had gotten a proper 32-bit Unix from UnixWare that was vastly more stable.

It was incomplete, because it supported virtualization only for 8086 DOS machines! That was just unforgivable, a totally crippling and quite obviously intentional oversight. PG, I am looking at you!

I mean, the IBM 360/67 had been twenty years old by then. Everybody knew how little it takes to add virtualization capabilities to a 32-bit CPU. Still, Intel only implemented a virtual 8086 mode, which was only ever useful for a few years, and left out the real thing, evidently because they were afraid of how many fewer CPUs they were going to sell.

I rejoiced when Mendel Rosenblum and Diane Greene figured out how they could use the 80486SL's system management mode (SMM) (and binary translation) to implement a hypervisor anyway, and I immediately bought the very first VMware software release (which today would be called VMware Workstation 1.0).

And just to show their spite, Intel fired back and invalidated most of VMware's patents rather quickly when they finally implemented hardware virtualization a little later, and sponsored the adaptation of Xen to make use of it, when Xen had previously only been a paravirtualization platform.

It proved beyond doubt how intentional that feature cut had been and how mad they were at VMware for getting around their functional fence.

And PG going to VMware and Diane Greene getting fired would close the circle, if there weren't a four-year Paul Maritz tenure in between. Still, I could easily see this as late revenge...

And to be honest, the 80386 was also already boring, even against Intel's own designs. The iAPX 432 seemed so much more interesting (OK, it was a dog), but the i860 ("Cray on a chip") was sooo much cooler! People in my SUPRENUM lab at the time were working on a quad-i860 board and teaching GNU CC how to schedule loop iterations first into the SIMD register sets and then across cores for the extra-wide HPC workloads.

What I quite liked was when AMD made that boring CPU 64-bit and thus managed to starve the far more interesting Itanium into a niche. And that was mostly because the Itanium was quite simply Unobtainium for the personal computer. The bean counters eventually even managed to put that HPC CPU into (integer-only) database servers! I've always felt sorry for the engineers, but Intel got what they deserved.

I've always hated everything that was either closed or too expensive (Apple managed both after the ][), no matter what the other merits might have been: If I couldn't tinker with it, it was no good!
 
That was mine as well, and I got the math co-processor too, which was made by Cyrix...
I remember putting a system like that together for my sister. When I turned it on for the first time, black smoke started coming out from under the Cyrix coprocessor. No harm done, though. I just had to rotate the chip 180 degrees.
 
"...while today's chips have reached 3nm, and future variants will shrink to 1.4nm — and shrink even further as we move into the Angstrom era."

This is very misleading. It's extremely unlikely that we'll get below the 1nm level (which is when you really start needing angstroms). At 14A, a feature is about 7 silicon atoms wide. Seven. The smallest you can possibly get is ~2A, roughly one atom, and have it still even be silicon.

There will be no "Angstrom Era"
 
Thanks for the history.


Not exactly.
Well, the Microport Unix for the 80286 was definitely all text, and the first UnixWare releases for the Compaq 386 were as well. Thomas Roell and I exchanged a few e-mails when he was creating the 8514/A port for X11R4 while hired by Dell (https://en.wikipedia.org/wiki/Accelerated-X), while I was porting it to an EISA graphics card that was a hybrid of a VGA chip and a TMS 34020 "graphics processor" for my thesis within SUPRENUM.

At the time X11R4 only supported monochrome and pseudo-color visuals (8-bit with a 24-bit palette), and I had to rewrite it to support full true-color 32-bit pixels (8 alpha bits unused). I also had to write a Unix emulator, because the 34020 was a real CPU and was in fact running the entire X server, with syscalls being forwarded to the host CPU (an 80486 at the time) for execution on UnixWare. That again was only a development setup, because "the real thing" ran on a VME-bus variant of the GPU using a set of 68010/68020 and 68030 CPUs on a µ-kernel very similar to Mach or QNX, for which I then had to implement a minimal Unix subsystem to enable the X server and demo clients.
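To make the visual difference concrete, here's a toy C sketch (names and layout are purely illustrative, not the X11R4 internals): a pseudo-color pixel is an 8-bit index into a 24-bit palette, while a true-color pixel packs the channels directly into the 32-bit word:

```c
#include <stdint.h>
#include <stdio.h>

/* PseudoColor: the 8-bit pixel is an index into a colormap of 24-bit entries. */
static uint32_t palette[256];   /* 0x00RRGGBB entries, filled by the server */

static uint32_t pseudocolor_lookup(uint8_t pixel) {
    return palette[pixel];
}

/* TrueColor: the channels are packed straight into the 32-bit pixel;
 * the top 8 (alpha) bits stay unused, as on the 34020 setup above. */
static uint32_t truecolor_pack(uint8_t r, uint8_t g, uint8_t b) {
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
}

int main(void) {
    palette[42] = truecolor_pack(255, 128, 0);             /* orange at index 42 */
    printf("pseudo: %06x\n", pseudocolor_lookup(42));      /* 8-bit index -> 24-bit color */
    printf("true:   %06x\n", truecolor_pack(255, 128, 0)); /* direct 32-bit pixel */
    return 0;
}
```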

Since I had access to original Unix and Motif source code, I shamelessly copied large parts of the C library and everything else I might need from there. But that also meant none of my stuff could be released as open source. Linux was still in its diapers then, and open source was basically just the GNU compiler.

The 34020 was a fun chip with a "full 32-bit address space" where incrementing an address wouldn't move to the next byte, as it does on pretty much every general-purpose CPU since the PDP-11, but to the next bit! So the theoretical memory size limit was 512MByte (2^32 bits divided by 8), a far cry from the 4GByte the "32-bit" label suggests.
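A quick sketch of that arithmetic (the shift-by-3 convention here is just illustrative; the real 340x0 field addressing had more machinery around it):

```c
#include <stdint.h>
#include <stdio.h>

/* TMS340x0-style addressing: the address unit is the bit, not the byte. */
static uint32_t bit_addr(uint32_t byte_addr, unsigned bit_in_byte) {
    return (byte_addr << 3) | (bit_in_byte & 7);   /* byte -> bit address */
}

int main(void) {
    /* Incrementing the address moves to the NEXT BIT, not the next byte. */
    printf("byte 0x1000 -> bit address 0x%x\n", bit_addr(0x1000, 0));
    printf("next address -> 0x%x (still inside byte 0x1000)\n",
           bit_addr(0x1000, 0) + 1);

    /* A 32-bit bit address therefore spans 2^32 bits = 2^29 bytes = 512 MiB. */
    uint64_t bytes = (1ull << 32) / 8;
    printf("addressable memory: %llu MiB\n", (unsigned long long)(bytes >> 20));
    return 0;
}
```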

Unfortunately these TIGA cards never got to be very popular, even though the 34020 eventually got its own floating-point co-processor, the 34082, so the fact that my work never became open source wasn't that much of a loss.
I thought the i860 was pretty cool, too. SGI liked using them as geometry engines.
SUPRENUM was working on a quad-i860 design to use as a Phong-shading accelerator: the hardware was ready when I left the project; I don't know if they ever managed to get the then-current GCC 1.36 to work with that setup. I used the same compiler to cross-compile from SPARC to MC680x0 for my X work.

Costly pipeline stalls and incredible task-switch latencies killed the i860 as a general-purpose CPU, and Intel didn't learn the lesson when they created the Itanium.

Funnily enough, at the time Linus Torvalds didn't know much better, because he was using task-state segments for context switching in his initial kernels. Those look great on paper, because switching between two tasks on an 80286 or later only takes a jmp or call to a task-state segment, while the µ-code of the CPU does all the required register saves and restores: it would seem that you could fit the main functionality of a Unixoid OS on a single page of paper!

What he didn't realize, until Jochen Liedtke stuck his nose into it, is that the µ-code took vastly more time to complete a task switch than any real-world OS has to spare, e.g. to serve an interrupt. But Linus' greatest virtue is decision making, and he was happy to let others who knew how to make an OS perform replace his code, and the rest is history.
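For contrast, here's roughly what "doing it in software" looks like, as a user-space analogy using POSIX ucontext (not kernel code): the OS explicitly saves and restores only the state it needs, instead of letting TSS µ-code save everything on every switch:

```c
#include <stdio.h>
#include <ucontext.h>

/* A user-space analogy of a software context switch: instead of one
 * hardware TSS jump that micro-codes saving ALL register state, the
 * switcher saves/restores exactly what it needs, when it needs to. */
static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];

static void task(void) {
    puts("task: running");
    swapcontext(&task_ctx, &main_ctx);   /* explicit save + restore */
    puts("task: resumed");
}

int main(void) {
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link = &main_ctx;        /* return here when task ends */
    makecontext(&task_ctx, task, 0);

    puts("main: switching to task");
    swapcontext(&main_ctx, &task_ctx);
    puts("main: back, switching again");
    swapcontext(&main_ctx, &task_ctx);
    puts("main: done");
    return 0;
}
```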
 
No, the Cyrix FastMath was one of the better aftermarket FPUs, but Intel did make their own (i.e. the official 80387 or i387). We also had a Cyrix, and I remember seeing it mentioned in the POST boot sequence.

More:
I got an 80287 "for free", together with Windows 1.0, when I purchased an Intel AboveBoard (RAM expansion) card that supported both EMS and extended memory for my 80286.

And I was quite happy that my 80386 systems would support that same DIL-socket co-processor, which at 8MHz was still quite a bit faster than any emulation library on the 33MHz 80386.

But since it was a dog compared to most Unix mid-range systems or the Control Data systems my main clients were coming from at the time, I turned to Weitek co-processors to obtain acceptable performance for the Fortran weather simulations they were trying to run: the 1167 at first, and even on the 80486 the Weitek 4167 still vastly outperformed the quaint stack-architecture FPU that Intel had originally designed for the 8088/8086.

The Weitek architecture was interesting in how it kept the natural bandwidth limitation caused by 64-bit operands at bay: it mapped its (probably) 32 registers onto a 64K segment of memory and then encoded the intended operation in the address. So writing a number to address A meant something like "multiply this with the contents of register 12", while writing to address B meant "store the square root of this argument in register 11", etc.
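A tiny simulation of that trick (the encoding below is invented for illustration; the real Weitek memory map differed): the low address bits select the register, the next bits select the operation, so a single memory write carries opcode, operand, and destination at once:

```c
#include <math.h>
#include <stdio.h>
#include <stdint.h>

/* A toy model of a memory-mapped FPU in the Weitek style: the "address"
 * of a write encodes both the operation and the target register, so one
 * bus write delivers opcode + operand + destination together. */
enum { OP_LOAD = 0, OP_MUL = 1, OP_SQRT = 2 };

static double regs[32];   /* the co-processor's register file */

static void coproc_write(uint16_t addr, double value) {
    unsigned reg = addr & 0x1f;        /* low 5 bits: target register */
    unsigned op  = (addr >> 5) & 0x7;  /* next 3 bits: operation      */
    switch (op) {
    case OP_LOAD: regs[reg] = value;       break;
    case OP_MUL:  regs[reg] *= value;      break;
    case OP_SQRT: regs[reg] = sqrt(value); break;
    }
}

int main(void) {
    coproc_write((OP_LOAD << 5) | 12, 3.0);   /* r12 = 3.0       */
    coproc_write((OP_MUL  << 5) | 12, 7.0);   /* r12 *= 7.0      */
    coproc_write((OP_SQRT << 5) | 11, 2.0);   /* r11 = sqrt(2.0) */
    printf("r12 = %g, r11 = %g\n", regs[12], regs[11]);
    return 0;
}
```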

I roughly remember the 4167 being 4x faster on average than the built-in 87-type FPU of the 80486...

I honestly don't know how the Unix systems at the time managed a task switch; quite probably they just ignored the Weitek FPUs completely, which was fine as long as you never ran more than a single Fortran job at a time...

MMX aliased the stack registers of the x87 FPUs and basically just about caught up with the state of everybody else's art. It's only today that x86 FPUs no longer need to hide; at the time of the 80386, x86 FP sucked quite, quite badly.
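That aliasing is exactly why MMX code has to execute EMMS before any x87 code runs again. A small sketch with the classic intrinsics (GCC/Clang on x86; MMX is baseline on x86-64):

```c
#include <stdio.h>
#include <mmintrin.h>   /* MMX intrinsics */

int main(void) {
    /* Four packed 16-bit adds in one instruction: the MMX registers
     * MM0-MM7 physically alias the mantissas of the x87 stack registers. */
    __m64 a = _mm_set_pi16(4, 3, 2, 1);
    __m64 b = _mm_set_pi16(40, 30, 20, 10);
    union { __m64 v; short s[4]; } r = { .v = _mm_add_pi16(a, b) };

    /* EMMS: mark the aliased x87 stack as empty again. Skipping this
     * would leave the x87 tag word claiming all registers are in use,
     * and the floating-point code below could misbehave. */
    _mm_empty();

    printf("%d %d %d %d\n", r.s[0], r.s[1], r.s[2], r.s[3]);
    printf("x87/FP works again: %f\n", 1.0 / 3.0);
    return 0;
}
```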
 
MMX aliased the stack registers of the x87 FPUs and basically just about caught up with the state of everybody else's art. It's only today that x86 FPUs no longer need to hide; at the time of the 80386, x86 FP sucked quite, quite badly.
SSE was basically Intel's reset of FPU arithmetic. SSE instructions have both a scalar and a vector version, meaning you no longer needed x87 at all. One way that SSE found a little more speed was by skimping on hardware support for denormals. Denormal handling is controlled via the MXCSR register; with it enabled, denormal operands get handled by a costly microcode assist. So, you're better off just using fp64 than fp32 + denormals.
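Both points in a short C sketch (SSE intrinsics; FTZ is MXCSR bit 15 and DAZ is bit 6, the architectural flush-to-zero and denormals-are-zero controls): the same add exists as scalar `addss` and packed `addps`, and flipping FTZ/DAZ trades denormal precision for speed:

```c
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics and MXCSR access */

int main(void) {
    /* Scalar vs. packed: the same operation exists as addss and addps,
     * which is why x87 was no longer needed for fp32 code. */
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 b = _mm_set_ps(0.5f, 0.5f, 0.5f, 0.5f);
    float scalar[4], packed[4];
    _mm_storeu_ps(scalar, _mm_add_ss(a, b));   /* only lane 0 is added  */
    _mm_storeu_ps(packed, _mm_add_ps(a, b));   /* all four lanes added  */
    printf("addss: %g %g %g %g\n", scalar[0], scalar[1], scalar[2], scalar[3]);
    printf("addps: %g %g %g %g\n", packed[0], packed[1], packed[2], packed[3]);

    /* Denormal handling lives in MXCSR: FTZ (bit 15) flushes denormal
     * results to zero, DAZ (bit 6) treats denormal inputs as zero.
     * With both set, the slow denormal path is never taken. */
    volatile float tiny = 1e-40f;              /* a denormal in fp32 */
    printf("denormals on: %g * 0.5 = %g\n", tiny, tiny * 0.5f);
    _mm_setcsr(_mm_getcsr() | 0x8040);         /* set FTZ | DAZ */
    printf("FTZ/DAZ set:  %g * 0.5 = %g\n", tiny, tiny * 0.5f);
    return 0;
}
```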
 