News Linux takes 4.76 days to boot on an ancient Intel 4004 CPU — CPU precedes the OS by 20 years

bit_user

Titan
Ambassador
Very misleading article. The 4004 isn't running Linux; you need at least a 386 for that. What it's actually doing is running an emulator (one that does far more than the C-compiler support the article claims), and that emulator is running Linux.

This is akin to other tricks we've seen, where people run some heavyweight OS on something like the Raspberry Pi Pico, also inside of an emulator. These simple processors lack the sophisticated hardware features needed to run a complex OS. The only way you can do it is by having them emulate a much more capable CPU.

That obviously comes at the expense of a lot of additional performance, which is how we end up with the astonishingly bad execution times quoted in this article.

I'm sure they also had to use some sort of hack to give it access to enough memory to run an OS like Linux. That comes at an additional performance hit, since it likely means an I/O request for each memory read/write, each of which must be broken into multiple steps so the entire 32-bit address can be written out. Keep in mind that a simple, ancient CPU like the 4004 had no caches, either.

The article said:
it also demonstrates Linux’s flexibility.
No, not that flexible!
 
Last edited:

tony-w

Commendable
Oct 18, 2021
4
7
1,515
I'm amazed that someone could put enough effort into all the emulation and memory paging required to make a 4-bit Intel 4004 do this. The chip was initially developed to run a calculator. It can only directly address 1,280 4-bit RAM locations and 32,768 bits of ROM, equivalent to 4,096 bytes. Just about all the code running directly in the CPU's address space must have been dedicated to paging memory in and out.

It does make me wonder - WHY
 

bit_user

Titan
Ambassador
I'm amazed that someone could put enough effort into all the emulation and memory paging required to make a 4 bit Intel 4004 do this.
Yeah, the real heroes of this piece are the MIPS emulator that's small enough to fit on the 4004 yet complete enough to boot Linux, and the extended-memory hack that let enough RAM be connected.

The chip was initially developed to run a calculator. It can only directly address 1,280 4 bit RAM locations and 32,768 bits of ROM equivalent to 4,096 bytes.
Thanks for the specs. I was too lazy to look them up myself. I'd guessed it must be something tiny. So, 640 bytes of RAM? If you're used to thinking in such small quantities, I can see why 640 kB might indeed seem like enough for everybody! ;)

Just about all the code running directly in the CPU address space must have been dedicated to paging in and out memory.
It had no paging mechanism, as far as I know. That doesn't mean the emulator couldn't have introduced its own form of soft paging, but the hardware wouldn't be helping you out with that.

It does make me wonder - WHY
Art?

It does give me some hope that post-apocalyptic peoples could one day boot early Linux distros they find amidst the rubble, even if the only functional computers they have by then are 1970s-era machines or equivalent. To be honest, I don't really know how a Linux image would even survive in a format they could read. CD-ROM was 1980s technology, and I'm not sure how long even pressed CDs would last.
 
  • Like
Reactions: tony-w

bit_user

Titan
Ambassador
intel ... When I was studying at TUM in Germany, back in the day, we never had a single Intel chip in the computer lab; ALL were Sun stations running Unix, which led to Linux ... Intel was nothing back then ... how things changed ...
Well, the 386 built on foundations laid in the 286 to provide the memory protection and virtual memory that server and workstation operating systems demanded. The 486 was the first to integrate the FPU, and the Pentium made x86 genuinely somewhat competitive with RISC CPUs, eroding the last major advantage they had. Pentiums were also the first to support multi-CPU cache coherency, enabling up to quad-CPU machines.

Even so, up to the late 90's, many university CS departments looked down on PCs and preferred to keep using Sun and SGI workstations. Probably the beginning of the end was when SGI released its first x86 workstation, back in '98 or '99.

I think Microsoft also helped turn the tide by donating PCs (running Windows, of course) to some bellwether colleges and universities. They were also influential by means of fellowships. I used to work with a guy who interned at Microsoft Research for three summers while he was in grad school. I don't know how he felt about Microsoft before that (I didn't know him back then), but he would certainly sing their praises afterwards.
 

tony-w

Commendable
Oct 18, 2021
4
7
1,515
Back in the late 70s, when I was responsible for developing my first data-communications product, I used a Motorola 6809 processor. I remember how great it was to have the luxury of 8K bytes of UV-erasable EPROM and 8K bytes of static RAM to play with, compared to "earlier" processors like the 4004 or 8008. (The 6809 could address more RAM and ROM, but memory was so expensive back then that we could not afford to put more in our products.) It was amazing what you could achieve with so little RAM and ROM, although there was a lot of effort put into looking at how many clock cycles each instruction took to get the best overall performance.
 
Last edited:

bit_user

Titan
Ambassador
It was amazing what you could achieve with so little RAM and ROM, although there was a lot of effort put into looking at how many clock cycles each instruction took to get the best overall performance.
In modern communications equipment and certain embedded applications, there's still a lot of cycle-counting. Back in the early 2000's, I worked on a layer 2 networking product that had to maintain line rate. At the fastest supported speeds, we had just a couple dozen cycles to handle each piece of data coming in. There were DMA engines that handled the actual data movement, but the firmware had to decide what to do with it.

When you're running that close to the limit, there's no headroom for high-level languages or a proper OS of any sort. It was all assembly language.
 
  • Like
Reactions: Sluggotg and tony-w

ottonis

Reputable
Jun 10, 2020
188
170
4,760
Very misleading article. The 4004 isn't running Linux. You need at least a 386 for that. [...]
Thanks a lot for this very interesting and useful explanation! Emulating a CPU complex enough to run a modern OS is something most very old CPUs can only manage in extreme slow motion.