One interesting thing about The Mill's Belt is that it does not actually exist: it is mostly an abstraction used for instruction encoding.
Each result is stored in its execution unit's output latches, and when that unit executes a new instruction the old output is lost. The length of the belt corresponds to the issue width of the machine.
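To make that concrete, here is a minimal Python sketch of the programmer-visible belt semantics. The belt length and the operation names are illustrative assumptions, not taken from any real Mill family member, and it only models what the instruction encoding sees: operands are addressed by temporal position rather than by register name, and every new result pushes the oldest one off the end. As said above, real hardware keeps the values in the execution units' output latches rather than in any actual queue.

```python
from collections import deque

BELT_LENGTH = 8  # assumption: actual belt lengths vary per family member

# Oldest entries silently fall off the end when the deque is full.
belt = deque(maxlen=BELT_LENGTH)

def drop(value):
    """An operation 'drops' its result at the front of the belt."""
    belt.appendleft(value)

def b(pos):
    """Read an operand by temporal position: b(0) is the newest result."""
    return belt[pos]

# 'add r3, r1, r2' on a register machine becomes roughly 'add b(1), b(0)':
drop(2)            # some earlier result
drop(3)            # a newer result, now at b(0); the 2 moved back to b(1)
drop(b(1) + b(0))  # 'add' reads two positions and drops 5 at the front
print(b(0))        # -> 5, and no destination register was ever named
```

The encoding win shows in the last lines: an operation never names a destination, so those bits simply disappear from the instruction format.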
Itanium was supposedly based on work by
Bob Rau, who died prematurely.
Ivan Godard (who is Mill Computing's main compiler guy and public face) has described The Mill as close to what Intel might have come up with had Bob Rau's vision been kept intact and had they iterated a few times before release.
I've watched all of Ivan's lectures end-to-end and they were absolutely fascinating, while also a pleasure to watch because Ivan speaks one of my favorite languages so beautifully.
And at first glance the Belt is incredibly elegant and convincing: clearly a much better way to do general-purpose compute than the conventional approaches.
So why does nobody care?
I've asked around the HPC crowd I occasionally mingle with, and even tried to get a word on it from Calista Redmond when I met her in Grenoble a few years ago: they had clearly never heard of him or the Belt. But RISC-V people are all sold on general-purpose CPUs being a dead horse: it's all in the
special-purpose accelerators and extensions.
The Belt is all about doing general-purpose compute with far fewer transistors. But unfortunately there is no longer any scarcity of transistors, unlike in the DSP days when he formed his mindset. In fact, CPU designers gain so many transistors with every shrink that they struggle to turn them into any kind of value.
Of course you could use those transistors to multiply the number of cores. But getting value out of 255 general-purpose cores on a single chip isn't that easy. Even the hyperscalers can't think of any great way to have them
cooperate on anything; instead they have them serving an ever greater number of separate customers.
Some people would call z/Architecture an incarnation of the IBM System/360. The latest machine, the
z17 with its "Telum II" CPU, was released this year.
Oh yes, z/Arch is living proof of how the 360 lives on, and it is the closest equivalent to x86 in terms of evolutionary staying power.
Except, of course, that Gene Amdahl & Co. actually designed the 360 to be forward-looking and modular: instructions could be implemented in microcode on smaller machines and in hardware on bigger ones. x86 (8086), by contrast, started as a hack on a hack (8080) on a hack (8008) on a hack (4004), and was arguably never designed at all.
But I guess the forward-looking parts of the 360 sat at such a primitive level that they never had a negative impact on the evolution of the ISA. I've never looked in detail at, e.g., the instruction encodings and formats, but I'm pretty sure leftovers like BCD arithmetic would no longer be useful, and whole addressing variants would be useless today.
So yes, if it weren't for IBM's lawyers, x86 might have died long ago and PCs might be running z.
LoongArch has no delay slots and didn't adopt register windows. It dropped further MIPS quirks and was 64-bit (ready) from the start, but the MIPS legacy still shows in some of its behaviour and specification.
Imagination sold MIPS. It was then resold, and after a restructuring the new owner adopted the name MIPS.
But yes, that company ditched the MIPS architecture to design RISC-V cores.
I never dipped deeply into LoongArch, but I can appreciate the attraction from the Chinese perspective.
I believe MIPS was very popular in textbooks, so whole generations of CPU design engineers were raised on it. And then there are things like compilers and OSes ready to use and play with.
Finally: no lawyers chasing after you as long as you don't try to sell to Western markets.
RISC-V now has all those advantages and then some. So with it being the most widely used ISA in teaching and being so accessible, what remains of MIPS probably has little room to grow, unless lawyers and politicians change the rules.