News: Intel to Explore RISC-V Architecture for Zettascale Supercomputers

Interesting. I have to say it will be quite a long time before x86 is replaced. In any case, pure x86 doesn't exist anymore today. When a CPU receives an x86 instruction, it breaks it down into simpler proprietary instructions for faster execution (AMD and Intel each have their own implementations). Only a few complex instructions still have to rely on the slow microcode ROM.

We are already using Arm and x86 interchangeably without realizing it. Phones can link with our laptops via Wi-Fi/Bluetooth/USB and communicate even though the two use different OSes and CPUs. Android and iOS devices use Arm-based CPUs while laptops are x86.

So now it's more of an interface issue than a processor architecture issue.
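
A minimal sketch of that point, assuming nothing beyond the Python standard library: two machines can interoperate as long as they agree on the bytes crossing the interface, regardless of which ISA either end runs.

```python
import struct

# A tiny wire format: message type (1 byte) and payload length
# (4 bytes, big-endian "network byte order"), then the payload.
# Neither side needs to know the other's CPU architecture; they
# only need to agree on this byte layout.

def encode(msg_type: int, payload: bytes) -> bytes:
    return struct.pack("!BI", msg_type, len(payload)) + payload

def decode(data: bytes) -> tuple[int, bytes]:
    msg_type, length = struct.unpack("!BI", data[:5])
    return msg_type, data[5:5 + length]

# Behaves identically on an Arm phone and an x86 laptop, because
# "!" forces big-endian regardless of the host CPU's endianness.
packet = encode(1, b"hello from any ISA")
assert decode(packet) == (1, b"hello from any ISA")
```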
 
This is even true on backends. Several software vendors release and run their software on both x86 and Arm hardware. It will take more time to trickle down into the consumer desktop/laptop space, but I believe CPU-architecture-agnostic software will come to everything eventually.
 
The whole concept of "run your software on anything" dates back to at least the late 70s and early 80s. Every home computer had a BASIC interpreter, and from what I can gather, most of the major ones had similar enough keywords that you could conceivably take a BASIC program from one computer and plop it onto another, barring of course memory-related commands like POKE and PEEK.

As long as we're talking about application-level software and the programming languages made for it, such applications are already CPU agnostic. However, if we're talking about systems-level software, then you can't really escape the nuances of the hardware you have until we unify on a single ISA.
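
A quick sketch of that divide, using Python purely as an illustration: the application-level line behaves identically everywhere, while the lines below it expose properties of the host ISA and ABI.

```python
import platform
import struct

# Application level: identical output on x86 and Arm, because the
# language runtime hides the ISA entirely.
print(sum(x * x for x in range(10)))  # 285 everywhere

# One layer down, the ISA leaks through.
print(platform.machine())    # e.g. 'x86_64' on a PC, 'arm64' on an Apple M1

# Systems-level details like native pointer size are properties of
# the hardware/ABI, not of the program.
print(struct.calcsize("P"))  # native pointer size in bytes
```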
 
> Intel's Ponte Vecchio is stated to have 100B transistors.
>
> How many RISC-V cores could be built with 100B transistors?

Depends on how many bells and whistles you want each core to have. You could probably fit thousands into that budget, but only if you restricted each core to a basic instruction decoder, in-order execution, one ALU, one AGU, one FPU, and just enough cache to make a mainstream Intel part look like a data vault.

... And then you'd basically have a RISC-V GPU at that point.
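
For fun, a back-of-envelope version of that estimate; every number here is a rough assumption, not a spec:

```python
# Back-of-envelope only; all figures below are guesses for illustration.
TOTAL_TRANSISTORS = 100e9     # Ponte Vecchio's stated budget

CORE_LOGIC = 500_000          # assumed transistors for a minimal in-order
                              # RISC-V core (decoder, 1 ALU, 1 AGU, 1 FPU)
SRAM_T_PER_KB = 1024 * 8 * 6  # ~6 transistors per SRAM bit
CACHE_KB = 16                 # a deliberately tiny per-core cache

per_core = CORE_LOGIC + CACHE_KB * SRAM_T_PER_KB
print(f"per core: {per_core / 1e6:.2f}M transistors")
print(f"raw cores in budget: {TOTAL_TRANSISTORS / per_core:,.0f}")
```

With these guesses the raw count lands in the tens of thousands; once you spend transistors on the interconnect, memory controllers, and anything resembling a realistic cache, "thousands" is the right ballpark.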
 
> Interesting. I have to say it will be quite a long time before x86 is replaced. In any case, pure x86 doesn't exist anymore today.

Actually, on the Atom-based architectures, they directly execute x86 instructions a lot of the time.

The whole shebang about x86 vs ARM isn't as important today. What matters more is the execution of the companies and teams involved.

If x86 vs ARM were the only issue, why is Apple dominating the rest on the GPU side as well? Intel/AMD/Nvidia aren't using x86 GPUs, are they?
 
All x86 CPUs still receive and execute x86 instructions; that hasn't changed since the beginning. The difference is that every x86 CPU now has its own instruction decoders, which break the x86 instructions down into its own internal micro-ops for faster execution. These micro-ops are proprietary and not x86.

Of course, the decoders can't translate every x86 instruction directly; they handle only the more commonly used ones. Certain instructions still have to rely on the microcode ROM.
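
A conceptual sketch of those two paths; the micro-op names are invented, since real micro-ops are proprietary and undocumented:

```python
# Conceptual model only: real micro-ops look nothing like this.
# The point is the two paths: hardware decoders handle common
# instructions directly, while rare or complex ones fall back to
# a slower sequence streamed out of the microcode ROM.

FAST_PATH = {
    # x86 instruction  -> invented internal micro-ops
    "ADD reg, reg":       ["uop_add"],
    "ADD reg, [mem]":     ["uop_load", "uop_add"],
    "PUSH reg":           ["uop_sub_sp", "uop_store"],  # hypothetical split
}

def microcode_rom_lookup(insn: str) -> list[str]:
    # placeholder for a long ROM-driven uop sequence
    return [f"ucode_step_{i}" for i in range(8)]

def decode(insn: str) -> list[str]:
    if insn in FAST_PATH:              # common case: a few uops per cycle
        return FAST_PATH[insn]
    return microcode_rom_lookup(insn)  # slow path, e.g. legacy string ops

print(decode("ADD reg, [mem]"))  # ['uop_load', 'uop_add']
print(decode("REP MOVSB"))       # long microcoded sequence
```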

By the way, a GPU is neither x86 nor RISC-V; it uses a different architecture that is independent of the CPU. GPUs are compatible with all kinds of CPUs. It's the drivers and firmware that act as a bridge, allowing the GPU to communicate with the CPU.

Your Windows, macOS, or Linux system does not talk directly to the GPU; it talks to the drivers. The drivers then act like a "decoder", which in turn tells the GPU what to do. This is why drivers are so critical to GPU performance.
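
A toy sketch of that layering, with every name invented for illustration: the application talks to a portable API, and only the driver knows the vendor-specific command format.

```python
class GPUDriver:
    """Hypothetical vendor driver: the only layer that knows the
    GPU's actual command format."""
    def translate(self, api_call: str, args: dict) -> bytes:
        # a real driver would build hardware command buffers here
        return f"{api_call}:{args}".encode()

class GraphicsAPI:
    """What the OS and application see (think Direct3D/Vulkan/Metal)."""
    def __init__(self, driver: GPUDriver):
        self.driver = driver

    def draw(self, vertices: int) -> None:
        packet = self.driver.translate("DRAW", {"verts": vertices})
        self.submit(packet)

    def submit(self, packet: bytes) -> None:
        print("queued for GPU:", packet)  # stand-in for a DMA ring buffer

api = GraphicsAPI(GPUDriver())  # another vendor = another GPUDriver;
api.draw(vertices=3)            # the application code stays unchanged
```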
 
> All x86 CPUs still receive and execute x86 instructions; that hasn't changed since the beginning. The difference is that every x86 CPU now has its own instruction decoders, which break the x86 instructions down into its own internal micro-ops for faster execution. These micro-ops are proprietary and not x86.

Yes, I know. Atom-based cores don't have to do that for most instructions.

ARM also has to deal with decoders and has a uop cache.

> By the way, a GPU is neither x86 nor RISC-V; it uses a different architecture that is independent of the CPU. GPUs are compatible with all kinds of CPUs. It's the drivers and firmware that act as a bridge, allowing the GPU to communicate with the CPU.

Again, I know, and it doesn't matter as much as people think. People arguing about x86 vs ARM don't realize that Apple beats everybody in GPUs too, which are a completely different category. x86 vs ARM just gives them an excuse to rely on as a crutch. Some guys and teams simply do better than others, that's all.

There's no such thing as an apples-to-apples comparison in the real world, because everything is different.