News: Jim Keller joins ex-Intel chip designers in RISC-V startup focused on breakthrough CPUs

How does RISC-V compare to CISC/X86-X64 and ARM flavors in terms of :

code size
licensing
performance
power consumption
suitability as the foundation for mobile / portable devices (silicon size I guess is what's important here)...

I just don't know much about it, nor have I had any experience with it other than the old days of extracting data sets off of Sun Microsystems machines running RISC CPUs and Unix.
 
The license is free, and all the other factors depend on the design of the CPU, which is why everybody tries their own thing: they all make the design that best fits what they want to do.
It's not like Intel and AMD, where you have specific CPUs to talk about; anybody can design anything they want and produce it anywhere they want.
 
The article said:
The company indicates that one of its goals is to develop CPU cores that excel in per-core performance, which is a focus of virtually all CPU developers today, including Apple, AMD, Intel, and Qualcomm.
Not Ampere Computing. They're focusing on density-optimized CPUs for cloud applications. They're not exactly "little" cores, but more like "medium" cores.
 
I can't wait to see some RISC-V CPUs with good performance.

This is the best ISA in terms of both technical design and licensing.

I want my PC to run on it.
 
How does RISC-V compare to CISC/X86-X64 and ARM flavors in terms of :
Those categories are too broad. In all of them, you can find examples of big and small cores that support either a large or small subset of the total ISA.

code size
While not strictly code size, here are 3-way instruction count comparisons of two different workloads.

Please keep in mind that neither of the RISC-V cores they investigated supports the vector extension. They're just one generation too early for that, which explains why the instruction counts are so vastly higher for x264, in particular. I'm showing these mainly for what they tell us about how baseline x86-64 compares with ARMv8-A (both of which have been superseded, BTW).

https://substack-post-media.s3.amazonaws.com/public/images/b9752540-b7e5-478e-a5c2-3cde4b525494_1483x773.png

https://substack-post-media.s3.amazonaws.com/public/images/1548894a-4da3-47da-b7e3-df177fed5c13_1319x732.png
Source: https://chipsandcheese.com/p/a-risc-v-progress-check-benchmarking
Lower is better. It's pretty intriguing what an advantage ARM has here, considering that ARM CPUs also tend to have wider front ends.
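If you want to reproduce this kind of dynamic instruction-count comparison yourself, here's a rough sketch using Linux perf (assuming perf is installed and hardware counters are available; the gzip command below is just a placeholder workload, not the benchmarks used in the article):

Code:
import subprocess

# Count retired instructions for a command with Linux perf.
# With -x, perf writes machine-readable CSV counter lines to stderr:
#   value,unit,event-name,...
def instruction_count(cmd):
    result = subprocess.run(
        ["perf", "stat", "-x,", "-e", "instructions", "--"] + cmd,
        capture_output=True, text=True)
    for line in result.stderr.splitlines():
        if "instructions" in line:
            return int(line.split(",")[0])
    return None

# Placeholder workload; build/run the same program on each ISA and compare counts.
print(instruction_count(["gzip", "-9", "-kf", "somefile"]))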

licensing
x86 cores can only be made by Intel, AMD, and VIA/Zhaoxin.

ARM can be designed by anyone who obtains an architectural license from ARM, or you can license pre-designed cores from ARM and integrate them into your own SoC.

RISC-V can be designed by anyone, without the need for an architectural license. You can also license pre-designed cores from SiFive and many others.

performance
There aren't yet any examples of high-performance RISC-V cores in the wild. There are some server CPUs, such as those designed by Ventana, which should be launching soonish, IIRC.

So far, most RISC-V activity has been in the area of embedded computing.

power consumption
All three are all over the map. You have to narrow down to specific implementations.

suitability as the foundation for mobile / portable devices (silicon size I guess is what's important here)...
ARM currently dominates mobile. ARM and RISC-V mostly dominate IoT/embedded. Look for RISC-V to make major inroads into mobile, soon.
 
x86 cores can only be made by Intel, AMD, and VIA/Zhaoxin.
Anybody can make x86 cores; the companies you list just have additional IP that is needed to make a competitive x86 core.
That's why we had something like a dozen companies making pre-Pentium CPUs, and why it was called "IBM and compatibles". The trick is making a core that is actually useful today, or even competitive with a Pentium, which is why that was the cut-off point for all but the two that remain.
https://en.wikipedia.org/wiki/X86
Open: Partly. For some advanced features, x86 may require license from Intel, though some do not need it;[citation needed] x86-64 may require an additional license from AMD. The Pentium Pro processor (and NetBurst) has been on the market for more than 21 years[1] and so cannot be subject to patent claims. The i686 subset of the x86 architecture is therefore fully open. The Opteron 1000 series processors have been on the market for more than 21 years[2] and so cannot be subject to patent claims. The AMD K8 subset of the x86 architecture is therefore fully open.
 
ARM currently dominates mobile. ARM and RISC-V mostly dominate IoT/embedded. Look for RISC-V to make major inroads into mobile, soon.

Is the IoT still a thing? I thought it was supposed to be the next "big thing"? You know, like 3DTV, VR, and currently AI (which seems to be a thing).

Anybody can make x86 cores; the companies you list just have additional IP that is needed to make a competitive x86 core.
That's why we had something like a dozen companies making pre-Pentium CPUs, and why it was called "IBM and compatibles". The trick is making a core that is actually useful today, or even competitive with a Pentium, which is why that was the cut-off point for all but the two that remain.
https://en.wikipedia.org/wiki/X86

Good luck with that.
 
Good on Jim Keller! Nice to have a major player in the industry on board to move things forward with RISC-V.

As for Ms. Marr, another mixed bag: Pentium 4 started out terrible (and didn't end great, either), Haswell was meh, and Ice Lake was interesting in that it was fabbed on Intel 10nm (+, even though they removed that later, lol) while its desktop counterparts were still on 14nm+++++++++++.

Successes worth mentioning would be Pentium III (aside from the errata fiasco) and Nehalem's introduction (as mentioned in the article). HyperThreading is great and all too... just don't mention P4. 😛

Speaking of Intel: they're STILL looking for a permanent CEO? None too optimistic to jump aboard the 18A train?
 
Anybody can make x86 cores; the companies you list just have additional IP that is needed to make a competitive x86 core.
That's why we had something like a dozen companies making pre-Pentium CPUs, and why it was called "IBM and compatibles". The trick is making a core that is actually useful today, or even competitive with a Pentium, which is why that was the cut-off point for all but the two that remain.
https://en.wikipedia.org/wiki/X86
So we could technically make K8 clones? That's legitimately interesting; not necessarily hugely useful, but very interesting. Potential K8 clones with hyperthreading in the future? I know that would be a drastic core redesign, but it would be super neat.
 
Is the IoT still a thing? I thought it was supposed to be the next "big thing"? You know, like 3DTV, VR, and currently AI (which seems to be a thing).
Have you been sleeping under a rock 😅? The IoT market is massive and there are many applications: any time you want something "unheavy" done over long periods of time, IoT does it for you. The networks have also been massively upgraded in recent years.
 
Is the IoT still a thing? I thought it was supposed to be the next "big thing"? You know, like 3DTV, VR, and currently AI (which seems to be a thing).
Just walk into an appliance store and look at how many of them now have a phone app for talking to them. In theory, they could use bluetooth or NFC, but the ones I've seen tend to be wi-fi enabled and probably also talk to the cloud.

Outside the home, there are lots of cloud-connected smart sensors being deployed for different purposes and use cases.

So, I'd say IoT did happen. Maybe it's been over-hyped, but that's different than saying it didn't happen.
 
Just walk into an appliance store and look at how many of them now have a phone app for talking to them. In theory, they could use bluetooth or NFC, but the ones I've seen tend to be wi-fi enabled and probably also talk to the cloud.

Outside the home, there are lots of cloud-connected smart sensors being deployed for different purposes and use cases.

So, I'd say IoT did happen. Maybe it's been over-hyped, but that's different than saying it didn't happen.
Might be, from a consumer perspective. But that is mostly because much of it is transparent to the user as long as everything is working correctly. Of course it's a bit different now that every device has a NIC, which is pretty exposed to the user, but there is a lot of transparent stuff going on. They even have their own network protocols for ultra-low power draw, which I'm sure you know already.
 
Good on Jim Keller! Nice to have a major player in the industry on board to move things forward with RISC-V.
Tenstorrent has already added their own range of RISC-V cores since he became CEO, which raises lots of questions about this move and what implications it might have for their IP.

As for Ms. Marr, another mixed bag ...
It's hard to say how much influence any one person had over the whole project. Without more insight, I wouldn't hold it against anyone that worked on a given project.

just don't mention P4. 😛
Some aspects of it were quite cutting-edge. Just because they got unlucky with Dennard Scaling doesn't mean there wasn't anything redeeming about it.

"... it’s wrong to write off Netburst as just a failure. Some of the fundamental ideas behind the architecture were definitely flawed. Emphasizing clock speed at the cost of extreme penalties for “corner cases” that aren’t actually uncommon turned out to be a pretty bad strategy. But Netburst served as a learning platform for Intel. The company implemented a variety of new microarchitecture techniques for the first time, and saw how they performed in practice. They figured out what worked and what didn’t. They took years to gather data and tune novel features like HyperThreading.

Going through Netburst’s architecture again, we can see how the good parts came back to enable Sandy Bridge’s success."

https://chipsandcheese.com/p/intels-netburst-failure-is-a-foundation-for-success
 
Is the IoT still a thing? I thought it was supposed to be the next "big thing"? You know, like 3DTV, VR, and currently AI (which seems to be a thing).



Good luck with that.

Bruh, what do you think those Ring doorbells are? What do you think unlocks people's cars when they use a phone app?
 
How does RISC-V compare to CISC/X86-X64 and ARM flavors in terms of :

code size

This is the one in which facts are unequivocal. In 64 bit ISAs RISC-V code is consistently on average 20% or more smaller than x86_64 or arm64.

Anyone can verify this. Download the same version of the same OS for each ISA, e.g. Ubuntu 24.04 (or run them in Docker with QEMU emulation, which only takes seconds), and compare the sizes of various programs that are built from the same source code regardless of the ISA, e.g. bash, emacs, less ... take your pick. Use the `size` command and look at the text segment size.
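As a concrete sketch of that comparison (the directory layout below is hypothetical; assume you've extracted the amd64, arm64, and riscv64 builds of the same binaries into local directories and have binutils' `size` installed, possibly the multiarch build so it can read foreign-ISA ELFs):

Code:
import subprocess

# Print the text (code) segment size of the same program built for each ISA.
# Berkeley-format `size` output is a header line, then: text data bss dec hex filename
def text_size(path):
    out = subprocess.run(["size", path], capture_output=True, text=True, check=True)
    return int(out.stdout.splitlines()[1].split()[0])

for isa in ["amd64", "arm64", "riscv64"]:
    print(isa, text_size(f"./{isa}/bin/bash"))   # hypothetical extracted rootfs paths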

licensing

RISC-V is an open source license-free specification. Anyone with the necessary skills and funding is free to implement RISC-V in anything from bit-serial microcontrollers implementing only RV32I (SeRV) to supercomputers. You don't have to ask anyone's permission, or even tell anyone, or pay anyone any money.

If you want to use the RISC-V name and logo commercially, and maybe give your input on future directions and ISA extensions, then you need to join RISC-V International. This is free for individuals and community organisations, $2k/year for startups with fewer than 10 people in their first 2 years, and $5k/year for companies with up to 499 employees. What's the salary you're paying for 10 employees? 500 employees?

Or, you can just license the RTL for a ready-made core from an IP company such as SiFive or Andes, the same as you would from Arm. Except they can't force you to: you're free to switch to another provider if the quality or features or price don't suit you, and they can't stop you from designing your own instead.

You simply can't get into the kind of mess that Qualcomm is finding itself in with the Nuvia acquisition and using the core they designed. We kind of knew that Qualcomm and Apple (etc) can't license their Arm-compatible cores to someone else, but no one (including Q) had any idea that you also could not use someone else's Arm-compatible core by buying the company.

performance
power consumption
suitability as the foundation for mobile / portable devices (silicon size I guess is what's important here)...

riscv64 and arm64 are so similar technically that you should just assume they are identical in terms of the above, given equal investment and equally good engineers, at least at the high end.

There is absolutely no reason that someone (probably several someones) can't make RISC-V cores equal to Apple's M1/M2/M3/M4.

There might be a slight difference, but we haven't yet seen the results of ex Apple / AMD / Intel / Qualcomm / Arm engineers building high performance RISC-V cores. Most of those teams started in 2021/2, so will probably have something you can buy in 2026/7/8 or so.

At the low end, RISC-V has a lot of optional extensions that you can leave out if you don't need them, saving silicon area, cost, and energy consumption. Arm doesn't allow subsetting their ISAs. There are some optional things, but the compulsory base is very large -- similar to RISC-V's RVA23 etc profiles that specify a fixed and largish set of extensions for machines running Linux / Android etc.

And there are some things Arm simply doesn't offer, such as a 64 bit version of the Cortex-M0. Arm's 32 bit and 64 bit ISAs are very different (and they've got 3 or 4 32 bit ISAs). RISC-V's are virtually identical other than the different register/address size.

I just don't know much about it, nor have I had any experience with it other than the old days of extracting data sets off of Sun Microsystems machines running RISC CPUs and Unix.

That's SPARC, not RISC-V, which is a specific and new ISA. Both are based on RISC principles, but they are very different in the details.
 
Yes. And realize that Broadcom bought VMware to virtualize the Intel code base, and that the whole RISC thing will move based upon virtualization, because there is a bunch of code (games, apps, etc.) based upon the old architecture.
 
This is the one in which facts are unequivocal. In 64 bit ISAs RISC-V code is consistently on average 20% or more smaller than x86_64 or arm64.

Anyone can verify this. Download the same version of the same OS for each ISA, e.g. Ubuntu 24.04 (or run them in Docker with QEMU emulation, which only takes seconds), and compare the sizes of various programs that are built from the same source code regardless of the ISA, e.g. bash, emacs, less ... take your pick. Use the `size` command and look at the text segment size.
That's a flawed experiment. One should compare the same code compiled using either -O0 or -Os. The reason being that current RISC-V cores are tiny, often lack vectorization, and don't benefit as much from loop unrolling as current ARM and x86 cores that do have vector instructions and much larger physical register files.

I'm not saying it would completely upturn your findings, but if we take the example of vectorization - any time a compiler vectorizes a loop without knowing whether the number of iterations will be a multiple of the vector size, it has to generate another set of code to deal with the remainder. Because RISC-V isn't being compiled with vector support, it doesn't get an extra copy of loops like that.
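To make the tail-duplication point concrete, here's a sketch (in Python, purely to show the control-flow shape a compiler generates; the chunk width of 16 stands in for a fixed NEON/SSE-class vector register, and this is an illustration, not real SIMD code):

Code:
# Shape of an auto-vectorized copy when the trip count isn't known to be a
# multiple of the vector width.
def copy_fixed_width(dst, src, n):
    i = 0
    # Main vectorized loop: whole 16-element chunks, one "vector" op per iteration.
    while i + 16 <= n:
        dst[i:i+16] = src[i:i+16]
        i += 16
    # Duplicated scalar tail loop for the remaining n % 16 elements; this second
    # copy of the loop body is the extra code described above.
    while i < n:
        dst[i] = src[i]
        i += 1

A scalar-only build gets just the second loop shape, which is why enabling vectorization on one ISA but not another skews a size comparison.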

You simply can't get into the kind of mess that Qualcomm is finding itself in with the Nuvia acquisition and using the core they designed. We kind of knew that Qualcomm and Apple (etc) can't license their Arm-compatible cores to someone else, but no one (including Q) had any idea that you also could not use someone else's Arm-compatible core by buying the company.
Actually, that's an oversimplification. Nuvia knew quite well that their Architecture License was non-transferable. The reason Qualcomm thought it wouldn't be an issue is that they also had an Architecture License. One significant point in the trial was that Nuvia also had a Technology License and their design contained a tiny bit of IP from it. It seems even clearer that this wouldn't be transferable, but they apparently decided not to worry about that.

This would be like someone licensing an IP core from SiFive, then designing their own core, using a little bit of SiFive's IP in it, and treating the resulting core as entirely their own. In fact, such a situation might've actually happened with Tenstorrent, who did start out using SiFive's cores before designing their own Ascalon cores. I doubt they'd be that brazen, but then I wouldn't have expected Nuvia to have an ARM Technology License, either.

At the low end, RISC-V has a lot of optional extensions that you can leave out if you don't need them, saving silicon area, cost, and energy consumption. Arm doesn't allow subsetting their ISAs. There are some optional things, but the compulsory base is very large -- similar to RISC-V's RVA23 etc profiles that specify a fixed and largish set of extensions for machines running Linux / Android etc.
That's a little bit of an apples vs. oranges comparison, because if you're designing a tiny embedded ARM, you'd use ARM's M-profile, not A-profile.

Also, what ARM does is to have a base ISA with lots of optional extensions you can take or leave. Granted, ARMv9-A rolls in a lot of the optional extensions from the ARMv8-A family. I think most distros' ARM images are still targeting baseline ARMv8-A, which is roughly akin to compiling for x86-64. That leaves out a lot of extensions.

I think most distros for RISC-V target RVA23, which is a similar situation to the above, except you don't even get NEON/SSE-level vector instructions.
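If you want to check what a given RISC-V binary was actually built for, one way (a sketch; the path below is just an example, and it assumes binutils' readelf is installed) is to read the RISC-V ELF attributes, which record the -march string the compiler targeted:

Code:
import subprocess

# Print the ISA string recorded in a RISC-V ELF's architecture attributes.
def riscv_arch_string(path):
    out = subprocess.run(["readelf", "-A", path], capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        if "Tag_RISCV_arch" in line:
            return line.split(":", 1)[1].strip().strip('"')
    return None

print(riscv_arch_string("/usr/bin/bash"))   # e.g. rv64imafdc... on a riscv64 install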

And there are some things Arm simply doesn't offer, such as a 64 bit version of the Cortex-M0. Arm's 32 bit and 64 bit ISAs are very different
What ARM seems to be doing is focusing on Cortex-R as their embedded 64-bit ISA. Perhaps this is partly a consequence of RISC-V eating up so much of the low-cost microcontroller market that ARM doesn't want to invest a lot more in it.

As for AArch64 differing from AArch32, that's called evolution. It's because the 64-bit ISA was designed long after the 32-bit one, not unlike how x86-64 came long after the 80386 extended x86 to 32 bits. Such ISA evolution will certainly happen in the RISC-V world, as well. There are limits to what you can do with ISA add-ons; sometimes, you need to introduce incompatibilities or do an even bigger reworking. Maybe it'll be called RISC-VI by that point, but it will still put implementers in the awkward position of probably having to support both, for a time.
 
That's a flawed experiment. One should compare the same code compiled using either -O0 or -Os. The reason being that current RISC-V cores are tiny, often lack vectorization, and don't benefit as much from loop unrolling as current ARM and x86 cores that do have vector instructions and much larger physical register files.

I'm not saying it would completely upturn your findings, but if we take the example of vectorization - any time a compiler vectorizes a loop without knowing whether the number of iterations will be a multiple of the vector size, it has to generate another set of code to deal with the remainder. Because RISC-V isn't being compiled with vector support, it doesn't get an extra copy of loops like that.

No, that's not true. RISC-V vectors make code smaller and simpler. No duplication for the tail is needed because a partial vector is handled by the same code as a full one.

For example, here's a vectorised memcpy() test I did on the AWOL Nezha when it came out in mid-2021, with an Allwinner D1 chip containing a T-Head C906 core.

https://hoult.org/d1_memcpy.txt

The standard glibc memcpy() code is 622 bytes of differently specialised unrolled loops. The vector code is 24 bytes and runs twice as fast.

Code:
0000000000000000 <memcpy>:
   0:    86aa                    mv    a3,a0

0000000000000002 <.L1^B1>:
   2:    00267757              vsetvli    a4,a2,e8,m4,d1
   6:    12058007              vlb.v    v0,(a1)
   a:    95ba                    add    a1,a1,a4
   c:    8e19                    sub    a2,a2,a4
   e:    02068027              vsb.v    v0,(a3)
  12:    96ba                    add    a3,a3,a4
  14:    f67d                    bnez    a2,2 <.L1^B1>
  16:    8082                    ret

Small, simple, fast, universal.

That's RVV draft 0.7.1 code by the way. Some of the mnemonics and options have changed in the ratified RVV 1.0, but with a single change that doesn't change anything ... replacing `vlb.v` with `vlbu.v` ... that code is binary compatible with RVV 1.0 implementations and runs optimally on e.g. the SpacemiT K1.

I think most distros for RISC-V target RVA23, which is a similar situation to the above, except you don't even get NEON/SSE-level vector instructions.

Vector instructions are compulsory in RVA23. Also, there won't be any RVA23 hardware for a couple of years yet. Ubuntu have said that 26.04 LTS will require RVA23.

What ARM seems to be doing is focusing on Cortex-R as their embedded 64-bit ISA. Perhaps this is partly a consequence of RISC-V eating up so much of the low-cost microcontroller market that ARM doesn't want to invest a lot more in it.

I've now seen in several places the claim that Arm will not develop Cortex-M any further, for example in a recent AheadComputing blog post. Some people have gotten very angry and defensive when I've pointed to that.
 
Then I have no reason to get excited.

They aren't developing desktop processors.

Nowadays there is no distinction at all between smartphone, tablet, laptop, desktop, and server CPU cores. It all comes down to the SoC, process node, MHz, and TDP. You find exactly the same Cortex-X cores serving as the "big" cores in flagship phones and in datacenter servers.