News Ubitium announces development of 'universal' processor that combines CPU, GPU, DSP, and FPGA functionalities – RISC-V powered chip slated to arrive...

The article said:
RISC-V startup Ubitium says it’s working on a single architecture that can rule them all.
It seems like we hear about one of these every couple of years. The most recent, best-known example was probably Tachyum, which so far appears to be vaporware.

...that's not to say Tachyum is fraudulent, just that they were wildly optimistic about how quickly they could get systems to market. By the time they got close, established players had already passed them by, and they had to go back and design something even newer and better to have a chance at competing. Pretty much exactly what I said would happen.

The article said:
Ubitium claims all of the transistors in its Universal Processor can be reused for everything; no “specialized cores” like those in CPUs and GPUs are required.
It'll be interesting to hear how this meaningfully differs from what CPUs already do. CPU cores have a unified vector/floating-point pipeline that handles most of their heavy-duty floating-point and integer vector operations. The scalar integer hardware doesn't take up much space and is therefore fine to keep separate.
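To illustrate what I mean (a minimal sketch of my own, nothing to do with Ubitium's design): on x86, the same 256-bit vector registers and the same vector pipeline handle both floating-point and integer SIMD work, so "reuse the transistors for everything" already describes how a CPU treats its widest execution resources. Array names and sizes below are placeholders; compile with -mavx2.

#include <immintrin.h>   /* AVX2 intrinsics */

/* Both loops are dispatched to the CPU's vector/FP pipeline and use the same
 * YMM register file; only the scalar loop control runs on the small, separate
 * integer pipes. */
void scale_floats(float *dst, const float *src, int n) {
    __m256 two = _mm256_set1_ps(2.0f);
    for (int i = 0; i + 8 <= n; i += 8) {
        __m256 v = _mm256_loadu_ps(src + i);                 /* FP vector load */
        _mm256_storeu_ps(dst + i, _mm256_mul_ps(v, two));    /* FP vector multiply */
    }
}

void add_ints(int *dst, const int *a, const int *b, int n) {
    for (int i = 0; i + 8 <= n; i += 8) {
        __m256i va = _mm256_loadu_si256((const __m256i *)(a + i));  /* integer vector load */
        __m256i vb = _mm256_loadu_si256((const __m256i *)(b + i));
        _mm256_storeu_si256((__m256i *)(dst + i), _mm256_add_epi32(va, vb)); /* integer vector add */
    }
}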

Actually, with RDNA3, AMD took the approach of using its vector compute hardware to implement WMMA (thanks to those who set me straight on this point), meaning it doesn't even have dedicated tensor cores. Also, I think AMD's RT performance lags Nvidia's in large part because they've tried to minimize the dedicated ray-tracing hardware, relying mostly on tricks like tweaking their texture engines to do BVH traversal.

Where FPGAs can come out ahead is by exploiting more functional parallelism than CPUs or GPUs can. Normal, programmable processors are limited mostly to data parallelism. However, FPGAs have overheads the others don't, which is why they tend to be replaced by hardwired logic for anything sufficiently well-defined, and why "soft cores" on FPGAs never perform as well as the hard-wired cores in CPUs.
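A rough software analogy for the distinction (my own illustration, not from the article): a CPU or GPU mostly speeds up one operation applied across many data elements, whereas an FPGA can lay out several different operations as separate hardware blocks that all work every cycle, passing results down a pipeline. The stage functions here are hypothetical.

#include <stdint.h>
#include <stddef.h>

/* Data parallelism (CPU/GPU style): one operation, many elements.
 * A compiler can vectorize this loop, or a GPU can run one lane per element. */
void brighten(uint8_t *px, size_t n, uint8_t gain) {
    for (size_t i = 0; i < n; i++)
        px[i] = (uint8_t)((px[i] * gain) >> 4);
}

/* Functional parallelism (FPGA style): distinct stages, each doing a different
 * operation. In software they run back-to-back per sample; on an FPGA each
 * stage becomes its own block of logic, and all stages work concurrently on
 * different samples flowing through the pipeline. */
static uint16_t stage_filter(uint16_t s) { return (uint16_t)(s - (s >> 3)); } /* hypothetical */
static uint16_t stage_scale(uint16_t s)  { return (uint16_t)(s * 3); }        /* hypothetical */
static uint16_t stage_clamp(uint16_t s)  { return s > 4095 ? 4095 : s; }      /* hypothetical */

void process_stream(uint16_t *samples, size_t n) {
    for (size_t i = 0; i < n; i++)
        samples[i] = stage_clamp(stage_scale(stage_filter(samples[i])));
}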

The article said:
So far the company has raised $3.7 million
That's known as "seed funding". We rarely hear about startups during this phase. I'd guess they're having trouble raising their Series A, which is why they've come out of stealth mode so soon.

Well, I look forward to seeing if they're truly doing something new, so I hope they at least make it that far.
 
This sounds like stuff I've seen repeatedly over the last 40 years. On the occasions when there is an actual product that engineers outside the producer can evaluate, the tech is found to be good at versatility and little else. It can do a lot of things, but none of them especially well. Jack of all trades but master of none. There may be a niche market for that, but I wouldn't want to be the one assigned to convince investors of it.

Transmeta and Mpact! come to mind, and the IBM/Toshiba/Sony CELL project was another. The original PS3 concept would have had several CELL processors in the box and no dedicated GPU or other major function blocks. The CELLs would do everything, with the developer allocating bandwidth as needed for the app. This is why the first E3 demos were separate CELL and Nvidia demos. There wasn't enough time to produce anything on the actual prototype PS3 after Sony found that 1. the CELL was way too expensive to use in multiples in a game console, and 2. the multiprocessing functionality failed above two processors. The concept wouldn't work and would have been prohibitively expensive if it did. (Never mind the really absurd claims Sony made early on about having many CELL-driven devices in your home, all networked, that could be enlisted to add their capacity to running games.)
 
X-silicon was touting the same achievements/goals: an agnostic PU that can use its cores for any task. On paper, it's a dream: you can have a single chip in the system, with its transistors allocated in the most optimized way. And RISC-V as the base ISA means clean assembly and no legal issues.

That sounds almost too good to be true.
 
This is a really bad idea. All they're going to end up doing is creating terrible bottlenecks by trying to put all operations through a single processor.
When I get sick, I want to see a specialist, not a GP.
 
Intel, AMD, Snapdragon, and Apple all have this functionality, don't they?
That's what the NPU is, after all, isn't it? An FPGA.

But I would love to see a RISC-V join the group.
 
Intel, AMD, Snapdragon, and Apple all have this functionality, don't they?
That's what the NPU is, after all, isn't it? An FPGA.
Actually, no. None of them are FPGAs.

I can understand some confusion on this point, since AMD got its XDNA IP from Xilinx, which principally makes FPGAs. However, the NPU cores are hard-wired cores that were originally designed to be included in an FPGA; they don't themselves contain any FPGA fabric.

https://substack-post-media.s3.amazonaws.com/public/images/ad5a4165-8109-4fc1-b6a7-c39dfe749ba2_990x539.png


You can find the XDNA NPUs described here, if you search or scroll down to the XDNA section:
 
The original PS3 concept would have had several CELL processors in the box and no dedicated GPU or other major function blocks. The CELLs would do everything, with the developer allocating bandwidth as needed for the app. This is why the first E3 demos were separate CELL and Nvidia demos. There wasn't enough time to produce anything on the actual prototype PS3 after Sony found that 1. the CELL was way too expensive to use in multiples in a game console, and 2. the multiprocessing functionality failed above two processors. The concept wouldn't work and would have been prohibitively expensive if it did. (Never mind the really absurd claims Sony made early on about having many CELL-driven devices in your home, all networked, that could be enlisted to add their capacity to running games.)
IMHO, Sony's idea was not absurd, just too visionary. Today every modern CPU has a very powerful and flexible SIMD unit; the step from that to putting multiple SIMD subsystems in a single CPU is short. Once we have a lot of SIMD subsystems, each with a dedicated local cache, we will practically have a CELL.
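To be fair, the big difference from today's SIMD units was the memory model: each CELL SPE had its own small local store (256 KiB) and used explicit DMA instead of a coherent cache. A rough sketch of that programming model (purely illustrative; the names are made up and memcpy stands in for a DMA transfer):

#include <string.h>
#include <stddef.h>

#define LOCAL_STORE_BYTES (256 * 1024)   /* each CELL SPE had 256 KiB of local store */

/* Hypothetical worker modeled on an SPE: data must be explicitly staged into
 * the local store, processed, then written back out. There is no cache
 * hierarchy doing this for you, which is much of what made CELL hard to program. */
static float local_store[LOCAL_STORE_BYTES / sizeof(float)];

void spe_like_task(float *main_mem, size_t n_floats) {
    size_t chunk = sizeof(local_store) / sizeof(float);
    for (size_t off = 0; off < n_floats; off += chunk) {
        size_t count = (n_floats - off < chunk) ? (n_floats - off) : chunk;

        memcpy(local_store, main_mem + off, count * sizeof(float)); /* "DMA in"  */
        for (size_t i = 0; i < count; i++)                          /* compute   */
            local_store[i] *= 0.5f;
        memcpy(main_mem + off, local_store, count * sizeof(float)); /* "DMA out" */
    }
}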
 
Funny to me how using "the AI era" as a backdrop is supposed to add use cases for this project. If anything, the AI era needs more specialized AI accelerators, not just traditional GPUs, right?
 
IMHO, Sony's idea was not absurd, just too visionary. Today every modern CPU has a very powerful and flexible SIMD unit; the step from that to putting multiple SIMD subsystems in a single CPU is short. Once we have a lot of SIMD subsystems, each with a dedicated local cache, we will practically have a CELL.
Imagine a composite GPU with its components connected over 100 Mb Ethernet or slower wireless. There may be one such device that can be enlisted, or there may be many on the same network, so anything using them has to scale extremely well. The idea that this would somehow be useful for any kind of real-time interactive app is madness. Using cloud resources was something Microsoft hyped quite a lot for a while, but it rarely had any demonstrable value, and that at least was a predictable outboard processing resource over a slow connection.

The idea was just too niche for practical use by most developers. In real-world use, almost everything happens on the client, or everything important happens on the server with the client app giving a pretty presentation to the user.