News Ascenium Wants to Reinvent the CPU - And Kill Instruction Sets Altogether


Deleted member 14196

Guest
What is really amusing is that they have the audacity to believe they could displace the other CPUs
 

TJ Hooker

Titan
Ambassador
100% misleading title.
This is never going to be a CPU replacement; just like the Xeon Phi mentioned in the article, it will only be good for very specific things.
No, the title is accurate, Ascenium plans for their chip to be a CPU replacement, not a co-processor. Whether they'll actually succeed in doing that is of course a very different question.

Reading the interview that was linked in this article, it appears they're using an EPIC approach, where they're basically relying on the compiler to do all the heavy lifting. This is not unlike Intel's Itanium, which didn't really work out, in part because the magical compilers that would perfectly parallelize and optimize the code apparently never appeared (or at least weren't available when it was released). Ascenium claims to already have a working compiler prototype that can successfully optimize programs on the order of 100K lines of code for their Aptos architecture, but we'll see if they'll be able to get it working well for real world programs.
 
No, the title is accurate, Ascenium plans for their chip to be a CPU replacement, not a co-processor. Whether they'll actually succeed in doing that is of course a very different question.

Reading the interview that was linked in this article, it appears they're using an EPIC approach, where they're basically relying on the compiler to do all the heavy lifting. This is not unlike Intel's Itanium, which didn't really work out, in part because the magical compilers that would perfectly parallelize and optimize the code apparently never appeared (or at least weren't available when it was released). Ascenium claims to already have a working compiler prototype that can successfully optimize programs on the order of 100K lines of code for their Aptos architecture, but we'll see if they'll be able to get it working well for real world programs.
A CPU replacement maybe, in the sense that the Xeon Phi could also boot up on its own,
but not a CPU replacement for most people.

This is just an FPGA in principle, one that is made up of lots and lots of individual "cores". It's going to be great at parallelism, but it's going to suck really hard at running anything and everything that almost all PC users actually run.

People will have to reinvent anything they want this thing to run efficiently.
 

Pytheus

Prominent
Oct 28, 2020
85
22
545
Correct me if I'm wrong, but isn't this just a hybrid FPGA? Instead of instruction sets, they're attempting to reconfigure the chip for every task it runs? I've thought of something like this before, but it sounds tedious for software engineers.
 

Chung Leong

Reputable
Dec 6, 2019
494
193
4,860
Reading the interview that was linked in this article, it appears they're using an EPIC approach, where they're basically relying on the compiler to do all the heavy lifting. This is not unlike Intel's Itanium, which didn't really work out, in part because the magical compilers that would perfectly parallelize and optimize the code apparently never appeared (or at least weren't available when it was released).

No compiler can infer what hasn't been expressed by the programmer, implicitly or explicitly. The EPIC approach really had no chance when most programs were written in bare-metal languages like C and C++. Availability of pointers messed everything up. The programming landscape has changed quite a bit in the last 20 years, so the same approach could succeed this time around.
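The pointer problem described above can be illustrated with a minimal C sketch (hypothetical code, not anything from Ascenium; the function names are made up for illustration):

```c
#include <stddef.h>

/* Without extra information, the compiler must assume dst and src may
 * overlap, so each store to dst[i] could feed a later src[j].  The loop
 * iterations then look dependent and cannot safely run in parallel. */
void scale(float *dst, const float *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = 2.0f * src[i];
}

/* 'restrict' is the programmer explicitly expressing "these buffers
 * never alias": exactly the kind of fact an EPIC-style compiler needs
 * before it can schedule the iterations in parallel. */
void scale_restrict(float *restrict dst, const float *restrict src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = 2.0f * src[i];
}
```

Both functions compute the same result; the difference is only in what the programmer has promised the compiler, which is the point being made about C and C++.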
 
The programming landscape has changed quite a bit in the last 20 years, so the same approach could succeed this time around.
So has the hardware landscape. This thing only has a chance at workloads that don't have any dedicated hardware/software package yet: something that could run great on a GPU but somehow hasn't been implemented by anybody.
 

Chung Leong

Reputable
Dec 6, 2019
494
193
4,860
So has the hardware landscape. This thing only has a chance at workloads that don't have any dedicated hardware/software package yet: something that could run great on a GPU but somehow hasn't been implemented by anybody.

There are plenty of opportunities for extracting parallelism from general-purpose code. For example, say we want to search a list of 1000 objects and expect 10 results. Of the 1000 comparison operations, 990 have no cross-dependency; order of execution comes into play only when a match occurs. If a compiler can somehow allow the misses to proceed in parallel, the potential gain is quite large.
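The search described above can be sketched in C (a hypothetical example; the function names and the 1-in-100 predicate are made up for illustration):

```c
#include <stddef.h>

/* Example predicate: roughly 1 in 100 items "matches". */
static int is_div100(int x)
{
    return x % 100 == 0;
}

/* Each loop iteration reads only items[i], so the ~990 misses have no
 * cross-dependency and could in principle be evaluated in any order,
 * or all at once.  The only ordering point is appending a hit to the
 * result array, which is where sequential semantics actually matter. */
size_t search(const int *items, size_t n, int (*match)(int),
              size_t *hits, size_t max_hits)
{
    size_t found = 0;
    for (size_t i = 0; i < n; i++) {   /* independent iterations */
        if (match(items[i]) && found < max_hits)
            hits[found++] = i;         /* the sole ordering point */
    }
    return found;
}
```

A compiler that could prove the iterations independent would be free to evaluate the misses concurrently while keeping only the hit-collection sequential.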
 
There are plenty of opportunities for extracting parallelism from general-purpose code. For example, say we want to search a list of 1000 objects and expect 10 results. Of the 1000 comparison operations, 990 have no cross-dependency; order of execution comes into play only when a match occurs. If a compiler can somehow allow the misses to proceed in parallel, the potential gain is quite large.
Run 1000 extremely light threads on a 16-core CPU that is highly optimized to run them, or run 1000 threads on many more cores that you yourself have to optimize as much as possible, and that will most likely run at much, much lower clocks.

There is a lot of potential, but you have to be better than the decades of optimization that has been done on ILP for common CPUs already.
Or your code has to be so specific that it just runs much better on generic hardware by default.
 

Chung Leong

Reputable
Dec 6, 2019
494
193
4,860
Run 1000 extremely light threads on a 16-core CPU that is highly optimized to run them, or run 1000 threads on many more cores that you yourself have to optimize as much as possible, and that will most likely run at much, much lower clocks.

The overhead of dealing with a thread pool and synchronization objects generally means the reward is not worthwhile. For server-side code, that's especially true. It's usually cheaper to throw better hardware at the problem.
 

Findecanor

Distinguished
Apr 7, 2015
327
229
19,060
IMHO, there is nothing technically wrong with the EPIC concept, but it needs to be executed well.
The Itanium failed because it was a quite ridiculous design in many ways, and because the market wanted x86.
Transmeta failed because it also wasn't quite as good as Intel and AMD on their home turf.

There are many DSPs that do EPIC successfully, and several general-purpose CPU designs in the works that use it too.

I think that Ascenium's machine looks like a FPGA where elements are a bit larger than what is typical.
FPGA cards have been sold as compute accelerators before and failed in the market, but that was before GPGPU and neural networks broke through.
Modern GPUs are actually very wide VLIW processors and already have EPIC-style compilers as part of their architecture, but they can't be reconfigured to run general-purpose code.
 

Giroro

Splendid
I know all the overspeak might make a Venture Capitalist think Ascenium is explaining what they're doing, but they aren't, really.
All I'm seeing is a mess of wanky buzzwords. It's the engineering equivalent of listening to an HR rep talk about "taking the lead on revisiting silo disruptive core competencies in team building synergism".

Is it a mesh of FPGAs? An OpenCL alternative? Is their compiler translating software into a hardware description language?

Your guess is as good as their investors'.