News: Tachyum builds the final Prodigy FPGA prototype, delays Prodigy processor to 2025


Findecanor

Distinguished
Apr 7, 2015
There is currently no shortage of semiconductor design firms that claim to have designed wide CPU cores, often with proprietary solutions for accelerating AI.
Most of these are running RISC-V, and can thus take advantage of a shared ecosystem.
Yet, the adoption rate of these is worryingly low. Some firms have been sitting on cores for two years that have yet to be taped out, because no client has found it worthwhile to fund their manufacture.

What sets Tachyum Prodigy apart from these?
I have yet to hear about it offering anything that RISC-V and ARM don't, and thus it should face an even steeper uphill battle.
Are there benchmarks that prove its superiority?
How can it benefit customers' applications in ways that others can't?
 

slightnitpick

Upstanding
Nov 2, 2023
Tachyum asserts that its processor can achieve up to 4.5 times the performance of the top x86 processors for cloud tasks, three times the performance of the leading GPUs for high-performance computing, and six times the performance for AI applications.
It's a huge claim to say that a general-purpose machine is more capable at dedicated tasks than a dedicated machine. How are they measuring this, and what are they measuring against? What are "cloud tasks", and are x86 processors the baseline for cloud tasks? What is GPU use in high-performance computing? And what kind of AI applications (training or inference)? And how is having this all on one processor better than having it on discrete processors? Just the speed of the data interconnect?
 

bit_user

Titan
Ambassador
slightnitpick said: It's a huge claim to say that a general-purpose machine is more capable at dedicated tasks than a dedicated machine. How are they measuring this, and what are they measuring against? ...
I've seen this story play out again and again. Some upstart thinks they can do a better job than the mainstream guys. However, they're woefully optimistic about how long it will take. By the time they finally get a chip taped out and debugged, the broader computing market has at least caught up to them, and usually even passed them by.

These guys started out with a more ambitious microarchitecture, but watered it down to better handle general-purpose "cloud" compute loads. IIRC, what they ended up with doesn't strike me as all that different from anything else on the market, except for their integrated tensor-product units. Unfortunately, it still has a proprietary ISA, which was a really bad move. I doubt it has much benefit over RISC-V, if any, yet it requires custom tooling and lots of software porting to get an ecosystem up and running on it.

If they're lucky, they'll get enough government contracts to string them along, but I doubt they'll ever get a product to market while it's truly compelling.
 

gg83

Distinguished
Jul 10, 2015
bit_user said: I've seen this story play out again and again. Some upstart thinks they can do a better job than the mainstream guys. However, they're woefully optimistic about how long it will take. ...
The only contender I expect to win is Tenstorrent. They have some really smart people and smart money behind them. What do you think?
 

bit_user

Titan
Ambassador
When will the investors just back out?
I think a good example to watch is the Japanese supercomputing industry. They've long maintained their indigenous capability to build and deploy supercomputers, probably as much for geopolitical reasons as economic ones. The mere idea of a European HPC/cloud CPU therefore makes sense. However, it doesn't exist in a vacuum. There's also the European Processor Initiative (EPI).

Tachyum seems further along, but that might not always be the case. However, even if the EPI does pass them, Slovakia doesn't necessarily share the same geopolitical orientation as the EPI's backers, and it might choose to maintain Tachyum on its own or pursue export markets that don't interest the EPI.

In summary, if we looked at them merely as a business, I'd say they're probably headed off a cliff. However, supercomputing initiatives shouldn't be viewed absent their political context.
 