News: Tachyum's ‘Industry's First Universal Processor’ Gets $25 Million in Funding

"It claims the chips outperform "

They claim the chips WILL outperform. Future tense. They barely got funding, so they haven't actually designed or built anything yet, so anything they claim right now are just a wishlist of hypothetical design goals.
At this point, they can basically say whatever they want.
 
"It claims the chips outperform "

They claim the chips WILL outperform. Future tense. They barely got funding, so they haven't actually designed or built anything yet, so anything they claim right now are just a wishlist of hypothetical design goals.
At this point, they can basically say whatever they want.
Apparently Tachyum has been around since 2016, so I would assume they have a foundation of design work in place. It's not like they sprang into existence and only started working on this after the Series A funding round closed. That makes sense, because otherwise there's no way they'd be taping out this year and coming to market next year.

That's not to say I'm not highly skeptical of their claims, though.
 
IMO, $25M is a pretty small Series A funding round for a chip startup.

"The chip is a bit lacking on the cache front, though, with just 32MB of fully coherent L2/L3 cache, but that is understandable given its small die size. That shouldn’t take away from its performance"
Um, what? Performance across a sufficiently broad range of benchmarks is certainly affected by cache size!
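If anyone wants to see this for themselves, here's a minimal sketch of the classic pointer-chasing microbenchmark (plain C; the working-set sizes, iteration count, and POSIX timer are my own assumptions, nothing from the article or Tachyum's slides). Each load depends on the previous one, so the average latency tracks wherever the working set currently fits in the cache hierarchy:

```c
/* Pointer-chasing sketch: measure average load latency as the
 * working set grows. Hypothetical sizes; nothing Tachyum-specific. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ITERS (1L << 24)  /* dependent loads per measurement */

/* Follow the permutation for ITERS steps; return average ns per load. */
static double chase(const size_t *next)
{
    struct timespec t0, t1;
    size_t idx = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < ITERS; i++)
        idx = next[idx];               /* serialized, cache-bound loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile size_t sink = idx;        /* keep the loop from being optimized out */
    (void)sink;
    return ((t1.tv_sec - t0.tv_sec) * 1e9 +
            (t1.tv_nsec - t0.tv_nsec)) / (double)ITERS;
}

int main(void)
{
    /* Working sets from 1 MB up to 256 MB, doubling each time. */
    for (size_t mb = 1; mb <= 256; mb *= 2) {
        size_t n = mb * (1UL << 20) / sizeof(size_t);
        size_t *next = malloc(n * sizeof *next);
        if (!next)
            return 1;

        /* Sattolo's algorithm: build one big cycle so the chase
         * visits every slot in pseudo-random order. */
        for (size_t i = 0; i < n; i++)
            next[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i; /* j in [0, i) */
            size_t tmp = next[i];
            next[i] = next[j];
            next[j] = tmp;
        }

        printf("%4zu MB working set: %.1f ns/load\n", mb, chase(next));
        free(next);
    }
    return 0;
}
```

On a part with 32MB of last-level cache, you'd expect the ns/load figure to jump right around the 32 MB working set. That knee is exactly where cache size shows up in real workloads.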

Based on the slides, this seems like a VLIW with branch prediction. In just about every other respect, it looks a heck of a lot like Xeon Phi (KNL). That apparently didn't do too hot, as it's now a goner.

While I agree that we need to move beyond x86, which has gotten too cluttered with legacy instructions and relies too much on energy-intensive hardware-assist to run at high speeds, I'm not sure this one is going to come out on top. I think they'll get a few design wins in EU-based supercomputers, but I'm skeptical that the big cloud players are really going to take any big risks with such a small upstart, and I'm not sure their architecture includes enough of the potential efficiency wins.

Also, the clock speed is definitely too high to beat Nvidia's tensor cores and purpose-built AI chips. You get better energy efficiency at lower clocks, which lets GPUs run much larger dies in a roughly similar power envelope, with the net effect being better efficiency and far higher throughput.
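The back-of-the-envelope model behind that claim (standard first-order CMOS scaling, nothing from Tachyum's material): dynamic power goes roughly as

$$P_{\text{dyn}} \approx \alpha \, C \, V^2 f$$

and since the achievable clock $f$ scales roughly with supply voltage $V$ in the normal operating range, $P_{\text{dyn}}$ ends up growing roughly as $f^3$. So spreading the same throughput across twice the silicon at half the clock costs on the order of $2 \times (1/2)^3 = 1/4$ the dynamic power, which is exactly the trade GPUs make.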

History is littered with the bones of promising, next-gen HPC/hyperscale CPUs that have gotten much further than these guys. Some of the more recent casualties include SiCortex and Tilera.

 
"180W on a 290mm² die is going to call for some interesting cooling solutions."
In supercomputing, there seems to be quite a lot of custom-loop water cooling and full-immersion solutions.

Anyway, Vega 20 (from Radeon VII and the Instinct MI60) is ~300 W in 331 mm². So, they can probably use conventional forced-air cooling with a vapor-chamber-based heatsink, like high-end GPUs have adopted.
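In fact, run the power-density numbers and the Tachyum part comes out cooler per unit area than Vega 20:

$$\frac{180\,\mathrm{W}}{290\,\mathrm{mm^2}} \approx 0.62\,\mathrm{W/mm^2} \quad \text{vs.} \quad \frac{300\,\mathrm{W}}{331\,\mathrm{mm^2}} \approx 0.91\,\mathrm{W/mm^2}$$

So if anything, the thermal problem is easier than what Radeon VII's stock cooler already handles.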
 
These guys remind me of Transmeta. I bet if they're lucky, they'll sell the IP they develop to AMD and Intel for a few million dollars.