Here's What Gaming on Centaur's Forgotten x86 CPU Looks Like


Jan 16, 2008

CyrixInstead (Remember the 6x86?)
TransmetaCPU (Remember the Crusoe?)
NexGenDriven (Remember the Nx586?)
RiseRiseRise (Remember the... uh... mP6? Ok, probably not)

AMD used to hide fun stuff in their CPUID registers. Early K5-era chips returned AMDisbetter! as the vendor string, and they snuck this one in during the K7/K8 (Hammer) era:

"Specific to AMD K7 and K8 CPUs, this returns the string "IT'S HAMMER TIME" in EAX, EBX, ECX and EDX,[28] a reference to the MC Hammer song U Can't Touch This. "



Jun 19, 2020
Please, why do you keep beating the Centaur for being bad at a job it wasn't designed to do?

The x86 cores on this chip only serve one function: feed the inference monster!

The neural processing engine you mention as if it were a minor secondary trait of this SoC is actually the main reason for its existence.

You're putting a light tractor designed to feed and clean a stable full of cows on a race track and having a laugh at how badly it performs, when it was optimized to keep the cows happy with minimal trouble.

The Centaur design optimized for production cost, by using an older process node, and for low energy cost per inference, by integrating a highly optimized neural accelerator. The target clients, I believe, were Chinese Internet giants, which have large-scale demand for inference at the cloud's outer edge.

So why did it fail? That's a question I've asked myself for years now, and that's where you could have contributed something valuable as a tech reporter, instead of just having fun watching a tractor fail at being a race car.

These are my guesses:
One-hit wonder: The Centaur might very well have been an ideal solution for a given point in time. But these days nobody adopts anything significantly new unless the vendor can demonstrate how several generations will continue to provide sufficient value to justify investing a user's engineering resources.

Opposing trends: Part of edge neural inference is moving into the handsets, where the web giants pay neither the hardware investment nor the power consumption, both of which are actually paid by "the product" (aka the consumer). The other part is increasingly handled by "ordinary" CPUs, which are being augmented with low-precision, inference-optimized vector extensions that achieve energy efficiency similar to the Centaur neural accelerator.