News: Google Has Developed Its Own Data Center Server Chips

bit_user

Polypheme
Ambassador
And more of Intel's lunch gets eaten
I wonder how much of their x86 fleet is currently Intel vs. AMD. Google's Stadia used AMD CPUs and GPUs, for instance.

The timeframe seems a little crazy to me. I wonder what's taking them so long - are they actually designing custom cores? If they're only doing a rollout of their TSMC N5 CPUs in 2025, AMD will already be on TSMC N4 or N3 by then, and Intel will be on 18A or whatever.

I wonder if these leaks were intended to stave off investor calls for more job cuts. I've heard some investors want Google to lay off up to 3x as many people as it has so far, and the hardware division seems like it might be a juicy target.
 

bit_user

Polypheme
Ambassador
Competition is good for us consumers!
I suppose, if you're using Google's cloud services or are a downstream consumer of a service that does. But don't expect them to sell these CPUs on the open market.

The downside of this trend of every cloud provider making its own CPUs is that if Intel's or AMD's volumes drop significantly, they might have to raise prices, since there would be fewer units over which to recoup their engineering costs.
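Just to put rough numbers on that (purely hypothetical figures, not anything from Intel or AMD), here's a quick back-of-the-envelope in C showing how the amortized engineering cost per chip climbs as volume shrinks:

```c
// Hypothetical illustration of NRE amortization: the one-time design cost is
// spread over however many units ship, so lower volume means a higher
// break-even price per chip. All numbers are made up for the example.
#include <stdio.h>

int main(void) {
    const double design_cost = 500e6;   // assumed one-time engineering (NRE) cost, $
    const double unit_cost   = 300.0;   // assumed marginal cost to build one chip, $
    const long volumes[] = { 10000000, 5000000, 2000000 };

    for (size_t i = 0; i < sizeof volumes / sizeof volumes[0]; i++) {
        double per_unit = unit_cost + design_cost / volumes[i];
        printf("%ld units -> break-even price ~ $%.0f each\n", volumes[i], per_unit);
    }
    return 0;
}
```

Same (assumed) $500M design budget either way; ship 5x fewer chips and the break-even price per unit jumps from about $350 to about $550.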
 

bit_user

Polypheme
Ambassador
Remember, Intel hopes to be fabricating the chips for all those who do their own thing...
Sure, they're trying to stand up a foundry business and need more volume. I get that, but manufacturing is typically a lower-margin affair than IP. So, what Intel would really prefer is to sell you their chips. Obviously, if they can't do that, then they'd at least prefer you use their foundries instead of TSMC or Samsung.
 

jkflipflop98

Distinguished
That's right. The workload determines the best use of hardware.

If you're only ever going to do one thing and one thing only for the rest of time, then designing an ASIC that does that one thing is going to be your best path. You get max performance and efficiency in exchange for reduced flexibility.

If you're in an environment where you have to pivot to different workloads depending on the situation, then you need a general-purpose processor like x86. It burns more power, but you can do literally anything with it, from file servers to complex physics simulations.
 

bit_user

Polypheme
Ambassador
Bit weird. x86 or x64 CPUs are good at "everything", but for specific workloads you're throwing away efficiency.
There are some ARM server CPUs that are also quite versatile. Amazon's Graviton 3 processors have 256-bit SVE, which gives them floating point chops to match their integer performance. Compared to any other server CPU made on a comparable process node (TSMC N7), they're surely the most efficient. 64 cores, with 8-channel DDR5 and PCIe 5.0, in just 100 W.


And it's already been in service for more than a year!
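For anyone curious what SVE buys you in practice, here's a minimal vector-add sketch using the Arm C Language Extensions (the function and array names are just illustrative, nothing Amazon-specific). The same loop runs on any SVE width; on Graviton 3's 256-bit implementation, svcntw() reports 8 single-precision lanes per pass:

```c
// Minimal sketch of SVE's vector-length-agnostic style, assuming an
// SVE-capable toolchain, e.g.: gcc -O2 -march=armv8-a+sve sve_add.c
#include <arm_sve.h>
#include <stdio.h>

// Element-wise a[i] += b[i]; the loop strides by the hardware vector length.
void add_arrays(float *a, const float *b, int64_t n) {
    for (int64_t i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32(i, n);        // predicate masks the tail
        svfloat32_t va = svld1_f32(pg, &a[i]);    // predicated loads
        svfloat32_t vb = svld1_f32(pg, &b[i]);
        svst1_f32(pg, &a[i], svadd_f32_x(pg, va, vb));
    }
}

int main(void) {
    float a[5] = {1, 2, 3, 4, 5};
    float b[5] = {10, 20, 30, 40, 50};
    add_arrays(a, b, 5);
    printf("32-bit lanes per SVE vector: %llu\n",
           (unsigned long long)svcntw());         // 8 on a 256-bit part
    printf("a[4] = %.1f\n", a[4]);                // 55.0
    return 0;
}
```

That vector-length-agnostic style is why the same binary can move between 128-bit and 256-bit SVE parts without a recompile.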

When you design a chip specifically for your type of workload, you make huge steps in efficiency. Simple as that.
Partly. I think because Google has their TPUs, they won't bother wasting a lot of die space on wide vector or matrix arithmetic.

Other than that, I would expect them to simply use ARM Neoverse N2 cores. Then again, considering the timeframe, I really have to wonder if they're making a custom design. Google certainly has the ego to try and undertake such a task, whereas Amazon was very smart not to.

To be honest, if Ampere's new custom cores are any good, I think it would have just been cheaper for Google to acquire that company. Amazon actually bought Annapurna Labs, which is how their Graviton program came into being.
 