News Nvidia CEO Comments on Grace CPU Delay, Teases Sampling Silicon

waltc3

Reputable
I enjoyed reading him saying "System builders and OEMs are building systems with them right now" except for the fact that they won't be shipping until the latter half of the year. JHH is a hoot with "It just works" (marketing from Apple), and--what was the latest?--"nVidia everywhere" (previous marketing for Microsoft Windows). JHH needs to work on his presentations a bit, imo.
 

ikjadoon

Distinguished
It seems increasingly clear that the uArch, more than the node, is what drives perf/W/core.

AMD Genoa (Zen4) on TSMC N5P still isn't a big enough efficiency leap. Frankly, I don't think Intel or AMD have a chance in perf/W/core until they ship a clean-sheet uArch.

The power consumption is higher with these new Socket SP5 processors but as shown by many of the performance-per-Watt metrics, when it comes to the power efficiency it's often ahead of AMD EPYC 7003 "Milan" or worst case was roughly similar performance-per-Watt to those prior generation parts

When NVIDIA, Amazon, Apple, Google, Microsoft, Jim Keller, and even bloody Linux are giving up on the x86 monopoly...and the only laggards are AMD & Intel...it sounds pretty obvious that money & perf are doing the talking.

Thank God NVIDIA couldn't buy Arm Ltd. They didn't need it.

All right, that should be enough feathers ruffled.
 

bit_user

Polypheme
Ambassador
@PaulAlcorn , thanks for the coverage!

Huang also remarked that Nvidia has only been working on the chips for two years, which is a relatively short time given the typical multi-year design cycle for a modern chip.
Nvidia has made no secret of the fact that they're using ARM's Neoverse V2 cores. That means most of their work on this was probably just a matter of integration, rather than ground-up design. ARM tries to make it as fast & easy as possible for its customers to get chips to market that use its IP.



For reference, Amazon's Graviton 3, which launched about 15 months ago, uses ARM's Neoverse V1 cores.

Nvidia's use of the Arm instruction set also means there's a heavier lift for software optimizations and porting, and the company has an entirely new platform to build. Jensen alluded to some of that ...
This feels more like an excuse than whatever their main issue actually was. They've officially supported CUDA on ARM for probably 5 years now. They've shipped ARM-based SoCs for at least 15 years. All their self-driving stuff is on ARM. All the big hyperscalers have ARM instances. And Fujitsu even launched an ARM-based supercomputer using their A64FX, which I'm sure prompted some HPC apps to receive ARM ports & optimizations.

It's weird that their Genoa-comparison slide compares against NPS4 (four NUMA nodes per socket), as if it's the only option. You don't have to use that - you could also just use NPS1 or NPS2, if your VMs are big enough.
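For anyone who hasn't played with the NPS settings: they just change how many NUMA nodes the firmware exposes per socket, which the OS then reports like any other NUMA topology. Here's a minimal sketch (assuming a Linux host with sysfs mounted) that counts the nodes and lists the CPUs in each - on a Genoa box you'd see one node per socket under NPS1 and four under NPS4:

```python
# Minimal sketch, assuming a Linux host: the BIOS NPS setting shows up
# as the number of NUMA nodes the kernel exposes under sysfs.
from pathlib import Path

nodes = sorted(Path("/sys/devices/system/node").glob("node[0-9]*"),
               key=lambda p: int(p.name[len("node"):]))
print(f"{len(nodes)} NUMA node(s)")
for node in nodes:
    # cpulist is a compact range string, e.g. "0-23,96-119"
    cpus = (node / "cpulist").read_text().strip()
    print(f"  {node.name}: CPUs {cpus}")
```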
 

bit_user

Polypheme
Ambassador
Thank God NVIDIA couldn't buy Arm Ltd. They didn't need it.
This was designed back before the deal fell through. Of course they didn't need to own ARM in order to have access to its IP, but doing so would've enabled them to effectively lower their unit costs (i.e., they'd have been paying themselves the license fee, and probably would've been tempted to cut themselves a better deal).
 

Deleted member 14196

Guest
That may be true; however, consumer prices would not have gone down. Rather, they would've gone up if Nvidia had purchased the company. Nvidia has proven they are the extreme of greed.
 

bit_user

Polypheme
Ambassador
That may be true; however, consumer prices would not have gone down. Rather, they would've gone up if Nvidia had purchased the company. Nvidia has proven they are the extreme of greed.
My point was purely about the cost to Nvidia - not to their customers. By now, I think we all know that Nvidia will price its products in a profit-maximizing way, irrespective of their costs.