News Rivals in Arms: Nvidia's $199,000 Ampere PC Taps AMD Epyc CPUs

Yeah, I'm sure they would have preferred to use Intel, were it a viable option. That might not be the case in a couple of years, once Intel wades onto the GPU battlefield again. If Intel steals a lot of share from Nvidia, Nvidia's attitude towards AMD could shift quite a bit. Fascinating times.
Competitors working together, for the technological and financial benefit of both companies, is already commonplace in the car industry. No reason why it should not become commonplace in the CPU/GPU industry.
 
I'm going to say TDP is most likely.

Agreed. And it all boils down to how surprisingly and soundly TSMC has outclassed Intel's fabs.

Throw back to 2014: TSMC was starting production of 16nm while Intel was announcing 14nm production that would start shipping in 2015. Intel had suspended construction of Fab 42 due to lower demand for desktops and notebooks (the rise of tablets), favoring upgrades of existing factories to 14nm instead, and then struggled to get 10nm working for years afterward. In time, it became clear that 10nm would arrive too late to the party.

Because of the 10nm fumble, they watched as TSMC got 7nm working, and finally bit the bullet on Fab 42 in 2017, though it would only be ready in 2020. By 2018, TSMC had 7nm on the market, and Intel had nothing to answer it with.

Considering Intel has historically had the upper hand in x86 fabrication processes, with its 10+ fabs against a couple on AMD's side, this is rather new territory for them. As much as Lisa Su has been fantastic, this battle was won back in 2008, when AMD decided to spin off its manufacturing business.
 
Competitors working together, for the technological and financial benefit of both companies, is already commonplace in the car industry. No reason why it should not become commonplace in the CPU/GPU industry.
I never said nor implied it wasn't a thing that happens. They do it when it's the best choice. But say a different, non-competitor (or less direct competitor) had an EQUALLY viable option... that would be your first choice. There are a number of factors weighed in these decisions.

In this case Intel just didn't have a viable product to even consider, so there really wasn't any alternative.
 
However, Amazon just showed how strongly ARM can compete in the server domain, with their latest 64-core offering. So, perhaps the successor to the DGX A100 will be ARM-based.
Eventually, I think so! However, AMD and Intel aren't sitting still either, so it's hard to say when ARM solutions will become equally viable for a given project/platform. If it is a proprietary architecture, all bets are off on whether you would actually be able to strike a deal to use the chip in the first place. For example, if you wanted to use a future Apple ARM processor, good luck. Then again, never say never. 😛
 
Don't forget that servers already have redundant power supplies. Then, if one PSU fails, all you have to do is order a new one and install it, typically next business day at the slowest.
Right. I was thinking I should've mentioned this. So, we're probably talking dual PSUs, each around 4 kW.

In many cases, they're even hot-swappable. So, you could order a replacement when one PSU fails, and then just swap it in whenever it arrives. Doesn't even have to be shipped overnight.

In the lab at work, we found an interesting use for dual/redundant PSUs: we moved a couple of servers from one set of power strips to another, one power cord at a time, so the operation didn't require taking them down.
 
Man, I don't know what's biting Intel in the rear harder: their foot-dragging on PCIe 4.0, crap PCIe lane counts, or their absurd TDPs.
Don't forget about core counts and price.

The EPYC CPUs are 64-core. So, that's 128 cores, which Intel can only approach with four 28-core Xeon 8280s (112 cores) or two of their monstrous dual-die Cascade Lake-AP parts, which are basically two of those chips per package.
 
But with 128 cores you don't even need a GPU to do graphics.
In fact, that's probably what will happen if you try to run normal OpenGL stuff on it: it'll just fall back to llvmpipe, Mesa's CPU-based software rasterizer.

Phoronix has some benchmarks on doing software OpenGL rendering with modern multi-core CPUs. It's fast enough to be interesting, but not something you'd ever prefer.
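
For anyone curious, you can see this for yourself on a Linux box with Mesa. LIBGL_ALWAYS_SOFTWARE=1 is Mesa's standard switch for forcing the CPU path, and glxinfo (from the mesa-utils package) reports which renderer is active. A minimal sketch in Python, assuming both are present:

```python
import os
import subprocess

# LIBGL_ALWAYS_SOFTWARE=1 tells Mesa to skip the hardware driver and use
# its CPU-based llvmpipe rasterizer instead.
env = dict(os.environ, LIBGL_ALWAYS_SOFTWARE="1")

# glxinfo (from the mesa-utils package) prints the active GL implementation.
out = subprocess.run(["glxinfo"], env=env, capture_output=True, text=True).stdout

for line in out.splitlines():
    if "OpenGL renderer string" in line:
        # Expect something like: "OpenGL renderer string: llvmpipe (LLVM ...)"
        print(line.strip())
```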


If one could customize the software in question just a little, I'm sure you could use one of the A100 chips to do off-screen rendering and then display the result on the BMC graphics, much the same way CPU-based rendering works. There might be a tiny latency hit, but certainly less than with CPU-only rendering.
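
To make that idea concrete, here's a rough sketch of the off-screen half using PyOpenGL. It assumes a headless GL context has already been created on the compute GPU (e.g. via EGL, which I've left out); the FBO setup and readback are standard OpenGL calls, and the final hand-off to the BMC framebuffer is only indicated in comments. A sketch under those assumptions, not a tested implementation:

```python
# Sketch only: assumes a headless GL context already exists on the A100
# (e.g. created via EGL). Renders into an off-screen framebuffer, then
# reads the finished frame back to system memory, from where it could be
# presented through the BMC's basic graphics, much like a CPU-rendered image.
from OpenGL.GL import *
import numpy as np

W, H = 1280, 720

# Create a texture-backed framebuffer object to render into.
tex = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, tex)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, W, H, 0, GL_RGBA, GL_UNSIGNED_BYTE, None)

fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, fbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0)
assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE

# Draw the scene off-screen (just a clear, as a placeholder).
glViewport(0, 0, W, H)
glClearColor(0.1, 0.2, 0.3, 1.0)
glClear(GL_COLOR_BUFFER_BIT)
# ... real draw calls would go here ...

# Read the frame back to the CPU; this copy is the small latency hit.
pixels = glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE)
frame = np.frombuffer(pixels, dtype=np.uint8).reshape(H, W, 4)
# 'frame' could now be pushed out through the BMC's display path.
```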
 
If it is a proprietary architecture, all bets are off on whether you would actually be able to strike a deal to use the chip in the first place. For example, if you wanted to use a future Apple ARM processor, good luck. Then again, never say never. 😛
Amazon's ARM CPU uses mostly off-the-shelf IP from ARM. However, so does Ampere Computing (by which I mean the company, not Nvidia's GPU architecture), so their next eMAG CPU should be comparable.

As Ampere Computing is an independent company (with some of their equity held directly by ARM), I'm sure they're a viable option for partnerships of this sort.

There's also the group formerly known as AMCC/X-Gene, and maybe a couple of other ARM server chips out there. Plus, a line of them from Huawei, though there's about a 0% chance any US company is going to partner with them these days.

That's not even mentioning Nuvia, though I suspect the litigation by Apple has frozen pretty much any and all business development activity between them and prospective partners.
 