News Rivals in Arms: Nvidia's $199,000 Ampere PC Taps AMD Epyc CPUs

Wow, between AMD and Intel, I guess this shows who Nvidia is more concerned about, as a competitive threat.

I wonder how much leeway AMD has to refuse to work with someone, though. Could they have shut this down, if they really wanted to?
 
You beat me to it - yeah, it looks like Nvidia sees a friendlier port in AMD than in Intel. With Intel getting into the GPU market, Nvidia has to think strategically. And AMD would not turn down any business - they need every single dollar they can get and cannot afford to be choosy.
 
5 petaFLOPS AI and 10 petaOPS INT8, damn!

I'll buy two of them to manage my cooking recipes and Surprizamals collection.

Comforting thought: these things will end up on eBay at ~$1,999 in 15+, 20+ or 25+ years. By that time your hyper-quantum smart phone might be more powerful though.
 
Wow, between AMD and Intel, I guess this shows who Nvidia is more concerned about, as a competitive threat.

I wonder how much leeway AMD has to refuse to work with someone, though. Could they have shut this down, if they really wanted to?

The way I see it, Intel had nothing to do with Nvidia's decision. It was a choice between using an AMD product in their platform or not having a platform at all, since PCIe 4.0 was definitely needed for the amount of bandwidth required. What this tells us is that Nvidia will not hurt its own business to avoid supporting AMD.

I'm sure we'll see Intel variants of this platform as soon as PCIe 4.0 capable processors are available from them.

As to shutting it down, I don't think it's feasible, nor is it a good idea. The added market share for EPYC CPUs is very welcome, and so is the marketing fodder from Nvidia using their tech.
 
Bit_User, Decidium ...
Did either of you RTFArticle??
It clearly states Intel doesn't support PCIe 4.0 but AMD does.
That made the CPU choice clear cut.
Why would AMD CPU division let GPU competition undermine product placement in prestigious server racks?
 
I mean, Intel used an AMD GPU for that big SoC thing they made, right? Don't tech companies license/collaborate with each other all the time?
Yes, all the time. Tech companies are not run by fanboys. Back before Ryzen was released, when AMD CPUs sucked for gaming, AMD used to benchmark their new GPUs on Intel systems for their promotional material. Companies want their products to be the best they can be; if that means using parts from a competitor, so be it.

Obviously Nvidia went with AMD here because they have the superior product. Intel doesn't have a dual-CPU config that offers 128 cores with that level of power efficiency, nor 128 lanes of PCIe 4.0.
 
If it's the AST25xx series, then it can barely run Notepad, much less Crysis.
LOL! OMG, I tried to run a little OpenGL 3.x program on a machine with one, running Ubuntu 18.04.4 and it hosed the X server hard.

I don't even fully understand why, because I think it should've been using llvmpipe, but that's the first and last time I'll try it.
 
Probably a higher wattage Corsair CX. You can run down to the local Best Buy and get a replacement the same day. You don't want your 200 grand system out of commission for any length of time waiting for a proprietary replacement part.
Just the GPUs in that thing use 3.2 kW, and that's before adding in 9x 200 Gbps NICs, 6x NVMe SSDs, and 2x EPYC 7742 (225 Watts, each).

So, no. No COTS PSUs, here. I'm sure you were joking about that, but just in case anyone didn't pick up on the sarcasm.

As for down time, there are contract service organizations that will do same-day on-site service, in many major metro areas. However, I'd bet the service contracts on these machines are a significant fraction of the base hardware cost.
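Quick back-of-the-envelope tally of that power budget (a sketch: the GPU total, EPYC TDP, and component counts come from the post above, but the per-NIC and per-SSD wattages are my rough guesses, not published figures):

```python
# Rough power budget for a DGX A100-class box.
# GPU total, EPYC 7742 TDP, and component counts are from the thread;
# per-NIC and per-SSD figures are rough assumptions for illustration.
gpu_total_w = 3200            # 8x A100 GPUs, per the thread
cpu_w = 2 * 225               # 2x EPYC 7742 @ 225 W TDP each
nic_w = 9 * 25                # 9x 200 Gbps NICs, ~25 W each (assumed)
ssd_w = 6 * 20                # 6x NVMe SSDs, ~20 W each (assumed)

total_w = gpu_total_w + cpu_w + nic_w + ssd_w
print(f"~{total_w / 1000:.1f} kW before fans, RAM, and PSU conversion losses")
```

Either way you land around 4 kW for the major components alone, which is well past what any off-the-shelf ATX supply delivers.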
 
Bit_User, Decidium ...
Did either of you RTFArticle??
It clearly states Intel doesn't support PCIe 4.0 but AMD does.
That made the CPU choice clear cut.
Yeah, yeah, you have to go and be all reasonable and logical and fact-based.
; )

In truth, I hadn't read the full article, when I made that post, but I also didn't delete or edit it afterwards. I dunno, playing up the whole partisanship thing is a little fun.

Honestly, I did not expect them to use AMD, though. I even said as much, in a comment on their CEO's "oven" video, a couple days ago. I feel like they might've had to swallow a little pride in doing that. Jensen strikes me as hyper-competitive, like that.
 
Comforting thought: these things will end up on eBay at ~$1,999 in 15+, 20+ or 25+ years.
Are you buying 15-25 year old supercomputers on ebay, today?

You really have to love old hardware to do that sort of thing, because some of the issues you encounter are:
  • lack of software support & compatibility
  • hardware failures are both common and difficult to debug in complex, proprietary systems with no tech support to call and limited access to documentation.
  • power & air conditioning costs.
It's a serious PITA. I had a friend who once tried to get an old 1990s supercomputer running, and it was just nuts what he had to go through just to get the thing booted and kinda working. Oh, and then you want to actually use it for anything useful? Good luck. Your time is so much better spent using modern, well-supported stuff that's comparatively simple and basically just works.

And if you don't believe me about hardware failures, this thing has 8 huge GPUs, 2 huge CPUs, exotic, proprietary switches and interconnects - that's a lot of opportunities for stuff to go wrong.

By that time your hyper-quantum smart phone might be more powerful though.
We'll see about that. I doubt quantum computers will ever be something you'll have in your home, let alone your pocket. And Dennard scaling broke down a while ago. Technology will continue improving for the foreseeable future, but the rate of improvements in hardware will certainly taper off.
 
I don't know where the 5 petaFLOPS you quote comes from, but this article shows 9.7 TFLOPS double precision, while Intel's Ponte Vecchio estimate was 66 TFLOPS DP ... which would be important for HPC.

https://www.anandtech.com/show/15801/nvidia-announces-ampere-architecture-and-a100-products

Double Precision
9.7 TFLOPs
(1/2 FP32 rate)​

Direct your attention to this spec, from the same article:

INT8 Tensor
624 TOPs​

Now, consider there are 8 GPUs in this machine. 8 * 624 = 4,992 ~= 5 POPS.

So, definitely not FLOPS. They're just counting multiplies and accumulates involved in computing int8 tensor products. Useful for AI inferencing, but it's a mistake to equate them with any kind of FLOPS, much less general-purpose fp64.
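The arithmetic in one place (a sketch; the 624 TOPS per-GPU figure is from the AnandTech spec table quoted above, and the GPU count is from the article):

```python
# Aggregate INT8 tensor throughput across the DGX A100's 8 GPUs.
# 624 TOPS per A100 is the "INT8 Tensor" figure quoted above.
tops_per_gpu = 624
num_gpus = 8

total_tops = tops_per_gpu * num_gpus   # 4992 TOPS
total_pops = total_tops / 1000         # ~5 petaOPS, matching the figure above

print(f"{total_tops} TOPS ~= {total_pops:.0f} POPS (int8 tensor ops, not FLOPS)")
```

Which is why the headline number is properly petaOPS of int8 tensor math, not petaFLOPS.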
 
I feel like they might've had to swallow a little pride in doing that. Jensen strikes me as hyper-competitive, like that.
Yeah, I'm sure they would have preferred to use Intel, were it a viable option. That might not be the case in a couple of years, after Intel attempts to wade onto the GPU battlefield once more. If Intel steals a lot of share from Nvidia, their attitude towards AMD could shift quite a bit. Fascinating times.
 
If Intel steals a lot of share from Nvidia, their attitude towards AMD could shift quite a bit. Fascinating times.
Hey, let's not forget POWER and ARM, both of which are fully supported by Nvidia's software stack.

Now, it seems that POWER has well and truly been left in the dust. However, Amazon just showed how strongly ARM can compete in the server domain with their latest 64-core offering. So, perhaps the successor to the DGX A100 will be ARM-based.
 
Just the GPUs in that thing use 3.2 kW, and that's before adding in 9x 200 Gbps NICs, 6x NVMe SSDs, and 2x EPYC 7742 (225 Watts, each).

So, no. No COTS PSUs, here. I'm sure you were joking about that, but just in case anyone didn't pick up on the sarcasm.

As for down time, there are contract service organizations that will do same-day on-site service, in many major metro areas. However, I'd bet the service contracts on these machines are a significant fraction of the base hardware cost.
Don't forget that servers already have redundant power supplies. If one PSU fails, all you have to do is order a new one and install it; replacements typically arrive next business day at the slowest.
 
Man, I don't know what's biting Intel in the rear harder: Their foot dragging on PCIe 4.0, crap PCIe lane counts, or their absurd TDP.

I'm going to say TDP is most likely. While they could shoe-horn in 4.0 support, and quickly expand lane count (by letting the chiplets use more of their own lanes), their thermal design is absolute garbage in their newest and future offerings, chiefly because they're not even hiding that they're just stuffing the older designs in pairs under the lid. Remember when they blasted AMD for doing that?
 