Nvidia Switches Gears, Chooses Sapphire Rapids for DGX H100

Intel can do whatever they want; they will still lose market share. Too bad it's not 2005 anymore, or even earlier. Performance or bust.
 
Even if that were the case, you're still wrong; I've already posted a video that debunks this. Nvidia has no hardware scheduling comparable to AMD's, which is why its driver overhead is way higher in recent games.
That overhead probably comes from something else, not just the scheduler. Starting with Volta, Nvidia actually reintroduced a hardware scheduler in their GPUs. What we are seeing with Nvidia is the price they pay for going with a more compute-oriented architecture even on gaming cards. Before RDNA we saw the opposite: with GCN, AMD gained ground or performed much better as the resolution went up. The Fury X is one card that was heavily affected by this. In some games at 1080p the Fury X ends up performing in the ballpark of an RX 480/GTX 1060, but at 4K it flexes its muscles and performs as fast as a GTX 980 Ti or GTX 1070.
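Not the original poster's data, but a minimal sketch of that 1080p-vs-4K pattern under an assumed bottleneck model: delivered frame rate is capped by whichever of the CPU/driver side or the GPU side is slower, and only the GPU side scales with pixel count. All figures below are made up purely for illustration.

```python
# Toy bottleneck model (illustrative, made-up numbers; not benchmarks).
# Delivered FPS is limited by the slower of the CPU/driver ceiling and the
# GPU throughput, and only the GPU side scales with resolution.

def delivered_fps(cpu_fps_ceiling, gpu_fps_at_1080p, pixel_scale):
    """pixel_scale: 1.0 for 1080p, 4.0 for 4K (four times the pixels)."""
    gpu_fps = gpu_fps_at_1080p / pixel_scale
    return min(cpu_fps_ceiling, gpu_fps)

# Hypothetical "big GCN" card: lots of raw throughput, low DX11 driver ceiling.
# Hypothetical mid-range card: less throughput, but no driver ceiling problem.
for res, scale in (("1080p", 1.0), ("4K", 4.0)):
    big_gcn = delivered_fps(cpu_fps_ceiling=90, gpu_fps_at_1080p=240, pixel_scale=scale)
    midrange = delivered_fps(cpu_fps_ceiling=200, gpu_fps_at_1080p=160, pixel_scale=scale)
    print(f"{res}: big GCN {big_gcn:.0f} fps, mid-range {midrange:.0f} fps")

# 1080p: big GCN 90 fps, mid-range 160 fps  -> the big card looks mid-range
# 4K:    big GCN 60 fps, mid-range 40 fps   -> the raw throughput finally shows
```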
 
This still isn't right. Nvidia's scheduling is mostly done in software, with hardware handling only limited parts; it's not comparable to AMD's.

GCN, and especially big GCN like the Fury X, had a different problem, and it wasn't a feature: the shader cores were underutilized because of AMD's suboptimal DX11 driver, which meant the card needed a higher resolution before the CPU limit was lifted. This is also why GCN's theoretical performance numbers (TFLOPS) rarely translated into reality, while Nvidia's real-world performance was much closer to its paper numbers, because its DX11 driver could spread work across multiple CPU cores and thus feed the GPU better.
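To make the paper-versus-reality point concrete (my own back-of-the-envelope numbers using reference boost clocks, not anything from the post above): FP32 throughput on paper is just two ops per shader per clock, and on that metric Fiji looks far stronger than the cards it actually traded blows with at 1080p.

```python
# Paper FP32 throughput: 2 ops (one FMA) per shader per clock.
def paper_tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000

cards = {
    "Fury X (Fiji)": (4096, 1.050),
    "GTX 980 Ti":    (2816, 1.075),
    "GTX 1070":      (1920, 1.683),
    "RX 480":        (2304, 1.266),
}
for name, (shaders, clock) in cards.items():
    print(f"{name:14s} ~{paper_tflops(shaders, clock):.1f} TFLOPS on paper")

# Fury X (Fiji)  ~8.6 TFLOPS
# GTX 980 Ti     ~6.1 TFLOPS
# GTX 1070       ~6.5 TFLOPS
# RX 480         ~5.8 TFLOPS
# Yet in CPU/driver-limited DX11 games at 1080p the Fury X often landed near
# the ~5.8 TFLOPS RX 480, i.e. a lot of the paper throughput went unused.
```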

RDNA never had GCN's problem: not only was it introduced at a time when DX12 was already popular, it is also a vastly more balanced architecture for gaming, more comparable to Nvidia architectures like Maxwell or Pascal.

Why do you think the Vega 64 aged better than the GTX 1080? Again, it is because of DX12 and Vulkan. Its 4096 shader cores had massive problems being utilized in DX11, but in time DX12 came to the rescue and made the GPU better. That forward-looking character is what people call "finewine". With RDNA and its successors, AMD has a more balanced architecture that no longer depends on finewine, because it is actually built for the present rather than the future. This is also why RDNA 2 is vastly more popular, aside from the fact that the drivers are simply better and the performance is competitive with anything Nvidia has.
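A rough way to see the headroom argument (again my own simplification, ignoring bandwidth, geometry and ROP limits): Vega 64 carries so much more paper throughput than the GTX 1080 that any API which raises shader utilization moves the comparison in Vega's favor.

```python
# How much of Vega 64's paper FP32 throughput must be usable before it
# matches a fully fed GTX 1080? (Reference boost clocks, FMA = 2 ops/clock.)
vega64_tflops  = 2 * 4096 * 1.546 / 1000   # ~12.7 TFLOPS
gtx1080_tflops = 2 * 2560 * 1.733 / 1000   # ~8.9 TFLOPS

breakeven = gtx1080_tflops / vega64_tflops
print(f"Vega 64 needs only ~{breakeven:.0%} shader utilization to match a GTX 1080")
# -> ~70%; DX12/Vulkan titles that feed the shaders better than that tip the
#    result towards Vega, which is the effect people call "finewine".
```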

Nvidia, on the other hand, has gone a universal-architecture route since Turing, where the gaming and compute architectures are very similar. This is also why AMD was able to catch up: they now have two different architectures, one optimized for gaming (RDNA) and one for the data center (CDNA).