News Doom Eternal Runs Faster on Old AMD GPUs Than Comparable Nvidia GPUs


TJ Hooker

Titan
Ambassador
It's about CUDA-to-memory clocks, shared-to-isolated cache ratios... The magic of CUDA cores is that you can slap more of them at rasterization than at shaders, etc. If you move those a bit left or right, you can utilize the card at 99.8%, where on AMD you utilize the shaders at 100% and the other units at 70-80%, because each game doesn't use the same ratios between components the same way. Play a bit with voltages, speeds, and what has more cache, and this is what they gained.
This... does not seem right. Nvidia has fixed ratios of its various hardware blocks, e.g. shaders (which Nvidia calls CUDA cores), TMUs, ROPs, etc., same as AMD. It's not that the entire GPU is just shaders that magically transform to perform all the other functions on demand. So no, in your example, the GPU can't just start using a shader/CUDA core as a rasterizer rather than a shader.

In general Nvidia might have a more optimal ratio of these components for gaming; I don't know. But that ratio is going to be constant for a given GPU (maybe for all GPUs with the same microarchitecture, not sure). Drivers have nothing to do with it.
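To put rough numbers on "fixed ratios": the theoretical peak rates just fall out of unit count times clock, so the balance between compute, texturing, and fill rate is baked into the silicon. A back-of-the-envelope sketch, using approximate public spec-sheet figures (treat the numbers as illustrative):

Code:
# Peak rates fall out of fixed unit counts x clock; the unit mix itself
# is baked into the silicon and can't be reshuffled by a driver.
# Specs are approximate public figures (boost clock in GHz).
gpus = {
    #            shaders  TMUs  ROPs  clock (GHz)
    "GTX 1060": (1280,     80,   48,  1.71),
    "RX 580":   (2304,    144,   32,  1.34),
}

for name, (shaders, tmus, rops, clock) in gpus.items():
    tflops = 2 * shaders * clock / 1000  # 2 FP32 ops per FMA per shader
    gpix = rops * clock                  # 1 pixel per ROP per clock
    gtex = tmus * clock                  # 1 texel per TMU per clock
    print(f"{name}: {tflops:.1f} TFLOPS, {gpix:.0f} GPix/s, {gtex:.0f} GTex/s")

A shader can't stand in for a missing ROP, so a fill-rate-heavy workload can bottleneck one card while a compute-heavy workload bottlenecks the other.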
 

folem

Distinguished
Oct 9, 2011
58
4
18,635
I think drivers are actually the opposite of the cause here. With the GCN (and the earlier TeraScale) architectures, AMD wasn't really building graphics cards; they were writing graphics drivers for general-purpose compute cards (with display outputs added on almost as an afterthought). NVIDIA tried to do that with Fermi, but when that turned out as badly as it did they immediately reversed course with Kepler, and completely separated their Tesla compute cards from GeForce and Quadro with Maxwell. Yes, that's a simplification, but it's accurate enough, so don't nitpick.

The point being that AMD's hardware of that era had far more raw compute throughput on paper than NVIDIA's, but NVIDIA usually made up for it by being more specialized and having better software. And that really shows when the drivers have no game-specific optimizations and have to fall back to a generic profile. With RDNA 2 and CDNA, AMD is following NVIDIA's lead; don't expect RDNA cards to age nearly as gracefully as GCN cards did.
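To make the "raw power vs. software" point concrete, here's a toy model; the utilization figures below are invented placeholders for illustration, not measurements of any real card:

Code:
# Toy model only: delivered throughput = paper spec x fraction kept busy.
# Both utilization numbers are made up for illustration.
def effective_tflops(peak_tflops, utilization):
    return peak_tflops * utilization

wide = effective_tflops(6.2, 0.65)    # big shader array, poorly fed
narrow = effective_tflops(4.4, 0.95)  # smaller array, well scheduled
print(f"wide: {wide:.2f}, narrow: {narrow:.2f}")  # 4.03 vs 4.18
# The card with less paper compute wins once software keeps it fed.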

Someone else mentioned memory capacity being an issue, but as another poster pointed out, the 2GB GTX 1050 beating the 3GB GTX 780 would seem to disprove that.
 

evolucion888

Distinguished
Aug 19, 2008
11
0
18,510
TeraScale isn't even close to GCN or RDNA. TeraScale is VLIW-based, whereas GCN and RDNA are SIMD-based. All of the Nvidia architectures since at least Fermi are similar in design to the entire GCN family.

As a matter of fact, nearly every architecture out there is SIMD. GCN is a vector design, and Fermi was very similar to GCN, but Maxwell and Kepler had a software-based scheduler much like TeraScale's; funnily enough, both architectures are superscalar. Pascal and Turing are pretty much scalar.
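For anyone unfamiliar with the VLIW/SIMD distinction being drawn here, a deliberately simplified sketch (a toy model, nothing like real hardware; TeraScale packed VLIW5 instruction words, while GCN runs wavefronts on 16-wide SIMD units):

Code:
# Toy contrast of the two issue models, not real hardware behavior.
# TeraScale-style VLIW: the compiler must pack independent scalar ops
# into fixed-width instruction words; slots it can't fill are wasted.
def vliw_issue(independent_ops, width=5):  # TeraScale was VLIW5
    words = []
    for i in range(0, len(independent_ops), width):
        bundle = independent_ops[i:i + width]
        bundle += ["nop"] * (width - len(bundle))  # wasted slots
        words.append(bundle)
    return words

# GCN-style SIMD: one instruction is broadcast across a vector of lanes;
# filling the machine is the hardware scheduler's job, not the compiler's.
def simd_issue(op, values, lanes=16):  # GCN uses 16-wide SIMD units
    return [[op(v) for v in values[i:i + lanes]]
            for i in range(0, len(values), lanes)]

print(vliw_issue(["mul", "add", "mad"]))   # 2 of 5 slots burned as nops
print(simd_issue(lambda x: x * 2, list(range(16))))  # all 16 lanes busy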
 

evolucion888

Distinguished
Aug 19, 2008
11
0
18,510
CUDA is just an API, nothing to do with architectures overall.

Nvidia also calls the cores in its GPUs CUDA cores. Starting with the G80 in 2006, Nvidia called them Stream Processors, which were very different from the units in previous generations of GPUs. By the GTX 580 in 2010 at the latest, Nvidia had started calling them CUDA cores: "Fundamentally GF110 is the same architecture as GF100, especially when it comes to compute. 512 CUDA Cores are divided up among 4 GPCs" (https://www.anandtech.com/show/4008/nvidias-geforce-gtx-580/2). Essentially these are very similar to the Stream Processors they were called back at the G80. While yes, CUDA is an API, it is also what Nvidia has called its GPU cores since before GCN existed.