News Nvidia counters AMD DeepSeek benchmarks, claims RTX 4090 is nearly 50% faster than 7900 XTX

We can only guess why these clowns ran RTX on llama-cuda and compared Radeon on llama-vulkan instead of ROCm. Probably they didn't want to expose their garbage perf/$ to public view 😂
At least they updated the driver to 25.1.1
 
Nvidia’s results are a slap in the face to AMD’s own benchmarks featuring the RTX 4090 and RTX 4080. The RX 7900 XTX was faster than both Ada Lovelace GPUs except for one instance, where it was a few percent slower than the RTX 4090. The RX 7900 XTX was up to 113% quicker and 134% faster than the RTX 4090 and RTX 4080, respectively, according to AMD.

But if he can't even understand what he's looking at, how can he write a serious article? Where have those numbers ever appeared in the AMD graph? Just so you know, 100% is the baseline, so the numbers indicated by AMD are +13% and +34%. What on earth did he write?
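To spell out the math: a quick sketch using the figures quoted in this thread (113% and 134% bars against a 100% baseline).

```python
def gain_over_baseline(relative_percent):
    # AMD's chart plots the RX 7900 XTX relative to a 100% baseline
    # (the competing RTX card), so a 113% bar is a +13% gain,
    # not "113% quicker".
    return relative_percent - 100

print(gain_over_baseline(113))  # → 13
print(gain_over_baseline(134))  # → 34
```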

Well, it's just an article or better say a "rant" written by an NVIDIA fanboy...
 
AMD didn’t run their tests well and nVidia got the opportunity to refute them. Major smack down.
And how do you know that NVIDIA's numbers are the accurate ones?
As I wrote above, he doesn't even know what he's writing; imagine him doing even a minimal critical analysis of what NVIDIA has published.
 
And how do you know that NVIDIA's numbers are the accurate ones?
As I wrote above, he doesn't even know what he's writing; imagine him doing even a minimal critical analysis of what NVIDIA has published.
It almost doesn't matter. AMD made a mistake taking a swipe at nVidia (or anyone for that matter) and leaving themselves open to a smack down.
 
There are lies, damn lies and benchmarks.

Trust no one, especially manufacturer’s benchmarks.

That said it’s a lazy article.
Yup... seems like independent benchmarks are going to be veeery needed.

Interestingly enough, I'd say that Strix Halo will probably trounce the 5090 and 7900 on large models, thanks to the GPU having access to a huge 128 GB of fairly speedy memory (quad-channel LPDDR5X-8000). Maybe even the integrated NPU could give it an additional bump.
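A quick sanity check on that bandwidth claim. The 256-bit effective bus width is my assumption (the post only says quad-channel LPDDR5X-8000), so treat this as a back-of-the-envelope sketch:

```python
# Back-of-the-envelope peak bandwidth for LPDDR5X-8000.
# Assumption (not stated in the post): 256-bit effective bus width.
bus_width_bits = 256
transfers_per_second = 8_000_000_000   # 8000 MT/s per pin
bytes_per_transfer = bus_width_bits // 8
bandwidth_gbs = bytes_per_transfer * transfers_per_second / 1e9
print(bandwidth_gbs)  # → 256.0 (GB/s)
```

Far below a 4090's ~1 TB/s of GDDR6X, so the appeal here is capacity for large models rather than raw speed.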
 
It almost doesn't matter.
How do you mean 'it doesn't matter'? Are you saying that what AMD published is false just because NVIDIA now reports different data and claims it's false? Is what NVIDIA publishes considered an absolute truth? Or worse, is it not even important whether it's true? And by the way, do they only publish 'truthful' data and slides? Like in the case of the 5070=4090 performance...

C'mon everything can be discussed, and the data will be evaluated, but this is definitely an article written by a fanboy, not by a serious editor of a magazine. It would have been fine as a post on some forum, but have you read it? It's full of insinuations like: "Nvidia’s results are a slap in the face to AMD’s" or "The icing on the cake (for Nvidia) is that the RTX 5090 more than doubled the RTX 4090’s performance results, thoroughly crushing the RX 7900 XTX."
Even AMD's data has been reported incorrectly. He probably didn't even take the time to read the charts; he just wanted to write something in favor of NVIDIA. It couldn't be clearer than that.

Read only the title of his previous article on AMD regarding the same matter, it speaks for itself:
"Nvidia usually has better AI performance, but with DeepSeek AI, the tables have turned (according to AMD)"
or in the article itself:
"This should all be taken with a pinch of salt, of course, as we can't be sure how the Nvidia GPUs were configured for the tests (which, again, were run by AMD). Not all AI workloads take advantage of a GPU's full computational throughput. We saw this in our Stable Diffusion tests, where Stable Diffusion did not use FP8 calculations or TensorRT code for processing."

A slightly different way of writing, don't you think? Where is the slap in the face for NVIDIA? For me, the reality is more than evident. We had to write the first article, but we much prefer the latest one.
 
How do you mean 'it doesn't matter'? Are you saying that what AMD published is false just because NVIDIA now reports different data and claims it's false? Is what NVIDIA publishes considered an absolute truth? Or worse, is it not even important whether it's true? And by the way, do they only publish 'truthful' data and slides? Like in the case of the 5070=4090 performance...

C'mon, everything can be discussed, and the data will be evaluated, but this is definitely an article written by a fanboy, not by a serious editor of a magazine. It would have been fine as a post on some forum, but have you read it? It's full of insinuations like "Nvidia’s results are a slap in the face to AMD’s", and even AMD's data has been reported incorrectly. He probably didn't even take the time to read the charts; he just wanted to write something in favor of NVIDIA. It couldn't be clearer than that.

Read only the title of his previous article on AMD regarding the same matter, it speaks for itself:
"Nvidia usually has better AI performance, but with DeepSeek AI, the tables have turned (according to AMD)"
or in the article itself:
"This should all be taken with a pinch of salt, of course, as we can't be sure how the Nvidia GPUs were configured for the tests (which, again, were run by AMD). Not all AI workloads take advantage of a GPU's full computational throughput. We saw this in our Stable Diffusion tests, where Stable Diffusion did not use FP8 calculations or TensorRT code for processing."

A slightly different way of writing, don't you think? Where is the slap in the face for NVIDIA? For me, the reality is more than evident. We had to write the first article, but we much prefer the latest one.
Because if you punch the biggest guy in the face you better be sure you knock him out.
 
Because if you punch the biggest guy in the face you better be sure you knock him out.
You are right on that, but that's exactly the point. Just because NVIDIA says otherwise now, it doesn't mean AMD was wrong to publish that data and take their shot. What were they supposed to do, not publish anything at all? Perhaps their findings did strike a nerve with NVIDIA, and that's certainly evident...

I hope, however, that you will agree with me that the article written does not inspire much impartiality...
 
It almost doesn't matter. AMD made a mistake taking a swipe at nVidia (or anyone for that matter) and leaving themselves open to a smack down.
It's not a mistake to take a swipe at Nvidia when Nvidia is increasingly peddling half-truths and outright bullshit. To Nvidia, a 20-30% performance uplift really means a 2.2x performance gain! The 5080 has an 8% lead over the 7900 XTX, which also outperformed the 4080 by 15% at stock, and yet somehow the 5070 is going to offer the same performance as a 4090!

Intel, Apple, and Nvidia all have the same playbook: carpet-bomb consumers with bullshit marketing claims, don't respond to anyone who acknowledges reality, and sleep tight knowing that the average consumer is too lazy or ignorant to pay attention to the truth.
 
In LM Studio, when I checked between a 7900 XTX and the 4090, the Radeon is slightly faster at the smaller models, with the 4090 taking a small lead at 32B.
Thanks for your report! That seems more or less in line with what AMD said (+13% on 7B, +11% on 8B, +2% on 14B, and -4% on 32B); obviously some figures may differ slightly depending on the configuration.
Slightly faster than the 4090 at the smaller models and slightly slower at the larger one.
 
Would like to see more information on the test posted here.

In LM Studio, when I checked between a 7900 XTX and the 4090, the Radeon is slightly faster at the smaller models, with the 4090 taking a small lead at 32B.
Agreed. In LM Studio my 7800 XT even beats my 4070 Super in smaller models (fewer parameters) and obviously smokes it in models that need between 12GB and 16GB. My 4070 Super only wins in the few more complex models that will still run in 12GB. I generally only test in LM Studio because it's the only solution I know of that runs very easily on both AMD and Nvidia and gives access to thousands of models. The only other testing I've done is Stable Diffusion in CUDA and ROCm. In that testing, the 4070 Super consistently maintained at least a 10% advantage, but I'm 99% sure that's mostly down to CUDA being more optimized than ROCm.
 
If money is your sole metric for comparing, then yes, you are correct.
It's all about the price-to-performance ratio for way too many people caught up in the spreadsheets.

If I had to pay $5 per frame for 120fps, vs $3 per frame for 30fps, I would save my money up and take the 120fps. What the heck am I really going to do with an average 30fps GPU? Clunkity clunk perhaps isn't worth the cheap price of clunk.

Many times a better experience is worth paying more for. But I don't view price as the metric here anyway, only the performance of the challenger against the other's performance.
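Worked out with the post's own hypothetical figures ($5/frame at 120 fps vs $3/frame at 30 fps), the totals look like this:

```python
# Totals implied by the post's hypothetical cost-per-frame figures.
slow_total = 3 * 30      # $90 for the 30 fps card
fast_total = 5 * 120     # $600 for the 120 fps card
print(slow_total, fast_total)  # → 90 600
# Cost-per-frame favors the slow card, but you get 4x the frame rate
# for roughly 6.7x the price: the trade-off the post argues is worth it.
```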