News: AMD RDNA 4 coming in early 2025, set to deliver ray tracing improvements, AI capabilities


bit_user
And I thought for a moment you didn't care about transistor counts. Oh lord
The specific claim I was trying to address is that AMD has been using a "small die strategy" since HD 5000. Because the nodes don't always match, the closest way to compare is by transistor count. It's not perfect, but it's the least-bad option if we're trying to work out what AMD is willing to do on the "cost" side of the equation.

Also, GPU performance correlates much more directly with transistor count, because the individual processing elements are a lot simpler and more similar between competitors and generations than with CPUs. So, it's actually more relevant, which is part of the reason "die size" was even a legitimate question.
 

TheHerald
The specific claim I was trying to address is that AMD has been using a "small die strategy" since HD 5000. Because the nodes don't always match, the closest way to compare is by transistor count. It's not perfect, but it's the least-bad option if we're trying to work out what AMD is willing to do on the "cost" side of the equation.
And whose argument was that a couple of weeks ago....

Anyway, I'm actually agreeing with your method, so all's well.
 

TheHerald
You're not saying anything different in your addendum. Both GPU and CPU performance correlate with transistor count. All else being equal, the CPU/GPU with the higher transistor count will perform better, period.
 

ezst036
** sigh **

This isn't the first time I've seen people spouting such rubbish, so I just wasted an hour of my life to make some nice plots to help you see the light.

First, let's look at the specific claim of die sizes:
[Chart: AMD vs. Nvidia flagship GPU die sizes by generation]

You can clearly see several points where AMD's die is either similar to or bigger than Nvidia's:
  • HD 7970
  • R9 Fury X
  • RX Vega

Of course, die sizes are only comparable at the points when AMD and Nvidia are on roughly the same node. Starting with Radeon VII, there was some significant divergence. So, now let's look at actual transistor counts:
[Chart: AMD vs. Nvidia flagship GPU transistor counts by generation]

Here, we can see that:
  • HD 7970 had significantly more transistors than the GTX 680
  • R9 Fury X had 11% more transistors than the GTX 980 Ti
  • RX Vega had a few more transistors than the GTX 1080 Ti
  • Radeon VII still had 71% as many transistors as RTX 2080 Ti
  • RX 6950 XT had 94.7% as many transistors as the RTX 3090
  • RX 7900 XTX has 75.6% as many transistors as the RTX 4090

So, it wasn't until the RTX 2080 Ti, in 2018, that Nvidia really started pulling ahead on transistor count. Surely by no coincidence, that's also Nvidia's first card to feature DLSS and ray tracing. Even so, AMD nearly matched the transistor count of the RTX 3090 with the RX 6950 XT.

Sources:

Thank you for the in-depth response. I was not aware of charts such as these.
 

bit_user
Thank you for the in-depth response. I was not aware of charts such as these.
Well, I made the charts to help illustrate the data. I was going to just cite some key examples, but I thought the plots would give a clearer picture. Plus, I also wanted to see it for myself.
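
If anyone wants to reproduce them, the whole exercise is just plotting the published transistor counts side by side. Here's a rough sketch of the idea in matplotlib, using the commonly cited counts in billions (exact figures vary slightly between sources, so treat them as approximate):

```python
# Rough sketch of the transistor-count chart. Counts are in billions,
# per the commonly published figures for each die; treat as approximate.
import matplotlib.pyplot as plt

matchups = [
    ("HD 7970",     4.31, "GTX 680",     3.54),
    ("R9 Fury X",   8.9,  "GTX 980 Ti",  8.0),
    ("RX Vega 64",  12.5, "GTX 1080 Ti", 11.8),
    ("Radeon VII",  13.2, "RTX 2080 Ti", 18.6),
    ("RX 6950 XT",  26.8, "RTX 3090",    28.3),
    ("RX 7900 XTX", 57.7, "RTX 4090",    76.3),
]

x = range(len(matchups))
plt.plot(x, [amd for _, amd, _, _ in matchups], marker="o", color="red",   label="AMD")
plt.plot(x, [nv  for _, _, _, nv  in matchups], marker="s", color="green", label="Nvidia")
plt.xticks(x, [f"{a}\nvs. {n}" for a, _, n, _ in matchups], fontsize=8)
plt.ylabel("Transistors (billions)")
plt.title("Flagship GPU transistor counts: AMD vs. Nvidia")
plt.legend()
plt.tight_layout()
plt.show()
```

The percentages I quoted fall straight out of those numbers, e.g. 13.2 / 18.6 ≈ 71% for Radeon VII vs. the RTX 2080 Ti.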

Where I think people might have gotten the idea that AMD gave up on big dies is from the Polaris era (RX 400 & RX 500 series) and RDNA1, which were both focused on mid-range and entry level. If you forget about Vega (and Radeon VII was more of a footnote) and weren't paying attention to the competitiveness of RDNA2, then you could be forgiven for thinking AMD has only been playing for the mid-tier for quite some time.

The other thing about Vega is that it performed like an upper mid-tier GPU, in spite of its die size and memory bandwidth numbers. People might not have realized that it was intended to reprise the Fury X vs. 980 Ti battle, but Vega underperformed and the GTX 1080 Ti overperformed. As a result, Vega 64 could only face off against the regular GTX 1080. I still credit Vega for scaring Nvidia into launching the 1080 Ti at a lower price than they otherwise would've. If you compare its fps/$ with the regular GTX 1080, it was a relative bargain. That was probably the last time Nvidia overestimated AMD.
 

ezst036
Where I think people might have gotten the idea that AMD gave up on big dies is from the Polaris era (RX 400 & RX 500 series) and RDNA1, which were both focused on mid-range and entry level. If you forget about Vega (and Radeon VII was more of a footnote) and weren't paying attention to the competitiveness of RDNA2, then you could be forgiven for thinking AMD has only been playing for the mid-tier for quite some time.
Yeah, that's about where I get it from. Also look at the latest three generations in your own charts. The VII, and in particular the 6950 XT and 7900 XTX, are all smaller than the corresponding Nvidia chips (20x0, 30x0, 40x0). Perhaps it's not as much as it looks on paper, but it is true that more shader units, more tensor cores, and more RT units do produce more performance.

https://www.tomshardware.com/reviews/amd-radeon-rx-7900-xtx-and-xt-review-shooting-for-the-top

The 4090 has basically 4,000 more shader units, something like triple the tensor cores, and 30% more RT cores.

Somehow I think that if the XTX had over 20,000 shaders, plus scaled up the other numbers accordingly because it's a bigger chip, it would possibly win.

I won't lie though, AMD traditionally has driver issues (they come and go, per generation, some more than others) and that does trip them up a lot. But they can't beat both a better driver and a simply bigger chip with more muscle baked into the silicon.

16384 GPU shaders don't lie, and they don't lose.
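
To put rough numbers on that gap (napkin math from the published specs; note I'm counting the XTX's RDNA 3 dual-issue ALUs as 12288 shaders, which is how its official peak figure is derived):

```python
# Napkin math: peak FP32 throughput from shader count and boost clock.
# Each shader can issue one FMA (2 FLOPs) per clock.
def peak_tflops(shaders: int, boost_ghz: float) -> float:
    return shaders * 2 * boost_ghz / 1000.0

print(peak_tflops(16384, 2.52))  # RTX 4090:    ~82.6 TFLOPS
print(peak_tflops(12288, 2.50))  # RX 7900 XTX: ~61.4 TFLOPS
```

That's roughly a third more raw shader throughput for the 4090 before you even count the extra tensor and RT hardware.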
 

TheHerald
Yeah, that's about where I get it from. Also look at the latest three generations in your own charts. The VII, and in particular the 6950 XT and 7900 XTX, are all smaller than the corresponding Nvidia chips (20x0, 30x0, 40x0). Perhaps it's not as much as it looks on paper, but it is true that more shader units, more tensor cores, and more RT units do produce more performance.

https://www.tomshardware.com/reviews/amd-radeon-rx-7900-xtx-and-xt-review-shooting-for-the-top

The 4090 has basically 4,000 more shader units, something like triple the tensor cores, and 30% more RT cores.

Somehow I think that if the XTX had over 20,000 shaders, plus scaled up the other numbers accordingly because it's a bigger chip, it would possibly win.

I won't lie though, AMD traditionally has driver issues (they come and go, per generation, some more than others) and that does trip them up a lot. But they can't beat both a better driver and a simply bigger chip with more muscle baked into the silicon.

16384 GPU shaders don't lie, and they don't lose.
No. What's missing from the chart is the 4080; if it were in the chart, you'd see that the 4080 is smaller than the 7900 XTX while delivering similar raster performance and much higher RT performance.
 

bit_user
The VII, and in particular the 6950 XT and 7900 XTX, are all smaller than the corresponding Nvidia chips
As I mentioned, Radeon VII had 71% as many transistors as the 2080 Ti. What complicates the comparison is that the RTX 2080 Ti had RT and Tensor cores. Clearly, a lot of its transistor budget went to those, because I recall how underwhelmed people were at the time by the improvement in raster performance vs. its predecessor (the GTX 1080 Ti), yet the price jumped from $700 to $1,000. RT was new and almost unused, and DLSS 1.0 wasn't very good and was pretty widely derided.

For its part, Radeon VII wasted some die space on fp64, since it was mainly aimed at the datacenter. It was AMD's last dual-purpose gaming/datacenter GPU. AMD even said they didn't originally plan on releasing it as a gaming GPU.

As for the RX 6950 XT, I had also pointed out that it has 94.7% as many transistors as the RTX 3090 and is easily its equal in raster performance.

True, if AMD thought the RX 7900 XTX was going to beat Nvidia, they underestimated how big the RTX 4090 would be. You can't blame them, since most people were pretty surprised by just how big and fast it is.

Perhaps it's not as much as it looks on paper, but it is true that more shader units, more tensor cores, and more RT units do produce more performance.
At times in the past, AMD had a lead in the TFLOPS race, but it didn't translate well to gaming performance. The RDNA architecture was devised partly as a way to address that.

For instance, Fury X had 4096 of the equivalent of what Nvidia calls "CUDA Cores", yet it was neck and neck with the GTX 980 Ti, which had only 2816 CUDA Cores.
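
Run the same napkin math that came up earlier in the thread, at reference clocks, and you can see the paper gap (the 980 Ti's real-world boost typically ran higher than the official 1075 MHz, but this is the spec-sheet view):

```python
# Peak FP32 at reference clocks: shaders x 2 FLOPs (FMA) x GHz.
print(4096 * 2 * 1.050 / 1000)  # Fury X:     ~8.6 TFLOPS
print(2816 * 2 * 1.075 / 1000)  # GTX 980 Ti: ~6.1 TFLOPS
```

Roughly a 40% paper lead for the Fury X, yet the two traded blows in actual games.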

Somehow I think that if the XTX had over 20,000 shaders, plus scaled up the other numbers accordingly because it's a bigger chip, it would possibly win.
Could be. I think AMD messed up something in the cache architecture when they moved the L3 out onto the MCD chiplets. I'd love to get the inside dirt on where its real performance bottlenecks are.

That said, I think people have gotten pretty good performance out of them by overclocking.
 

bit_user
What's missing from the chart is the 4080,
I purposely just compared flagships. That was the point I was responding to.

if it were in the chart, you'd see that the 4080 is smaller than the 7900 XTX while delivering similar raster performance and much higher RT performance.
As I said in the above post, the RX 7900 XTX has some bottleneck(s) I assume are largely due to its chiplet architecture. I can't shake the feeling they compromised the L3 cache in the process.

Regarding RT performance, it's no secret that AMD hasn't been prioritizing that. The RX 8000 series is supposed to gain a lot of ground there, so let's see. I doubt it'll be enough for them to catch Nvidia, but I think the minimum bar is for it to be enough for RT titles to be playable.
 

TheHerald
I purposely just compared flagships. That was the point I was responding to.


As I said in the above post, the RX 7900 XTX has some bottleneck(s) I assume are largely due to its chiplet architecture. I can't shake the feeling they compromised the L3 cache in the process.

Regarding RT performance, it's no secret that AMD hasn't been prioritizing that. The RX 8000 series is supposed to gain a lot of ground there, so let's see. I doubt it'll be enough for them to catch Nvidia, but I think the minimum bar is for it to be enough for RT titles to be playable.
RT ain't going to be playable for at least 15 years. You need the best card at every resolution, and then you have to make some concessions on top of that to make it work. If AMD half-asses it again - oh well, it just won't be playable.
 