News AMD RDNA 3 GPU Architecture Deep Dive: The Ryzen Moment for GPUs

As history has proven over and over and over: you don't need to have the best product, just one that is reasonably good/effective and way more affordable than the competition.

Chiplets embrace that in full without sacrificing much of the "monolithic goodness". Yes, this bears repeating, because people miss the forest for the trees a lot.

Like AdoredTV mentioned: if not RDNA3, RDNA4 will be the Zen3 moment in the GPU market. RDNA3 is akin to Zen2's moment. We all win, I'd say. The pressure is now on nVidia to get creative, much like it was with Intel.

Regards.
 


Who exactly are these $1,000+ GPUs for?

AMD's client sales were down 40% last quarter, the biggest QoQ sales loss in AMD's history. AMD's stock is down 50% since the beginning of the year.

But AMD and Nvidia still think they can charge $1,000 for a GPU and ignore mainstream sub-$300 GPUs? GPUs that have outrageous prices outside of the US due to the strong dollar.

I'm calling these tech companies' bluff and skipping any PC upgrades until prices come down drastically. Tech companies are overplaying their hand and getting burned with record sales losses in the PC market; the percentage of gamers who can afford a $1,000 GPU, or even a $600 GPU, is minuscule.

The 6-year-old GTX 1060 is still the most used GPU on Steam. Gamers have stopped upgrading PC parts; they can no longer afford to.
 
Last edited:
  • Like
Reactions: The_Git and Amdlova


Who exactly are these $1,000+ GPUs for?

AMD's sales were down 40% last quarter, the biggest QoQ sales loss in AMD's history. AMD's stock is down 50% since the beginning of the year.

But AMD and Nvidia still think they can charge $1,000 for a GPU and ignore mainstream sub-$300 GPUs? GPUs that have outrageous prices outside of the US due to the strong dollar.

I'm calling these tech companies' bluff and skipping any PC upgrades until prices come down drastically. Tech companies are overplaying their hand and getting burned with record sales losses in the PC market; the percentage of gamers who can afford a $1,000 GPU, or even a $600 GPU, is minuscule.

The 6-year-old GTX 1060 is still the most used GPU on Steam. Gamers have stopped upgrading PC parts; they can no longer afford to.
I can't answer on behalf of the vast majority of the GPU market, but I can tell you that people who are into VR are in the market for these.

The ultimate frontier is no longer the regular PC (EDIT: to clarify, I mean 8K or higher FPS), but VR-PC.

Regards.
 


Who exactly are these $1,000+ GPUs for?

AMD's sales were down 40% last quarter, the biggest QoQ sales loss in AMD's history. AMD's stock is down 50% since the beginning of the year.

But AMD and Nvidia still think they can charge $1,000 for a GPU and ignore mainstream sub-$300 GPUs? GPUs that have outrageous prices outside of the US due to the strong dollar.

I'm calling these tech companies' bluff and skipping any PC upgrades until prices come down drastically. Tech companies are overplaying their hand and getting burned with record sales losses in the PC market; the percentage of gamers who can afford a $1,000 GPU, or even a $600 GPU, is minuscule.

The 6-year-old GTX 1060 is still the most used GPU on Steam. Gamers have stopped upgrading PC parts; they can no longer afford to.

For rich-ish people. But they always launch at the high end, where they can charge enough to cover the process refinement period. Once they are building the parts efficiently, they launch the cut-downs. The cheaper parts will come. The real reason you (and everyone else) are able to consider skipping this generation is that there is no DX13 with a new feature all the top games will be using, which means that this generation of parts and the last will be competing purely on price/performance rather than features across all major titles.
 
I agree with the sentiments here. IMO there's less prestige in being known for having the highest-performing GPU when pricing puts it out of range of almost everyone. Nobody buying a $300-$400 GPU is (or should be) making their purchasing decision on the sole notion that the RTX 4090 is the world's fastest GPU and that they should therefore choose Nvidia regardless of the performance tier.
 
The 6-year-old GTX 1060 is still the most used GPU on Steam. Gamers have stopped upgrading PC parts; they can no longer afford to.
This is technically incorrect. Steam lumps ALL GTX 1060 variants under a single label — 3GB, 6GB, and laptop parts. The latest survey shows 7.62% for the GTX 1060, in first place. Fourth place shows the RTX 3060, while sixth place shows the RTX 3060 for Laptops. Add those together and it's a combined 8.86%. If the 1060 series were ungrouped — or all other GPUs were grouped — the RTX 3060 would take first place. Does that really matter, though? Probably not.

The perhaps more important aspect to consider is that the 3080 (1.82%), 3080 Ti (0.72%), and 3090 (0.48%) combined represent 3.02% of the surveyed market. Steam reportedly had around 134 million active monthly users in 2021.

If — big caveat as Valve doesn't provide hard statistics — if 3% of 134 million users have a 3080 or higher Nvidia card right now, that's still four million people in the world. Even if only 10% of those were to upgrade, that would be 400,000 people, and that's almost certainly more than enough to sell all 4090 and 4080 cards that get made in the next few months. Also note that anyone with a 3080 or above likely paid scalper prices for it, exactly the sort of people who would pay over $1000.
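
For anyone who wants to check the napkin math, here's the arithmetic with those survey figures and the 2021 user count (both are estimates, so treat the output as a ballpark):

Code:
# Ballpark check of the figures above (Steam survey shares + ~134M 2021 MAU estimate)
share_3080 = 1.82    # percent
share_3080_ti = 0.72
share_3090 = 0.48
combined = share_3080 + share_3080_ti + share_3090      # 3.02%

steam_users = 134_000_000                               # ~2021 monthly active users
high_end_owners = steam_users * combined / 100          # ~4.05 million people
potential_upgraders = high_end_owners * 0.10            # ~405,000 if only 10% upgrade

print(f"{combined:.2f}% of {steam_users:,} users = {high_end_owners:,.0f} people")
print(f"10% of those = {potential_upgraders:,.0f} potential buyers")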

It's definitely high pricing, from AMD and Nvidia both. We'll see how the 7900 cards perform in a month, and whether or not they sell out at launch. 4090 cards are still holding more or less steady at over $2,100. :'(
 
For rich-ish people. But they always launch at the high end, where they can charge enough to cover the process refinement period. Once they are building the parts efficiently, they launch the cut-downs. The cheaper parts will come. The real reason you (and everyone else) are able to consider skipping this generation is that there is no DX13 with a new feature all the top games will be using, which means that this generation of parts and the last will be competing purely on price/performance rather than features across all major titles.

It's probably for the best that there aren't any new features coming out. Pretty much every 'new version' of software that was developed over the last 2 years sucks compared to the old pre-2020 versions. Android 12/13, Windows 11, YouTube, Discord, Maps, Metaverse, games in general... I think some engineers must have gone a bit too crazy during lockdown.
I don't know how they would have turned DX13 into some derivative games-as-a-service Crypto NFT ad-tracker, but some greedy idiot-in-charge would have probably tried it. Also, there would have definitely been some useless upsell product called "DX13 Pro".

It makes sense that sales would be down across the board since the trend is to grossly over-monetize products that are blatantly inferior to what people are used to using.
 
  • Like
Reactions: citral23
The games I have played over the last year are still on DirectX 9c. My graphics card supports DirectX 12, but the games don't support it. These prices on today's graphics cards are insane. I will get a new graphics card when Stalker 2 launches, but it will be at the 6700 XT level or lower.
 
Agreed with the general sentiment on pricing. I have been waiting a long time for a decent $200 GPU and there don't seem to be any new ones. I finally went with an RX 6600 and am very happy with it, but I had to pay ~$240. Especially with energy prices soaring, finding a decent card that didn't consume 200W was important to me.
 
  • Like
Reactions: bit_user and prtskg
I think AMD is doing well with the design of these new GPUs.
But it is dishonest to claim they are "chiplets" as we have come to know them with Ryzen. It is a redefinition of the word, similar to the word "Nougat", which can now be literally anything in a candy bar, sweetbread, or casserole. If Ryzen had only ever had one CPU die (and who cares how many I/O dies), there would not have been the same performance improvement, and few would think positively of them. 8-core server chips: not good enough. Multiple compute dies like in Ponte Vecchio are chiplets, not RAM controllers broken off and buffered with some cache. GPUs have had separate RAM chips forever, and motherboards used to have little GPUs that used the northbridge as a discrete RAM controller just like these. That was just for additional available system memory, though.

The only thing innovative here is the manufacturing cost savings. That is still worth it, IMO, but going along with AMD's redefinition of "chiplet" to try to throw some of the positive mindshare around Ryzen onto an undeserving implementation in their latest graphics products is knowingly spreading a mistruth.

RDNA3 should still make for excellent GPUs, and the relative increase in shaders will help. Hopefully their cost-savings plan doesn't produce much stutter because of the novel memory controller segmentation. But if you can redefine these GPUs as having chiplets, then AMD was very late to the CPU chiplet party, and everything has chiplets now.

Edit: Including all GPUs that can also use system memory via the CPU chiplet's memory controller, which also has cache. Which also reads from the NAND chiplets via the NAND controller chiplets, which also have cache, which get stuff from the network-connected chiplets.

Either RDNA3 doesn't have chiplets like Ryzen, or chiplets are made from nougat.
 
Last edited:
  • Like
Reactions: eichwana
That's all great.
Will they release a 7500 XT that outcompetes the RX 6600 for 200 euros? 99 percent no. They will release a $200 GPU two years from now that has 4 PCIe lanes and is a carbon copy of an RX 580, like they have been doing for the past four years. Literally.
And the RX 6600 is a garbage-tier card anyway.
So what reason do I have to be excited for their "Ryzen moment"? Their Ryzen moment gave them 300-euro CPUs.
 
Who exactly are these $1,000+ GPUs for?

https://en.wikipedia.org/wiki/Aspirational_brand

Flagship products tend to be aspirational products (let's call such a product A): the average buyer wishes to own one but can't because of the premium price. That creates a want, which translates into the buyer purchasing a lower-line product (product B), which has fewer features but offers higher "value" (more features per dollar).

A's high pricing is intentional, because that's how it creates the perception of a "good deal" for B. If A is priced at $1,000 and B has 90% of A's features but is priced at $300, then B is a "very good deal." Whereas if A were $400, then B at $300 would be just an "OK deal, not too great." The higher A's price, the better a deal B's price is perceived to be. Again, the premium pricing is deliberate, and it is a fundamental marketing strategy.

Now you know why the RTX 4090 is $1,600+.

Buying is not a matter of dollars and cents, but is about the psychology of buying behavior, and it can be manipulated through savvy marketing.

The above example is oversimplified to illustrate the point. Things get more complicated when competitors' pricing comes into play. In this instance, there are only two GPU vendors, neither of whom wants to rock the boat with a price war, so the example still holds pretty well.
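
To put rough numbers on it, using the made-up figures above (you obviously can't literally count "features", so this is only a toy illustration of the perception effect):

Code:
# Toy illustration of the A/B pricing psychology with the hypothetical numbers above
def value(features, price):
    return features / price      # "features per dollar" -- higher reads as a better deal

b = value(90, 300)               # product B: 90% of the features at $300

a_high = value(100, 1000)        # scenario 1: flagship A at $1,000
print(f"B looks {b / a_high:.1f}x the value of A")   # ~3.0x -- B feels like a steal

a_low = value(100, 400)          # scenario 2: flagship A at $400
print(f"B looks {b / a_low:.1f}x the value of A")    # ~1.2x -- B is merely "OK"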
 
You ever see how old-school cartoons are made? Individual cels are drawn and layered or composited to form a scene. The simple elements are composited together to make complex scenes.

Rendering games is a lot like that animation process, done with draw calls. The individual draw calls are fairly simple, so a good amount of the GPU isn't needed and resources go unused. You can send multiple draw calls at once, but with unified memory I/O you often bottleneck on access, as different draw calls require different resources. With independent draw calls and memory controllers, those bottlenecks to independent parallel draw calls are removed, which makes the compositing of individual elements faster.

If I'm correct, I think reviewers will find a large increase in memory usage, and that it is largely dependent on PCIe bus speed. Hence the large memory size.

It's a different approach than I thought AMD would take. But I'm still impressed.
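
To make the bottleneck argument a bit more concrete, here is a toy model of the idea. The numbers are arbitrary and this is purely my own illustration of the reasoning, not how AMD describes the hardware:

Code:
# Toy model: several draw calls each needing one unit of memory traffic.
# With a single shared memory path the traffic serializes behind one controller;
# with an independent controller per stream it can proceed in parallel.
draw_calls = 6
traffic_per_call = 1.0                                   # arbitrary time units

shared_path_time = draw_calls * traffic_per_call         # everything queues up: 6.0 units
independent_time = traffic_per_call                      # each stream has its own path: 1.0 unit

print(f"shared memory I/O:       {shared_path_time} units")
print(f"independent controllers: {independent_time} units")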
 
Once AMD does its product refresh for 27 Gbps GDDR6, it should get VRAM throughput of 1,296 GB/s.

It's interesting how much the gap between GDDR6 and GDDR6X has narrowed with this generation, at least for shipping configurations. AMD's 960 GB/s on the RX 7900 XTX is only 5% less than the 1,008 GB/s of the RTX 4090, whereas the RX 6900 XT was only pushing 512 GB/s against the RTX 3090's 936 GB/s back in 2020.

Hopefully, that bandwidth increase from VRAM will justify another stack of Infinity Cache to boost burst transfer speed to the L2 cache.
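
For anyone who wants to check those numbers: GDDR bandwidth is just the per-pin data rate times the bus width. The bus widths and data rates below are the commonly quoted shipping configurations, so treat them as my assumptions rather than anything from this article:

Code:
# GB/s = (Gbps per pin) * (bus width in bits) / 8
def bandwidth(gbps_per_pin, bus_bits):
    return gbps_per_pin * bus_bits / 8

print(bandwidth(20.0, 384))   # RX 7900 XTX, 20 Gbps GDDR6    ->  960 GB/s
print(bandwidth(21.0, 384))   # RTX 4090, 21 Gbps GDDR6X      -> 1008 GB/s
print(bandwidth(16.0, 256))   # RX 6900 XT, 16 Gbps GDDR6     ->  512 GB/s
print(bandwidth(19.5, 384))   # RTX 3090, 19.5 Gbps GDDR6X    ->  936 GB/s
print(bandwidth(27.0, 384))   # hypothetical 27 Gbps refresh  -> 1296 GB/s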

This is technically incorrect. Steam lumps ALL GTX 1060 variants under a single label — 3GB, 6GB, and laptop parts. The latest survey shows 7.62% for the GTX 1060, in first place. Fourth place shows the RTX 3060, while sixth place shows the RTX 3060 for Laptops. Add those together and it's a combined 8.86%. If the 1060 series were ungrouped — or all other GPUs were grouped — the RTX 3060 would take first place. Does that really matter, though? Probably not.

The perhaps more important aspect to consider is that the 3080 (1.82%), 3080 Ti (0.72%), and 3090 (0.48%) combined represent 3.02% of the surveyed market. Steam reportedly had around 134 million active monthly users in 2021.

If — big caveat as Valve doesn't provide hard statistics — if 3% of 134 million users have a 3080 or higher Nvidia card right now, that's still four million people in the world. Even if only 10% of those were to upgrade, that would be 400,000 people, and that's almost certainly more than enough to sell all 4090 and 4080 cards that get made in the next few months. Also note that anyone with a 3080 or above likely paid scalper prices for it, exactly the sort of people who would pay over $1000.

It's definitely high pricing, from AMD and Nvidia both. We'll see how the 7900 cards perform in a month, and whether or not they sell out at launch. 4090 cards are still holding more or less steady at over $2,100. :'(

I think what nVIDIA & AMD learned from the global COVID lockdown was that there is a larger market for the top 16% of gamers than they thought, ergo the push for the upper tiers of the product stack.


The really interesting progress will be seeing how the RTX 4060 vs. RX 7600 stack up, and how they compare to their predecessors.

That's the real "minimum bar" that you need to watch.

A rising tide lifts all boats, and the "rising tide" is the 60/600 tier of video cards.
 
Last edited:
  • Like
Reactions: Bamda
Once AMD does its product refresh for 27 Gbps GDDR6, it should get VRAM throughput of 1,296 GB/s.

Hopefully, that bandwidth increase from VRAM will justify another stack of Infinity Cache to boost burst transfer speed to the L2 cache.
I may be mistaken, but when I look at six 64-bit VRAM controllers, each with 16MB of cache and each apparently controlling a pair of 2GB modules, I see variable speed, capacity, and bandwidth depending on how the GPU splits up the memory load.

AMD is getting good performance, so this is likely well managed in the games tested, but if a workload were split across only three controllers, your cache size and bandwidth would be cut in half.

I imagine most newer games should be high enough in priority to get special optimizations, but not all will. Maybe not all can.

Maybe splitting up the data into small parts is an important step towards the goal of multiple GPU shader dies.
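
To put numbers on the halving mentioned above, here is a quick sketch. It assumes the 20 Gbps GDDR6 implied by the 7900 XTX's 960 GB/s; that data rate is my assumption, not something stated here:

Code:
# Each MCD: one 64-bit GDDR6 controller plus 16MB of Infinity Cache
def mcd_config(active_mcds, gbps_per_pin=20.0):
    bus_bits = active_mcds * 64
    bandwidth_gbs = gbps_per_pin * bus_bits / 8
    cache_mb = active_mcds * 16
    return bus_bits, bandwidth_gbs, cache_mb

print(mcd_config(6))   # (384, 960.0, 96) -- full Navi 31
print(mcd_config(3))   # (192, 480.0, 48) -- half the bus width, bandwidth, and cache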
 
You ever see how old-school cartoons are made? Individual cels are drawn and layered or composited to form a scene. The simple elements are composited together to make complex scenes.

Rendering games is a lot like that animation process, done with draw calls. The individual draw calls are fairly simple, so a good amount of the GPU isn't needed and resources go unused. You can send multiple draw calls at once, but with unified memory I/O you often bottleneck on access, as different draw calls require different resources. With independent draw calls and memory controllers, those bottlenecks to independent parallel draw calls are removed, which makes the compositing of individual elements faster.

If I'm correct, I think reviewers will find a large increase in memory usage, and that it is largely dependent on PCIe bus speed. Hence the large memory size.

It's a different approach than I thought AMD would take. But I'm still impressed.
Seems like the game/engine would have to operate this way to take advantage. Being a relatively new frontier in architecture, I'd imagine the advantages will go largely unrealized until later games/patches implement it, once the architecture is more ubiquitous. I'm not sure if/how you can "create" multiple draw calls at the driver level. Or is this related to asynchronous compute?
 
Seems like the game/engine would have to operate this way to take advantage. Being a relatively new frontier in architecture, I'd imagine the advantages will go largely unrealized until later games/patches implement it, once the architecture is more ubiquitous. I'm not sure if/how you can "create" multiple draw calls at the driver level. Or is this related to asynchronous compute?
Yes and no. It depends on the driver and on whether they want a copy of the rendered object in system memory.

Like async web calls, you can fire and forget and still have the client running. Since the GPU is responsible for storing rendered objects for efficiency's sake, the only time it becomes an issue is if you need to pull a copy of the rendered object into system memory. It's very rare that you generate a D3D object demanding system memory; most programmers let the driver decide. (This is one form of driver optimization for games, in terms of texture and compiled shader caching in spare memory.)

Think of it like out-of-order execution for the GPU. Nvidia is doing it at the instruction level; AMD is doing it at the draw call level, if I'm correct with my guesses.
 
I think AMD is doing well with the design of these new GPUs.
But it is dishonest to claim they are "chiplets" as we have come to know them with Ryzen. It is a redefinition of the word, similar to the word "Nougat", which can now be literally anything in a candy bar, sweetbread, or casserole. If Ryzen had only ever had one CPU die (and who cares how many I/O dies), there would not have been the same performance improvement, and few would think positively of them. 8-core server chips: not good enough. Multiple compute dies like in Ponte Vecchio are chiplets, not RAM controllers broken off and buffered with some cache. GPUs have had separate RAM chips forever, and motherboards used to have little GPUs that used the northbridge as a discrete RAM controller just like these. That was just for additional available system memory, though.

The only thing innovative here is the manufacturing cost savings. That is still worth it, IMO, but going along with AMD's redefinition of "chiplet" to try to throw some of the positive mindshare around Ryzen onto an undeserving implementation in their latest graphics products is knowingly spreading a mistruth.

RDNA3 should still make for excellent GPUs, and the relative increase in shaders will help. Hopefully their cost-savings plan doesn't produce much stutter because of the novel memory controller segmentation. But if you can redefine these GPUs as having chiplets, then AMD was very late to the CPU chiplet party, and everything has chiplets now.

Edit: Including all GPUs that can also use system memory via the CPU chiplet's memory controller, which also has cache. Which also reads from the NAND chiplets via the NAND controller chiplets, which also have cache, which get stuff from the network-connected chiplets.

Either RDNA3 doesn't have chiplets like Ryzen, or chiplets are made from nougat.
I can't agree. "Chiplet" isn't tied to a compute die.
As it is also defined on Wikipedia:
A chiplet is a tiny integrated circuit (IC) that contains a well-defined subset of functionality. It is designed to be combined with other chiplets on an interposer in a single package.
AMD definitely has chiplet technology on both its CPU and GPU platforms, and it allows them to scale products better, with better manufacturing costs as well. Of course, everyone hoped AMD would also make a two-GCD version of RDNA3. There were some rumours, and maybe AMD is testing a similar product. Who knows... But anyway, if the GCDs already have a connection over Infinity Fabric, it is the same concept as that used in Zen products, where there are from 1 to, I think, 12 compute-only chiplets.

So the only difference between chiplets, and chips that are connected to each other over some sort of usual bus technology, is the packaging (chiplets are packed into one chip package).
 
As people struggle for cash, progress moves on.
Charging money when most cannot buy is nothing new. But it does beg the question: how do both companies stay afloat when sales aren't great?

Just look at both companies' earnings. I haven't seen the full breakdown of AMD's Q3 yet, but in Q2 gaming GPUs counted for less than 7% of AMD's revenue. On Nvidia's side, GeForce used to account for 60% or even 70% of the company's total revenue, but in Q2 (Nvidia has not reported its Q3 earnings yet) Nvidia's non-gaming segments already generated more than twice the revenue of GeForce (4.7 billion vs. 2 billion).
 
"According to AMD, RDNA 3 GPUs can hit the same frequency as RDNA 2 GPUs while using half the power, or they can hit 1.3 times the frequency while using the same power."
. . . "The solution ends up being almost the reverse of the CPU chiplets, with memory controllers and cache being placed on multiple smaller dies while the main compute functionality resides in the central GCD chiplet.

The GCD houses all the Compute Units (CUs) along with other core functionality like video codec hardware, display interfaces, and the PCIe connection. The Navi 31 GCD has up to 96 CUs, which is where the typical graphics processing occurs. But it also has an Infinity Fabric along the top and bottom edges (linked via some sort of bus to the rest of the chip) that then connects to the MCDs."
. . . . "in turn, dramatically cuts power requirements, and AMD says all of the Infinity Fanout links combined deliver 3.5 TB/s of effective bandwidth while only accounting for less than 5% of the total GPU power consumption."
Really good stuff, Team THG !!
:ouimaitre:
 
Last edited:
Time will tell how these new cards perform but from what I have seen so far I like what AMD has done. I have never cared about what the top performance card does (4090) since I know I will never spend my money on getting one. I care about the price to performance, so that is what I will be looking at. I just need to be patient for more of these cards to be released to determine what I will be purchasing.
 
I can't agree. "Chiplet" isn't tied to a compute die.
As it is also defined on Wikipedia:
A chiplet is a tiny integrated circuit (IC) that contains a well-defined subset of functionality. It is designed to be combined with other chiplets on an interposer in a single package.
AMD definitely has chiplet technology on both its CPU and GPU platforms, and it allows them to scale products better, with better manufacturing costs as well. Of course, everyone hoped AMD would also make a two-GCD version of RDNA3. There were some rumours, and maybe AMD is testing a similar product. Who knows... But anyway, if the GCDs already have a connection over Infinity Fabric, it is the same concept as that used in Zen products, where there are from 1 to, I think, 12 compute-only chiplets.

So the only difference between chiplets, and chips that are connected to each other over some sort of usual bus technology, is the packaging (chiplets are packed into one chip package).
Ryzen doesn't have an interposer.
Your source is inaccurate.