News AMD RDNA 3 GPU Specs: Up To 12,288 ALUs, 96MB Infinity Cache

Hm... So if the original plan was to have more cache per group via vertical stacking, then to make up for that difference they'll probably have to clock the memory a tad higher, or they won't be able to harvest dies by reducing the bus width. This may lower their efficiency target, or they'll have to play with the clocks a bit more carefully.
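
Rough back-of-the-envelope of the trade-off I mean (illustrative numbers only, not actual RDNA 3 specs):

```python
# Sketch: how much faster the GDDR6 would need to run to make up for a
# harvested (narrower) bus. All figures are illustrative assumptions.

def vram_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak VRAM bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

full_bus  = vram_bandwidth_gbs(256, 18.0)   # 256-bit @ 18 Gbps -> 576 GB/s
harvested = vram_bandwidth_gbs(192, 18.0)   # cut to 192-bit, same clock -> 432 GB/s

# Data rate the 192-bit part would need just to match the 256-bit one:
needed_rate = full_bus / (192 / 8)          # -> 24 Gbps
print(full_bus, harvested, needed_rate)
```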

I'm most curious how they'll strike that balance with the chiplets.

As for Navi33, it looks like a straightforward upgrade from the current Navi23. I just hope they give the damn thing a full X16 width* instead of the crappy X8. I know it's intended for mobile first and PCIe5 (supposedly?), but come on AMD. Make a Navi34 use the X8 instead! I know, I know; it's completely moot at this point and more than likely X8, but I'll be preemptively salty about it anyway! xD

Regards.
 
And as someone who was an early adopter of AMD's first MCM design, the Zen 1 based 1800X, the huge dumpster fire it turned out to be ended up costing me over $150 more thanks to having to buy a new Windows license and pay for RMA shipping. I wouldn't touch these cards if they cost $1.
 
And as someone who was an early adopter of AMD's first MCM design, the Zen 1 based 1800X, the huge dumpster fire it turned out to be ended up costing me over $150 more thanks to having to buy a new Windows license and pay for RMA shipping. I wouldn't touch these cards if they cost $1.
Sorry for your bad experience, but it'll be your loss if they're good. Recent AMD products have much improved reliability compared to their Zen 1 stuff.

On another note, I'm a little bit sad that they couldn't ship with more cache, but I guess we'll see how well they perform soon.
 
Sorry for your bad experience, but it'll be your loss if they're good. Recent AMD products have much improved reliability compared to their Zen 1 stuff.

On another note, I'm a little bit sad that they couldn't ship with more cache, but I guess we'll see how well they perform soon.

I had a number of negative experiences with AMD over the years. I only owned AMD from the 9600XT to my Fury Nano because I disliked Nvidia, but now, having switched, it would take a lot for AMD to impress me enough to switch back.
 

giorgiog

Distinguished
My experience with AMD video cards' drivers a decade ago still keeps me from considering them again. On the other hand, my 1st gen Ryzen Threadripper 1950X has been rock solid (OC'd to 4.0GHz) since day one (nearly 5 years ago). So there's hope.
 
And as someone who was an early adopter of AMD's first MCM design, the Zen 1 based 1800X, the huge dumpster fire it turned out to be ended up costing me over $150 more thanks to having to buy a new Windows license and pay for RMA shipping. I wouldn't touch these cards if they cost $1.
Zen 1 and the 1800X were not MCM designs. They were monolithic. The multi-chip approach didn't make an appearance until the 3000-series.

Also, it's unclear what any of that would have to do with buying a Windows license or paying for RMA shipping. >_>

As for Navi33, it looks like a straightforward upgrade from the current Navi23. I just hope they give the damn thing a full X16 width* instead of the crappy X8. I know it's intended for mobile first and PCIe5 (supposedly?), but come on AMD. Make a Navi34 use the X8 instead! I know, I know; it's completely moot at this point and more than likely X8, but I'll be preemptively salty about it anyway! xD
PCIe 5.0 x2. : D
 

jp7189

Distinguished
When I first heard about chiplet GPUs, I assumed that meant multiple GCDs tied together with cache. That concerned me from a frametime consistency point of view, but excited me with the idea of 1x, 2x and 4x configs. 48K ALUs in a single package would change the face of gaming forever. Now that I see we're talking about only a single GCD, I'm kinda meh. I'm sure it will be a good and competitive product without glitches, but it won't be earth shaking.
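
Quick arithmetic behind that figure, assuming the article's 12,288 ALUs per GCD and the hypothetical 1x/2x/4x packages:

```python
# Scale the rumored 12,288-ALU GCD to the hypothetical multi-GCD configs.
ALUS_PER_GCD = 12_288
for gcds in (1, 2, 4):
    print(f"{gcds}x GCD package: {gcds * ALUS_PER_GCD:,} ALUs")
# 4x -> 49,152 ALUs, i.e. the ~48K mentioned above.
```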
 

dipique

Distinguished
I just hope they give the damn thing a full X16 width* instead of the crappy X8. I know it's intended for mobile first and PCIe5 (supposedly?), but come on AMD. Make a Navi34 use the X8 instead! I know, I know; it's completely moot at this point and more than likely X8, but I'll be preemptively salty about it anyway! xD

I'm confused why you would want them to consume additional PCIe lanes when they can't saturate the ones they already have. And this is on machines with limited PCIe lanes.
 

InvalidError

Titan
Moderator
Because people buying low-end cards are, more than likely, people upgrading old systems which may not even have PCIe 4.0.
As long as you keep your settings at a level where resources can stay in the GPU's VRAM, there is almost no difference between 3.0x8 and 4.0x16 until you reach 150-200fps where the amount of scene setup traffic and associated latency start becoming an issue in some titles. It is far more problematic at the 4GB low-end where 4.0x4 vs 3.0x4 can be a 50-80% loss due to having to do asset swaps from system memory much sooner and more often.
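
To put rough numbers on that, simple link-rate math (one direction, 128b/130b encoding, ignoring protocol overhead):

```python
# Approximate one-direction PCIe bandwidth per generation and lane count.
GT_PER_S = {"3.0": 8, "4.0": 16}      # transfer rate per lane in GT/s
ENCODING = 128 / 130                  # 128b/130b line-coding efficiency

def link_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Peak payload bandwidth in GB/s, ignoring packet/protocol overhead."""
    return GT_PER_S[gen] * ENCODING / 8 * lanes

for gen, lanes in [("3.0", 4), ("3.0", 8), ("4.0", 4), ("4.0", 16)]:
    print(f"PCIe {gen} x{lanes}: {link_bandwidth_gbs(gen, lanes):.1f} GB/s")
# 3.0 x8 and 4.0 x4 both land around ~7.9 GB/s; 3.0 x4 is half that,
# which is where the 4GB-card asset-swap penalty shows up.
```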
 

Lorien Silmaril

Distinguished
According to Angstronomics, Navi 33 outperforms Intel's highest-tier Arc Alchemist offering while costing only half as much to produce and being more power efficient.

If true, this means Arc won't even compete on the cost/value front... if it ever comes out.
 
As long as you keep your settings at a level where resources can stay in the GPU's VRAM, there is almost no difference between 3.0x8 and 4.0x16 until you reach 150-200fps where the amount of scene setup traffic and associated latency start becoming an issue in some titles. It is far more problematic at the 4GB low-end where 4.0x4 vs 3.0x4 can be a 50-80% loss due to having to do asset swaps from system memory much sooner and more often.
DOOM proves you wrong though. Even without saturating the VRAM, it does depend a lot on PCIe bandwidth. I have no idea why, but it's the easiest counterexample I could think of.

Regards.
 

InvalidError

Titan
Moderator
DOOM proves you wrong though. Even without saturating the VRAM, it does depend a lot on PCIe bandwidth. I have no idea why, but it's the easiest counterexample I could think of.
Anything is going to "push PCIe bandwidth" when it runs at 200+fps, which I had already explicitly covered; read the second half of the first sentence in the post you quoted. No mystery there.
 
Anything is going to "push PCIe bandwidth" when it runs at 200+fps, which I had already explicitly covered; read the second half of the first sentence in the post you quoted. No mystery there.
It's not as clear-cut as that though... I think you're oversimplifying the problem, but I'll concede that for the specific case of "X8", it may not be a big issue anyway.

https://www.techpowerup.com/review/amd-radeon-rx-6500-xt-pci-express-scaling/28.html

Regards.