News AMD Addresses Controversy: RDNA 3 Shader Pre-Fetching Works Fine

I am so tired of these pseudo CONNOISSEURS talking about leaks with ZERO business knowledge about the industry and ZERO knowledge about the engineering aspect.

Thanks for this article; it just shows how some of these YouTubers have ZERO knowledge about semiconductors...

If you are following MLID, RGT, Coreteks, Kopite, momomo, Kepler, Greymon... then you are the problem, propagating the garbage that this article is addressing.
 
Dec 17, 2022
I am so tired of these pseudo CONNOISSEURS talking about leaks with ZERO business knowledge about the industry and ZERO knowledge about the engineering aspect.

Thanks for this article; it just shows how some of these YouTubers have ZERO knowledge about semiconductors...

If you are following MLID, RGT, Coreteks, Kopite, momomo, Kepler, Greymon... then you are the problem, propagating the garbage that this article is addressing.
Please, may I have a list of people that I can and can't listen to?
 

rluker5

Distinguished
Jun 23, 2014
I've got a hypothesis as to why RDNA3 shaders underperform relative to RDNA2: the Infinity Cache has been diminished. RDNA2 performed unexpectedly well vs RDNA1 when it got that cache, and now the cache has been split into 16 little chunks.
The cache helped not just bandwidth but latency as well, like the X3D.
It would be interesting to compare the three architectures and see if RDNA3 acts more like a large RDNA1 than an RDNA2, and whether there are games that favor cache on the GPU side.
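(An illustrative aside on the latency point: the effect of a big last-level cache on average access time can be sketched with a toy AMAT model. Every number below is a made-up assumption for illustration, not a measured RDNA figure.)

```python
# Toy average-memory-access-time (AMAT) model: hits are served by a large
# on-die cache, misses fall through to GDDR6. All latencies and the hit
# rate are illustrative assumptions, not measured RDNA numbers.

def amat(hit_rate: float, cache_ns: float, dram_ns: float) -> float:
    """Average access latency for a single-level cache in front of DRAM."""
    return hit_rate * cache_ns + (1.0 - hit_rate) * dram_ns

# Assumed: ~30 ns for a cache hit, ~250 ns for a miss to GDDR6.
with_cache = amat(hit_rate=0.60, cache_ns=30.0, dram_ns=250.0)
no_cache = amat(hit_rate=0.00, cache_ns=30.0, dram_ns=250.0)

print(f"with cache: {with_cache:.0f} ns, without: {no_cache:.0f} ns")
```

Under these assumed numbers the cache roughly halves average latency (118 ns vs 250 ns), which is the same mechanism that makes the X3D parts shine in latency-bound games.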
 
Like reactions: bit_user

Makaveli

Splendid
I am so tired of these pseudo CONNOISSEURS talking about leaks with ZERO business knowledge about the industry and ZERO knowledge about the engineering aspect.

Thanks for this article; it just shows how some of these YouTubers have ZERO knowledge about semiconductors...

If you are following MLID, RGT, Coreteks, Kopite, momomo, Kepler, Greymon... then you are the problem, propagating the garbage that this article is addressing.

These idiots on Reddit and armchair GPU architects have no idea what they are talking about. They really need to start sending letters from their lawyers telling these people to STFU, or be held liable for spreading hearsay and FUD.

Please, may I have a list of people that I can and can't listen to?

It's not about who you can and can't listen to; it's about uneducated people spreading rumors.
 

nimbulan

Distinguished
Apr 12, 2016
That's all well and good, but then why is Navi 31 producing such underwhelming performance, falling far short of both expectations and AMD's official numbers from their announcement just a month ago? Drivers are certainly part of it, but I have a hard time believing that's the entire problem.
 

tamalero

Distinguished
Oct 25, 2006
I am so tired of these pseudo CONNOISSEURS talking about leaks with ZERO business knowledge about the industry and ZERO knowledge about the engineering aspect.

Thanks for this article; it just shows how some of these YouTubers have ZERO knowledge about semiconductors...

If you are following MLID, RGT, Coreteks, Kopite, momomo, Kepler, Greymon... then you are the problem, propagating the garbage that this article is addressing.
Well, many of them are just hunting for viewers at this point.
I have seen some outrageous videos, like Gamers Meld, that make literally zero sense, with dumb "hip" titles to attract dumb people, like "RIP X PRODUCT", almost in the same caliber as those dumb videos claiming "X company does not want you to know this" or "The most well known secret is out!" and so on...
 
Like reactions: King_V and bit_user
Calling the A0 stepping "unfinished silicon" indeed seems malicious. The bit about the pre-fetcher, on the other hand, seems on point. There were revisions where this feature worked, and those didn't make it to market. "Wasn't targeted for inclusion in this generation" sounds a lot like "we couldn't get it to work in time".
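(For anyone wondering what instruction pre-fetching actually buys: the toy model below shows how overlapping the fetch of the next block of shader code with execution of the current one hides fetch latency. The cycle counts are invented for illustration; a real GPU front end is far more complex.)

```python
# Toy model of instruction pre-fetching. Cycle counts are invented
# purely for illustration.

BLOCKS = 8         # blocks of shader code to run
EXEC_CYCLES = 100  # cycles to execute one block
FETCH_CYCLES = 40  # cycles to fetch one block into the instruction cache

# No pre-fetch: the front end stalls for every fetch before executing.
no_prefetch = BLOCKS * (FETCH_CYCLES + EXEC_CYCLES)

# Pre-fetch: block N+1 is fetched while block N executes; since the fetch
# (40) finishes before execution (100) does, only the first fetch is exposed.
with_prefetch = FETCH_CYCLES + BLOCKS * EXEC_CYCLES

print(no_prefetch, with_prefetch)  # 1120 vs 840 cycles
```

In other words, a working pre-fetcher mostly matters when fetch latency would otherwise be exposed on the critical path, which is why its absence is hard to translate into a single headline performance number.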
 

Giroro

Splendid
I am so tired of these pseudo CONNOISSEURS talking about leaks with ZERO business knowledge about the industry and ZERO knowledge about the engineering aspect.

Thanks for this article; it just shows how some of these YouTubers have ZERO knowledge about semiconductors...

Personally, I'm completely fed up with YouTube's clickbait "race to the bottom". Every video has to be a bigger troll and more misleading than the last. The executives in control of YouTube are to blame. It's their algorithm that is paying uneducated morons to flamebait, lie, self-promote, advertise, and spam their platform with unethical fake news videos that are ultimately just paid advertisements.
The content creators are just acting the way they were trained by YouTube's manipulative Skinner box of an algorithm.
There's exactly one way YouTube lets people make money. Everybody tries, and 99.99998% fail.
 

atomicWAR

Glorious
Ambassador
Drivers are certainly part of it, but I have a hard time believing that's the entire problem.

I don't. AMD GPU drivers have notoriously been slow to extract the full power of their GPUs. This is where the mistaken belief that AMD GPUs "age like fine wine" comes from. In reality, Nvidia has a much larger software team dedicated to drivers, and they tend to extract the most out of their GPUs right away, whereas, due to AMD's more limited team size, it takes them months to years to fully utilize their silicon. So yeah, if history is any indicator, in two years' time these 7000-series GPUs could gain an additional 10 percent or more in performance.
 

TechieTwo

Notable
Oct 12, 2022
IME, the majority of online keyboard "experts", regardless of the subject matter, have never worked as paid professionals in the field under discussion, yet constantly post false info perceived as fact when it's anything but. Unfortunately, this results in ASSumptions and misinformation that is often detrimental to enthusiasts who have no means to distinguish between fact and meritless opinion.
 
Like reactions: bit_user

d0x360

Distinguished
Dec 15, 2016
I've got a hypothesis as to why RDNA3 shaders underperform relative to RDNA2: the Infinity Cache has been diminished. RDNA2 performed unexpectedly well vs RDNA1 when it got that cache, and now the cache has been split into 16 little chunks.
The cache helped not just bandwidth but latency as well, like the X3D.
It would be interesting to compare the three architectures and see if RDNA3 acts more like a large RDNA1 than an RDNA2, and whether there are games that favor cache on the GPU side.
Uhmm, no.

RDNA2 didn't have GCN as its base like RDNA1 did.

RDNA3 will be fine. We are just seeing growing pains, since it does things differently than before. Firmware and driver updates will significantly improve performance over its lifecycle, as will developers learning how to optimize their code for the architecture.

RDNA3 will teach both AMD and developers a lot, and any issues found will be fixed for RDNA4. Speaking of RDNA4, it will also handle RT differently than RDNA2 and RDNA3, which use the shader cores for those calculations, unlike Nvidia's method, which uses dedicated RT cores for that kind of thing.
 

bit_user

Polypheme
Ambassador
You can never trust what a company's PR department says. If they can deny something, they will. So, I put zero stock in their messaging about this.

There's also the undeniable fact that these GPUs launched at the very end of their market window. We can't rule out the possibility that AMD wanted to launch with more features working, but had simply run out of time to do a respin. On the flip side, maybe they expected to launch in Q1/2023 but couldn't resist the temptation to ship something in the holiday shopping season.

Whatever the case, we just have to evaluate these products as they are. AMD decided they were good enough to ship and sell at their set price points, so that's how they must be judged.

BTW, full respect to @PaulAlcorn for reaching out to AMD, instead of just riding the rumor bandwagon. Even if I'm skeptical, the perspectives provided in this article are genuinely worthwhile.
 

bit_user

Polypheme
Ambassador
I've got a hypothesis as to why RDNA3 shaders underperform relative to RDNA2: the Infinity Cache has been diminished. RDNA2 performed unexpectedly well vs RDNA1 when it got that cache, and now the cache has been split into 16 little chunks.
The cache helped not just bandwidth but latency as well, like the X3D.
It would be interesting to compare the three architectures and see if RDNA3 acts more like a large RDNA1 than an RDNA2, and whether there are games that favor cache on the GPU side.
Yeah, but also consider that RTX 4000 has a lot more L2 cache than its predecessors, to the point that the RTX 4090 has nearly as much L2 cache as the RX 7900 XTX has Infinity Cache!

So, when you say RDNA 3 underperformed, how much of that is comparing it with RDNA 2, versus comparing it with a new RTX generation that might have done more to catch up than we expected?
 
Like reactions: atomicWAR
Yeah, but also consider that RTX 4000 has a lot more L2 cache than its predecessors, to the point that the RTX 4090 has nearly as much L2 cache as the RX 7900 XTX has Infinity Cache!

So, when you say RDNA 3 underperformed, how much of that is comparing it with RDNA 2, versus comparing it with a new RTX generation that might have done more to catch up than we expected?
It is a fact the 7900 XTX (to focus on just one for now) is relatively under-performing at 1080p vs 4K, in terms of scaling, compared to a 6950 XT. Is it a big difference or, more importantly, relevant? Nah, I don't think so. I doubt there's any game out there that won't be CPU-bottlenecked by anything above a 6900 XT/3090 at 1080p. Well, for rasterization at least.

One of the things MLiD and others leaked was that AMD wasn't going to use more cache on these initial cards, because the added cost didn't justify the extra performance, which seems to be reflected in all the benchmarks and in the specs of the products being sold. I haven't seen anything, from anywhere, even hinting at a version with added cache in their MCDs coming out soon.

And the cache, looking at RDNA2's implementation, seemed to be most effective at lower resolutions anyway. I'd imagine they'd have to give the cards a lot more cache to make it worthwhile at the higher resolutions where it makes sense and where, it looks like, the shaders can't keep up anyway. Still, it would be interesting to see models with more cache and compare one day.
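(A back-of-envelope sketch of why a fixed-size cache helps less as resolution rises: the per-frame render-target working set grows with pixel count. The 12 bytes per pixel below is an assumed figure for one color target plus a depth buffer; real frames touch far more data than this.)

```python
# Rough render-target working set vs. resolution. Assumes 12 bytes per
# pixel (e.g. an 8-byte color target plus a 4-byte depth buffer); a real
# frame touches many more buffers and textures than this.

def target_mib(width: int, height: int, bytes_per_pixel: int = 12) -> float:
    return width * height * bytes_per_pixel / (1024 * 1024)

CACHE_MIB = 96  # RX 7900 XTX Infinity Cache size

for name, (w, h) in {"1080p": (1920, 1080), "1440p": (2560, 1440),
                     "4K": (3840, 2160)}.items():
    print(f"{name}: ~{target_mib(w, h):.0f} MiB of targets vs {CACHE_MIB} MiB cache")
```

Even under this minimal assumption, 4K render targets alone (~95 MiB) nearly fill the 96 MiB Infinity Cache, while 1080p (~24 MiB) fits several times over, which matches the observation that the cache pays off most at lower resolutions.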

And just food for thought: unless you yourself talk to someone from AMD, Intel, or nVidia directly, or have access to confidential data, you have to doubt anything and everything you read and watch until people can get their hands on the hardware. I like leaked information but, much like everything else, it needs to be handled with caution.

Regards.
 
Like reactions: King_V and bit_user

bit_user

Polypheme
Ambassador
Anyone in engineering knows there is no flawless product. There are all sorts of issues when something is released... everything... To the layman, it's #EXPOSEd, finished, and bankrupt. To the people who design it, it's just another cycle.
There's truth to this, but we also know of other examples, like Vega shipping with some broken features... and we know that wasn't intentional, because they advertised them in their pre-launch messaging.

So, we always have to be skeptical about what we're being told, but we should also keep some perspective that chips have bugs, most of those bugs have workarounds, and most of them aren't performance-critical or else there would've been another respin.
 

bit_user

Polypheme
Ambassador
IME, the majority of online keyboard "experts", regardless of the subject matter, have never worked as paid professionals in the field under discussion, yet constantly post false info perceived as fact when it's anything but. Unfortunately, this results in ASSumptions and misinformation that is often detrimental to enthusiasts who have no means to distinguish between fact and meritless opinion.
Not only that, but they're incentivized to sensationalize everything, because that gets more clicks, likes, and follows. Furthermore, they have very little disincentive for leaking incorrect information. That produces a somewhat toxic information ecosystem, which is fairly harmless when it comes to gaming hardware, but much more consequential if you're dealing in politics or health-related information (5G, anyone?).
 
Like reactions: atomicWAR

umeng2002_2

Commendable
Jan 10, 2022
As for the broken Vega features, the engineering solution is to do another stepping or redesign. Only business decisions made AMD launch it with that particular feature not working. Nothing comes out 100% like it was designed.
 
Like reactions: atomicWAR