News AMD Radeon RX 7000-Series and RDNA 3 GPUs: Everything We Know

husker

Distinguished
Oct 2, 2009
I wonder what would happen if AMD put an RDNA 3 chip out there that just totally ignored ray tracing. I don't mean just disable it; I mean design it from the ground up to achieve the best traditional rasterization performance known to man. I know, I know, huge marketing blunder... but marketing is often the bane of engineering. Would this result in a chip that was less expensive, used less power, or performed better (or maybe all three)? After all, ray-tracing effects are still very much a novelty feature in games and really have a minor impact on visuals while costing a lot in compute cycles. Why not offer an option that does one thing really well and see who wants it? Besides, anyone who really wants ray tracing in a bad way is going to buy an Nvidia card. Again, don't counter with an argument that talks about marketing, company image, or the future of gaming. I'm talking pure best rasterization performance for the here and now.
 

wifiburger

Distinguished
Feb 21, 2016
When they start having feature parity with Nvidia RTX and lower prices, then maybe I'll care.
They keep prices too close to Nvidia's while lacking features and driver optimization, so I personally don't care about AMD GPUs.

The hype / marketing around new AMD GPU launches is a big meh...
 
I wonder what would happen if AMD put an RDNA 3 chip out there that just totally ignored ray tracing. I don't mean just disable it; I mean design it from the ground up to achieve the best traditional rasterization performance known to man. I know, I know, huge marketing blunder... but marketing is often the bane of engineering. Would this result in a chip that was less expensive, used less power, or performed better (or maybe all three)? After all, ray-tracing effects are still very much a novelty feature in games and really have a minor impact on visuals while costing a lot in compute cycles. Why not offer an option that does one thing really well and see who wants it? Besides, anyone who really wants ray tracing in a bad way is going to buy an Nvidia card. Again, don't counter with an argument that talks about marketing, company image, or the future of gaming. I'm talking pure best rasterization performance for the here and now.
I suspect the actual amount of die space dedicated to ray tracing in RDNA 3 is relatively minor. Nvidia (and now Intel) have both RT hardware and matrix hardware, which does bloat things a bit more. Still, consider this: GA102 (used in the RTX 3080/3090 series) measures 628 mm² and has 84 SMs. It also has video codec hardware and other elements that are basically completely separate from the SMs. Using an actual die photograph, it's relatively easy to determine that of the total 628 mm² die size, only 42–43% is used for the SMs. All 84 SMs thus use about 266 mm².

Each SM houses 128 FP32 cores, four Tensor cores, and one RT core. The actual die size of an Ampere SM is thus around 3.2 mm². Based on die photos of earlier GPUs, I'd say only a relatively small amount of that area actually goes to the RT and Tensor cores. Actually, I can do better than that on the estimate.

Here's a die shot of TU106, and here's a die shot of TU116. TU106 has RT cores and Tensor cores; TU116 does not. Using published die sizes and checking the area for just the SMs on those two GPUs, I got an SM size (on 12nm) of 5.5 mm² for TU106 and 4.5 mm² for TU116. There's also a die shot of TU104 (but none of TU102), which gave an approximate size of 5.4 mm² per SM. So 5.4–5.5 mm² per SM looks consistent, and the entire SM is about 22% larger due to the inclusion of the RT and Tensor cores.

AMD doesn't have Tensor hardware, which means it's probably only ~10% more die area to add the Ray Accelerators. Best-case, then, AMD could improve shader counts and rasterization performance by about 10–15% if it skipped all ray tracing support. Or, put another way, certain complex calculations (those associated with ray tracing) run about ten times faster for a die-space cost of about 10%.
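For anyone who wants to check the math, here's the estimate as a quick Python sketch. All the inputs are the approximate figures quoted above (die size, SM count, the 42–43% fraction measured off the die shot, and the two Turing SM sizes), not exact measurements.

```python
# Back-of-the-envelope check of the die-area estimate above.
# All inputs are the approximate figures from the post.

ga102_die_mm2 = 628    # GA102 total die size (mm^2)
sm_fraction = 0.425    # ~42-43% of the die is SMs, per the die photo
num_sms = 84

sm_area_total = ga102_die_mm2 * sm_fraction
sm_area_each = sm_area_total / num_sms
print(f"All SMs: {sm_area_total:.0f} mm^2, per SM: {sm_area_each:.2f} mm^2")
# roughly 267 mm^2 total, ~3.2 mm^2 per SM

# Turing comparison: TU106 SM (RT + Tensor) vs TU116 SM (neither)
tu106_sm_mm2, tu116_sm_mm2 = 5.5, 4.5
overhead = tu106_sm_mm2 / tu116_sm_mm2 - 1
print(f"RT + Tensor area overhead per SM: {overhead:.0%}")  # ~22%
```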
 

Makaveli

Splendid
"RX 7700 XT is new, but it's no faster than the old RX 6900 XT, which I can now pick up for just $450..." That sort of thing.

If the performance projections turn out to be true, especially on the ray tracing side, I think it will be much easier to pull the trigger this time around than to go with the previous gen, even with huge discounts. But of course YMMV depending on your budget and what you are using now.
 
Guys do you think the rumours are true that the 7600XT will have 6900XT performance at around $400?
Realistically, no, but we'll have to see. I'd bet the 7700 XT (or whatever it's called) will be close to the 6900 XT, probably better in some areas (RT) and worse in others. The presented "paper specs" also have me worried something is way off, as going from a theoretical ~23 teraflops with the RX 6900 XT to over 60 teraflops seems like far too big of a jump. Granted, Nvidia went from 13.4 teraflops with the 2080 Ti to a theoretical 40 teraflops with the 3090 Ti, even if real-world performance didn't match that increase (at all!). Anyway, this is all rumored performance, and AMD might actually have worse per-clock performance on its new GPU shaders and make up for it with more than double the shader count.
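For context, those theoretical numbers come from the standard FP32 formula: 2 FLOPs per shader per clock (one FMA counts as two operations). A minimal sketch, using official shader counts and approximate boost clocks:

```python
# Theoretical FP32 throughput: 2 FLOPs (one FMA) per shader per clock.
def tflops(shaders: int, boost_ghz: float) -> float:
    return 2 * shaders * boost_ghz / 1000

print(tflops(5120, 2.25))    # RX 6900 XT:  ~23 TFLOPS
print(tflops(4352, 1.545))   # RTX 2080 Ti: ~13.4 TFLOPS
print(tflops(10752, 1.86))   # RTX 3090 Ti: ~40 TFLOPS
```

A rumored 60+ TFLOPS figure would require some combination of far more shaders and higher clocks, which is why the jump looks suspicious on paper.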
 

user7007

Commendable
Mar 9, 2022
RDNA 3 seems like AMD's most competitive offering in years. I'm curious what range of prices the multiple dies create, what the ray tracing and encoding performance look like, etc. The Steam hardware surveys show AMD lost market share to Nvidia throughout the entire pandemic / RTX 3000-series era, and I'm pretty sure that's largely because RTX/DLSS was faster and more mature than the AMD alternatives in the 3000 series vs. RDNA 2.

It sounds like all those issues will be addressed with RDNA 3, in addition to a big perf/watt improvement and speed improvements (big teraflop numbers and a wider memory bus). But Nvidia is moving back to TSMC, so they will also get a big perf/watt improvement and probably a big clock speed improvement as freebies on top of the planned architecture improvements. RDNA 3 has the potential to be great, disruptive, etc., whereas the 4000 series is almost assured to be a great product. Personally, I think AMD needs to be on the order of 30% cheaper or 20% faster to move the needle much.
 

escksu

Reputable
BANNED
Aug 8, 2019
RDNA 3 seems like AMD's most competitive offering in years. I'm curious what range of prices the multiple dies create, what the ray tracing and encoding performance look like, etc. The Steam hardware surveys show AMD lost market share to Nvidia throughout the entire pandemic / RTX 3000-series era, and I'm pretty sure that's largely because RTX/DLSS was faster and more mature than the AMD alternatives in the 3000 series vs. RDNA 2.

It sounds like all those issues will be addressed with RDNA 3, in addition to a big perf/watt improvement and speed improvements (big teraflop numbers and a wider memory bus). But Nvidia is moving back to TSMC, so they will also get a big perf/watt improvement and probably a big clock speed improvement as freebies on top of the planned architecture improvements. RDNA 3 has the potential to be great, disruptive, etc., whereas the 4000 series is almost assured to be a great product. Personally, I think AMD needs to be on the order of 30% cheaper or 20% faster to move the needle much.

To be realistic, I really don't think the 7000 series will be able to compete with Nvidia's 4000 series in terms of ray tracing. But I do expect it to be closer. Maybe a 7900 XT could be as good as a 4070/4080 in terms of ray tracing.

As for the DLSS-type stuff, both camps are now very similar in terms of quality, so it's more about support from games. By the way, it's technically possible to have both DLSS and FSR, because FSR is applied to the image after it's rendered while DLSS runs before.

Disruptive-wise, don't expect any magic. By the way, cards like the 6900 XT, 6800 XT, and 6950 XT are more about gaining glory, aka king of the hill. They don't have much volume. The real stars are actually the lower-end ones. For Nvidia, that's cards like the 1650, 1660, 3050, and 3060. These are the models racking up huge volume and market share, especially for OEMs.

Lastly, no, don't expect any big changes. You won't see an AMD card that's 20% or 30% cheaper. It will never happen. The last thing both camps want is to get into a price war.
 

escksu

Reputable
BANNED
Aug 8, 2019
I wonder what would happen if AMD put an RDNA 3 chip out there that just totally ignored ray tracing. I don't mean just disable it; I mean design it from the ground up to achieve the best traditional rasterization performance known to man. I know, I know, huge marketing blunder... but marketing is often the bane of engineering. Would this result in a chip that was less expensive, used less power, or performed better (or maybe all three)? After all, ray-tracing effects are still very much a novelty feature in games and really have a minor impact on visuals while costing a lot in compute cycles. Why not offer an option that does one thing really well and see who wants it? Besides, anyone who really wants ray tracing in a bad way is going to buy an Nvidia card. Again, don't counter with an argument that talks about marketing, company image, or the future of gaming. I'm talking pure best rasterization performance for the here and now.

The 6900 XT is already faster than the 3090 in non-ray-tracing games. But is it really very popular? Nope....

There are many other factors than just product performance.
 
The 6900 XT is already faster than the 3090 in non-ray-tracing games. But is it really very popular? Nope....

There are many other factors than just product performance.
Yes. Bias and brand loyalty are the biggest ones.

I'm not even being sarcastic here, as they're really important and Nvidia knows it. The "technical" pluses from Nvidia are still more of an "on paper" thing. Keep in mind AMD has FSR working on the consoles and the Steam Deck, whereas Nvidia will try adding DLSS to the next Switch. And you can't realistically use RT in every game (new ones, I mean) yet.

That being said, AMD is creeping up on them, and I'd imagine they're half aware of it. This is a "Tessellation" thing all over again in terms of "features". Hell, even another "PhysX" thing, I'd say.

Regards.
 

Tugrul_512bit

Distinguished
Nov 19, 2013
AMD is like a Mercedes engine in a Lada hull driven by a nanny, while Nvidia is like a Porsche engine in a Chrysler hull driven by Schumacher. When the nanny puts the pedal to the metal, Schumacher eats dust.
 

msroadkill612

Distinguished
Jan 31, 2009
"The last thing both camps want is to get into a price war."
Emotive, and untrue in essence, IMO.

MCM is a huge cost/competitive advantage for AMD GPUs, just like it was an Intel killer for Zen.

Nvidia has to push the practical economic limits of (monolithic) chip sizes to compete at the high end, whereas AMD can team multiple easily and cheaply made GPUs and specialist processors on a fabric.

I think AMD are very realistic about their strengths and weaknesses, and will tend to offer fine-wine muscle that Nvidia's higher costs don't allow it to match.
 
"The last thing both camps want is to get into a price war."
Emotive, and untrue in essence, IMO.

MCM is a huge cost/competitive advantage for AMD GPUs, just like it was an Intel killer for Zen.

Nvidia has to push the practical economic limits of (monolithic) chip sizes to compete at the high end, whereas AMD can team multiple easily and cheaply made GPUs and specialist processors on a fabric.

I think AMD are very realistic about their strengths and weaknesses, and will tend to offer fine-wine muscle that Nvidia's higher costs don't allow it to match.

If you look at what happened the past few years, AMD has tried to avoid a price war with Nvidia. And we still don't know how MCM is going to give AMD an advantage; we've already heard that AMD will not be going with multiple GCDs for Navi 3x à la MI200/250X.
 

aalkjsdflkj

Honorable
Jun 30, 2018
45
33
10,560
If the rumors are true about AMD's and Nvidia's power consumption on the next-generation cards (I've seen rumors of 250-350W for a 4070), then as long as AMD's cards are more efficient, I'll jump over to team Red. With rising energy prices I simply cannot justify buying a space heater to play games. And I'm not even in Europe with its skyrocketing energy costs. I don't know how many GPUs are sold over there, but only people with excess solar panel power or substantial wealth will be able to game on a high-TDP GPU. Efficiency really needs to be prioritized soon. I still don't care about ray tracing; for me it's all about gaming efficiently - lower ongoing costs, less excess heat in summer, and less noise are all higher priorities than a feature that doesn't significantly impact gameplay.
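To put rough numbers on the running-cost argument, here's a quick sketch. The wattages, hours per day, and electricity price below are illustrative assumptions, not measured figures.

```python
# Rough annual electricity cost of gaming on a GPU.
# All inputs below are illustrative assumptions.
def annual_cost(gpu_watts: float, hours_per_day: float, usd_per_kwh: float) -> float:
    kwh_per_year = gpu_watts / 1000 * hours_per_day * 365
    return kwh_per_year * usd_per_kwh

print(annual_cost(350, 3, 0.30))  # ~ $115/year for a 350 W card
print(annual_cost(250, 3, 0.30))  # ~ $82/year for a 250 W card
```

The gap widens with higher electricity prices or longer play sessions, which is the whole efficiency argument in miniature.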
 

husker

Distinguished
Oct 2, 2009
The 6900 XT is already faster than the 3090 in non-ray-tracing games. But is it really very popular? Nope....

There are many other factors than just product performance.
Agreed. One such factor is the effective marketing to convince people that product performance should not be the main factor.
 

bigdragon

Distinguished
Oct 19, 2011
I think AMD has the potential to claw back market share from Nvidia if they can get the 7000 series out in sufficient quantities without guzzling energy. The Nvidia 4000 series is going to create way too much heat. The lower prices and healthy VRAM quantities in AMD cards also help them against their Nvidia rivals.

However, a large number of us can't even consider AMD cards as an option right now due to the Blender HIP vs OptiX situation. AMD needs to improve the performance of their software. Game performance is close, but rendering performance is not.
 

PiranhaTech

Reputable
Mar 20, 2021
Chiplets for a GPU make a lot of sense. The strength of AMD's chiplets has been multicore, and a GPU is massively multicore. Still, AMD seems to have greatly improved single-threaded performance as well with the Ryzen 5000 series. I wonder if the Radeon group was waiting until the CPU group's chiplet manufacturing matured.

Not only does this affect PC chips, but it has got to have an effect on console ones as well.
 
Sep 10, 2022
I wouldn't mind a small bump in performance from either AMD or Nvidia if it means they can significantly cut down on the TDP.

AMD CPUs can REALLY go low on power with undervolting and PBO tuning and maintain pretty epic performance. The GPUs are a little touchier, though.

Still, I expect you can drop top clock speeds considerably (2 GHz @ 150 W) and still achieve insane performance. So even if the cards are configured for high power and high performance, you can tune them to be efficiency monsters.

On another note, I guarantee AMD takes the performance crown this cycle IF MCM scales to 2+ modules, simply because Nvidia can't produce a bigger die to compensate for that kind of scaling since they are still monolithic. I think AMD wins this round easily on the top end due to MCM (AMD has far greater scalability this round).
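The undervolting intuition can be sketched with a first-order model: dynamic power scales roughly with f·V², and since voltage tends to scale roughly with frequency near the top of the V/f curve, power falls roughly with the cube of clock speed. This is a simplification (it ignores static leakage and the flat part of the curve), and the 2.6 GHz / 350 W starting point below is a made-up example, not any specific card.

```python
# First-order dynamic-power model: P ~ f * V^2, with V roughly
# proportional to f near the top of the V/f curve, so P ~ f^3.
# Ignores static/leakage power; all inputs are hypothetical.
def scaled_power(p0_watts: float, f0_ghz: float, f_ghz: float) -> float:
    return p0_watts * (f_ghz / f0_ghz) ** 3

# A hypothetical 2.6 GHz / 350 W card downclocked to 2.0 GHz:
print(scaled_power(350, 2.6, 2.0))  # ~159 W
```

That lands in the same ballpark as the 2 GHz @ 150 W guess, which is why modest downclocks can buy outsized efficiency gains.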
 
I suspect the actual amount of die space dedicated to ray tracing in RDNA 3 is relatively minor. Nvidia (and now Intel) have both RT hardware and matrix hardware, which does bloat things a bit more. Still, consider this: GA102 (used in the RTX 3080/3090 series) measures 628 mm² and has 84 SMs. It also has video codec hardware and other elements that are basically completely separate from the SMs. Using an actual die photograph, it's relatively easy to determine that of the total 628 mm² die size, only 42–43% is used for the SMs. All 84 SMs thus use about 266 mm².

Each SM houses 128 FP32 cores, four Tensor cores, and one RT core. The actual die size of an Ampere SM is thus around 3.2 mm². Based on die photos of earlier GPUs, I'd say only a relatively small amount of that area actually goes to the RT and Tensor cores. Actually, I can do better than that on the estimate.

Here's a die shot of TU106, and here's a die shot of TU116. TU106 has RT cores and Tensor cores; TU116 does not. Using published die sizes and checking the area for just the SMs on those two GPUs, I got an SM size (on 12nm) of 5.5 mm² for TU106 and 4.5 mm² for TU116. There's also a die shot of TU104 (but none of TU102), which gave an approximate size of 5.4 mm² per SM. So 5.4–5.5 mm² per SM looks consistent, and the entire SM is about 22% larger due to the inclusion of the RT and Tensor cores.

AMD doesn't have Tensor hardware, which means it's probably only ~10% more die area to add the Ray Accelerators. Best-case, then, AMD could improve shader counts and rasterization performance by about 10–15% if it skipped all ray tracing support. Or, put another way, certain complex calculations (those associated with ray tracing) run about ten times faster for a die-space cost of about 10%.

The 10–15% estimate is around what I was calculating. It really doesn't matter that much, especially on 5nm, as these chips from AMD shouldn't be very large at all, especially if the rumors of the memory controller(s) and L3 (i.e., Infinity Cache) being on chiplets are true.
 

msroadkill612

Distinguished
Jan 31, 2009
If you look at what happened the past few years, AMD has tried to avoid a price war with Nvidia. And we still don't know how MCM is going to give AMD an advantage; we've already heard that AMD will not be going with multiple GCDs for Navi 3x à la MI200/250X.
"MCM is a huge cost/competitive advantage for AMD GPUs, just like it was an Intel killer for Zen." - I can't profitably add to this.

I haven't read these yet, but these search results make AMD GPU MCM seem alive and well:

https://www.tomshardware.com/news/amd-big_navi-rdna2-all-we-know
View: https://www.youtube.com/watch?v=Lb4UfGLhs44
 

creatorbros3

Honorable
Sep 15, 2017
16
5
10,515
I am just hopeful that these will perform better in DaVinci Resolve and other creative applications than previous generations of AMD GPUs, as well as Nvidia's offerings. I want to go red with my upgrade, which is likely due within the next year. But even with gaming performance being comparable or better, I cannot justify going with one since they are worse in DaVinci Resolve, which is my main use for my PC. However, it is my understanding that some of that is due to Blackmagic Design's optimizations for Nvidia rather than AMD. So it is on BM to do some better optimization as well. I don't see that happening soon, so I may be forced to go with an RTX card as a replacement for my aging 1070... but I am really hoping that's wrong. I would love to have a 100% red system.