Review AMD Radeon RX 7900 XTX and XT Review: Shooting for the Top


test_123

Distinguished
Feb 22, 2012
97
2
18,535
Have you guys at Tom's Hardware ever thought of doing a review of the 7000s and 4000s a few months after launch? Basically, the review would show the driver fixes and performance improvements.
 

mjbn1977

Distinguished
Have you guys at Tom's Hardware ever thought of doing a review of the 7000s and 4000s a few months after launch? Basically, the review would show the driver fixes and performance improvements.

Well, some reviewers might do that. But come on, sorry, if you put a product on the market it should perform the way it will perform in 6 months. Sure, there are always small driver optimizations to be made (even on Nvidia's side), but if your product needs to age "like good wine" it's clearly not ready to be released. The whole notion of "it might be better in the future" is BS, because it only shows that they were not ready and released it way too early. Look for example at the Guardians of the Galaxy performance (not that this is a game I play, but some do). Clearly nobody at AMD bothered to optimize the drivers for that....

And Nvidia is not sleeping either. They're already working on increasing performance as well. Unreal Engine 5 raytracing with Nvidia is going to be the bomb....
 

Phaaze88

Titan
Ambassador
Well, some reviewers might do that. But come on, sorry, if you put a product on the market it should perform the way it will perform in 6 months. Sure, there are always small driver optimizations to be made (even on Nvidia's side), but if your product needs to age "like good wine" it's clearly not ready to be released. The whole notion of "it might be better in the future" is BS, because it only shows that they were not ready and released it way too early. Look for example at the Guardians of the Galaxy performance (not that this is a game I play, but some do). Clearly nobody at AMD bothered to optimize the drivers for that....
Look at the state some AAA games launch in these days, backlash > 'apologies' > all is forgiven, with the publisher continuing to launch titles when they're not ready - patches pending.
Software is one of the biggest roadblocks to hardware, if not #1.
 
  • Like
Reactions: RodroX

mjbn1977

Distinguished
I feel like the 7900 XT is not a bad card. If you compare it to the 3080 at a recent MSRP ($830), it's a great deal. No other Nvidia cards sell for less than $1,000. So if your peak budget is $1,000, the 7900 XT is only competing with the 6950 XT. But I wanted something a little better than the 6950 XT: better codecs, better ray tracing, more power efficient, better for 4K especially. The 7900 XT ticks all those boxes.

Agreed, nobody said they are bad. Most reviewers agree that they're actually pretty good. But they have the same problem as Nvidia.....they are overpriced. Even though they're cheaper than the 4080, they're still at least $200 too expensive (the 4080 at least $300 too expensive). And we're both guilty of buying them anyway...LOL
 

Colif

Win 11 Master
Moderator
Look at the state some AAA games launch in these days, backlash > 'apologies' > all is forgiven, with the publisher continuing to launch titles when they're not ready - patches pending.
Software is one of the biggest roadblocks to hardware, if not #1.
The number of gamers who don't complain seems to outnumber those that do, so companies just keep repeating the same steps. If people stopped buying games that are broken on release, they might stop and think before releasing them. It was better before the internet let them release half-made software.
 

mjbn1977

Distinguished
Look at the state some AAA games launch in these days, backlash > 'apologies' > all is forgiven, with the publisher continuing to launch titles when they're not ready - patches pending.
Software is one of the biggest roadblocks to hardware, if not #1.

To be honest....the biggest roadblock for high-end GPUs is the performance of the available CPUs. What happens if the next GPU generation in two years delivers another 40-50% increase.....what CPU's single-core performance is keeping up with that? I think that is why technologies like frame generation are getting more and more important.....
 

Phaaze88

Titan
Ambassador
To be honest....the biggest roadblock for high-end GPUs is the performance of the available CPUs. What happens if the next GPU generation in two years delivers another 40-50% increase.....what CPU's single-core performance is keeping up with that? I think that is why technologies like frame generation are getting more and more important.....
It's the software.
It's too bad that core parallelization never took off; then a single core wouldn't be responsible for everything, including issuing instructions to other cores. It must be complicated and time-consuming to implement - I'm not a programmer though.
 

systemBuilder_49

Distinguished
Dec 9, 2010
58
15
18,545
Trifecta of problems here .. RX 7900 XTX
  1. Reference: Horrific Coil Whine and Poor power management
  2. Often unplayable ray-tracing frame rates
  3. Blender and other 3D and video processing software rendering times are often painfully slow
Hopefully the AMD reference card's issues are not carried over to some of the AIBs, but nothing is going to fix the ray-tracing and slow production rendering.
I keep hearing that the new cards are not good at puddle rendering (acronym: RTX) or at producing professional static 3D graphics images, that people always have $200 more to spend, and that performance per dollar on rasterization never matters. If you spend the extra $200, IMHO you're stuck with whatever you buy one extra year. The fact that this new card is the smallest and renders the most frames per dollar at almost all resolutions 1440p and higher "never matters". Can you guys even hear yourselves speak? Do you play any games at all? How many puddles do you shoot per day? Because if you can't shoot at it, what good is enhancing the rendering?
 
  • Like
Reactions: clsmithj

mjbn1977

Distinguished
It's the software.
It's too bad that core parallelization never took off; then a single core wouldn't be responsible for everything, including issuing instructions to other cores. It must be complicated and time-consuming to implement - I'm not a programmer though.

I am not sure if it is only software. Frame generation, for example. This is a technology that can really help in very GPU-limited situations. For example, in Microsoft Flight Simulator it really helps. I also tested it last night in the Witcher 3 next-gen update (very impressive, I don't see artifacts or ghosting, and I'm getting an absolutely smooth 110 fps with smooth frametimes on Raytracing Ultra at 1440p in Novigrad). But as far as I understand it is very much hardware based, using the tensor cores and the new optical flow accelerators which are added to the Ada Lovelace architecture. AMD is saying that they're working on FSR 3, which will have their frame generation technology, but I haven't heard anything in terms of specific hardware on their silicon to support this. Will be interesting to see how they're doing it....
 

Phaaze88

Titan
Ambassador
I am not sure if it is only software. Frame generation, for example. This is a technology that can really help in very GPU-limited situations. For example, in Microsoft Flight Simulator it really helps. But as far as I understand it is very much hardware based, using the tensor cores and the new optical flow accelerators which are added to the Ada Lovelace architecture. AMD is saying that they're working on FSR 3, which will have their frame generation technology, but I haven't heard anything in terms of specific hardware on their silicon to support this. Will be interesting to see how they're doing it....
Without software to tell the hardware what to do, none of these features work, or work correctly... frame generation is no different.

When some folks say, 'idle on the desktop', it's not idle. If it were actually idle, the OS wouldn't work.
Instead, it's in a low load state.
 

systemBuilder_49

Distinguished
Dec 9, 2010
58
15
18,545
Nvidia charges $1200 because it can — some people (enough?) are willing to pay that. AMD basically matches them on performance and charges $1000. But how much do the GPUs actually cost?

380 mm2 on 4N (tweaked N5P) for Nvidia, simple packaging (no chiplet stuff)
300 mm2 on N5P plus 220 mm2 on N7P for AMD, more complex packaging for chiplets

I look at that and can't help but think the cost of 80 mm2 on latest gen process probably ends up being less than 220 mm2 on n-1 process, plus the additional cost of the high performance fanout bridge. So rip out Infinity Fabric (about 15% of the die area), shrink things down, and a monolithic Navi 31 implementation probably ends up right around 400 mm2, maybe less. And because AMD can't charge as high of a premium, and because it probably ends up being close to the same cost for manufacturing, AMD still charges $1000.
You forgot about yield. The yield on a monolithic 380 mm² chip can be lower than on the 300 mm² + 220 mm² combined.
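For reference, here's the raw area math from the quote before yield even enters the picture. The wafer prices are purely illustrative guesses (real contract prices aren't public), and packaging costs are ignored:

```python
# Quick dies-per-wafer sketch for the area argument quoted above.
# Wafer prices are illustrative assumptions only; yield and packaging
# costs are deliberately ignored here.
import math

WAFER_DIAMETER_MM = 300.0

def dies_per_wafer(die_area_mm2: float) -> int:
    """Classic dies-per-wafer approximation (includes edge loss)."""
    r = WAFER_DIAMETER_MM / 2.0
    return int(math.pi * r * r / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2.0 * die_area_mm2))

N5_WAFER = 17_000   # assumed cost of an N5/4N-class wafer, USD
N6_WAFER = 10_000   # assumed cost of an N6/N7-class wafer, USD

monolithic = N5_WAFER / dies_per_wafer(380)        # ~380 mm^2 monolithic die
gcd        = N5_WAFER / dies_per_wafer(300)        # Navi 31 GCD, ~300 mm^2
mcds       = 6 * N6_WAFER / dies_per_wafer(37)     # six ~37 mm^2 MCDs (~220 mm^2 total)

print(f"~380 mm^2 monolithic die:   ${monolithic:.0f}")   # roughly $110
print(f"GCD + 6 MCDs (pre-package): ${gcd + mcds:.0f}")   # roughly $120
```

With those made-up wafer prices the two approaches land within a few percent of each other, which is the point of the quote; yield is what can tip the balance toward chiplets.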
 

mjbn1977

Distinguished
I keep hearing that the new cards are not good at puddle rendering (acronym: RTX) or at producing professional static 3D graphics images, that people always have $200 more to spend, and that performance per dollar on rasterization never matters. If you spend the extra $200, IMHO you're stuck with whatever you buy one extra year. The fact that this new card is the smallest and renders the most frames per dollar at almost all resolutions 1440p and higher "never matters". Can you guys even hear yourselves speak? Do you play any games at all? How many puddles do you shoot per day? Because if you can't shoot at it, what good is enhancing the rendering?

Puddle rendering? You know that RTX is more than just Battlefield 5's fancy puddles. To be honest, the difference that raytracing creates in games that implement it for global illumination is breathtaking. I spent a few hours with the Witcher 3 Next Gen update last night and for comparison started up AC: Valhalla (which has amazing graphics), but compared to Witcher 3 next gen its world looks ultra flat. Also, the raytraced reflections on the water when swimming around in the harbor of Novigrad even got my wife to comment on how impressive it looked. And she is hardly impressed by anything when it comes to video games.

If you have Cyberpunk, go into the Afterlife club and turn off raytracing.....see how it looks. Then turn raytracing back on at Ultra/Psycho and see how it looks. It totally adds to immersion and atmosphere....your new card should get pretty good results with FSR 2 activated.....

I don't play fast multiplayer shooter games. I prefer realistic, beautiful open-world single-player games. The more beautiful and more realistic they look, the better. I even just walk around and soak in the landscape.....I am just amazed by the technology and can't wait to see how those things will look 10 years from now. Raytracing is the future.......we now have over 100 games with raytracing and this is not going to change. But if I played multiplayer shooters I probably would turn it off.....then you're not looking at anything besides your opponent through the crosshair....
 
Last edited:
Look at the state some AAA games launch in these days, backlash > 'apologies' > all is forgiven, with the publisher continuing to launch titles when they're not ready - patches pending.
Software is one of the biggest roadblocks to hardware, if not #1.

Cyber....puke (back in the day), for example.

Indeed, much (most) of the time, software (driver and/or firmware) can have a huge impact on hardware performance. Go see Intel's GPUs after a few more months vs. day one.
 
  • Like
Reactions: Why_Me

JarredWaltonGPU

Senior GPU Editor
Editor
You forgot about yield. The yield on a monolithic 380 mm² chip can be lower than on the 300 mm² + 220 mm² combined.
This is why redundancies and/or the ability to disable functional units are a part of every recent GPU. Both the 7900 XTX and XT use the same 300mm2 Navi 31 GCD, but the XT disables one MCD and 12 CUs. The RTX 4080 uses AD103, but with only 76 of the potential 80 SMs enabled. Same for the RTX 4090, which only has 128 of a potential 144 SMs turned on. That sort of thing can massively boost yields.

Getting "perfect" chips at 300mm2 and larger would be relatively hard, so I wouldn't be surprised if less than 60–70% of all Navi 31 GCDs can meet the requirements for the XTX. But with the harvested XT variants, the total number of usable chips from a wafer almost certainly exceeds 90%.
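To put rough numbers on that, here's a quick back-of-the-envelope sketch with a simple Poisson defect model. The defect density and the salvage rate are illustrative assumptions, not TSMC or AMD figures, but they show how harvesting turns a so-so perfect-die yield into a very high usable-die yield:

```python
# Back-of-the-envelope yield sketch. DEFECT_DENSITY and SALVAGE_RATE are
# assumed values for illustration, not actual TSMC/AMD numbers.
import math

def zero_defect_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies with no defects at all (simple Poisson model)."""
    expected_defects = defects_per_cm2 * die_area_mm2 / 100.0  # mm^2 -> cm^2
    return math.exp(-expected_defects)

GCD_AREA_MM2 = 300.0    # Navi 31 GCD area, from the discussion above
DEFECT_DENSITY = 0.13   # assumed defects per cm^2 -- illustrative only
SALVAGE_RATE = 0.80     # assumed share of defective dies still sellable as
                        # a cut-down XT after fusing off CUs / an MCD link

perfect = zero_defect_yield(GCD_AREA_MM2, DEFECT_DENSITY)
usable = perfect + SALVAGE_RATE * (1.0 - perfect)

print(f"Fully working (XTX-grade) dies: {perfect:.0%}")   # ~68%
print(f"Usable dies incl. harvested XT: {usable:.0%}")    # ~94%
```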
 

mjbn1977

Distinguished
This is why redundancies and/or the ability to disable functional units are a part of every recent GPU. Both the 7900 XTX and XT use the same 300mm2 Navi 31 GCD, but the XT disables one MCD and 12 CUs. The RTX 4080 uses AD103, but with only 76 of the potential 80 SMs enabled. Same for the RTX 4090, which only has 128 of a potential 144 SMs turned on. That sort of thing can massively boost yields.

Getting "perfect" chips at 300mm2 and larger would be relatively hard, so I wouldn't be surprised if less than 60–70% of all Navi 31 GCDs can meet the requirements for the XTX. But with the harvested XT variants, the total number of usable chips from a wafer almost certainly exceeds 90%.

In the case of Nvidia, they're already collecting all the good AD103 and AD102 chips that have all SMs working and clocking well, and will eventually release them as a 4080 Ti and 4090 Ti.
 
This is why redundancies and/or the ability to disable functional units are a part of every recent GPU. Both the 7900 XTX and XT use the same 300mm2 Navi 31 GCD, but the XT disables one MCD and 12 CUs. The RTX 4080 uses AD103, but with only 76 of the potential 80 SMs enabled. Same for the RTX 4090, which only has 128 of a potential 144 SMs turned on. That sort of thing can massively boost yields.

Getting "perfect" chips at 300mm2 and larger would be relatively hard, so I wouldn't be surprised if less than 60–70% of all Navi 31 GCDs can meet the requirements for the XTX. But with the harvested XT variants, the total number of usable chips from a wafer almost certainly exceeds 90%.

That will indeed boost available chips at the beginning of a new product launch.

Also, in the long run, as the manufacturing process gets better and better, it will allow Nvidia (in this case) to save the best chips and launch new Ti/Super models at higher prices.
 

JarredWaltonGPU

Senior GPU Editor
Editor
In the case of Nvidia, they're already collecting all the good AD103 and AD102 chips that have all SMs working and clocking well, and will eventually release them as a 4080 Ti and 4090 Ti.
I believe Nvidia's professional cards and some mobile solutions are already shipping with fully enabled AD102 and AD103. Those of course cost way more than even an RTX 4090.
 

umeng2002_2

Prominent
Jan 10, 2022
108
61
670
Once AMD cracks the nut of splitting the compute die for GPUs (the actual compute units, not just the cache and memory controllers), they can scale them like Epyc CPUs.
 

sycoreaper

Honorable
Jan 11, 2018
457
150
12,820
In raw power the 7900 XTX fights the 4080 and can come out on top in many tests. However, the 4080 makes up a ton of ground with raytracing (discussed and known), and DLSS 3 is the crème de la crème, pushing it to victory IMO.

That said, it comes down to preference. I don't think either card is a good value, nor the 4090.
 

Colif

Win 11 Master
Moderator
WTB more reviews of any of the cards apart from the reference models.
I had intended to read reviews of the card I was buying before I got it, but that isn't going to happen. Sure, I have an XT model, the one no one was meant to buy... according to reviewers. Shame it's about all you can buy right now.

Everyone seems to just get the reference card model and say, well, all XTs are the same... Pretty sure the Sapphire Nitro+ is way better than a reference model: better fans, quieter. It probably doesn't have the weird overclocking power draw issue they do.

It seems Asus aren't releasing anything other than the reference models this year.
 

clsmithj

Distinguished
Nov 30, 2011
40
6
18,535
Puddle rendering? You know that RTX is more than just Battlefield 5's fancy puddles. To be honest, the difference that raytracing creates in games that implement it for global illumination is breathtaking. I spent a few hours with the Witcher 3 Next Gen update last night and for comparison started up AC: Valhalla (which has amazing graphics), but compared to Witcher 3 next gen its world looks ultra flat. Also, the raytraced reflections on the water when swimming around in the harbor of Novigrad even got my wife to comment on how impressive it looked. And she is hardly impressed by anything when it comes to video games.

If you have Cyberpunk, go into the Afterlife club and turn off raytracing.....see how it looks. Then turn raytracing back on at Ultra/Psycho and see how it looks. It totally adds to immersion and atmosphere....your new card should get pretty good results with FSR 2 activated.....

I don't play fast multiplayer shooter games. I prefer realistic, beautiful open-world single-player games. The more beautiful and more realistic they look, the better. I even just walk around and soak in the landscape.....I am just amazed by the technology and can't wait to see how those things will look 10 years from now. Raytracing is the future.......we now have over 100 games with raytracing and this is not going to change. But if I played multiplayer shooters I probably would turn it off.....then you're not looking at anything besides your opponent through the crosshair....
But it is mostly just fancy puddles. Snipped the image below from some fanboy bragging about RT perf that looks kind of silly.

cca61a5091dcced7f06371432cd86e6c00363d43f807fb8213bbe78ccdc98e4e.jpg


Not only that, people are acting like similar effects are not possible on cards without RT cores.

GTX 1660 Ti running CP2077 at 1080p:

c4fdba76b0a49fa3a80c09b72df01228fffaa8da91de8f1c794241e41ce03b6c.png
 

mjbn1977

Distinguished
again
But it is mostly just fancy puddles. Snipped the image below from some fanboy bragging about RT perf that looks kind of silly.

cca61a5091dcced7f06371432cd86e6c00363d43f807fb8213bbe78ccdc98e4e.jpg


Not only that, people are acting like similar effects are not possible on cards without RT cores.

GTX 1660 Ti running CP2077 at 1080p:

c4fdba76b0a49fa3a80c09b72df01228fffaa8da91de8f1c794241e41ce03b6c.png

Well, this is screen space reflections, and if you looked down at the crosswalk, every reflection of buildings, items, and lights that are not in the field of view would disappear. For graphics aficionados this is immersion breaking. Just go to a place with good reflections, look up and down, and watch what happens to the reflections....
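If you're curious why SSR physically can't reflect anything that's off-screen, here's a tiny conceptual sketch (toy code with made-up names, not any engine's real implementation): the reflection is found by marching a ray through pixels that are already in the frame, so the moment the ray leaves the frame there's nothing left to sample:

```python
# Minimal conceptual sketch of screen-space reflections (SSR).
# Toy code with made-up buffers -- not any game engine's actual implementation.

def ssr_trace(color_buffer, depth_buffer, origin_uv, dir_uv, steps=64):
    """March a reflected ray in normalized screen space (0..1)."""
    u, v = origin_uv
    du, dv = dir_uv
    h, w = len(color_buffer), len(color_buffer[0])
    for _ in range(steps):
        u, v = u + du, v + dv
        if not (0.0 <= u < 1.0 and 0.0 <= v < 1.0):
            # The ray left the visible frame: whatever it would have hit was
            # never rendered on screen, so the reflection simply vanishes
            # (engines fall back to a cubemap or fade it out).
            return None
        x, y = int(u * w), int(v * h)
        if depth_buffer[y][x] < 1.0:      # crude "the ray hit something" test
            return color_buffer[y][x]     # reflect only what's already on screen
    return None

# Toy 4x4 frame: looking down, the reflected ray quickly marches off-screen.
color = [[(i, j, 0) for j in range(4)] for i in range(4)]
depth = [[1.0] * 4 for _ in range(4)]     # nothing to hit in this toy frame
print(ssr_trace(color, depth, (0.5, 0.8), (0.0, 0.1)))   # -> None, reflection lost
```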

Again, you, like many others, only use raytracing as a synonym for reflections. But in the case of Cyberpunk you have three areas that use raytracing: reflections, shadows and lighting. Lighting is the one that actually makes the game look much different. It is really hard to explain and it needs to be experienced; you have to keep switching it on and off in real time and try it out. My example from above (the Afterlife club) really shows the difference. Before I had the RTX 4080 (I had a 2080 before) I was not able to play the game with raytracing. I tested a lot and wanted to play with raytracing, but didn't get good frame rates. Raytracing Ultra even with DLSS only gave me frame rates in the 30s. Now I can play with raytracing on Ultra (and Psycho) at 60 fps without DLSS and 100 fps with DLSS Quality. BIG WIN....and the game definitely looks way better with raytracing. Yes, the game also looks great without raytracing......but hey, we want to improve graphics. And raytracing is the best and only way to improve game graphics.....so let's go with it.

Here are the final thoughts from this video:

View: https://www.youtube.com/watch?v=U0Ay8rMdFAg&t=791s

And a quality comparison from the same video:

View: https://www.youtube.com/watch?v=U0Ay8rMdFAg&t=123s


View: https://www.youtube.com/watch?v=Xf2QCdScU6o


I don't understand why raytracing gets such bad press. By now it really runs well on a lot of hardware (yeah, high end and expensive, I know). But raytracing will also make creating games much easier for developers....so they might have more time to spend on actual gameplay and story development.
 
Last edited:

zx128k

Reputable
again


Well, this is screen space reflections, and if you looked down at the crosswalk, every reflection of buildings, items, and lights that are not in the field of view would disappear. For graphics aficionados this is immersion breaking. Just go to a place with good reflections, look up and down, and watch what happens to the reflections....

Again, you, like many others, only use raytracing as a synonym for reflections. But in the case of Cyberpunk you have three areas that use raytracing: reflections, shadows and lighting. Lighting is the one that actually makes the game look much different. It is really hard to explain and it needs to be experienced; you have to keep switching it on and off in real time and try it out. My example from above (the Afterlife club) really shows the difference. Before I had the RTX 4080 (I had a 2080 before) I was not able to play the game with raytracing. I tested a lot and wanted to play with raytracing, but didn't get good frame rates. Raytracing Ultra even with DLSS only gave me frame rates in the 30s. Now I can play with raytracing on Ultra (and Psycho) at 60 fps without DLSS and 100 fps with DLSS Quality. BIG WIN....and the game definitely looks way better with raytracing. Yes, the game also looks great without raytracing......but hey, we want to improve graphics. And raytracing is the best and only way to improve game graphics.....so let's go with it.

Here are the final thoughts from this video:

View: https://www.youtube.com/watch?v=U0Ay8rMdFAg&t=791s

And a quality comparison from the same video:

View: https://www.youtube.com/watch?v=U0Ay8rMdFAg&t=123s


View: https://www.youtube.com/watch?v=Xf2QCdScU6o


I don't understand why raytracing gets such bad press. By now it really runs well on a lot of hardware (yeah, high end and expensive, I know). But raytracing will also make creating games much easier for developers....so they might have more time to spend on actual gameplay and story development.

Ray tracing gets bad press because people have AMD stock and make money if AMD does well. Reviews are still focusing on raster and ignoring maximum settings with RT to make AMD cards seem faster and better than they are. Really, the RX 7900 XTX is close to the RTX 3090 in performance in more games. In Portal RTX an RTX 3090 is far faster than an RX 7900 XTX. This is why raster is focused on, when it's a dead end and is being replaced by RT. Most new games and remasters of old games will involve ray tracing/path tracing. AMD is far behind in performance. Cards should be focused on RT and AI upscaling. AMD is focused on raster, so performance will suck as new games appear. That's why RT is played down and made to look unimportant. Meanwhile, RT performance is all you should care about, along with AI upscaling performance. Raster is legacy; that a card gets 255 and not 240 fps makes no practical difference on a 144 Hz monitor. Meanwhile, a card that gets 30-50%+ less performance in RT is a really big deal, as it means you can't get a decent frame rate even with AI upscaling.
 
  • Like
Reactions: heartlessnoob
Been hearing that the 7000 series has had some driver issues. If you are going AMD this time around, a 6800 XT or 6900 XT seems like a good bet to get most of that performance and save some money. Also, the drivers for the 6000 series and the cards themselves should be pretty mature.

To be honest, I didn't think I wanted an Nvidia card or that I'd care about ray tracing. But I got a little Christmas money and found a good deal on an RTX 3080, which is now on the way to me. Sort of interested to try things out a bit, and it puts a bit more life in my AM4 rig.