Have you guys at Tom's Hardware ever thought of doing a review of the 7000's and 4000's a few months later after launch. Basically, the review would show the driver fixes and performance improvements.
Look at the state some AAA games launch in these days, backlash > 'apologies' > all is forgiven, with the publisher continuing to launch titles when they're not ready - patches pending.
Well, some reviewers might do that. But, come on, sorry, if you put a product on the market it should perform the way it will perform in 6 months. Sure, there are always small optimization improvements to be made within drivers (even with Nvidia games), but if your product needs to age "like good wine" it's clearly not ready to be released. The whole notion of "it might be better in the future" is BS, because it only shows that they were not ready and released it way too early. Look for example at the Guardians of the Galaxy performance (not that this is a game I play, but some do). Clearly nobody at AMD bothered to optimize any drivers for that....
I feel like the 7900 XT is not a bad card. If you compare it to the 3080 at a recent MSRP ($830), it's a great deal. No other Nvidia cards sell for less than $1000. So if your peak budget is $1000, the 7900 XT is only competing with the 6950 XT. But I wanted something a little better than the 6950 XT: better codecs, better ray tracing, more power efficient, better for 4K especially. The 7900 XT ticks all those boxes.
The number of gamers who don't complain seems to outnumber those that do, so companies just keep repeating the same steps. If people stopped buying games that are broken on release, they might stop and think before releasing them. It was better before the internet let them release half-made software.
Software is one of the biggest roadblocks to hardware, if not #1.
Look at the state some AAA games launch in these days, backlash > 'apologies' > all is forgiven, with the publisher continuing to launch titles when they're not ready - patches pending.
Software is one of the biggest roadblocks to hardware, if not #1.
It's the software.
To be honest, the biggest roadblock for high-end GPUs is the performance of the available CPUs. What happens if the next GPU generation in two years delivers another 40-50% increase? What CPU single-core performance is keeping up with that? I think that is why technologies like frame generation are getting more and more important.....
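As a rough back-of-the-envelope illustration of that bottleneck: the displayed frame rate is capped by whichever of the CPU or GPU is slower, and frame generation sidesteps the CPU side by inserting interpolated frames on the GPU. The millisecond figures in this sketch are made up for the example, not measurements of any real game:

```python
# Toy frame-time model: displayed fps is limited by the slower of CPU and GPU.
# Frame generation inserts a GPU-interpolated frame between every two rendered
# frames, so the CPU only has to feed half of the presented frames.
# All numbers are made-up examples, not measurements.
cpu_frame_ms = 12.5   # CPU work per frame -> caps rendering at 80 fps
gpu_frame_ms = 6.5    # GPU render time per frame -> ~154 fps if the CPU were free
interp_cost_ms = 2.0  # assumed GPU cost to generate one intermediate frame

native_fps = 1000 / max(cpu_frame_ms, gpu_frame_ms)

# Two presented frames per rendered interval; the GPU also pays the interpolation cost.
framegen_fps = 2000 / max(cpu_frame_ms, gpu_frame_ms + interp_cost_ms)

print(f"Native (CPU-bound):      {native_fps:.0f} fps")
print(f"With frame generation:  ~{framegen_fps:.0f} fps")
```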
I keep hearing that the new cards are not good at puddle rendering (acronym: RTX) and for producing professional static 3D graphics images, and that people always have $200 more to spend, and that performance per dollar on rasterization never matters. If you spend the extra $200, imho you're stuck with whatever you buy for one extra year. The fact that this new card is the smallest and renders the most frames per dollar at almost all resolutions 1440p and higher "never matters". Can you guys even hear yourself speak? Do you play any games at all? How many puddles do you shoot per day? Because if you can't shoot at it, what good is enhancing the rendering?
Trifecta of problems here... RX 7900 XTX
The reference AMD card's issues hopefully are not carried over to some of the AIBs, but nothing is going to fix ray tracing and slow production rendering.
- Reference: Horrific coil whine and poor power management
- Often unplayable ray-tracing frame rates
- Blender and other 3D and video processing software rendering times are often painfully slow
It's the software.
It's too bad that core parallelization never took off; then a single core wouldn't be responsible for everything, including instructions to other cores. Must be complicated and time consuming to implement - I'm not a programmer though.
Without software to tell the hardware what to do, none of these features work, or work correctly... frame generation is no different.
I am not sure if it is only software. Take frame generation, for example. This is a technology that can really help in very CPU-limited situations. For example, in Microsoft Flight Simulator it really helps. But as far as I understand it is very much hardware-based, using the tensor cores and the new optical flow accelerators which are added to the Ada Lovelace architecture. AMD is saying that they are working on FSR 3, which will have their frame generation technology, but I haven't heard anything in terms of specific hardware in their silicon to support this. It will be interesting to see how they are doing it....
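For anyone curious what the frame generation step conceptually does, here is a minimal sketch using OpenCV's dense optical flow to warp one frame halfway toward the next. It only illustrates the idea; it is not how Nvidia's optical flow accelerators or AMD's upcoming FSR 3 actually implement it:

```python
# Minimal optical-flow frame interpolation sketch (concept only, not a real
# frame-generation implementation). Computes per-pixel motion between two
# frames and warps the first frame halfway along that motion.
import cv2
import numpy as np

def interpolate_frame(frame_a, frame_b):
    """Return an approximate in-between frame for frame_a -> frame_b."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # Dense optical flow: a motion vector for every pixel from frame_a to frame_b.
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward-warp: sample frame_a half a motion vector behind each output pixel.
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)
```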
You forgot about yield. The yield on the monolithic 380 mm2 chip can be lower than on the 300 mm2 + 220 mm2 combined.
Nvidia charges $1200 because it can — some people (enough?) are willing to pay that. AMD basically matches them on performance and charges $1000. But how much do the GPUs actually cost?
380 mm2 on 4N (tweaked N5P) for Nvidia, simple packaging (no chiplet stuff)
300 mm2 on N5P plus 220 mm2 on N7P for AMD, more complex packaging for chiplets
I look at that and can't help but think the cost of 80 mm2 on latest gen process probably ends up being less than 220 mm2 on n-1 process, plus the additional cost of the high performance fanout bridge. So rip out Infinity Fabric (about 15% of the die area), shrink things down, and a monolithic Navi 31 implementation probably ends up right around 400 mm2, maybe less. And because AMD can't charge as high of a premium, and because it probably ends up being close to the same cost for manufacturing, AMD still charges $1000.
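To put rough numbers on that, here is a back-of-the-envelope dies-per-wafer and yield sketch using the classic edge-loss approximation and a simple Poisson defect model. The defect density is an assumed placeholder, not a real TSMC figure, and the 220 mm2 of MCD silicon is treated as six small dies, as on Navi 31:

```python
# Rough dies-per-wafer and defect-free-yield comparison for a ~380 mm2
# monolithic die vs. a 300 mm2 GCD plus small MCDs. The defect density is an
# assumed placeholder; real foundry numbers are not public.
import math

WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.07   # assumption for illustration

def dies_per_wafer(area_mm2):
    """Classic approximation: gross dies minus edge loss."""
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / area_mm2 - math.pi * d / math.sqrt(2 * area_mm2))

def defect_free_fraction(area_mm2):
    """Poisson model: probability a die has zero killer defects."""
    return math.exp(-DEFECTS_PER_CM2 * area_mm2 / 100)

for name, area in [("~380 mm2 monolithic", 380),
                   ("300 mm2 GCD", 300),
                   ("one ~37 mm2 MCD", 220 / 6)]:
    dpw = dies_per_wafer(area)
    good = defect_free_fraction(area)
    print(f"{name}: ~{dpw} dies/wafer, ~{good:.0%} defect-free, ~{int(dpw * good)} good")
```

The tiny MCDs yield almost perfectly, which is a big part of the chiplet argument; whether that outweighs the extra N7 wafers and the fanout packaging is exactly the open question above.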
You forgot about yield. The yield on the monolithic 380 mm2 chip can be lower than on the 300 mm2 + 220 mm2 combined.
This is why redundancies and/or the ability to disable functional units are a part of every recent GPU. Both the 7900 XTX and XT use the same 300mm2 Navi 31 GCD, but the XT disables one MCD and 12 CUs. The RTX 4080 uses AD103, but with only 76 of the potential 80 SMs enabled. Same for the RTX 4090, which only has 128 of a potential 144 SMs turned on. That sort of thing can massively boost yields.
Getting "perfect" chips at 300mm2 and larger would be relatively hard, so I wouldn't be surprised if less than 60–70% of all Navi 31 GCDs can meet the requirements for the XTX. But with the harvested XT variants, the total number of usable chips from a wafer almost certainly exceeds 90%.
In the case of Nvidia, they are already collecting all the good AD103 and AD102 chips that have all SMs working and clocking well, and will eventually release them as a 4080 Ti and 4090 Ti.
I believe Nvidia's professional cards and some mobile solutions are already shipping with fully enabled AD102 and AD103. Those of course cost way more than even an RTX 4090.
But it is mostly just fancy puddles. Snipped the image below from some fanboy bragging about RT perf that looks kind of silly.
Puddle rendering? You know that RTX is more than just Battlefield 5 fancy puddles. To be honest, the difference that ray tracing creates in games that implement it for global illumination is breathtaking. I spent a few hours with the Witcher 3 next-gen update last night and, for comparison, started up AC: Valhalla (which has amazing graphics), but next to Witcher 3 next gen its world looks ultra flat. Also, the ray-traced reflections on the water when swimming around in the harbor of Novigrad even got my wife to comment that she was impressed. And she is hardly impressed by anything when it comes to video games.
If you have Cyberpunk, go into the Afterlife club and turn off raytracing... see how it looks. Then turn raytracing back on at Ultra/Psycho and see how it looks. It totally adds to immersion and atmosphere... your new card should get pretty good results with FSR 2 activated...
I don't play fast multiplayer shooter games. I prefer realistic, beautiful open-world single-player games. The more beautiful and more realistic they look, the better. I even just walk around and soak in the landscape... I am just amazed by the technology and can't wait to see how those things will look 10 years from now. Raytracing is the future... we now have over 100 games with raytracing and this is not going to change. But if I played multiplayer shooters I would probably turn it off; then you're not looking at anything besides your opponent through the crosshair anyway...
But it is mostly just fancy puddles. Snipped the image below from some fanboy bragging about RT perf that looks kind of silly.
[image: RT performance screenshot]
Not only that, people are acting like similar effects are not possible on cards without RT cores.
GTX 1660 Ti running CP2077 at 1080p:
[image: CP2077 screenshot on a GTX 1660 Ti at 1080p]
Well, this is screen space reflections, and if you look down at the crosswalk, every reflection of buildings, items, and light that are not in the field of view will disappear. For graphics aficionados this is immersion breaking. Just go to a place with good reflections, look up and down, and watch what happens to the reflections...
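For anyone wondering why that happens, here is a toy sketch of the screen-space reflection idea: the reflected ray is marched only through the on-screen depth buffer, so anything outside the frame simply has no data to reflect. It's a deliberately simplified Python illustration, not an actual shader:

```python
# Toy illustration of screen-space reflections (SSR): a reflected ray is
# marched through the visible depth buffer only. If it leaves the screen, the
# reflection has nothing to sample, which is why off-screen objects vanish
# from reflections. Purely illustrative, not a real shader.
import numpy as np

depth_buffer = np.full(100, 10.0)   # 100 screen columns, background at depth 10
depth_buffer[40:60] = 3.0           # a "building" visible in columns 40..59

def ssr_trace(start_col, step_cols, start_depth, step_depth, max_steps=200):
    """March a reflected ray in screen space; return the column it hits, or None."""
    col, depth = float(start_col), start_depth
    for _ in range(max_steps):
        col += step_cols
        depth += step_depth
        if not 0 <= col < len(depth_buffer):
            return None                      # ray left the screen: no data to reflect
        if depth >= depth_buffer[int(col)]:
            return int(col)                  # hit geometry that is on screen
    return None

print(ssr_trace(70, -0.5, 1.0, 0.05))  # heads toward the on-screen building: hit
print(ssr_trace(70, +0.5, 1.0, 0.05))  # heads off-screen: None, the reflection pops out
```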
Again, you, like many others, only use raytracing as a synonym for reflections. But in the case of Cyberpunk you have three areas that use raytracing: reflections, shadows, and lighting. Lighting is the one that actually makes the game look much different. It is really hard to explain and it needs to be experienced; you have to keep switching it on and off in real time and try it out. My example from above (the Afterlife club) really shows the difference. Before I had the RTX 4080 (had the 2080 before) I was not able to play the game with raytracing. I tested a lot and wanted to play with raytracing, but didn't get good frame rates. Raytracing Ultra even with DLSS only gave me fps in the 30s. Now I can play with raytracing on Ultra (and Psycho) at 60 fps without DLSS and 100 fps with DLSS Quality. BIG WIN... and the game definitely looks way better with raytracing. Yes, the game also looks great without raytracing... but hey, we want to improve graphics. And raytracing is the best and only way to improve game graphics... so let's go with it.
Here are the final thoughts in this video:
View: https://www.youtube.com/watch?v=U0Ay8rMdFAg&t=791s
And quality comparison from the same video:
View: https://www.youtube.com/watch?v=U0Ay8rMdFAg&t=123s
View: https://www.youtube.com/watch?v=Xf2QCdScU6o
I don't understand why raytracing gets such bad press. By now it really runs well on a lot of hardware (yeah, high end and expensive, I know). But raytracing will also make creating games much easier for developers... so they might have more time to spend on actual gameplay and story development.