"We've managed to convince nVidia to allows to fix the generic coding paths used for AMD that don't work with their new GPU architectures, so they'll see their GPUs better utilized now instead of using basic drawcalls in the incorrect order overwhelming the hardware scheduler".
I think I've translated their statement into something a bit more accurate? xD
Well, sarcasm and joking aside, it's good to see bugs of this type getting squashed. Free extra performance is always welcome, even if it's a tad late.
Regards.
Welcome to my world... You might have written that sarcastically, but I wouldn't be shocked if it was done on purpose with the intent to quickly fix it; unfortunately, said fix only comes out after Digital Foundry has released their performance videos.
Nothing would surprise me when it comes to nVidia. I mean damn... I remember back when they cheated benchmarking software by secretly overclocking the core and memory whenever the drivers detected a benchmark running. They did it for something like 3 years before getting caught, and it was enough time for them to build a decent sales lead over ATI despite ATI's cards being on par or better in normal gaming use.
Then, after they were caught, they started working on GameWorks: a suite of visual effects designed to run very well on their hardware and very badly on ATI/AMD hardware.
In fact, the "fine wine" effect actually comes from the AMD driver team writing their own code for an equivalent visual effect (like HBAO+) and then, instead of running the GameWorks shader, injecting the AMD alternative (which was open source). It usually looked identical, sometimes better, and also ran significantly better. It generally took AMD a few weeks after a game's release to do this, which is why performance seemed to improve over time despite no game updates.
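For anyone curious, here's a minimal sketch of the general idea behind driver-side shader replacement (all names here are made up, and real driver internals are far more involved than a dict lookup):

```python
import hashlib

# Hypothetical table: hash of the game's original shader bytecode
# mapped to the driver team's tuned replacement bytecode.
REPLACEMENTS: dict[str, bytes] = {}

def register_replacement(original: bytes, tuned: bytes) -> None:
    """Record a tuned shader to swap in whenever 'original' is compiled."""
    REPLACEMENTS[hashlib.sha256(original).hexdigest()] = tuned

def pick_shader(original: bytes) -> bytes:
    """Return the tuned replacement if one is known, else the original."""
    return REPLACEMENTS.get(hashlib.sha256(original).hexdigest(), original)
```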
Witcher 3 is a fantastic example of nVidia being jerks. They knew GCN 3 wasn't great at tessellation, so they set HairWorks on Geralt to run at 64x tessellation with an insane amount of MSAA applied on top. If you forced 8x tessellation in the AMD control panel, the hair looked identical but you gained something like 20 fps. The rough math below shows why.
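Here's a quick back-of-the-envelope sketch of that (every number is made up purely for illustration; this is not how HairWorks actually counts its geometry). For a 2D patch domain, the primitive count grows roughly with the square of the tessellation factor, so 64x is on the order of 64 times the geometry work of 8x:

```python
# All numbers hypothetical -- just showing how tessellation cost scales.
PATCHES = 10_000  # assumed number of hair patches per frame

for factor in (8, 16, 32, 64):
    # Primitive count grows roughly with the square of the factor
    # for a 2D patch domain.
    prims = PATCHES * factor ** 2
    print(f"{factor:>2}x tessellation -> ~{prims:,} primitives")
```

Same underlying hair splines either way, which is why the forced 8x looked identical while freeing up a big chunk of frame time.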
Look at the difference between HairWorks and TressFX: AMD's solution is open source, looks significantly better, and uses half the resources.
Let's also not forget the whole GTX 970 split-VRAM debacle (3.5GB of full-speed memory plus a slow 0.5GB segment, sold as 4GB), or nVidia being caught lowering performance on older cards via drivers to try and push people to upgrade.
So... Despite my last 3 GPUs being nVidia (GTX 1080, RTX 2080 Ti FTW3 Ultra, RTX 3080 Ti FTW3 Ultra), I have no trust in them. I'm really hoping RDNA3 knocks it out of the park. They already beat nVidia in rasterization performance, so all they need to do is add tensor cores (or an FPGA) to handle ray tracing and a temporal, AI-based alternative to DLSS.
FSR works the way it does (pure shader code, no AI) because RDNA2 doesn't have the kind of dedicated hardware the RTX series does, but that's expected to change with RDNA3.
The end..lol