It's more about coding culture than anything else. We've seen this many times in history. With very limited hardware resources, the development process has to be heavily focused on optimization and proper resource utilization. As hardware grows, there is less need for optimization to get the same performance, and adding features to the hardware lets development teams skip some steps. I remember times when it was worth tinkering with branch prediction to improve performance; now it barely matters because of how much branch predictors have improved. Similarly, when the PS3 was released, games performed very poorly because of the complexity of the hardware, but over time developers learned to utilize it all and gained much more performance from it. Newer consoles, on the other hand, use a much more common and forgiving architecture that does not punish you for taking shortcuts and keeping things simple.

We may be getting into this territory with graphics and games, especially from companies like Bethesda. It's quite clear that AMD is going in the direction of more forgiving hardware, especially with so much cache: less thinking required, and better performance even if you didn't pay much attention to cache behavior. Nvidia has stayed with the more conservative approach so far, expecting developers to know how to code for efficient GPU usage.

For me it is simply Bethesda's fault. It's as if a C++ engineer made a copy of the same data all over the place because he didn't have time to think about references, and then told you to just get more memory, since it's not that expensive anymore. Yes, throwing more hardware at the problem to cut development cost is a viable option, but it's not a good justification. Whether it's a business decision or negligence, it does not look good. The same goes for DLSS; it's a higher-level example of the same thing. XeSS and DLSS are optimized for specific hardware, while FSR is a generic solution; shipping only the generic one is either laziness or a malicious decision.
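The copy-versus-reference analogy can be sketched in a few lines of C++ (a hypothetical example; the function and data names are mine, not from any real engine): passing a container by value copies every element on every call, while passing it by const reference touches the caller's data in place and gives the same result.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Wasteful: the whole vector (and every string in it) is copied
// on every call. "Just buy more RAM" hides the cost; it doesn't remove it.
std::size_t total_length_copy(std::vector<std::string> names) {
    std::size_t total = 0;
    for (const auto& n : names) total += n.size();
    return total;
}

// Idiomatic: const reference, no copy, identical behavior.
std::size_t total_length_ref(const std::vector<std::string>& names) {
    std::size_t total = 0;
    for (const auto& n : names) total += n.size();
    return total;
}
```

Both functions return the same value; the difference only shows up as memory traffic and allocations, which is exactly the kind of cost that forgiving hardware lets you ignore until it piles up.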
We will see more of such problems in the future, though, especially if AMD keeps focusing on a more forgiving hardware design than NVIDIA. After all, the main purpose of GPUs was to be specialized, to do one particular job very well; the whole design of a GPU is about efficiency in specific tasks, not general-purpose processing. Expecting it to behave more like a CPU, with a much more forgiving and universal design, is the wrong approach. What next, games rendered on the CPU once CPUs are powerful enough for 30fps at 1080p?