It's not tilting at windmills, at all. It's an industry trend that consumers can change. It's not like nVidia is forcing you to use DLSS, buy the most expensive video card that promises the "biggest AI evah", or turn on FrameGen.
You're adopting/accepting something that, from what I gather, you don't even embrace enthusiastically? More like reluctantly, in a "the industry is going there, so let's just eat it silently" kind of way?
In any case, I don't have a problem with techniques, whatever buzzword they're sold under, that improve the experience without sacrificing visual fidelity. What I can't reconcile is this: on one hand you have RT, which is basically real-life light bounces (or as close as the calculation can get); on the other, it's so heavy that it needs a second technique that fabricates frames from image analysis plus a bit of motion data from the game engine, "guessing" what comes next just so the "best realism" on your screen stays playable. In other words: life-like light versus hallucinated frames, only so the FPS metric goes up while visual fidelity goes down. That's the main point that bothers me: there's a stupid contradiction here, and it seems like a lot of people are just ignoring it, because reasons. And because of that, it's not a "net positive" for me.
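Just to make concrete what I mean by "guessing": a generated frame is essentially a reprojection of pixels that were already rendered, nudged along engine-supplied motion vectors, not new information about the scene. Here's a rough toy sketch of that idea (this has nothing to do with any vendor's actual implementation; every name and step here is made up for illustration):

```python
# Toy sketch of motion-vector-guided frame interpolation. Assumes we already
# have two rendered frames and per-pixel motion vectors from the engine.
import numpy as np

def interpolate_frame(prev_frame, next_frame, motion_vectors, t=0.5):
    """Guess an intermediate frame at time t between two rendered frames.

    prev_frame, next_frame: (H, W, 3) float arrays of pixel colors.
    motion_vectors: (H, W, 2) per-pixel (dy, dx) motion from prev_frame
                    to next_frame, as an engine might report it.
    t: position of the generated frame between the two real ones (0..1).
    """
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Move each pixel part of the way along its motion vector, clamped to
    # the image bounds. This is the "guess": no new scene information,
    # just reprojection of what was already drawn.
    src_y = np.clip((ys - t * motion_vectors[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip((xs - t * motion_vectors[..., 1]).astype(int), 0, w - 1)
    warped_prev = prev_frame[src_y, src_x]

    # Blend with the next real frame to hide warping artifacts.
    return (1 - t) * warped_prev + t * next_frame

# Toy usage: two flat-color "frames" and a uniform rightward motion field.
prev_f = np.zeros((4, 4, 3)); next_f = np.ones((4, 4, 3))
mv = np.zeros((4, 4, 2)); mv[..., 1] = 1.0
mid = interpolate_frame(prev_f, next_f, mv)
print(mid[0, 0])  # halfway between the two real frames
```

The clamping/blending step is exactly where fidelity leaks away: anything the camera didn't actually render (disocclusions, fast particles, UI elements) has to be invented, which is why the artifacts show up there.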
On the other hand, I'm not daft: I can see where upscaling works best for people and where, to a degree, frame interpolation (not generation) can help deliver a somewhat better experience, especially for people who get motion sickness from bad frame pacing. And like I mentioned in another post: there are other good things nVidia, AMD and Intel could be doing for games, but they just focus on "bigger bar better" technologies instead.
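And on the frame-pacing point: the FPS counter alone hides the problem, because two sequences with the same average can feel completely different. A trivial illustration with made-up numbers:

```python
# Why frame pacing matters more than the FPS counter: two frame-time
# sequences (milliseconds) with the same average but very different feel.
import statistics

steady   = [16.7] * 12        # even ~60 FPS delivery
stuttery = [8.0, 25.4] * 6    # same average, alternating spikes

for name, times in (("steady", steady), ("stuttery", stuttery)):
    avg_fps = 1000 / statistics.mean(times)
    jitter = statistics.pstdev(times)   # frame-to-frame inconsistency
    print(f"{name}: ~{avg_fps:.0f} FPS average, {jitter:.1f} ms jitter")
# Both report ~60 FPS, but only the steady one actually feels like it.
```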
Regards.