I mentioned in another thread that I suspect that future generations of cards are going to be less about adding more raw processing power, and more about improving the AI side of things - why add a few thousand more CUDA cores to scrape out an extra 5% performance when a few hundred Tensor cores can effectively double it? I know some people complain that they're not "real" pixels or frames and that it's a cheat, but pretty much every aspect of a modern rasteriser is a cheat already - parallax occlusion mapping, tessellation, screen space effects, they're all just as "fake" as DLSS!
Eh, it depends. Using advanced pattern detection (that's all "AI" is) to interpolate additional rendering data is fine. Using that same technique to artificially advance a frame counter to market "performance" is definitely a cheat. My qualm has been marketing leaning on hype and the general ignorance of the public to sell products.

Frame generation will never be more than a gimmick because you're trying to render the future, and that's not possible (at least not under modern physics), so you will always have weird artifacts and latency issues.

Pattern-detection upscaling, on the other hand, is an extremely useful tool, especially since display resolutions are going up much faster than graphics processing power. Doubling the screen resolution quadruples the processing requirements: if x is the required performance for 1920x1080 (2,073,600 pixels), you need 4x for 3840x2160 (8,294,400 pixels) and 16x for 7680x4320 (33,177,600 pixels). This means maintaining decent frame rates is going to become an absolute nightmare, if not outright impossible, without some sort of upscaling technology. An advanced pattern-based upscaling algorithm isn't trying to guess the future; it's trying to guess what a 1080p rendered image would look like at 2160p, or a 2160p rendered image at 4320p.
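Just to put numbers on that scaling argument, here's a quick back-of-the-envelope Python sketch. It assumes render cost scales roughly linearly with pixel count, which is a simplification (real workloads aren't purely pixel-bound), but it shows where the 4x and 16x figures come from.

```python
# Rough sketch: relative render cost per resolution, assuming cost
# scales linearly with pixel count (a simplifying assumption).

RESOLUTIONS = {
    "1080p": (1920, 1080),
    "2160p": (3840, 2160),
    "4320p": (7680, 4320),
}

base_pixels = RESOLUTIONS["1080p"][0] * RESOLUTIONS["1080p"][1]

for name, (w, h) in RESOLUTIONS.items():
    pixels = w * h
    print(f"{name}: {pixels:>10,} pixels -> ~{pixels / base_pixels:.0f}x the work of 1080p")

# Output:
# 1080p:  2,073,600 pixels -> ~1x the work of 1080p
# 2160p:  8,294,400 pixels -> ~4x the work of 1080p
# 4320p: 33,177,600 pixels -> ~16x the work of 1080p
```

Doubling both dimensions multiplies the pixel count by four each time, which is exactly why native 4K and 8K rendering gets so expensive so fast.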