Upscaling is here to stay, and is continuing to improve. It's not 2019 anymore, when you could just dismiss it out of hand. 540p to 1080p would be great for battery-saving handhelds.

> Something not worth "catching up" to IMO, hence my point
Considering AMD is behind on this type of feature and other software/hardware-based features, it's a good thing. Nvidia dominates with DLSS, and Intel is pretty good too. Just because it has the AI term in it doesn't negate the improvement we will see as consumers.

> Ok, so it's AI-based. Will RDNA3's AI-focused matrix extensions allow it to run there without discrete matrix accelerators?
If you honestly think there's a significant difference between DLSS, FSR, and XeSS, then you've fallen victim to the hype. Nvidia is ahead because they generally do everything better, not because of DLSS. All three upscalers get so much information from sub-pixel detail and motion vectors that they can all upscale a 1440p image to 4K and get a better image than native 4K coming straight out of the ROPs. Ada beats RDNA3 in every way, but all modern upscaling is so good it doesn't matter which one you use. AI-based models only really have benefits over FSR at the lower end of the resolution spectrum.

> Considering AMD is behind on this type of feature and other software/hardware-based features, it's a good thing. Nvidia dominates with DLSS, and Intel is pretty good too. Just because it has the AI term in it doesn't negate the improvement we will see as consumers.
These kinds of features being so far behind the competition is why I won't consider a Radeon card. If I'm paying a large chunk of money for a GPU, I want all the bells and whistles. Whether I end up liking the visuals in real-world use isn't the point. It's having the options.
So for AMD to be focused on this and also RT in RDNA4 is freaking good! It will improve our gaming experience as it matures, and if it catches up to Nvidia and Intel, then so much the better!
You obviously haven't actually seen the implementations in the same games. There are all sorts of random artifacting issues, and FSR is usually the worst; the most common one that still happens with FSR is ghosting. There's a limit to how good they can get it without a specific hardware implementation, which is why there are two versions of XeSS, with the generic DP4a version not being as good as the XMX version on Intel's own GPUs. Both FSR and XeSS have been getting a lot better overall, so it stands to reason AMD is doing this for a good reason, and hopefully they'll continue developing the current FSR alongside it.

> If you honestly think there's a significant difference between DLSS, FSR, and XeSS, then you've fallen victim to the hype. Nvidia is ahead because they generally do everything better, not because of DLSS. All three upscalers get so much information from sub-pixel detail and motion vectors that they can all upscale a 1440p image to 4K and get a better image than native 4K coming straight out of the ROPs. Ada beats RDNA3 in every way, but all modern upscaling is so good it doesn't matter which one you use. AI-based models only really have benefits over FSR at the lower end of the resolution spectrum.
You missed what I'm talking about. But honestly, DLSS and XeSS are better. That's not my main point, though: if I'm going to spend, say, ~$800 USD on a GPU, I want the best I can get for my money, not the one trailing behind (whether by a little or a lot).

> If you honestly think there's a significant difference between DLSS, FSR, and XeSS, then you've fallen victim to the hype. Nvidia is ahead because they generally do everything better, not because of DLSS. All three upscalers get so much information from sub-pixel detail and motion vectors that they can all upscale a 1440p image to 4K and get a better image than native 4K coming straight out of the ROPs. Ada beats RDNA3 in every way, but all modern upscaling is so good it doesn't matter which one you use. AI-based models only really have benefits over FSR at the lower end of the resolution spectrum.
You're forgetting that 4K displays up to 240Hz and 1080p displays at 500+ Hz exist. And also, while there's a big delay in reacting to something seen on screen, I am absolutely sure people who play a lot of games are much faster than 250ms. I can absolutely feel the difference between framegen and native rendering, and the additional input lag (two frames relative to the generated fps) when a game is running at <80 fps with framegen. So that means that I, a 50-year-old, can perceive a 25~37.5 ms increase in latency quite easily.

> I'd like to see something where you take the last rendered frame, sample user input, and predictively generate a next frame from that. Or at the very least something like asynchronous time warp coupled to AI to get user input right before the generation of a new frame.
I don't follow. Framegen (FG) can't be predicated on user input, because user input is much slower. At 60 FPS, latency is 16.7ms per frame. Per Google, the average gamer's response time is ~250ms, or about 15 frames' time before the response is registered.
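To put rough numbers on this exchange, here's a quick back-of-the-envelope sketch in Python. The ~250 ms reaction time and the two-frames-of-lag figure come from the posts above, not from measurements:

```python
# Quick latency arithmetic for the discussion above.
# Assumptions from the thread (not measurements): average reaction time
# ~250 ms, and interpolation-based framegen adds roughly two frame-times
# of input lag relative to the generated frame rate.

def frame_time_ms(fps: float) -> float:
    """Duration of one frame in milliseconds at a given frame rate."""
    return 1000.0 / fps

# A 250 ms reaction expressed in 60 fps frames: ~15 frames.
print(250.0 / frame_time_ms(60))          # ~15.0

# Two frames of added lag at various *generated* frame rates.
for fps in (80.0, 60.0, 53.3):
    print(f"{fps:5.1f} fps -> +{2 * frame_time_ms(fps):.1f} ms")
    # 80 fps -> +25.0 ms, 60 fps -> +33.3 ms, 53.3 fps -> +37.5 ms
```

The last loop also reproduces the 25~37.5 ms range mentioned above for framegen running below 80 generated fps.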
You're totally missing the point here. In your example, it would be beneficial to have an intermediate frame 1.5, sure. The problem is that with framegen, that frame 1.5 will now show up on the display after the PC has already rendered frame 2, and is in fact rendering frame 3 so that it can generate frame 2.5. So now you can see, "Oh, I should have dodged four frames ago!" instead of "Oops, I should have dodged one frame ago."

> Basically, we need something that responds to user input somehow rather than just interpolating between two rendered frames for this to feel better and not just look better.
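A toy timeline makes that ordering problem concrete. This is a sketch under the assumption stated in the post above (interpolation-based FG must hold real frame N until frame N+1 has been rendered); the numbers are purely illustrative:

```python
# Toy model: the renderer finishes a real frame every RENDER_MS.
# Interpolated framegen can only build frame N+0.5 after frame N+1
# exists, so each real frame reaches the display roughly one render
# interval later than it would without FG.

RENDER_MS = 33.3  # real frames at ~30 fps (illustrative)

for n in range(4):
    rendered = n * RENDER_MS
    shown_native = rendered            # could be displayed immediately
    shown_fg = (n + 1) * RENDER_MS     # held until frame n+1 is done
    print(f"frame {n}: rendered {rendered:5.1f} ms | "
          f"native display {shown_native:5.1f} ms | "
          f"FG display {shown_fg:5.1f} ms")
```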
I agree in part. Yes, more responsiveness (i.e., decreased latency) is desirable, but a higher framecount by itself is also beneficial, because it lets you see more information in a given amount of time. It's not just for looking better.
Ex: Somebody is swinging a sword at you. In frame 1, he starts the swing; you can't dodge because you don't know where it'll land. In frame 2, the swing appears immediately above your head; you can't dodge because it's too late. If there's an intermediate frame 1.5 that shows the sword in mid-swing, you can conceivably dodge it. This is independent of reaction time.
You assume wrongly. I'm talking about high FPS delivering better results in competitive shooters. Granted, that's often super-light stuff like Counter-Strike or Overwatch, but Nvidia was showing that having a high-refresh-rate display with a high-FPS game allowed you to respond better.

> Nvidia has demonstrated the benefits of higher FPS (without using framegen, mind you!) on 120Hz, 240Hz, 360Hz, and 480Hz monitors... What's funny is that all of the "frames win games" marketing faded away when Ada Lovelace came out and offered framegen — because Nvidia itself knows that this is walking back on responsiveness!
I assume you're referring to DLSS upscaling. Recall that when DLSS 1.0 came out in late 2018, it was obviously a work in progress. 2.0 got upscaling to "workable" in 2020. DLSS 3 (2022) improved upscaling further, to the point where it is now considered integral to performance, as you pointed out above. FG now is where DLSS 1.0 was in 2018.
Fundamentally, they're related, sure, but they take very different paths to get where they end up. If we're running a race, upscaling is like getting everyone to run faster; framegen is like calling a car to give you a ride to the finish line.

> I see FG as the logical continuation of upscaling. Whereas upscaling extends native rendering spatially (intraframe), FG extends native rendering temporally (interframe). The concept is the same. The key to FG being viable is to reduce latency to a "usable" threshold. Upscaling has reached that point; FG has not. My bet is that it will.
Spatial scaling has existed forever, and temporal scaling has existed for a long time as well; it's nothing new, and its benefits and limitations have been known for a long time. It's just been overhyped w/ branding recently due to a band-aid being needed for increasingly poorly-performing games. The only thing that's actually new is "optical flow" upscaling; and technically the usage of AI to "enhance" other forms of scaling as well, but the benefits of the latter are minor and come w/ a bit of their own issues.

> Upscaling is here to stay, and is continuing to improve. It's not 2019 anymore, when you could just dismiss it out of hand. 540p to 1080p would be great for battery-saving handhelds.
I use every upscaling option available on an Nvidia GPU in every game. I often find I prefer the extra performance of FSR over DLSS in CPU-bound titles.

> You obviously haven't actually seen the implementations in the same games. There are all sorts of random artifacting issues, and FSR is usually the worst; the most common one that still happens with FSR is ghosting. There's a limit to how good they can get it without a specific hardware implementation, which is why there are two versions of XeSS, with the generic DP4a version not being as good as the XMX version on Intel's own GPUs. Both FSR and XeSS have been getting a lot better overall, so it stands to reason AMD is doing this for a good reason, and hopefully they'll continue developing the current FSR alongside it.
Eh... spatial upscaling has a LOT of negatives. Specifically, it just doesn't look anywhere near as good as native rendering. If you upscale from maybe 90% it can come close, and if you start with a poor baseline (like TAA 100% scaling with overly aggressive blur) it might be "better." But FSR1 and various other spatial upscaling algorithms — including DLSS 1.x — just never looked that great.

> Spatial scaling has existed forever, and temporal scaling has existed for a long time as well; it's nothing new, and its benefits and limitations have been known for a long time. It's just been overhyped w/ branding recently due to a band-aid being needed for increasingly poorly-performing games. The only thing that's actually new is "optical flow" upscaling; and technically the usage of AI to "enhance" other forms of scaling as well, but the benefits of the latter are minor and come w/ a bit of their own issues.
Spatial upscaling has no real negative side-effects other than the slight blur of fractional (non-integer) scaling (partially offsettable w/ sharpening, which has its own issues), while temporal scaling of *any* kind has negative side-effects that are endemic to how it works.
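The "slight blur" falls straight out of how purely spatial scaling works: output pixels that land between source pixels have to be invented by averaging their neighbors. A minimal bilinear upscaler in Python/NumPy illustrates this (a toy sketch; real spatial upscalers like FSR1 layer edge-adaptive filtering and sharpening on top of the same resampling idea):

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, scale: float) -> np.ndarray:
    """Upscale a 2D grayscale image by blending the 4 nearest source pixels."""
    h, w = img.shape
    out_h, out_w = round(h * scale), round(w * scale)
    # Map each output pixel back to a (usually fractional) source coordinate.
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                 # vertical blend weights
    wx = (xs - x0)[None, :]                 # horizontal blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy        # the averaging is the blur

# A hard edge turns into a soft ramp after a 1.5x (fractional) upscale.
edge = np.array([[0., 0., 1., 1.]] * 4)
print(bilinear_upscale(edge, 1.5).round(2))  # rows become [0 0 0.2 0.8 1 1]
```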
Isn't "optical flow" basically just fancy talk for the vectors used in upscaling/decoding/encoding, with a little extra info thrown in?

> Eh... spatial upscaling has a LOT of negatives. Specifically, it just doesn't look anywhere near as good as native rendering. If you upscale from maybe 90% it can come close, and if you start with a poor baseline (like TAA 100% scaling with overly aggressive blur) it might be "better." But FSR1 and various other spatial upscaling algorithms — including DLSS 1.x — just never looked that great.
And "optical flow upscaling" isn't a thing. The Optical Flow Accelerator in the RTX 40-series is exclusively used for frame generation, not upscaling. (It can also do some stuff for video, but that's a different topic. The OFA has been around since RTX 20-series, but it has become substantially more potent over time, and Nvidia hasn't really divulged much about where the OFA helped out before other than a nebulous "in video" comment.)
Yes, temporal upscaling has an "optical flow" component, but the 40-series specifically has an "Optical Flow Accelerator" that handles framegen. I'm not totally certain, but I think it just takes two frames and computes an optical flow from that. Or it's possible that it gets the depth buffer and motion vectors from the game as well… but given that modders have stuffed DLSS 3 framegen into games that don't already have DLSS support, I think it was designed as a black box that handles everything.

> Isn't "optical flow" basically just fancy talk for the vectors used in upscaling/decoding/encoding, with a little extra info thrown in?
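As a concrete toy version of "takes two frames and computes an optical flow from that", here's a minimal sketch in Python with OpenCV: estimate dense flow between two consecutive frames, then warp the first frame halfway along the vectors to fake an in-between frame. This is only the naive interpolation idea, not how DLSS 3's black box actually works (real framegen also has to handle occlusion, disocclusion, and UI elements); the filenames are placeholders:

```python
import cv2
import numpy as np

# Two consecutive rendered frames (placeholder filenames).
prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
nxt = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Dense per-pixel motion from frame 0 to frame 1 (Farneback's method);
# flow[y, x] is the (dx, dy) displacement of the pixel at (x, y).
flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Pull each output pixel from halfway back along its motion vector.
# (Crude: assumes the flow is locally smooth and ignores occlusion.)
h, w = prev.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
mid = cv2.remap(prev, map_x, map_y, cv2.INTER_LINEAR)

cv2.imwrite("frame0_5.png", mid)  # the synthesized in-between frame
```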