We take a look at AMD's new FidelityFX Super Resolution feature.
AMD FidelityFX Super Resolution Image Quality Investigated : Read more
You mean like Nvidia's sharpening feature? Yeah, that already exists. So does Radeon Image Sharpening, which I think can be used via drivers. But this is supposed to do a bit more than simply upscale and sharpen (maybe).
Try not to spread bogus information. DLSS 2.0 integration shouldn't be any easier or more difficult than FSR integration. It's just putting in the links to a third party library, plus some UI stuff to enable selecting the various modes. Both should be a couple of days of effort for any competent developer.
But yes, DLSS is proprietary and FSR is 'open.' DLSS also tends to look better if you compare DLSS Quality to FSR Quality (Ultra Quality might be 'equivalent'), or DLSS Balanced to FSR Balanced, or Performance to Performance modes. The loss in image fidelity is very noticeable at the Balanced and Performance settings for FSR, less so for DLSS. Until we have a game that implements both, however, we can't say for certain how much faster FSR runs, or how much better DLSS looks. And unfortunately, given the nature of the business, I'd be pretty surprised to see many games implement both FSR and DLSS.
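For reference on what those mode names actually mean in pixels: AMD's published per-axis scale factors for FSR 1.0 are 1.3x (Ultra Quality), 1.5x (Quality), 1.7x (Balanced), and 2.0x (Performance). A quick sketch to see the internal render resolutions those imply at a 4K target (numbers here are just arithmetic on the published factors, not benchmark results):

```python
# AMD's published FSR 1.0 per-axis scale factors.
FSR_SCALE = {
    "Ultra Quality": 1.3,
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
}

def render_resolution(target_w, target_h, mode):
    """Internal resolution FSR renders at before upscaling to the target."""
    s = FSR_SCALE[mode]
    return round(target_w / s), round(target_h / s)

for mode in FSR_SCALE:
    w, h = render_resolution(3840, 2160, mode)
    print(f"{mode}: {w}x{h} -> 3840x2160")
```

So FSR Performance at 4K is really rendering at 1920x1080 internally, which is why the fidelity loss gets so noticeable at that setting.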
3 main reasons:

Why would you "think" it wouldn't look good?... Anyway, the really big deal about it is that it is far simpler and easier for developers to use in their games than DLSS2, and as the article mentions, it isn't proprietary. Since it covers a broad spectrum of GPUs and the quality is excellent, there doesn't seem much in the way of a decision for developers--sort of a no-brainer, imo, to use FSR.
Maybe my eyesight is getting bad in my old age, but even the Performance settings look like they'd be perfectly adequate when there's movement and action happening. It's easy to spot differences between that level and Native/Ultra-Quality in screenshots, but I'm not really sure how much I'd care during gameplay. The big thing for me is when these technologies allow me to play games that I might not otherwise be able to play on my current hardware (1070) at reasonable framerates. If the difference is struggling to play at 40-50 FPS natively vs. actually playing with slightly fuzzy or soft graphics at 60-70 FPS, I'll definitely take 60-70 FPS. Since DLSS isn't available on my hardware I'm incredibly happy that FSR works on the hardware I actually own. Now it just needs to be supported by games I want to play...
I meant more like when either DLSS or FFXSR will be integrated into the drivers. And from what I can gather, both DLSS and FFXSR work on the rendered frame with the only change in the input side being rendering resolution. It just feels kind of annoying when these features are touted as game changing, but the developer has to actually support it.
But that's the same as saying MSAA is (was?) not a game changer because devs have to implement it (or their engines support it).
It's like when NVIDIA saw potential in DX11's Deferred Context and eventually decided to just add it as a driver wide thing that happens automagically.
I'm not saying that DLSS or FFXSR are not game changing because they require developers to actively support them. I'm saying it's annoying that they're touted as game changing, but AMD and NVIDIA don't seem to be doing anything to make it so developers don't have to actively support them.
Whether or not the feature happens in a driver does not affect whether or not said feature can happen from things like ReShade. However, I would rather have it built into the drivers than have yet another thing to install.

Also, since it happens outside the driver, theoretically you could add it much like you can alter shaders and system calls for games via modding, in case the developer doesn't officially support it, no?
That's a strange way to look at it, as both DLSS and FSR do affect the rendering pipeline: they need to be inserted into it. If you run at native resolution, you don't touch either, unless I'm missing something here. Sure, it is different from implementing ray tracing or tessellation or any other visual effect like lighting or shadows, but they're not comparable IMO. I can't say you can't feel conflicted, but I do believe you're mixing two things that shouldn't be?
EDIT: I should point out I only feel this way because DLSS and FFXSR don't affect the core rendering pipeline. So I don't feel the same way about say ray tracing because that is a change in the core rendering pipeline.
Here you go:
https://explore.amd.com/en/technologies/radeon-software-fidelityfx-super-resolution/survey
AMD seems to want to prioritize which devs to approach, so why not give them a hand?
Regards,
That's an interesting point. While the current iteration of DLSS may have an edge at more moderate frame rates, it's possible that FSR could provide an edge for high frame rate gaming, due to the lower overhead. The best way to compare these two techniques would be to eventually test games that support both, and try to roughly match frame rates at a given native resolution using various levels of DLSS and FSR, then compare image quality at whatever levels those end up being for a given game. Performing some testing on more mid-range, GPU-limited graphics hardware might make sense too, especially for lower resolutions. Something like an RTX 2060, for example.

So if you're playing a game that comes in a bit short of 60 fps, DLSS and FSR can both get you into the fully smooth 60+ fps range. But if you're playing a game at 120 fps and you have a 240 Hz display, our experience is that DLSS won't generally scale that high: it becomes the limiting factor. On the other hand, FSR has no qualms about scaling to higher fps, and if you don't mind the loss of image quality, running in Performance mode often more than doubles performance. (So does running at 1080p instead of 4K.)
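The "limiting factor" effect is just frame-time arithmetic: an upscaling pass with a roughly fixed per-frame cost is cheap relative to a 16 ms frame, but dominates a 4 ms one. A back-of-the-envelope sketch (the millisecond figures below are made-up placeholders, not measured numbers):

```python
def fps(render_ms, upscale_ms=0.0):
    """Frames per second given scene render time plus a fixed upscale pass."""
    return 1000.0 / (render_ms + upscale_ms)

# Near 60 fps, a hypothetical 1.5 ms upscale pass barely registers:
base = fps(16.0)            # ~62 fps rendering natively
boosted = fps(8.0, 1.5)     # ~105 fps rendering half the pixels + the pass

# At high frame rates, the same fixed pass eats most of the budget:
fast = fps(4.0)             # 250 fps natively
fast_boosted = fps(2.0, 1.5)  # ~286 fps: only a modest gain despite
                              # halving the scene render time
print(base, boosted, fast, fast_boosted)
```

So halving the render workload nearly doubles frame rate at 60 fps, but barely moves the needle at 250 fps, which matches the observation that fixed-cost upscalers stop scaling on high-refresh displays.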
I'm not so sure. All it takes is a major game engine to integrate FSR support to make it even easier for developers to include in their games. There's no real reason why a game couldn't support multiple solutions, particularly since each has its own strengths and weaknesses depending on the hardware and settings being used. Aside from Nvidia probably encouraging them not to, to help push their newer cards, of course. DLSS isn't going to do much good for the majority of gamers that currently don't have compatible hardware to run it, though. Going by the Steam Hardware Survey, the number of people with RTX cards currently only amounts to around 17% of their userbase, so that remaining 83% is a pretty large market that could benefit from a better hardware-agnostic upscaling solution, especially since a lot of that will be lower-end hardware struggling to run newer games well at native resolution. Games often include the option for some form of upscaling, but it's often not particularly good, so having a decent standard to work off of could be helpful.
FSR and DLSS should be performing their upscaling before things like interface elements are drawn to the screen, allowing things like text to be rendered clearer and without artifacts at native resolution, as the developers have control over what it gets applied to. A universal, driver-based method would be applying the upscaling and sharpening after the final image is rendered, meaning those elements would be getting upscaled and sharpened too, which is less than ideal. And that's really something that's already largely covered by using the existing Radeon Image Sharpening or Nvidia's similar sharpening option combined with other forms of upscaling. It's possible that AMD will update Radeon Image Sharpening to behave more in line with the developer-integrated options though, even if it wouldn't work quite as well. Though unlike the built-in FSR feature, that also wouldn't be of much use to those without AMD hardware.
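The game-integrated vs. driver-level distinction comes down to where in the pipeline the HUD gets composited. A toy sketch (invented function, not a real graphics API, just illustrating the ordering):

```python
def hud_resolution(native, scale, game_integrated):
    """Resolution the HUD/text ends up drawn at.

    game_integrated=True models FSR/DLSS built into the game: the 3D scene
    is rendered low-res and upscaled first, then the HUD is composited on
    top at native resolution, so text stays crisp.
    False models a driver-level whole-frame upscale: the HUD is already
    part of the low-res frame, so it gets scaled (and sharpened) too.
    """
    render = (round(native[0] / scale), round(native[1] / scale))
    return native if game_integrated else render

print(hud_resolution((3840, 2160), 2.0, True))   # (3840, 2160): crisp text
print(hud_resolution((3840, 2160), 2.0, False))  # (1920, 1080): fuzzy text
```

That ordering requirement is exactly why a purely driver-based version can't match the developer-integrated result: the driver only ever sees the finished frame.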
In Nvidia's case, the original implementation of DLSS required AI training to be performed on a per-game basis, which likely required more time and effort to implement. The original DLSS was arguably worse than some existing forms of upscaling as well, and relatively few people had hardware capable of utilizing it, so there wasn't much incentive for developers to get on-board, aside from in Nvidia-sponsored titles. DLSS 2.0 has largely addressed most of those issues, though the number of systems capable of utilizing it is still in the minority. FSR has an advantage in that all graphics cards should be able to use it. Of course, if a game engine includes its own capable upscaling technique, like with the aforementioned UE5, games using it won't necessarily need to utilize FSR specifically to provide something similar or potentially better.

The one thing that remains to be seen is if AMD takes ages to add supported games like Nvidia did, or will they actually get a decent amount of games on board quickly, considering their current supported games are quite lackluster imo.
The order is deliberately mixed up to see if you can spot the differences and identify yourself which quality setting was used for each. The key to the image galleries is at the end of the article in the 'Initial Thoughts' section.

I think the images are not in ascending order of quality, at least the first ones. I opened them in order, but had to reorder the tabs to get them right. Could you please verify that?
The thing is, most people barely notice much difference between games rendered at 4K compared to 1440p at common screen sizes, at least when not carefully analyzing still-frames. But 4K requires significantly more hardware resources for what ultimately only amounts to a slightly sharper image. So if you can render a game at around 1440p, but upscale it to 4K, while applying algorithms to make it look near-indistinguishable from a scene natively rendered at 4K, that opens up more hardware resources for making the game look better in other ways. For example, with raytraced lighting effects, or more detailed environments, things that are likely to provide more noticeable improvements to visuals than just a slight increase in sharpness. So it's not so much "compromising image quality", but rather shifting image quality from areas that matter less to those that matter more, allowing games to look better on a given level of hardware, not worse.
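The "significantly more hardware resources" claim is easy to quantify: pixel count scales with width times height, so 4K shades 2.25x as many pixels as 1440p. A quick check of the arithmetic:

```python
def pixels(w, h):
    """Total pixels shaded per frame at a given resolution."""
    return w * h

uhd = pixels(3840, 2160)   # 4K: 8,294,400 pixels
qhd = pixels(2560, 1440)   # 1440p: 3,686,400 pixels
print(uhd / qhd)           # 2.25x the per-frame pixel workload
```

That 2.25x gap is the budget that upscaling frees up for ray tracing, denser environments, and the like.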
I am really surprised that everyone seems to forget what these technologies do in the end. Compromise image quality to get performance...
If we cannot discern image quality while moving, in action, etc., then why is it there in the first place? Were all those shiny graphics drawn in vain?
These technologies should be a search for better image quality with less performance impact, not sacrifice image quality for playable performance.
If we cheer too much for this, I fear we may push AMD and Nvidia in the wrong direction. Imagine that with some DLSS 3 or FSR Pro you always get 200 fps... by reducing quality, removing some stuff from the scene? Will we be happy?
These shouldn't be the leading factor for GPUs.
I totally agree. The OP has a weird way of looking at it.

Of course we will be happy... You need to remember that not everyone has the money to buy even cards like a 3060 Ti/3070, let alone a 3080/3090.
Reducing quality? How many people today have to put image quality settings to medium or low in their games because their GPU is not fast enough?
I think with AMD's implementation, it's not just driver based because the game renders the game 'content' upscaled, then allows the HUD interface to be rendered at native resolution. A multi-step process.
Honestly I'm just wondering something: How come none of this is driver based yet?
That would be the killer feature.