News Cyberpunk 2077 gets triple the framerate using an unholy combination of Nvidia and AMD tech — Nvidia DLSS Frame Generation plus AMD Fluid Motion Frames

We know that both frame generation technologies don't leave visuals 100% intact, and combining two of them seems like a recipe for disaster.
I'd expect more artifacts than using just DLSS Frame Generation, but it's probably not much worse. The reason is that the differences between frames decrease as you increase the frame rate. The more similar they are, the easier it is to interpolate between them. So, if your base framerate is high, then both technologies probably work pretty well.

Plus, user input latency should be atrocious.
Again, I'm going to call this into question. Both technologies add a delay on the order of a couple of frames. The higher the input frame rate, the less delay (in milliseconds) they'll add.

I'm not saying I'd like to use such a setup, but if your game is playable with either one, having both probably isn't much worse.
 
Again, I'm going to call this into question. Both technologies add a delay on the order of a couple of frames. The higher the input frame rate, the less delay (in milliseconds) they'll add.

I'm not saying I'd like to use such a setup, but if your game is playable with either one, having both probably isn't much worse.
Glad you brought this up. My big question is the input latency. From what I've read of both techs, DLSS 3 has a lot less latency than AMD's Fluid Motion Frames. I've heard AMD's implementation is really rough on latency (some reviews were more positive, to be fair, and I have yet to try Fluid Motion Frames at this point to test it myself). I'd be curious to see input latency tested with both solutions active. If it's low enough, I could see myself trying the same thing and combining both techs for gameplay.
 
I'm thinking the key is to generate so many frames that any abnormalities won't be noticeable to the human eye. At 500+ FPS, there's no way your brain will be able to detect inconsistencies between frames.
That's true if the anomalies aren't consistent from frame to frame and are somewhat evenly distributed. However, this usually isn't the case. Frame interpolation algorithms tend to make systematic errors on certain kinds of content.

For instance, my TV has a built-in motion smoother that's normally very good. However, there are some cases it consistently struggles with, for instance a person walking in front of a regularly-patterned background, such as a brick wall. There's a sort of halo around the person, where the bricks appear jumbled up and jumping around. It sticks out like a sore thumb. Yet, I leave motion smoothing on the maximum setting, because the effect normally delivers such a big improvement in how clearly camera pans or moving objects appear.
 
From what I've read of both techs, DLSS 3 has a lot less latency than AMD's Fluid Motion Frames. I've heard AMD's implementation is really rough on latency (some reviews were more positive, to be fair, and I have yet to try Fluid Motion Frames at this point to test it myself).
Other than reading through the details on DLSS Frame Generation, when it first launched, I haven't followed either tech.

I like motion smoothing so much that, on my PS3, I would enable it on my TV for a couple of games that didn't require fast-twitch reaction times. If I played PC games and had a card that supported it, I'd probably try pretty hard to use it, unless it was just untenable. The latency on my TV was really bad, since I couldn't use its "Game Mode" if I wanted motion smoothing. Game Mode bypasses other processing the TV does that adds latency, so that extra processing latency stacked on top of what the motion smoother itself added. Yet I still enabled it, because I'm such a framerate junkie.
 
It might look good, but the reaction time wouldn't match. Both of them add latency, so it depends.
Allow me to use a numerical example to make my case.

Let's say each doubles the frame rate, but adds a frame worth of latency, at the input frame rate. If you start with 30 fps, then the first stage adds 1/30th of a second latency (i.e. 33.3 ms) and outputs 60 fps. Then, the second stage adds a further 1/60th of a second of latency (i.e. 16.7 ms) and outputs 120 fps. Now, we're up to 50 ms of latency. Not great, but better than the 66.7 ms you might have assumed.

Note that this was only hypothetical, but I think it's a fair characterization of how stacking would impact latency.

I should add that stacking should be completely unnecessary. If you're doing frame interpolation, then you should be able to interpolate more than one frame. There might be practical reasons why the hardware lacks the resources to do it, but there's no logical reason why you shouldn't be able to just have DLSS interpolate a 30 fps stream to 120 fps. In that case, you'd avoid the second-stage latency altogether.
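In case it helps, here's a minimal sketch of that stacking idea in Python. It's my own toy model of the hypothetical above (the function name and numbers are made up for illustration), not anything either vendor publishes about how their frame generation actually schedules frames:

```python
def added_latency_ms(base_fps, stage_multipliers):
    """Toy model from the post above: each interpolation stage multiplies the
    frame rate and adds one frame of latency at the rate it receives as input."""
    fps = base_fps
    added_ms = 0.0
    for mult in stage_multipliers:
        added_ms += 1000.0 / fps   # one input-frame of delay for this stage
        fps *= mult                # this stage raises the frame rate
    return fps, added_ms

# Two stacked 2x stages on a 30 fps base: 30 -> 60 -> 120 fps,
# ~33.3 ms + ~16.7 ms = ~50 ms added, not the 66.7 ms you might assume.
print(added_latency_ms(30, [2, 2]))

# A single stage interpolating 30 fps straight to 120 fps would, under the
# same assumption, only add the one ~33.3 ms input-frame delay.
print(added_latency_ms(30, [4]))
```

Under this model, the single-stage 30-to-120 case pays only the first stage's delay, which is the point about stacking being unnecessary.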
 
Fluid Motion Frames doesn't require support to be built into the game. I've been using it since the first beta that brought support for 6000-series cards was released. It pretty much tripled the frame rate in Alan Wake 2, from 60 to 180, with the obviously expected ups and downs depending on the scene, but it's clearly working as intended.
Fluid Motion Frames is so new that I haven't got any games that would use it. It's not in the Adrenalin drivers yet unless you get a beta pack, so I can't comment on either of their effects.
The driver has been stable enough that I haven't bothered upgrading from the second beta to the third yet.
 
So they make fake frames from fake frames...
Yeah, that part's not ideal for the reasons mentioned.

Just wait for DLSS 5.0 to do the same with one GPU... Of course, it will have to be an Nvidia 5000-series GPU, but who cares...
🤣 🤣 🤣
I don't see the problem with simply having DLSS interpolate more frames. Assuming the quality is good, of course. But, whether you insert 1 or 3 frames doesn't affect latency, which seems to be most people's concern.
 
I'd expect more artifacts than using just DLSS Frame Generation, but it's probably not much worse. The reason is that the differences between frames decrease as you increase the frame rate. The more similar they are, the easier it is to interpolate between them. So, if your base framerate is high, then both technologies probably work pretty well.


Again, I'm going to call this into question. Both technologies add a delay on the order of a couple of frames. The higher the input frame rate, the less delay (in milliseconds) they'll add.

I'm not saying I'd like to use such a setup, but if your game is playable with either one, having both probably isn't much worse.
I actually don't doubt it on the quality front. DLSS Frame Generation needs about 90 real FPS for the fake frames to blend in; otherwise you get noticeable artifacts that stay on screen for too long. But here you're talking about 2 fake frames per real one, so to keep the gap between real frames similar to what it is at 90 FPS while covering 2 fake frames, you would need double the real framerate. Sure, if you squint your eyes enough not to notice the details, it works, but the same is true for really low resolutions too.
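To put rough numbers on that argument, here's a small sketch. It's my own framing (assuming evenly spaced generated frames and the ~90 FPS figure above), showing how long the display spends on generated rather than rendered content within each gap between real frames:

```python
def generated_ms_per_gap(real_fps, fakes_per_real):
    """Within each gap between two real frames, how long the display shows
    generated rather than rendered content (assuming even spacing)."""
    gap_ms = 1000.0 / real_fps
    return gap_ms * fakes_per_real / (fakes_per_real + 1)

print(generated_ms_per_gap(90, 1))   # ~5.6 ms of generated content per gap
print(generated_ms_per_gap(90, 2))   # ~7.4 ms: generated frames linger longer
print(generated_ms_per_gap(180, 2))  # ~3.7 ms: a higher real rate shrinks it
```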

As for latency, there's a reason esports players in top leagues don't use this. Sure, Nvidia is battling it with Reflex, which, granted, is successful, but you know what's better? Reflex without fake frames. :-D Still, as far as latency goes, I do think you could keep it manageable for single-player games. However, with fake frames you will still have a disconnect between how responsive the game feels and how fluid it looks. I am by no means an esports player, or especially sensitive, and even I noticed it. It was much less of an issue for me, but I figured I prefer just upscaling, because at 90 FPS I don't really need more for single-player. And in the cases where it would be handy, I can see the fake frames.
 
I don't see the problem with simply having DLSS interpolate more frames. Assuming the quality is good, of course. But, whether you insert 1 or 3 frames doesn't affect latency, which seems to be most people's concern.
The artifacts are localized, so the AI should be able to predict which parts of the screen are likely to generate artifacts and actually calculate only those parts.

After that's done, it should be possible to use the same algorithm to extrapolate frames, calculating only the critical parts. That would simultaneously increase the framerate, reduce artifacts, and reduce latency.
 
I actually don't doubt it on the quality front. DLSS Frame Generation needs about 90 real FPS for the fake frames to blend in; otherwise you get noticeable artifacts that stay on screen for too long. But here you're talking about 2 fake frames per real one, so to keep the gap between real frames similar to what it is at 90 FPS while covering 2 fake frames, you would need double the real framerate.
I disagree that you need any more real frames. The higher your framerate is, the more similar consecutive frames become and the easier it is to interpolate between them. That's probably a lot of what's behind the "90 FPS" number. I'd expect that you should be able to insert an arbitrary number of frames at 90 Hz, with hardly any more visible artifacts than just 1.
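As a toy illustration of the "frames get more similar" point (my own made-up numbers, not measured data): the higher the real frame rate, the less an object moves between consecutive real frames, so there's less for the interpolator to guess.

```python
def motion_per_frame_px(speed_px_per_s, real_fps):
    """How far a moving object travels between two consecutive real frames.
    Smaller gaps mean less ground for the interpolator to fill in."""
    return speed_px_per_s / real_fps

# An object sweeping across a 4K-wide screen in one second (3840 px/s):
print(motion_per_frame_px(3840, 30))   # 128 px between real frames
print(motion_per_frame_px(3840, 90))   # ~43 px between real frames
print(motion_per_frame_px(3840, 180))  # ~21 px between real frames
```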
 
The artifacts are localized, so the AI should be able to predict which parts of the screen are likely to generate artifacts and actually calculate only those parts.

After that's done, it should be possible to use the same algorithm to extrapolate frames, calculating only the critical parts. That would simultaneously increase the framerate, reduce artifacts, and reduce latency.
I think embedding AI deeper in the rendering loop is something we're going to see really soon.
 
It's not the first time an AMD and an Nvidia GPU have been combined in the same system; it used to be more common back in the days when PhysX was a thing. You would have your AMD card as the primary and an Nvidia card, like an 8800 GTX or something, handling the PhysX compute.
 
Did the writer really say DLSS frame generation isn't supported in Starfield? I mean, it has been implemented in the game since patch 1.8.86 on November 20th (and a couple of weeks earlier with the beta on Steam).
 