News Nvidia RTX 4000 and 5000 series owners instructed to lower quality settings in Hell is Us demo — Developer urges users to turn down settings and di...

It's not really "imaginary" performance: it DOES appear smoother, at the current cost of noticeable input lag. The frames ARE imaginary for the most part, but it does "perform" better in terms of visually perceived framerate. What you see, not what is truly going on.

If they ever determine that the majority of gamers prefer less-smooth frames over input lag, they won't develop the technology any further, since it wouldn't be worth it.
 
Yeah... this is patently untrue, and I challenge you to actually cite a source that shows AMD has had worse drivers. When you look at driver history, multiple crash studies have shown that GeForce cards actually cause more issues and have more crashes/problems.

AMD has had fewer features, and the Navi RDNA 1 GPUs had black-screen issues when they came out, which was actually a major hardware issue, not a driver issue. But alas, the drivers were blamed, and since the cards were less popular, the "worse drivers" urban myth set in.

Worse being fewer features... sure
Worse for stability/system issues... no
I cite my past experiences with ATI, having grown up in the 90s.
Wouldn't be that hard to get sworn affidavits from my friends who also used ATI in the 90s lol.

Do note how the original post used words like "historically" and "had".

https://forums.anandtech.com/threads/question-how-did-ati-get-this-rap-for-bad-drivers.437090
https://community.khronos.org/t/ati-drivers-still-suck/38734

Their graphics drivers have come a long way in 30 years.
I've no doubt AMD put in the effort for Navi, which was released in 2019 lol.
 

Yes, it's smoother for the same reason motion blur makes things smoother: frame interpolation.

It's just a smarter way to do motion blur, which is a cool animation smoothing technique but we don't give 50% higher "FPS" scores when someone turns it on.
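To make the analogy concrete, here's a toy sketch (hypothetical code, not Nvidia's actual algorithm): an interpolated frame is essentially a weighted blend of two real frames, much like motion blur averages neighboring positions.

```python
# Toy frame interpolation: an in-between frame is a blend of two
# real frames (illustrative only; real frame generation uses motion
# vectors and ML models, not a plain per-pixel lerp).

def lerp_frame(frame_a, frame_b, t):
    """Blend two frames; t=0 gives frame_a, t=1 gives frame_b."""
    return [a + (b - a) * t for a, b in zip(frame_a, frame_b)]

frame1 = [0, 100, 200]   # pixel brightness values of real frame 1
frame2 = [50, 150, 250]  # pixel brightness values of real frame 2

# Generate one intermediate frame halfway between them.
frame1_5 = lerp_frame(frame1, frame2, 0.5)
print(frame1_5)  # [25.0, 125.0, 225.0]
```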
 
I don't feel like we are on the same page here. I'm not arguing that it gives uncompromising benchmark advantages, just that it's more pleasing to the eye in most situations than watching a stutterfest. At some point, yes, I agree, the input lag can become detrimental, but if they do a good job of balancing it, it has worked well for me.

Of course, you'd probably want to turn it off in online twitchfest games for that lag reason.
 
There is no need to defend your own choices, technology doesn't care.

There is zero "performance" difference, is my point. Reviewers tend to use "FPS" as a benchmark for how "good" a graphics product is; Nvidia sees this and creates a frame interpolation method that triggers the "frame ready" function. This creates the illusion of "MOAR FPS".

https://en.wikipedia.org/wiki/Motion_interpolation

These techniques have been around a while; Nvidia just invented one that is faster, though it creates more artifacts.
 
Wha? What are you talking about?

If the frame gen didn't provide a benefit to the end user over just letting the "real" stutterfest of native output happen, whether you choose to believe it or not, nobody would use it and nobody would develop the technology.

"Performance" does not have to mean actual FPS.
 
Hm... That's an interesting point: would any form of Frame Generation be able to hide stuttering?

Just thinking "out loud", but it wouldn't make sense to me that FG can hide stuttering. If the GPU or CPU can't calculate the next frame and holds the pipeline, then obviously you can't generate the in-between frames. Is that logically sound as an argument?

Well, if I'm missing something, just tell me.

Regards.
 
It's far less work for the GPU to "guess" the next frame without taking into account input from the game engine than it is to render an actual frame.

That's the reason (very much simplified) that you can get more frames, so your game looks smooth (enough fps) but feels laggy (because a number of frames are not related to your input).

That said, if your GPU is fast enough to render enough frames to begin with for a smooth game, but you want more for a high-refresh-rate screen, then it can look AND feel smooth, but basically only in situations where you could already run the game pretty well.
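The arithmetic behind that trade-off can be sketched roughly (assumed numbers, simplified model): frame generation raises the displayed frame rate, but input is still only sampled on the real rendered frames.

```python
# Simplified latency arithmetic (assumed numbers, not vendor specs):
# 2x frame generation doubles what the FPS counter shows, but input
# is still sampled only when a real frame is rendered.

render_fps = 60                          # frames the GPU actually renders
gen_factor = 2                           # one generated frame per real frame
displayed_fps = render_fps * gen_factor  # what the "FPS" counter reports

input_interval_ms = 1000 / render_fps       # input still tied to real frames
display_interval_ms = 1000 / displayed_fps  # how often a frame hits the screen

print(displayed_fps)                  # 120
print(round(input_interval_ms, 2))    # 16.67
print(round(display_interval_ms, 2))  # 8.33
```

So the screen updates twice as often, but your inputs still land at the slower, real-frame cadence; that gap is the "looks smooth, feels laggy" effect.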
 
No... Not really?

In order to interpolate, you need 2 things: the current/previous frame and the next frame. If the future frame is stuck "in transit", then Frame Generation just won't have a reference point, unless it extrapolates, which is something current tech is not doing. The whole reason for stuttering, well, one of the reasons, is that the "next" frame is stuck and not coming out fast enough because the engine is taking too long to calculate or whatever.

Regards.
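That gating effect can be sketched in a few lines (illustrative assumption: a purely interpolating frame generator, as described above, with no extrapolation):

```python
# Sketch of the gating argument: interpolated frames between real frame
# N and N+1 cannot exist before N+1 has been rendered, so a stalled real
# frame stalls the generated ones too (assumes pure interpolation).

def interpolation_ready_times(real_frame_times, factor=2):
    """Earliest time each generated frame could be produced: every
    in-between frame in a gap is gated by the *later* real frame."""
    ready = []
    for prev_t, next_t in zip(real_frame_times, real_frame_times[1:]):
        ready.extend([next_t] * (factor - 1))  # all intermediates wait on next_t
    return ready

# Real frames at 0, 16, 33 ms... then a stutter delays the next to 133 ms.
times = [0, 16, 33, 133]
print(interpolation_ready_times(times))  # [16, 33, 133]
```

The generated frame for the stuttered gap isn't available until 133 ms either, which is why interpolation can't hide a true pipeline stall, only redistribute frames within a gap once both endpoints exist.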
 

Almost.

Frame interpolation generates intermediate frames in between two existing frames.

You render frame 1 and store it in a buffer
You render frame 2 and also store it in a buffer
Then you render intermediate frames 1.1, 1.2 and 1.3 and store them in another buffer
Then you send Frames 1, 1.1, 1.2, 1.3 and 2 to the display evenly paced out.
Then you render Frame 3 and store it in the same buffer Frame 1 was in
Then you render Frames 2.1, 2.2, 2.3 and put them in a buffer
Then you send Frames 2.1, 2.2, 2.3 and 3 out to the display

It does "smooth" things out because it is holding frames while rendering the intermediary ones, then sending the batch out. MFG does not "predict the future"; it's just rendering additional intermediary frames in between two frames that have already been rendered.
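The sequence above can be sketched as a small helper (illustrative only; buffer management and timing are omitted, and the 3-intermediates case matches 4x MFG):

```python
# Emit frames in the order described above: each real frame N is
# followed by its generated intermediates N.1 .. N.k, which sit
# between N and the already-rendered frame N+1.

def mfg_output_order(num_real_frames, intermediates=3):
    out = []
    for n in range(1, num_real_frames):
        out.append(f"{n}")                 # real frame N
        for k in range(1, intermediates + 1):
            out.append(f"{n}.{k}")         # generated frames N.1 .. N.3
    out.append(f"{num_real_frames}")       # final real frame
    return out

print(mfg_output_order(3))
# ['1', '1.1', '1.2', '1.3', '2', '2.1', '2.2', '2.3', '3']
```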
 
Does FrameGen work exclusively with VSync turned on then?

I don't see how either the engine or the driver could hold the "ready frame" in the buffer unless there's VSync, which, to me at least, makes it a no-go option.

Also, another thing I thought of is that the moment the engine has the "frame ready" may differ from when the frame is sent to the display, due to other post-processing things you can do via the driver and such, which is related to what you said, so I had some inkling of what you mentioned.

Regards.
 
It is completely separate from Vsync, though it would be kind of dumb to use it without Vsync, as you'd be doing that extra work only to throw it away.

The GPU has different functional areas, and one of them is responsible for taking what's inside a frame buffer and sending it out to a display. Most systems operate with double buffering, meaning you have two frame buffers that you alternate between: once a frame is finished, it's stored in one and "Frame Ready" is set, and while the card works on the next frame, that first one is transmitted out to the display.

The display output part can be very aggressive, as it operates at the display's refresh timing and will grab whatever is in its frame buffer at that moment (screen tearing) unless told to wait (v-sync). You can smooth this out by enabling a third frame buffer, known as triple buffering. Of course there are trade-offs, and it should only be used on displays with over a 100 Hz refresh rate, as it'll introduce another 16 ms worth of input lag on a 60 Hz display.

What MFG is doing is ordering all these frames prior to them hitting the frame buffer; "Frame Ready" means it's ready to be sent to the buffer. This is also why DLSS MFG consumes about 1.5GB of additional VRAM: it needs that space for the additional buffers and model data to interpolate the frames, since it's significantly more complicated than the shader method usually employed.
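The 16 ms figure follows from simple arithmetic (simplified model, my assumption: each extra queued buffer adds roughly one refresh interval of latency):

```python
# Why triple buffering hurts more on low-refresh displays: each extra
# queued buffer can delay a frame by about one refresh interval
# (simplified model; real pipelines vary).

def added_lag_ms(refresh_hz, extra_buffers=1):
    return extra_buffers * 1000 / refresh_hz

print(round(added_lag_ms(60), 1))   # 16.7  -> noticeable at 60 Hz
print(round(added_lag_ms(144), 1))  # 6.9   -> much smaller at 144 Hz
```

At 60 Hz the extra buffer costs roughly a whole frame of lag, while past 100 Hz the penalty shrinks below 10 ms, which matches the advice above.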
 
thanks for explaining in such detail, I learned something today 😀