Simply put, DLSS takes images rendered at a low resolution and uses AI to fill in the missing detail, producing a higher-resolution image with very little loss in quality.
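To make that concrete, here is a minimal Python sketch of the idea. It is not NVIDIA's implementation: the `naive_upscale` function below just repeats pixels as a stand-in for the trained network that DLSS actually uses.

```python
import numpy as np

def naive_upscale(low_res: np.ndarray, scale: int = 2) -> np.ndarray:
    """Stand-in for the DLSS network: simply repeats pixels
    (nearest-neighbour), whereas DLSS reconstructs real detail
    with a trained model plus the game's motion vectors."""
    return low_res.repeat(scale, axis=0).repeat(scale, axis=1)

# Render internally at 1920x1080, present at 3840x2160.
low_res_frame = np.random.rand(1080, 1920, 3)   # placeholder "rendered" frame
high_res_frame = naive_upscale(low_res_frame, scale=2)
print(high_res_frame.shape)                     # (2160, 3840, 3)
```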
Frame Generation (FG), introduced with DLSS 3, does something different: instead of upscaling existing frames to a higher resolution, it uses AI to create entirely new frames.
FG works by analyzing consecutive frames of a game, predicting what an in-between frame should look like, and then inserting that new, AI-generated frame between them.
In other words, FG predicts where each pixel in the scene is moving from one frame to the next, then uses that motion information to generate filler frames between the frames the GPU renders in the traditional way (a rough sketch of this idea follows below).
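The sketch below is my own illustration of the per-pixel idea, not how DLSS is actually implemented: it warps the last rendered frame halfway along its motion vectors to approximate the in-between frame. The `motion` array is assumed to hold per-pixel (dy, dx) offsets, which in a real engine would come from the game's motion vectors.

```python
import numpy as np

def generate_intermediate_frame(frame_a: np.ndarray,
                                motion: np.ndarray) -> np.ndarray:
    """Warp frame_a halfway along its per-pixel motion vectors to
    approximate the frame that sits between frame_a and the next
    rendered frame. `motion` has shape (H, W, 2) holding (dy, dx)
    in pixels. A real implementation would also warp the next frame
    backwards and let a network blend the two and fill disocclusions."""
    h, w = frame_a.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Move each pixel half of its observed motion (forward warp).
    new_y = np.clip((ys + 0.5 * motion[..., 0]).astype(int), 0, h - 1)
    new_x = np.clip((xs + 0.5 * motion[..., 1]).astype(int), 0, w - 1)
    intermediate = np.zeros_like(frame_a)
    intermediate[new_y, new_x] = frame_a[ys, xs]  # holes/collisions ignored here
    return intermediate
```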
This doesn't come without drawbacks, though. Because the Tensor Cores are guessing what's going to happen in the frames they generate, errors can creep in at lower frame rates, where the change from one frame to the next is more pronounced.
However, Frame Generation can also help bypass CPU bottlenecks, since the AI-generated frames don't require any additional CPU work.
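As a purely illustrative calculation (the numbers are assumptions, not benchmarks): if the CPU can only feed the GPU 60 traditionally rendered frames per second, inserting one AI frame per rendered frame pushes the presented rate to roughly double, because the generated frames cost the CPU nothing.

```python
# Illustrative only: assumed CPU-bound render rate, not measured data.
cpu_limited_fps = 60
generated_per_rendered = 1   # DLSS 3 FG inserts one AI frame per rendered frame
presented_fps = cpu_limited_fps * (1 + generated_per_rendered)
print(f"Rendered: {cpu_limited_fps} fps, presented: ~{presented_fps} fps")
```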
You can think of FG as a three-step process (a toy sketch of the full loop follows after the list):
- Initial Frame Analysis: DLSS 3 analyzes two consecutive frames to understand what is changing in the scene.
- AI-based Prediction: Using a deep learning model, it predicts what the intermediate frame should look like based on the observed motion.
- Frame Insertion: The AI-generated frame is inserted between the two original frames.
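Putting those three steps together, here is a toy loop of my own (with `predict_intermediate` as a hypothetical stand-in for the AI model, not an actual NVIDIA API):

```python
from collections import deque

def frame_generation_loop(rendered_frames, predict_intermediate):
    """Yield the presented frame sequence: each AI-generated frame is
    slotted in between two traditionally rendered frames."""
    history = deque(maxlen=2)
    for frame in rendered_frames:
        history.append(frame)
        if len(history) == 2:
            prev_frame, next_frame = history
            # Steps 1 + 2: analyse the two consecutive frames and
            # predict the in-between frame with the model.
            generated = predict_intermediate(prev_frame, next_frame)
            # Step 3: insert the generated frame before the real one.
            yield generated
        yield frame

# Example with a dummy "model" that simply labels the pair it was given:
presented = list(frame_generation_loop("ABC", lambda a, b: f"{a}|{b}"))
print(presented)   # ['A', 'A|B', 'B', 'B|C', 'C']
```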