How does G-Sync eliminate tearing without capping FPS?

AbstractSpaces

Feb 25, 2013
My understanding is that screen tearing occurs when the GPU is putting out frames faster than the monitor can refresh. As far as I can tell, though, G-Sync doesn't slow the framerate to match the monitor. I know G-Sync matches the monitor's refresh rate to the GPU's framerate, which helps eliminate stuttering at lower framerates, but it doesn't seem to do anything about the framerate exceeding the refresh rate. So how does it stop tearing?
 

revernt

Sep 20, 2013
Okay, say the max refresh rate on your G-Sync monitor is 144 Hz. If you can render more than 144 frames per second, your display adapter (NVIDIA video card) will cap the framerate at 144. This is similar to V-Sync but much more flexible.

However, if you can't render 144 FPS, your refresh rate will adjust in real time to your current FPS.

Feel free to correct me on this, I personally haven't had the chance to use G-Sync technology yet.
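
If my understanding is right, the effect is roughly this. All the numbers below are made up, and the C snippet is only a sketch of the clamping idea, not anything resembling NVIDIA's actual implementation:

```c
/* Rough sketch: the effective refresh interval follows the frame time,
 * but never goes below the panel's minimum interval (its max refresh rate).
 * Hypothetical numbers for illustration only. */
#include <stdio.h>

int main(void) {
    const double max_hz = 144.0;                 /* assumed panel ceiling        */
    const double min_interval_ms = 1000.0 / max_hz;

    double frame_times_ms[] = { 4.0, 6.94, 10.0, 16.7, 25.0 };  /* ~250..40 FPS */
    for (int i = 0; i < 5; i++) {
        double ft = frame_times_ms[i];
        /* Frames faster than the panel ceiling are effectively capped;
         * slower frames set the refresh interval directly. */
        double refresh_ms = ft < min_interval_ms ? min_interval_ms : ft;
        printf("frame time %6.2f ms -> refresh every %6.2f ms (%.1f Hz)\n",
               ft, refresh_ms, 1000.0 / refresh_ms);
    }
    return 0;
}
```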
 


The graphics rendering pipeline uses what's known as a "swap chain" to display frames. The swap chain consists of one or more buffers. One of the buffers is called the screen buffer, and any subsequent buffers are called back buffers.
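
To make the terminology concrete, here is a bare-bones sketch of what a swap chain might look like in C. The struct names and layout are invented for illustration; they aren't any real driver or API:

```c
/* Minimal mock-up of a swap chain: one screen buffer plus N back buffers.
 * Purely illustrative; real drivers manage all of this for you. */
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

#define WIDTH  640
#define HEIGHT 480

typedef struct {
    uint32_t pixels[WIDTH * HEIGHT];   /* one frame's worth of pixel data */
} Buffer;

typedef struct {
    Buffer *buffers;   /* buffers[screen] is scanned out to the display    */
    int     count;     /* 1 = single buffer, 2 = double, 3 = triple, ...   */
    int     screen;    /* index of the current screen buffer               */
} SwapChain;

int main(void) {
    SwapChain sc;
    sc.count   = 2;                                   /* double buffering  */
    sc.screen  = 0;
    sc.buffers = calloc(sc.count, sizeof(Buffer));

    int back = (sc.screen + 1) % sc.count;            /* draw target       */
    printf("scanning out buffer %d, drawing into buffer %d\n", sc.screen, back);

    free(sc.buffers);
    return 0;
}
```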

Without any sort of adaptive refresh mechanism, the GPU displays the contents of the screen buffer to the display at fixed intervals. If the refresh rate is set at 60 Hz, the GPU scans out the contents of the screen buffer 60 times every second, with the scan repeating every 16.67 milliseconds.
The scan proceeds line by line until the entire frame has been displayed. Each line is followed by a horizontal blanking interval (H-Sync), and the final line is followed by a much longer vertical blanking interval (V-Sync). These blanking intervals can be used to transmit additional data, such as audio (HDMI, and DVI with non-standard support).
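
The arithmetic behind those numbers is easy to sketch. The 525-total-line figure below is just an assumed example (roughly 480p-style timing); exact line counts depend on the video mode:

```c
/* Back-of-the-envelope scanout timing for a fixed 60 Hz refresh.
 * The 525-line total (480 visible + blanking) is an assumed figure
 * for illustration; real timings depend on the video mode. */
#include <stdio.h>

int main(void) {
    const double refresh_hz  = 60.0;
    const double frame_ms    = 1000.0 / refresh_hz;   /* 16.67 ms per refresh */
    const int    total_lines = 525;                   /* visible + blanking   */
    const int    visible     = 480;

    double line_us   = frame_ms * 1000.0 / total_lines;
    double vblank_ms = frame_ms * (total_lines - visible) / total_lines;

    printf("one refresh      : %.2f ms\n", frame_ms);
    printf("one scanline     : %.2f us (incl. horizontal blanking)\n", line_us);
    printf("vertical blanking: %.2f ms per frame\n", vblank_ms);
    return 0;
}
```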

With only a single buffer to write to and read from, and facing real-time constraints from the video encoder, the raster process has to make some decisions.

First, it could wait and only write to the screen buffer during the horizontal and vertical blanking periods. However, for most of each refresh the screen buffer is locked for scanout, so the rest of the render pipeline has to wait on the rasterizer while the rasterizer waits for the screen buffer to become unlocked. This hinders performance, so it's a dumb choice.

Second, it could simply write to the screen buffer as fast as it can, as long as the screen buffer isn't locked by the video encoder. If the rasterizer falls behind the video encoder, the video encoder ends up re-displaying part of the previous frame; if the rasterizer gets ahead of the video encoder, some frames are never displayed in full. This allows for very good performance and responsiveness, but will often display broken, jagged frames.
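
Here's a toy model of that second option tearing. The "video encoder" scans the single buffer line by line while the rasterizer drops a new frame in partway through, so the displayed image ends up half old frame, half new frame. The line count and frame numbers are arbitrary:

```c
/* Toy model of tearing with a single buffer: a new frame lands while the
 * scanout is partway down the screen, so the bottom of the displayed image
 * comes from a different frame than the top. */
#include <stdio.h>

#define LINES 8

int main(void) {
    int buffer[LINES];
    int displayed[LINES];

    for (int i = 0; i < LINES; i++) buffer[i] = 1;   /* frame 1 is in the buffer */

    for (int line = 0; line < LINES; line++) {
        displayed[line] = buffer[line];              /* scan out one line        */
        if (line == 3) {                             /* mid-scan: frame 2 lands  */
            for (int i = 0; i < LINES; i++) buffer[i] = 2;
        }
    }

    printf("lines shown this refresh: ");
    for (int i = 0; i < LINES; i++) printf("%d ", displayed[i]);
    printf("  <- the 1/2 boundary is the tear line\n");
    return 0;
}
```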

Let's now introduce a second buffer, called a back buffer. When a single back buffer is used, this method is called double buffering. With double buffering, the video encoder reads from the screen buffer as usual, but drawing operations are performed on the back buffer. When all drawing operations are finished (and thus the frame is completed), the software swaps the screen buffer and the back buffer during the vertical blanking interval. Now the video encoder is reading from what used to be the back buffer, and drawing operations are made to what used to be the screen buffer. Boom, no more jagged lines.
As a consequence of double buffering, the responsiveness of the video depends on the render pipeline's ability to finish all drawing operations before the video encoder reaches the vertical blanking interval. If it does, great; the GPU gets to sleep a bit. If it doesn't meet the deadline, the video encoder displays the same frame again and the GPU has to wait until the next interval to swap the buffers and start drawing again. This can cause some really nasty temporal artifacting in the form of what appears to be odd and inconsistent input latency. Some frames are held longer than others, so the player feels out of sync with their actions.
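
A quick toy simulation of that deadline effect, using made-up render times at a fixed 60 Hz refresh: a frame that takes even slightly longer than 16.67 ms gets held on screen for two refreshes, which is exactly the inconsistent latency described above.

```c
/* Toy model of double buffering with a fixed 60 Hz refresh: a frame can only
 * be swapped in at a vblank, so missing the 16.67 ms deadline by even a
 * little holds the previous frame for a full extra refresh. Render times
 * are made-up examples. */
#include <stdio.h>

int main(void) {
    const double vblank_ms = 1000.0 / 60.0;               /* swap opportunity    */
    double frame_times[] = { 10.0, 15.0, 17.0, 12.0 };     /* render times (ms)   */
    double done = 0.0;                                     /* when frame is ready */

    for (int i = 0; i < 4; i++) {
        done += frame_times[i];
        /* The frame is picked up at the next vblank after it completes. */
        int refreshes_waited = (int)(done / vblank_ms) + 1;
        double shown_at = refreshes_waited * vblank_ms;
        printf("frame %d ready at %6.2f ms, shown at %6.2f ms\n",
               i + 1, done, shown_at);
        done = shown_at;   /* with vsync, the GPU stalls until the swap */
    }
    return 0;
}
```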

How do we solve this? Two ways:

First, introduce a third buffer, which is a second back buffer. With a second back buffer, the GPU can complete one frame and then continue drawing the next frame into the other back buffer. If it then completes that frame before the video encoder finishes its scan, it can swap back to the first back buffer and overwrite the undisplayed frame. If it falls behind the video encoder, it can overwrite the oldest complete frame with new draw calls. In any case, the video encoder always displays whatever is the most recently completed frame, and the rendering pipeline can keep working. This is called triple buffering, and it provides the best of all worlds: no screen tearing, no screwy input, and fairly reasonable performance and response time.
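
A toy model of triple buffering, again with made-up numbers: the GPU renders continuously into the spare back buffers and never stalls, and at each vblank the display simply takes the newest completed frame (older completed frames are silently dropped).

```c
/* Toy model of triple buffering: the GPU renders continuously, and at every
 * vblank the display shows whichever completed frame is newest. The GPU
 * never stalls and no refresh is torn. Frame times are made-up examples. */
#include <stdio.h>

int main(void) {
    const double vblank_ms     = 1000.0 / 60.0;
    const double frame_time_ms = 7.0;      /* GPU renders ~143 FPS, uncapped */

    double t = 0.0;                         /* render clock                   */
    int    latest_done = 0;                 /* newest fully rendered frame    */
    int    prev_shown  = 0;

    for (int refresh = 1; refresh <= 5; refresh++) {
        double vblank = refresh * vblank_ms;
        /* Keep rendering into the spare back buffers until the vblank. */
        while (t + frame_time_ms <= vblank) {
            t += frame_time_ms;
            latest_done++;
        }
        int skipped = latest_done - prev_shown - 1;
        printf("refresh %d at %6.2f ms shows frame %2d (%d finished frame(s) never displayed)\n",
               refresh, vblank, latest_done, skipped > 0 ? skipped : 0);
        prev_shown = latest_done;
    }
    return 0;
}
```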

The other way, and the new way, is to offload the screen buffer to the display itself and simply let the display decide what to display and when. Now, when the render pipeline finishes drawing a frame into a back buffer, it can send it to the display immediately without waiting for a vertical blanking interval from the video encoder. The display then most likely has a pair of buffers: one screen buffer, and optionally one buffer holding the next frame to display. Because each refresh now begins only when a complete frame has arrived, rather than on a fixed clock, a scanout never starts in the middle of a frame update, and that is what eliminates tearing without capping the framerate.
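
And a toy model of that variable-refresh approach, assuming a hypothetical 144 Hz panel ceiling and made-up render times: the panel starts a scanout whenever a finished frame arrives (subject to its minimum refresh interval), so every refresh shows exactly one complete frame.

```c
/* Toy model of a variable-refresh display: instead of refreshing on a fixed
 * clock, the panel starts a scanout whenever a completed frame arrives, as
 * long as its minimum interval (max refresh rate) has elapsed. Numbers are
 * assumed for illustration. */
#include <stdio.h>

int main(void) {
    const double min_interval_ms = 1000.0 / 144.0;   /* assumed 144 Hz ceiling */
    double frame_times[] = { 9.0, 5.0, 12.0, 6.0, 20.0 };   /* render times (ms) */

    double frame_ready = 0.0;    /* when the GPU finishes each frame             */
    double last_scan   = -1e9;   /* when the panel last started a scanout        */

    for (int i = 0; i < 5; i++) {
        frame_ready += frame_times[i];
        /* Scan out as soon as the frame is ready, unless that would exceed
         * the panel's maximum refresh rate; then wait out the remainder.   */
        double scan = frame_ready;
        if (scan - last_scan < min_interval_ms)
            scan = last_scan + min_interval_ms;
        printf("frame %d ready at %6.2f ms, scanned out at %6.2f ms\n",
               i + 1, frame_ready, scan);
        last_scan = scan;
    }
    return 0;
}
```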

The advantage of G-Sync over triple buffering? Not much really.
 