How does downsampling/supersampling work?

PossessedGamer

Reputable
Jan 2, 2016
I know downsampling and supersampling let a game render at a higher resolution than the display's native one, but how does it actually work? From what I know, a pixel can only be a single color at one time, so to make "virtual" pixels wouldn't you have to divide one pixel up into multiple different colors at the same time? Yeah, as you can tell, trying to figure this out is hurting my head, so an explanation would be great.
 
Hlsgsz

It's pretty simple: the image is rendered at a higher resolution than the one the display uses (supersampled), and then resized down to that resolution using algorithms similar to the ones used when resizing an image in image editing software (downsampled).
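To picture that resize step, here's a minimal sketch using the Pillow library; the file names and the Lanczos filter choice are just assumptions for illustration (real drivers do this on the GPU, not through an image library):

```python
from PIL import Image

# Hypothetical example: a frame rendered at 4x the pixel count of a
# 1920x1080 display, shrunk back down the way an image editor would.
frame = Image.open("supersampled_frame.png")        # e.g. a 3840x2160 render
native = frame.resize((1920, 1080), Image.LANCZOS)  # high-quality downsample
native.save("displayed_frame.png")
```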
 

spagalicious

Distinguished
Like Hlsgsz said, the idea of supersampling is to render a full frame at a higher resolution (usually 2x or 4x the native pixel count) and then downsample it back to the display's native resolution.

For example, consider the first and most costly method of supersampling, SSAA. At 2x2 SSAA (two samples per axis), four supersampled pixels need to be rendered for each individual pixel in the final frame. This method produces a very detailed and smooth image, free of the jagged, "aliased" edges often called "jaggies". The downsampled image is usually just an average of the supersampled data points.
[Image: Supersampling.png]
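To make that "average of the supersampled data points" concrete, here's a minimal sketch (my own illustration, not actual driver code) that collapses each 2x2 block of samples into one output pixel with a simple box filter:

```python
import numpy as np

def downsample_2x2(samples):
    """Box filter: average each 2x2 block of samples into one output pixel."""
    h, w, c = samples.shape
    return samples.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

# Stand-in for a supersampled frame: 8x8 samples feeding a 4x4 output.
hi_res = np.random.rand(8, 8, 3)
lo_res = downsample_2x2(hi_res)  # each final pixel is the mean of 4 samples
```

Note that every one of those samples still has to be fully rendered before it's averaged away, which is exactly where the performance cost comes from.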


SSAA is largely obsolete in modern applications, as the performance cost is massive. More modern algorithmic anti-aliasing techniques are used instead, such as MSAA (multisample anti-aliasing), FXAA (fast approximate anti-aliasing), and more recently TXAA (temporal anti-aliasing).

Perhaps you've also heard of the supersampling techniques known as DSR (Dynamic Super Resolution) and VSR (Virtual Super Resolution). These technologies, from Nvidia and AMD respectively, are essentially modern implementations of supersampling.

Despite achieving relatively similar results, DSR and VSR are actually quite different in implementation. DSR is implemented as a shader program, which gives Nvidia cards lots of flexibility in available resolutions at the cost of a small extra performance hit (on top of rendering the supersampled frames themselves). VSR, on the other hand, wasn't implemented until GCN 1.1, and was initially limited to a maximum resolution of 3200x1800. It wasn't until the release of the Fury/300-series cards and GCN 1.2 that AMD added the ability for their display controllers to supersample at 3840x2160 (4K).
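For a feel of how this maps to concrete numbers: DSR's scaling factors refer to the total pixel count, so each axis scales by the square root of the factor. A tiny sketch (the helper name is mine, not anything from Nvidia's API):

```python
from math import sqrt

def dsr_internal_resolution(native_w, native_h, factor):
    """Internal render size for a DSR factor that multiplies total pixel count."""
    return round(native_w * sqrt(factor)), round(native_h * sqrt(factor))

print(dsr_internal_resolution(1920, 1080, 4.00))  # (3840, 2160): classic 4x
print(dsr_internal_resolution(1920, 1080, 1.78))  # ~(2560, 1440): in-between factor
```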

Hope this helps you out a bit.
 
Solution

PossessedGamer

Reputable
Jan 2, 2016


Great explanation. So what I've gathered from it all is: since one pixel can't be multiple different colours at the same time, the sampled colours are blended into one colour to estimate what the extra detail would look like. That must be why supersampling isn't quite as good as a native higher resolution, because it's not 100% accurate. However, since each pixel still ends up as only one colour, why is it still more demanding? Is it because all the sample points still have to be rendered to find the average colour?

You answered my question, but now you've given me a new one: what exactly did you mean when you said DSR is implemented as a shader program?
 

spagalicious

Distinguished
While both techniques (DSR and VSR) still revolve around supersampling an image and subsequently downsampling it, the differences in filters and sampling points within the graphics/driver pipelines are what separate the two. DSR applies what Nvidia calls a "13-tap Gaussian filter" to downsample the supersampled image.

Here's a good article explaining DSR.
http://techreport.com/review/27102/maxwell-dynamic-super-resolution-explored

Oddly enough, Nvidia's DSR really is about rendering a scene at a higher resolution and scaling it down to fit the target display. If you ask DSR to render a game at 4X the native res, say at 3840x2160 when the target display is 1920x1080, then the result should be similar to what you'd get from 4X supersampling.
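That "4X" is in terms of total pixels, which quick arithmetic makes plain:

```python
native = 1920 * 1080    # 2,073,600 pixels
dsr_4x = 3840 * 2160    # 8,294,400 pixels
print(dsr_4x / native)  # 4.0 -> the GPU shades four times as many pixels per frame
```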

The benefits are the same. The extra sample info improves every pixel—not only does it smooth object edges, but it also oversamples texture info, shader effects, the works. The performance hit is the same, too. The GPU will perform like it would when rendering to a 4K display, perhaps a little slower due to the overhead caused by scaling the image down to the target resolution.

The twist with DSR is that it can scale images down from resolutions that aren't 2X or 4X the size of the target display. For example, DSR could render a game internally at 2560x1440 and scale it down to fit a 1920x1080 monitor. That's just... funky, if you're thinking in terms of supersampling. But it does seem to work.

In order to make DSR scale down gracefully from weird resolutions, Nvidia uses a 13-tap gaussian filter. This downscaling filter is probably quite similar to the filters used to scale video down from higher resolutions, like when showing a 1080p video on a 720p display. The fact that this filter uses 13 taps, or samples, is a dead giveaway about how it works: it grabs samples not just from within the target pixel area but also from outside of the pixel boundary.
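Nvidia hasn't published the exact kernel, but the general shape of such a filter is easy to sketch. The following is purely illustrative (the tap radius and sigma are my assumptions, not Nvidia's actual 13-tap layout): each output pixel becomes a Gaussian-weighted average of nearby source samples, including ones outside its own footprint, which is what lets it cope with non-integer ratios like 2560x1440 down to 1920x1080.

```python
import numpy as np

def gaussian_downsample(img, scale, sigma=0.5):
    """Illustrative Gaussian downscale; 'scale' is the source/target size ratio."""
    h, w, c = img.shape
    out_h, out_w = round(h / scale), round(w / scale)
    out = np.zeros((out_h, out_w, c))
    radius = 2  # taps reach beyond the target pixel's own footprint, like DSR's
    for oy in range(out_h):
        for ox in range(out_w):
            cy, cx = (oy + 0.5) * scale, (ox + 0.5) * scale  # center in source space
            acc, total = np.zeros(c), 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    sy, sx = int(cy) + dy, int(cx) + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        d2 = (sy + 0.5 - cy) ** 2 + (sx + 0.5 - cx) ** 2
                        weight = np.exp(-d2 / (2.0 * (sigma * scale) ** 2))
                        acc += weight * img[sy, sx]
                        total += weight
            out[oy, ox] = acc / total  # Gaussian-weighted average of nearby samples
    return out

# Small stand-in frame with the same 4:3 scale ratio as 2560x1440 -> 1920x1080:
frame = np.random.rand(72, 128, 3)
smaller = gaussian_downsample(frame, scale=1440 / 1080)  # -> 54x96 output
```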

We'll get into some examples shortly, but the effect of blending in info from neighboring pixels is easy enough to anticipate. This downscaling filter will blur or soften images somewhat, granting them a more cinematic look. The effect is similar to the tent filters AMD used in its old CFAA scheme or, more recently, to the kernel employed by Nvidia's own TXAA technique.