Like Hlsgsz said, the idea of supersampling is to render a full frame at a higher resolution (usually 2x or 4x the native resolution) and then downsample it back to the display's native resolution.
For example, consider the first and most costly supersampling method, SSAA. At 2x SSAA, four supersampled pixels are rendered for each individual pixel in the final frame. This method produces a very detailed and smooth image, free of the jagged line imperfections often referred to as 'aliasing' or 'jaggies'. The downsampled image is usually just an average of the supersampled data points.
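To make the averaging step concrete, here's a minimal sketch of 2x SSAA downsampling using NumPy (the function name `downsample_2x` and the box-filter average are my illustration; real renderers may use fancier reconstruction filters):

```python
import numpy as np

def downsample_2x(supersampled):
    """Box-filter downsample: average each 2x2 block of supersampled
    pixels into one output pixel -- the 'average of the supersampled
    data points' described above."""
    h, w = supersampled.shape[:2]
    # Group pixels into 2x2 blocks, then average over each block.
    blocks = supersampled.reshape(h // 2, 2, w // 2, 2, -1)
    return blocks.mean(axis=(1, 3))

# Example: a 4x4 supersampled RGB "frame" becomes a 2x2 output frame.
frame = np.arange(4 * 4 * 3, dtype=float).reshape(4, 4, 3)
out = downsample_2x(frame)
print(out.shape)  # (2, 2, 3)
```

Each output pixel here is the mean of four rendered samples, which is why SSAA costs roughly 4x the fill rate at 2x scaling.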
SSAA is largely obsolete in modern applications because the performance cost is massive. More efficient algorithmic anti-aliasing techniques are used instead, such as MSAA (multisample anti-aliasing), FXAA (fast approximate anti-aliasing), and more recently TXAA (temporal anti-aliasing).
Perhaps you've also heard of the supersampling techniques DSR (Dynamic Super Resolution) and VSR (Virtual Super Resolution). These technologies, from Nvidia and AMD respectively, are essentially modern implementations of supersampling.
Despite achieving relatively similar results, DSR and VSR differ quite a bit in implementation. DSR is implemented as a shader program, which gives Nvidia cards a lot of flexibility in available resolutions at the cost of a small performance hit (on top of the cost of rendering the supersampled frames themselves). VSR was not implemented until GCN 1.1, and there it was limited to a maximum resolution of 3200x1800. It wasn't until the release of the Fury/300 series cards and GCN 1.2 that AMD added the ability to supersample at 3840x2160 (4K) to their display controllers.
Hope this helps you out a bit.