Anti-Aliasing Analysis, Part 2: Performance

Status
Not open for further replies.
Nice article (both the current second part and the first AA analysis); it's an excellent write-up.

---

Since the article aims to educate users about the viable anti-aliasing modes and types available in their drivers and games, it could be improved by including the following:

- It would also be a nice touch to include screenshots (as some mentioned earlier) taken at the same time-stamps, across the same AA modes but different AA types, at the same resolution for different games, ideally at the highest playable resolution achievable on single-GPU cards.

Along the lines of: 8xMSAA, 16xCSAA, 8xEQAA, 24xEDAA, 8xMSAA + Transparency, 4xMLAA... all at 1920x1080.

- Expanding the article to cover games with more detailed geometry and more varied textures on that geometry would also help.

- If possible, an AA image repository could be created and linked from the article, then updated with every new game that is benchmarked.

My two cents.
 
"The highest-quality AA mode, supersampling essentially renders the frame at a higher resolution and downsamples the result. This causes a performance hit so large that Nvidia straight-up removed this mode from its GeForce drivers some time ago."

Under "Antialiasing - Transparency", the options show "Multisample" and then "2x, 4x, and 8x (Supersample)". Are these really just higher levels of multisampling and not actual supersampling? I assume that's the case.

😉
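As a toy illustration of the "render high, down-sample" idea in the quote above (a sketch of the concept only, not how any driver actually implements it): render the frame at twice the resolution, then box-filter each 2x2 block of high-resolution pixels into one output pixel.

```python
# Illustrative sketch of ordered-grid 2x supersampling: a frame rendered
# at double resolution is reduced by averaging each 2x2 block of
# high-res pixels into a single output pixel.

def downsample_2x(frame):
    """Box-filter a 2H x 2W grayscale frame down to H x W."""
    h, w = len(frame) // 2, len(frame[0]) // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            block = (frame[2*y][2*x] + frame[2*y][2*x+1] +
                     frame[2*y+1][2*x] + frame[2*y+1][2*x+1])
            row.append(block / 4.0)  # average the four covered samples
        out.append(row)
    return out

# A hard black/white edge rendered at 2x resolution...
hi_res = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 1, 1, 1],
]
# ...averages into a softened edge at the target resolution.
print(downsample_2x(hi_res))  # [[0.0, 1.0], [0.5, 1.0]]
```

The partially covered output pixel comes out as 0.5, which is exactly the smoothed edge value AA is after.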
 
"Supersampling is the granddaddy of all anti-aliasing modes. For all intents and purposes, this method essentially renders the output at a higher resolution and down-samples (averages out) the result."


@Author, please double-check whether this is how it works in AMD's drivers.

AMD's 'SSAA' (sparse rotated grid) option doesn't appear to work by rendering the whole image at a higher resolution and then downscaling it for the final result; AMD's implementation appears to use a different technique.

Instead of traditional SSAA, AMD's technique appears to involve rendering duplicate copies of the frame, then taking sample values across those duplicates according to the defined spaced, rotated grid pattern.

This is one of the main reasons AMD's SSAA is not possible in all games (DirectX 9): AMD's SSAA fails if the game cannot support AMD's basic MSAA in the first place.
(i.e., AMD's SSAA only works in a given game if AMD's MSAA works in that game first.)


 
I would love to see higher resolutions become the norm, with upscaling from lower resolutions when the graphics hardware can't keep up; something like 4k2k on a 21" display. If 2560x1600 can be fit into a tablet, why can't desktop displays grow to 4k2k or even 8k4k? There would be no need to see the individual pixels; they would be merged naturally instead of merged by AA. And with the corresponding performance hit, which would be a good excuse for more graphics power, frames could be rendered without upscaling. Otherwise, a 1080p render could be upscaled at an exact 2x ratio to a 4k2k display, without visual artifacts or a performance hit.
I am a fan of higher resolutions :)
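The exact-2x upscale the poster describes can be sketched in a few lines. With an integer ratio, each source pixel maps cleanly onto a whole 2x2 block of display pixels (nearest-neighbor), so no interpolation blur is introduced. A minimal illustration, not actual display-scaler code:

```python
def upscale_2x_nearest(frame):
    """Duplicate each pixel into a 2x2 block (exact integer scaling)."""
    out = []
    for row in frame:
        doubled = [p for p in row for _ in (0, 1)]  # repeat each pixel twice
        out.append(doubled)
        out.append(list(doubled))  # repeat each row twice
    return out

image = [[1, 2],
         [3, 4]]
print(upscale_2x_nearest(image))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Because every output pixel is an exact copy of one source pixel, the upscaled image contains no blended in-between values, which is why an integer ratio avoids scaling artifacts.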
 


Transparency AA isn't full-scene AA like MSAA or AMD's SSAA.

Nvidia's transparency AA works ONLY on transparent textures, not on the entire frame.

If you turn on Transparency supersampling, all of the edges of geometry will remain aliased. Nvidia removed FSAA supersampling from their drivers long ago. 😉
 
"Supersampling is the granddaddy of all anti-aliasing modes. For all intents and purposes, this method essentially renders the output at a higher resolution and down-samples (averages out) the result."


@Author, please double-check whether this is how it works in AMD's drivers.

AMD's 'SSAA' (sparse rotated grid) option doesn't appear to work by rendering the whole image at a higher resolution and then downscaling it for the final result; AMD's implementation appears to use a different technique.

Instead of traditional SSAA, AMD's technique appears to involve rendering duplicate copies of the frame, then taking sample values across those duplicates according to the defined spaced, rotated grid pattern.

That analogy is simply for teaching purposes; it's not meant as a literal description of how a graphics card produces a supersampled result (although it's an excellent description of what's happening).

Having said that, the analogy applies just as well to a sparse rotated grid. Sparse rotated grid SSAA still samples each pixel more than once and down-samples the result; the differences are that it doesn't use an ordered grid (it doesn't sample every single position in the grid), and the grid is rotated to provide better anti-aliasing on lines close to vertical and horizontal.

All versions of AA can actually be described as downsampling, because they take more than one sample per pixel and average the result. But SSAA fits this description best: it works on every pixel in the scene, and each pixel is fully sampled more than once (regardless of the grid's rotation, or whether every position in the grid is sampled).
MSAA only takes extra samples for pixels on the edges of geometry, and when it does apply AA to a pixel, that pixel is not fully sampled (only depth and stencil values are fully sampled; some things, such as pixel shaders, run only once per pixel).
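The ordered-versus-rotated-grid distinction above can be shown with a toy sketch. Everything here is illustrative: the rotation angle, sample positions, and "scene" are assumptions for demonstration, not any vendor's actual pattern. The payoff is visible on a near-vertical edge, where an ordered grid's samples share x-coordinates and so resolve fewer coverage levels than a rotated grid:

```python
import math

# Four sample positions inside one pixel (the unit square), ordered grid:
ordered_grid = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

def rotate(p, angle, center=(0.5, 0.5)):
    """Rotate a sample point about the pixel center."""
    x, y = p[0] - center[0], p[1] - center[1]
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s + center[0], x * s + y * c + center[1])

# A rotated grid offsets the same four samples so no two share an
# x or y coordinate. The 26.6-degree angle is an illustrative choice.
rotated_grid = [rotate(p, math.radians(26.6)) for p in ordered_grid]

def supersample(shade, px, py, pattern):
    """SSAA-style: fully evaluate shading at every sample and average."""
    return sum(shade(px + sx, py + sy) for sx, sy in pattern) / len(pattern)

# Toy 'scene': a vertical edge, white (1.0) left of x = 0.3, black right.
def edge(x, y):
    return 1.0 if x < 0.3 else 0.0

# The ordered grid's two left-column samples both land left of the edge,
# so any vertical edge between x = 0.25 and 0.75 reads as 0.5 coverage.
print(supersample(edge, 0.0, 0.0, ordered_grid))  # 0.5
# The rotated grid has four distinct x-coordinates, so it can resolve
# finer coverage steps on the same near-vertical edge.
print(supersample(edge, 0.0, 0.0, rotated_grid))  # 0.25
```

Note that every sample calls `shade` in full, which is the SSAA behavior described above; an MSAA sketch would instead call `shade` once per pixel and only test coverage at the extra sample positions.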
 
Fabulous article btw. Addressing video quality in terms other than frame rate is something hardware sites tend to stay away from - and that's easy enough to understand, since it becomes extremely difficult to attach hard numbers to what you see on a screen. Even frame rate statistics don't tell you what you actually see in a game. Thanks for stepping up and helping educate us, and demonstrating the differences between graphics card technologies for those of us who can't do side-by-side comparisons.

😉
 
[citation][nom]Cleeve[/nom]It's the Ultra setting that kills Skyrim and really shifts the bottleneck to the CPU.[/citation]

Are you sure? I run the game on ultra @1980, and I haven't seen any performance issues, despite currently running at stock CPU speeds (my memory speed at POST is lower than it should be; I'm troubleshooting).
 
[citation][nom]neiroatopelcc[/nom]Are you sure? I run the game on ultra @1980, and I haven't seen any performance issues, despite currently running at stock CPU speeds (my memory speed at POST is lower than it should be; I'm troubleshooting).[/citation]

Absolutely sure! Check the benchmarks:

http://www.tomshardware.com/reviews/skyrim-performance-benchmark,3074-9.html

If you think the results are off, you're welcome to duplicate the benchmark procedure and test it yourself... I'd be interested in seeing if your results are significantly different.
 


They tested MLAA with MSAA enabled as well, which is not the intended functionality. It's meant to be used by itself.
 
From the article:

Supersampling is the granddaddy of all anti-aliasing modes. For all intents and purposes, this method essentially renders the output at a higher resolution and down-samples (averages out) the result.

This isn't how the Radeon 5000/6000 series SSAA works. What it does is apply the same sampling technique as MSAA but for every pixel in the scene.

It wouldn't even be possible for the Radeon implementation to render at a higher resolution and downsample, because its SSAA uses a rotated grid, not an ordered one.
 