[SOLVED] Ray Tracing Confusion

Solution
Ray tracing, the technique behind Nvidia's RTX branding, is a rendering method for generating an image by tracing the path of light through the pixels of an image plane and simulating its interactions with virtual objects. The easiest way to think of ray tracing is to look around you, right now. The objects you're seeing are illuminated by beams of light. Now turn that around and follow the path of those beams backwards from your eye to the objects that light interacts with. That's ray tracing.

Ray tracing involves tracing the path of a ray (a beam of light) within a 3D world. Project a ray for a single pixel into the 3D world, figure out what polygon that ray hits first, then color it appropriately. In practice, many more rays per pixel are necessary to get a good result, because once a ray intersects an object, it's necessary to calculate light sources that could reach that spot on the polygon (more rays), plus calculate additional rays based on the properties of the polygon (is it highly reflective or partially reflective, what color is the material, is it a flat or curved surface, etc.).
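
To make that concrete, here is a minimal, illustrative sketch in Python (not any engine's actual code): one primary ray is fired through each pixel of a tiny image plane, tested against a toy scene of spheres, and the closest hit is shaded with a simple diffuse term. Names like `hit_sphere` and `trace` are made up for the example.

```python
import math

# Toy scene: spheres as (center, radius, color) plus a single point light.
SPHERES = [((0.0, 0.0, -5.0), 1.0, (255, 80, 80)),
           ((1.5, 0.8, -6.0), 1.0, (80, 80, 255))]
LIGHT = (5.0, 5.0, 0.0)

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def norm(v):
    l = math.sqrt(dot(v, v))
    return (v[0] / l, v[1] / l, v[2] / l)

def hit_sphere(origin, direction, center, radius):
    """Distance along the ray to the nearest intersection, or None on a miss."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)               # direction is unit length, so a == 1
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def trace(origin, direction):
    """Find the first object this ray hits and shade it with simple diffuse light."""
    nearest, hit = None, None
    for center, radius, color in SPHERES:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest):
            nearest, hit = t, (center, color)
    if hit is None:
        return (20, 20, 20)                    # background
    center, color = hit
    point = tuple(origin[i] + direction[i] * nearest for i in range(3))
    normal = norm(sub(point, center))
    to_light = norm(sub(LIGHT, point))         # a full tracer fires another ray here (shadows)
    diffuse = max(0.0, dot(normal, to_light))  # Lambert shading
    return tuple(int(ch * diffuse) for ch in color)

# One primary ray per pixel through a simple pinhole camera; print crude ASCII shading.
WIDTH, HEIGHT = 64, 32
for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        u = (x / (WIDTH - 1)) * 2.0 - 1.0      # map pixel to [-1, 1] on the image plane
        v = 1.0 - (y / (HEIGHT - 1)) * 2.0
        direction = norm((u, v, -1.0))
        row += " .:*#"[min(4, sum(trace((0.0, 0.0, 0.0), direction)) // 130)]
    print(row)
```

A real renderer fires many more rays per pixel (shadow rays, reflection rays, and so on), which is exactly where the cost explodes.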

The most commonly used ray tracing algorithm, according to Nvidia, is BVH traversal: Bounding Volume Hierarchy traversal. That's what the DXR API uses, and it's what Nvidia's RT cores accelerate. The main idea is to cut down the number of ray/triangle intersection tests by quickly discarding geometry a ray can't possibly hit.
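
As a rough illustration of what BVH traversal buys you (this is a toy sketch, not the DXR or RT-core implementation; `hit_triangle` stands in for whatever ray/triangle test the renderer uses):

```python
# Toy BVH: each node stores an axis-aligned bounding box (AABB) and either two
# children or a list of triangles. A ray only runs the expensive triangle tests
# inside leaves whose boxes it actually enters; subtrees it misses are skipped.

def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray enter the box at some t >= 0?"""
    tmin, tmax = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

class Node:
    def __init__(self, box_min, box_max, left=None, right=None, triangles=None):
        self.box_min, self.box_max = box_min, box_max
        self.left, self.right = left, right
        self.triangles = triangles or []        # only leaf nodes carry geometry

def traverse(root, origin, direction, hit_triangle):
    """Return every triangle the ray hits, pruning subtrees whose boxes it misses."""
    inv_dir = tuple(1.0 / d if d != 0.0 else float("inf") for d in direction)
    stack, hits = [root], []
    while stack:
        node = stack.pop()
        if not ray_hits_aabb(origin, inv_dir, node.box_min, node.box_max):
            continue                            # prune: skip this entire subtree
        if node.triangles:                      # leaf: run the per-triangle tests
            hits += [t for t in node.triangles if hit_triangle(origin, direction, t)]
        else:
            stack += [node.left, node.right]
    return hits
```

Without the hierarchy, every ray would have to be tested against every triangle in the scene; with it, most of the scene is rejected after a handful of cheap box tests, which is the part the RT cores accelerate in hardware.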

DLSS (Deep Learning Super Sampling), on the other hand, is essentially a deep-learning-based anti-aliasing/upscaling technique.

It uses a kind of neural network to find jagged edges and perform high-quality anti-aliasing by determining the best color for each pixel, then applies that color to smooth out the edges and improve overall image quality. As per Nvidia, this new DLSS feature offers the highest-quality AA, with fewer artifacts than other forms of AA.

Nvidia basically runs a game on its supercomputers at extremely high resolution, then has the AI compare that data against the standard-resolution output and work out what the image should look like using both sets of data.

Once the AI has figured this out, the result is shipped via a driver or per-game profile, so that the Tensor cores on the Turing GPU can run it and give you the same quality with slightly better performance, according to Nvidia; IMO that's because the CUDA cores aren't calculating the anti-aliasing anymore.

DLSS doesn't run at native resolution; it upscales. The shader cores render the game at a lower resolution (1080p/1440p), and the Tensor cores then use that frame as the basis to "super sample" it up to the 4K output before it is sent to the monitor, "adding" the necessary pixel information to clean up the image and make it smoother.
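
In very rough terms, the pipeline looks like the sketch below. The bilinear filter here is only a stand-in for the trained network DLSS actually runs on the Tensor cores, and the frame sizes are scaled down so the example runs instantly; it only illustrates the render-low/upscale-high flow.

```python
def render_frame(width, height):
    """Stand-in for the game's renderer: produces a width x height grid of grey values."""
    return [[(x * 7 + y * 13) % 256 for x in range(width)] for y in range(height)]

def upscale(image, out_w, out_h):
    """Upscale by blending the nearest source pixels (bilinear).
    DLSS replaces this step with a neural network trained against far
    higher-resolution "ground truth" frames, so it can reconstruct detail
    that a plain filter like this cannot."""
    in_h, in_w = len(image), len(image[0])
    out = []
    for y in range(out_h):
        fy = y * (in_h - 1) / (out_h - 1)
        y0 = int(fy); y1 = min(y0 + 1, in_h - 1); wy = fy - y0
        row = []
        for x in range(out_w):
            fx = x * (in_w - 1) / (out_w - 1)
            x0 = int(fx); x1 = min(x0 + 1, in_w - 1); wx = fx - x0
            top = image[y0][x0] * (1 - wx) + image[y0][x1] * wx
            bottom = image[y1][x0] * (1 - wx) + image[y1][x1] * wx
            row.append(top * (1 - wy) + bottom * wy)
        out.append(row)
    return out

# Render internally at a lower resolution, then upscale to the output resolution
# before the frame goes to the display (think 1440p -> 4K, shrunk here for speed).
low_res = render_frame(32, 18)
high_res = upscale(low_res, 64, 36)
print(len(high_res), "x", len(high_res[0]), "pixels reconstructed from 18 x 32")
```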

But there is more to this, as ground-truth images are also needed.
 
Solution
BTW, sorry I forgot to mention this before. Ray tracing isn't a new tech, and it's not a gimmick either; it has been an industry standard in CGI for a long time. In real-time games, though, it's still pretty much in its "infancy" stage, and RTX won't be going mainstream anytime soon. It's not exclusive to Nvidia cards either.

Even AMD has its own ray tracing tech, dubbed "Radeon Rays", but it hasn't found its way into consumer gaming GPUs yet. Unlike RTX, which runs on Microsoft's DirectX Raytracing (DXR) API, Radeon Rays is open source and conforms to the OpenCL 1.2 standard, so it could be used on non-AMD hardware as well, and on different OS environments.

Modern GPUs don't have enough horsepower to ray trace an entire scene in a single frame with physically accurate reflections, refractions, shadows, and indirect lighting. That's why enabling it comes with a performance loss.

The current Bounding Volume Hierarchy (BVH) algorithm and the denoising filter both also need to be refined. To get maximum performance, shaders for all the objects in the scene need to be loaded into GPU memory and ready to go when intersections need to be calculated.
 
As per Nvidia, this new DLSS feature offers the highest-quality AA, with fewer artifacts than other forms of AA.
Maybe according to Nvidia... According to reality, DLSS is more or less useless in its current form, performing and/or looking worse than some other upscaling methods that don't even require special hardware.

The result either looks blurry, or if the game applies some sharpening afterward, it tends to look a bit like plastic. It's only supported in a small number of games so far, and restricted to certain resolutions and settings, depending on the game and which card one is using. It's kind of a pointless feature when better-looking, better-performing and less restrictive upscaling methods exist...

View: https://www.youtube.com/watch?v=3DOGA2_GETQ




RTX raytracing acceleration appears to be somewhat more useful, but it's only utilized by three games so far. And in those games, raytraced lighting effects cause a large performance hit even with the specialized hardware. I'm sure raytraced effects will become a lot more common in the future, but it's questionable how suitable this first-generation RTX hardware might be for accelerating them in the long term.
 
Maybe according to Nvidia... According to reality, DLSS is more or less useless in its current form, performing and/or looking worse than some other upscaling methods that don't even require special hardware.

Yup, I agree on this. I was just pointing out what Nvidia thinks of this new AA technique. They are also hiding something from the public, though: DLSS actually blurs the image/scene, as is evident from some of the recent games that support it. This has been a controversial topic.

The AI still needs to be trained on these images to produce a sharp result, which is very time-consuming as well.
 

1405

Thanks everyone for the much-needed lesson. Has anyone played BFV with an Nvidia RTX card? Something less than the top-of-the-line RTX 2080 Ti, I mean. Say, an RTX 2070 ;). Is it playable at 1440p? 1080p, maybe?

Btw, is there a free benchmark anywhere to run if one has an RTX card? I don't feel like buying 3DMark just to try the Port Royal benchmark.
 

david_the_guy

How many ray tracing games have been released so far? Very few, so to speak. Nvidia promised RTX support in games, but only a few have implemented it properly. The tech is new, it's demanding, and it's expensive.

I really don't care about DLSS in games. It makes little to no difference to me. Meh.
 
Yup, I agree on this. I was just pointing out what Nvidia thinks of this new AA technique. They are also hiding something from the public, though: DLSS actually blurs the image/scene, as is evident from some of the recent games that support it. This has been a controversial topic.

The AI still needs to be trained on these images to produce a sharp result, which is very time-consuming as well.
Blurring is how AA is done in every form I'm aware of. They probably just go overboard with the tech to make the image look so blurry.
 
Btw, is there a free benchmark anywhere to run if one has an RTX card? I don't feel like buying 3DMark just to try the Port Royal benchmark.

Do you already have an RTX GPU with you right now? There are some benchmarks, like the Final Fantasy XV benchmark ( http://benchmark.finalfantasyxv.com/na/ ), but that one is basically for testing DLSS. You can also try one of the UNIGINE benchmarks, though I'm not sure about full RTX support.

Or, if you are interested in some of the RTX demos as well, so that you can try them on your card, then kindly go through the link below. There are three demos to test:

Reflections RTX Tech Demo.
Atomic Heart RTX Tech Demo.
Justice RTX Tech Demo.


More details and download links can be found here. Report back with the results, if possible.

https://www.nvidia.com/en-us/geforc...e-reflections-nvidia-rtx-tech-demo-downloads/

Blurring is how AA is done in every form I'm aware of. They probably just go overboard with the tech to make the image look so blurry.

I think that also depends on the type of AA technique used. For example, FXAA (Fast Approximate Anti-Aliasing) produces a blurry image, unlike alternative forms of AA such as MSAA or SSAA/SMAA. Even TXAA still has some blurriness to it.
 
I presume my card, a GTX 1660 Ti, also supports ray tracing?

Yes, the card supports DXR ray tracing, but the performance is going to be sub-par, depending on the graphics and DXR settings applied (low/medium/high), even at 1080p. With the new GeForce 425.31 drivers, Nvidia brought support for ray tracing via DXR to the GeForce GTX 1660 Ti and GTX 1660, along with the GTX 1080 Ti, GTX 1080, GTX 1070 Ti, GTX 1070, and GTX 1060 (6GB).

The Titan X and Titan Xp are also supported.

Be warned, however, that your graphics settings will need to be dialed down in order to comfortably play the current crop of games and demos that support ray tracing, which includes Battlefield V, Atomic Heart, Metro Exodus, Shadow of the Tomb Raider and Justice. That's because the last-generation GTX cards use traditional shader cores and don't have dedicated RT cores like the new RTX cards do.

Something else worth noting is how the GTX 1660 / 1660 Ti compare to the 10-series cards with ray tracing.

These new Turing cards can do integer and floating-point calculations at the same time, whereas Pascal and earlier architectures run both FP32 and INT work through the same CUDA cores. What that means is that even though a card like the GTX 1070 (5,783 GFLOPS) has more computational power in theory than the GTX 1660 (5,027 GFLOPS), in some tests the GTX 1660 ends up being faster with ray tracing/RTX.


[Chart: Metro Exodus Global Illumination performance]

[Chart: Battlefield V performance]
 
I think that also depends on the type of AA technique used. For example, FXAA (Fast Approximate Anti-Aliasing) produces a blurry image, unlike alternative forms of AA such as MSAA or SSAA/SMAA. Even TXAA still has some blurriness to it.
How much blur occurs depends on what form is used, but the whole tech is based on blurring the lines to smooth the image out. AA is blur in all forms.
 
How much blur occurs depends on what form is used, but the whole tech is based on blurring the lines to smooth the image out. AA is blur in all forms.
Actual supersampling (SSAA) involves what amounts to rendering the scene at a higher resolution than what the screen can display, then shrinking the image to fit, combining multiple pixels to more accurately represent the color of the area covered by any given pixel. So you don't end up with as many harsh pixels that stand out against their surroundings in unnatural ways. An example would be rendering a scene at 4K for display on a 1080p monitor. Each pixel would be made up of four samples combined, as opposed to just one. So, while the result technically loses detail from the original 4K source image, the monitor couldn't natively display that anyway, and the resulting image effectively contains more detail than what native 1080p rendering would provide, and reduces inaccurate jagged edges as a result. However, rendering at a higher resolution makes this by far the most demanding form of anti-aliasing, so it's only really good for situations where you have graphics performance to spare.
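
A minimal sketch of that resolve step, assuming a clean 2x scale factor in each axis (e.g. 4K down to 1080p), so each output pixel is simply the average of a 2x2 block of samples:

```python
def ssaa_resolve_2x(image):
    """Average each 2x2 block of the high-resolution render into one output pixel,
    i.e. four samples combined per displayed pixel."""
    out_h, out_w = len(image) // 2, len(image[0]) // 2
    result = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            block = (image[2 * y][2 * x] + image[2 * y][2 * x + 1] +
                     image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1])
            row.append(block / 4.0)
        result.append(row)
    return result

# A hard black/white diagonal edge rendered at the higher resolution: after the
# resolve, pixels straddling the edge get intermediate grey values instead of a
# harsh stair-step, which is the anti-aliasing effect described above.
hi_res = [[255 if x > y else 0 for x in range(8)] for y in range(8)]
for row in ssaa_resolve_2x(hi_res):
    print([round(v) for v in row])
```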

MSAA does something similar, but only performs the supersampling on small regions of the image where jagged edges are likely to be noticeable, leaving the rest of the scene alone, and as a result it performs a lot better since it is only supersampling a very limited portion of the scene. Unfortunately, a lot of newer game engines use rendering techniques that are not fully compatible with this form of anti-aliasing, so it's not used quite as often these days.

In recent years, various post-process AA methods like FXAA have become common. They have the advantage of working with all rendering techniques, and typically have very little performance impact, but are not particularly accurate and are more prone to blurring details. Since they are applied after the image has already been rendered, they rely heavily on guesswork to get their results, as they don't have additional samples to work with. Some will even perform these effects on a lower-than-native resolution image to provide a boost in performance, effectively doing the opposite of supersampling, generally resulting in a noticeably blurrier image.
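
A crude sketch of that idea (loosely FXAA-shaped, not the actual algorithm): working only from the finished frame, flag pixels that differ sharply from their neighbours and blend them toward the neighbourhood average.

```python
def postprocess_aa(image, threshold=96):
    """Post-process AA on a finished greyscale frame: where a pixel stands out
    sharply from its four neighbours, blend it toward their average. There are
    no extra samples to consult, so this is guesswork -- which is also why it
    can soften legitimate detail."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbours = (image[y - 1][x] + image[y + 1][x] +
                          image[y][x - 1] + image[y][x + 1]) / 4.0
            if abs(image[y][x] - neighbours) > threshold:     # likely a jagged edge
                out[y][x] = (image[y][x] + neighbours) / 2.0  # soften it
    return out
```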

Nvidia's DLSS is effectively more along the lines of this last type of post-process AA, as at least in its current form, it's rendering the scene at a lower-than-native resolution and trying to fabricate details to fill in the gaps. It uses some AI techniques to improve this guesswork based on pre-computed data, but the resulting image is still quite blurry. Some of the handful of games that support it also apply post-process sharpening afterward to reduce the blurriness, but that introduces more visual artifacts. DLSS is more a means of improving performance at the expense of image quality rather than something to enhance image quality. And it doesn't even do that particularly well, since there are other upscaling techniques that can offer slightly better performance and/or quality without the need for special hardware. Now, if DLSS were able to actually enhance an image at native resolution and add detail better than other forms of post-process AA, that could arguably make it worthwhile, but I have some doubts that the dedicated hardware present in their current cards has the performance to enable that, or they probably would have done so already.

So, while post-process AA tends to blur an image, and upscaling techniques like the current implementation of DLSS provide even lower detail still, forms of AA that utilize supersampling like SSAA and MSAA are actually adding true detail to the scene that otherwise wouldn't be there. That can soften edges a bit, but that's because those edges are becoming more accurate, removing the aliasing artifacts that can make the edges of objects appear artificially jagged. In a similar way, the text on your screen right now is likely utilizing a form of anti-aliasing to appear smoother. This isn't blurring the text, so much as it is incorporating detail beyond what your monitor could otherwise display if it only incorporated one sample per pixel.
 

lux1109

Yes, the card supports DXR ray tracing, but the performance is going to be sub-par, depending on the graphics and DXR settings applied (low/medium/high), even at 1080p. [...]



Thanks for the explanation and info, that was very helpful! :) I will do some testing and report back.
 
Actual supersampling (SSAA) involves what amounts to rendering the scene at a higher resolution than what the screen can display, then shrinking the image to fit, combining multiple pixels to more accurately represent the color of the area covered by any given pixel. [...]
When you reduce the image and combine multiple pixels into one, picking a color that is an average of those pixels, that is blurring. Even SSAA results in a slightly blurred image. It's just as bad as post-process AA. Or rather, the blurring of pixels with SSAA and MSAA is better than jagged edges, but the image isn't as sharp once it's done.
 
What is the difference between Ray Tracing, RTX, and DLSS? Are they all the same thing?

Ray tracing? That is a rendering method; it has existed since the late '70s/early '80s.
RTX is Nvidia's implementation for speeding up ray tracing through the use of dedicated hardware. That is the simplest way to think of it, though in reality it's a bit more complicated than that, because RTX is not just about ray tracing.
DLSS? A new form of AA that Nvidia has been trying to push by taking advantage of machine learning and Tensor cores, and it does not work the way traditional AA works.
 
When you reduce the image and combine multiple pixels into one, picking a color that is an average of those pixels, that is blurring. Even SSAA results in a slightly blurred image. It's just as bad as post-process AA. Or rather, the blurring of pixels with SSAA and MSAA is better than jagged edges, but the image isn't as sharp once it's done.
Again, I'm not sure I would describe that as "blurring" when compared to an image that is sampled at a lower resolution to begin with. In my prior example, the 4K source image is losing detail when it is downsampled to 1080p, but from the perspective of that final resolution, the area covered by each pixel is more accurately represented by the downsampling process. You are increasing the accuracy of the image compared to native 1080p rendering, as opposed to post-process AA and upsampling, where accuracy is getting lost.

A native 1080p image might be more crisp, but that's only because it is a rougher representation of what the image would look like if rendered at a higher resolution. And sure, using more samples per pixel can result in a perceived loss of texture detail, since there will be fewer contrasting pixels that stand out, but those contrasting pixels are an artifact resulting from the limited resolution, and will tend to cause shimmering and other visual anomalies when in motion. Certainly there can be a tradeoff between having things look sharp, and having them look accurate when resolution is limited though.
 
Again, I'm not sure I would describe that as "blurring" when compared to an image that is sampled at a lower resolution to begin with. In my prior example, the 4K source image is losing detail when it is downsampled to 1080p, but from the perspective of that final resolution, the area covered by each pixel is more accurately represented by the downsampling process. You are increasing the accuracy of the image compared to native 1080p rendering, as opposed to post-process AA and upsampling, where accuracy is getting lost.

A native 1080p image might be more crisp, but that's only because it is a rougher representation of what the image would look like if rendered at a higher resolution. And sure, using more samples per pixel can result in a perceived loss of texture detail, since there will be fewer contrasting pixels that stand out, but those contrasting pixels are an artifact resulting from the limited resolution, and will tend to cause shimmering and other visual anomalies when in motion. Certainly there can be a tradeoff between having things look sharp, and having them look accurate when resolution is limited though.
I'm just saying the tech is based on smoothing/blurring lines to make them appear less jagged. MSAA and SSAA variants get help from the game engine to achieve a better result and post-processing versions don't, which results in less clean solutions that look blurrier. My point is still that the tech is based on blurring some or all of the image, in whatever form it comes in.

I'm also not so sure you can call downsampling more accurate, because you are now using colors that did not exist in the original higher-resolution image as a way to make it look smoother around lines. It is more appealing in most cases; I can agree with that.
 
How many ray tracing games have been released so far? Very few, so to speak. Nvidia promised RTX support in games, but only a few have implemented it properly. The tech is new, it's demanding, and it's expensive.

I really don't care about DLSS in games. It makes little to no difference to me. Meh.
You can't expect much more than what we got with GPU-accelerated PhysX games, at least not until further in the future. For now, there aren't many GPUs that can support it in any form, and the ones that do are still very limited in performance. The tech is in its infancy for gaming.
 

1405

Do you already have an RTX GPU with you right now? There are some benchmarks, like the Final Fantasy XV benchmark ( http://benchmark.finalfantasyxv.com/na/ ), but that one is basically for testing DLSS. [...]



I think that also depends on the type of AA technique used. For example, FXAA (Fast Approximate Anti-Aliasing) produces a blurry image, unlike alternative forms of AA such as MSAA or SSAA/SMAA. Even TXAA still has some blurriness to it.
Thank you! I will jump on that. Oh, and yes, I just bought an RTX 2070 hoping it would allow max settings at 1080p on the few RT games out so far. Waiting mostly for Atomic Heart.
 