News: Watch AMD's DirectX Raytracing Demo on Future RDNA 2 GPU

alextheblue

Distinguished
One thing we have learned from Nvidia's attempts to promote ray tracing over the past 18 months: It's very hard to come up with a 'must have' use case for the technology that doesn't tank performance on lesser GPUs.
Bingo. We'll see how it goes, but really anything below 2080 level right now takes a serious beating if you have a title that uses RT heavily. Hoping next-gen brings at least that level of RT performance down to the mid-tier cards.
 
Mirrors... Mirrors everywhere... I kind of think the demo would have had more of an impact had they started it in a room with limited reflective surfaces, focusing instead on effects like raytraced global illumination and shadows, before heading outside into mirror-land. And maybe reduce the number of mirrors out there a bit. It was kind of hard to tell if other raytraced effects were getting utilized with practically everything given a mirror finish. Or maybe that was the point. It's possible that each company's raytracing hardware might handle certain effects better than the other's, and they may be trying to showcase something that might not run as well on Nvidia's hardware.

What are the bandwidth requirements for something like this to run smoothly? (i.e. PCIe 4.0 suddenly making sense?)
I doubt it. If anything, the reduced framerates when raytracing will likely reduce demand on the PCIe bus, unless perhaps AMD is doing something like offloading a major part of the raytracing workload to the CPU and transferring a lot of additional data in the process. Indications are that graphics cards are not coming close to the performance limits of a PCIe 3.0 x16 slot, though, and probably won't be for some years to come.
 

Chung Leong

Reputable
It was kind of hard to tell if other raytraced effects were getting utilized with practically everything given a mirror finish. Or maybe that was the point. It's possible that each company's raytracing hardware might handle certain effects better than the other's, and they may be trying to showcase something that might not run as well on Nvidia's hardware.

Perfectly reflective surfaces aren't hugely taxing, actually. One ray in, one ray out. When a ray spawns multiple secondary rays upon hitting a surface, that's when it gets hard. If the hardware is only capable of doing what's shown in this demo, then it's decidedly inferior to RTX.
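
Just to put code behind the "one ray in, one ray out" point: a perfect mirror hit spawns exactly one secondary ray, via the standard reflection formula. A minimal Python sketch (my own toy illustration, nothing from the demo):

```python
def reflect(d, n):
    """Mirror an incident direction d about a unit surface normal n:
    r = d - 2(d.n)n. One ray in, one ray out."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A 45-degree ray hitting an upward-facing floor bounces straight back up:
print(reflect((0.707, -0.707, 0.0), (0.0, 1.0, 0.0)))  # (0.707, 0.707, 0.0)
```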
 
If the hardware is only capable of doing what's shown in this demo, then it's decidedly inferior to RTX.
Well, I'm pretty sure it will be capable of more than that. Microsoft has already been talking about how DXR will be offering "improved lighting, shadows and reflections as well as more realistic acoustics and spatial audio" on the new Xbox, so AMD's raytracing implementation should be more or less feature complete with Nvidia's, from the sound of it. The question comes down to performance though, as we have no real indication of how any of this hardware will perform compared to the first-gen RTX cards. It could be substantially faster, for all we know, but there's no way of telling until side-by-side comparisons can be made. And the same goes for the next generation of RTX cards. We'll likely know more later in the year.

And of course, this demo doesn't seem to have been done by a big game developer or anything. The art and animations in general have a somewhat budget, tech-demo feel to them, even compared to something like the recent 3DMark demos, so things will likely look a lot more impressive in the hands of skilled developers with larger budgets to work with.
 

Chung Leong

Reputable
The question comes down to performance though, as we have no real indication of how any of this hardware will perform compared to the first-gen RTX cards. It could be substantially faster, for all we know, but there's no way of telling until side-by-side comparisons can be made.

Given the pressure on price, it wouldn't be unreasonable for Microsoft to support just the obvious and forego the more subtle. Puddles! Woohoo!
 

hannibal

Distinguished
This is still early-stage raytracing. In a few years we will get GPUs that are fast enough... Next-gen Ampere is not yet fast enough, and neither are these consoles. Too early for that. A couple more years of raytracing development are needed.
 

It was a good demo and technically superior to NVIDIA's Star Wars demo.

HOWEVER, it wasn't as neat, subtle, or balanced in its presentation.

The objective of ray tracing is NOT to show horsepower with everything being a mirror, but to make things more realistic with caustics, shadows, ambient lighting, refraction, and some reflective surfaces.
 

joeblowsmynose

Distinguished
Nvidia's Star Wars demo was close to photorealistic. This demo, meanwhile, proves the assertion that ray-tracing with primary rays alone is practically just rasterization.

Exactly, that's why we've had environment-space raytraced reflections in games for years now ... it requires absolutely nothing special. I think I saw a demo of a guy running this AMD demo on his calculator. [/s]

How does it prove that (aka "your") assertion?

The main thing with raytracing is the ability to cast rays out of screen space -- this is the main advantage and what allows it to work. The number of rays allowed to be cast in unseen space will dictate the overall quality of the reflections and GI. You can see in this demo that it appears to be done very well.

Apparently when Nvidia shows off high-quality environment-space reflections it's awesome, but when AMD does it, "it's nothing".

I would hold off your assured judgement on how "bad" this is until you see what actual game artists do with it, and not a demo technician's attempt. Also, your "assertions" are pretty flimsy ... I've been working with RT for 10+ years now as a 3D artist; I'm not just pulling these comments out of my ass.
 

joeblowsmynose

Distinguished
Mirrors... Mirrors everywhere... I kind of think the demo would have had more of an impact had they started it in a room with limited reflective surfaces, focusing instead on effects like raytraced global illumination and shadows, before heading outside into mirror-land. And maybe reduce the number of mirrors out there a bit. ...

Yeah, it was quite cheesy ... they should have hired a proper game studio to throw something together that was actually "game-like" and not so "cheesy tech demo"-like.
 

bit_user

Polypheme
Ambassador
Perfectly reflective surfaces aren't hugely taxing, actually. One ray in, one ray out. When a ray spawns multiple secondary rays upon hitting a surface, that's when it gets hard.
Their reflections look very nice - I don't see any aliasing in them. Did you ever consider what it takes to avoid aliasing in ray-traced reflections off curved surfaces?
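
For anyone wondering what it does take: the usual fix is firing several jittered rays per pixel and averaging them, which multiplies the ray budget. A rough Python sketch of the idea (illustrative only; `trace` here is a stand-in for a real shader, not any engine's API):

```python
import random

def render_pixel(trace, x, y, samples=16):
    """Average several jittered rays per pixel. Curved mirrors squeeze a
    lot of scene detail into a few pixels, so one ray per pixel aliases."""
    r = g = b = 0.0
    for _ in range(samples):
        u = x + random.random()  # random sample position
        v = y + random.random()  # inside the pixel's footprint
        cr, cg, cb = trace(u, v)
        r, g, b = r + cr, g + cg, b + cb
    return (r / samples, g / samples, b / samples)

# e.g. against a dummy gradient shader:
print(render_pixel(lambda u, v: (u / 640, v / 480, 0.0), 320, 240))
```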

They also have some chrome-like reflective surfaces, if you look closely.

If the hardware is only capable of doing what's shown in this demo, then it's decidedly inferior to RTX.
Yes, but not for the reasons you cite. What concerns me is the apparent lack of Global Illumination. Nvidia's GTX 1660 can almost do raytraced reflections at a decent framerate, and that's with no hardware assist. So, let's hope AMD has more than just reflections up their sleeve.
 
No, I'm pretty sure Nvidia's Star Wars demo used global illumination, which this demo seems to lack.
Global illumination is relatively simple compared to complex reflections. It's just a series of bounding boxes for the element in question. The number of rays required dramatically drops off with bounding boxes.

POVRay did something similar to speed up rendering. I believe the bounding-box code was introduced in 1994.

I wrote support code for POVRay back in college.
 

bit_user

Polypheme
Ambassador
Global illumination is relatively simple compared to complex reflections.
Not really. It requires a lot more rays and sophisticated denoising.

In GI, virtually every hit involves bounces - not just on reflective surfaces!

Performance data backs me up on this. Check out the benchmarks of games that support GI - it's the most compute-intensive (and therefore least-used) ray tracing feature supported by RTX.

It's just a series of bounding boxes for the element in question.
All accelerated ray tracing uses bounding-volume hierarchies. There's still geometry underneath. A BVH is just a data structure to accelerate the search phase of intersection tests.
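
In sketch form, that search-phase speedup looks roughly like this (a runnable toy of my own in Python, not DXR or any real API; a real tracer would do exact triangle tests at the leaves where I just return a stored hit distance):

```python
from dataclasses import dataclass

@dataclass
class Box:
    lo: tuple  # minimum (x, y, z) corner
    hi: tuple  # maximum (x, y, z) corner

def ray_hits_box(origin, inv_dir, box):
    """Standard slab test: does the ray intersect the axis-aligned box?"""
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box.lo, box.hi):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmin <= tmax

@dataclass
class Node:
    box: Box
    left: "Node" = None
    right: "Node" = None
    leaf_hit: float = None  # stand-in for real triangle tests at a leaf

def intersect_bvh(node, origin, inv_dir):
    """Prune whole subtrees whose bounding box the ray misses."""
    if node is None or not ray_hits_box(origin, inv_dir, node.box):
        return None                        # entire subtree skipped
    if node.leaf_hit is not None:
        return node.leaf_hit
    hits = [h for h in (intersect_bvh(node.left, origin, inv_dir),
                        intersect_bvh(node.right, origin, inv_dir))
            if h is not None]
    return min(hits) if hits else None     # nearer hit wins

# A ray along +x at y=z=0.5 tests leaf_a but prunes leaf_b entirely:
inf = float("inf")
leaf_a = Node(Box((0, 0, 0), (1, 1, 1)), leaf_hit=2.0)
leaf_b = Node(Box((0, 5, 0), (1, 6, 1)), leaf_hit=7.0)
root = Node(Box((0, 0, 0), (1, 6, 1)), left=leaf_a, right=leaf_b)
print(intersect_bvh(root, (-1.0, 0.5, 0.5), (1.0, inf, inf)))  # 2.0
```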

in 1994.

I wrote support code for POVRay back in college.
I was using POVRay in the early 90's. Before then, I didn't think ray tracing was possible on a PC. Of course, I also hadn't considered tracing rays back from each image pixel.

One of the very first things I tried, in POVRay, was an experiment to see if it did caustics. I think that's when I figured out how they got it so fast - by tracing backwards, instead of forwards.
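
That backwards trick in miniature: rather than tracing forward from the lights (where almost every ray would miss the camera), you fire exactly one ray per image pixel from the camera. A toy Python version of my own, assuming a simple pinhole camera looking down -z:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def primary_ray(cam_pos, px, py, width, height):
    """One backward-traced ray: from the camera through pixel (px, py)."""
    aspect = width / height
    x = (2.0 * (px + 0.5) / width - 1.0) * aspect  # pixel -> image plane
    y = 1.0 - 2.0 * (py + 0.5) / height
    return cam_pos, normalize((x, y, -1.0))

# A 640x480 frame costs exactly 640*480 primary rays, however many light
# sources the scene has -- which is why it was fast enough for a PC.
origin, direction = primary_ray((0.0, 0.0, 0.0), 320, 240, 640, 480)
print(direction)
```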
 

Chung Leong

Reputable
No, I'm pretty sure Nvidia's Star Wars demo used global illumination, which this demo seems to lack.

If it's not in the demo then it probably means the hardware isn't powerful enough. This is going to be a $200 GPU after all. The next-gen should be much better at faking GI though. As someone pointed out in another thread, inclusion of fast SSD storage means game engines will have access to a much larger pool of assets. Instead of simulating light directly, RT can be used to intelligently choose from large sets of pre-baked lightmaps. Such an approach would yield good performance and scenes that largely look right.
 
Not really. It requires a lot more rays and sophisticated denoising.

In GI, virtually every hit involves bounces - not just on reflective surfaces!

Performance data backs me up on this. Check out the benchmarks of games that support GI - it's the most compute-intensive (and therefore least-used) ray tracing feature supported by RTX.


All accelerated ray tracing uses bounding-volume hierarchies. There's still geometry underneath. A BVH is just a data structure to accelerate the search phase of intersection tests.


I was using POVRay in the early 90's. Before then, I didn't think ray tracing was possible on a PC. Of course, I also hadn't considered tracing rays back from each image pixel.

One of the very first things I tried, in POVRay, was an experiment to see if it did caustics. I think that's when I figured out how they got it so fast - by tracing backwards, instead of forwards.

Cheating reflections is done by just casting a texture onto a surface. That render comes from a viewport-change draw call to a memory buffer.

GI puts a bounding box around the object, so the intersection test fails if the secondary object is outside the box. Thus the number of hits dramatically drops off. POVRay 1 didn't use bounding boxes. It was painfully slow.

True reflections, as shown, are computationally expensive because rays can keep bouncing ad infinitum. Imagine those infinity mirrors that face each other. Thus you have to calculate everything in the viewing frustum with a trace if the reflection depth is > 1.
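
That infinity-mirror blowup is normally tamed with a hard bounce cap. A runnable toy of my own (a one-dimensional "infinity mirror": two surfaces that keep reflecting each other until the depth limit cuts the recursion off):

```python
MAX_DEPTH = 4  # without a cap, two facing mirrors would recurse forever

def trace(colors, i, reflectivity, depth=0):
    """Each bounce is one more recursive trace between the two facing
    surfaces; the depth cap is what bounds the cost (and the recursion)."""
    if depth >= MAX_DEPTH:
        return 0.0                         # give up: contribute nothing
    bounced = trace(colors, 1 - i, reflectivity, depth + 1)
    return (1.0 - reflectivity) * colors[i] + reflectivity * bounced

# Two mirrors with base brightness 0.8 and 0.4, 90% reflective:
print(trace([0.8, 0.4], 0, 0.9))  # terminates only because of MAX_DEPTH
```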
 

bit_user

Polypheme
Ambassador
If it's not in the demo then it probably means the hardware isn't powerful enough. This is going to be a $200 GPU after all. The next-gen should be much better at faking GI though. As someone pointed out in another thread, inclusion of fast SSD storage means game engines will have access to a much larger pool of assets. Instead of simulating light directly, RT can be used to intelligently choose from large sets of pre-baked lightmaps. Such an approach would yield good performance and scenes that largely look right.
That would work for explosions that happen in a pre-determined place, but not for moving light sources. It's not a very general solution.

BTW, Quake I (released in 1996) was the first game to use pre-baked global illumination light maps. The initial release did not support any 3D accelerators (though they later ported it to a couple) and would run okay on a 486 DX2-66.
 

bit_user

Polypheme
Ambassador
GI puts a bounding box around the object, so the intersection test fails if the secondary object is outside the box. Thus the number of hits dramatically drops off. POVRay 1 didn't use bounding boxes. It was painfully slow.
So, you're saying you don't get secondary illumination off a surface that's too far away? That doesn't seem right - what if the surface is very big, or the light source is particularly bright?

I think it probably just uses the same BVH acceleration as all other use cases, but you just have a lot more rays (and bounces).
 
So, you're saying you don't get secondary illumination off a surface that's too far away? That doesn't seem right - what if the surface is very big, or the light source is particularly bright?

I think it probably just uses the same BVH acceleration as all other use cases, but you just have a lot more rays (and bounces).

For non-light-emitting sources, that is correct. The GI effect dramatically drops off as the secondary, non-illuminating object distances itself. In other words, if you have a white ball next to a red wall, the light bounces off that red wall and gives the white ball a red tint on the right side. However, as in real life, secondary illumination is significantly weaker than the primary (direct light). Thus, as the distance increases between surfaces, the effect greatly diminishes and can be safely ignored (the diffuse surface effect combines with 1 / distance squared). That is one of the reasons bounding boxes can be used. (At least that's how I remember it from POVRay's source code back in the 90's, when I reverse engineered it.) There's an option on the rendering command line that lets you draw the bounding box limits for GI, so you can see how the objects are affected by bounding boxes.
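
A back-of-the-envelope version of that falloff argument, with made-up numbers (my own illustration, not POVRay's actual model):

```python
def bounce_contribution(direct_intensity, albedo, distance):
    """Diffuse secondary light falls off with 1/d^2: a bounce that
    matters at half a metre is negligible a few metres away."""
    return direct_intensity * albedo / distance ** 2

# Red wall (albedo 0.6) lit at intensity 1.0, tinting a nearby white ball:
for d in (0.5, 1.0, 2.0, 4.0):
    print(d, bounce_contribution(1.0, 0.6, d))
# 0.5 -> 2.4, 1.0 -> 0.6, 2.0 -> 0.15, 4.0 -> 0.0375 (safe to cull far pairs)
```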

Combine a bit of random sampling with temporal filters used for de-noising between random samples (used on today's hardware), and you have a nice efficient GI effect.

With multiple reflections, each surface has to be rendered in multiple passes, each requiring full rays on every point and not random sampling. A randomly sampled reflection would not appear correct, as it has to be point-accurate (low diffuse factor). If it were randomly sampled like GI, then you would get a distorted reflection that is quite blurry.

If you want, I can create a quick, simple POVRay file with various command-line parameters to demonstrate the effect of the number of ray bounces, and how it affects render time when it comes to reflections. It's been probably 10 years, but I'm certain I could whip something up in 20 minutes with two mirrored surfaces facing each other over a semi-reflective floor.
 

joeblowsmynose

Distinguished
No, I'm pretty sure Nvidia's Star Wars demo used global illumination, which this demo seems to lack.

Reflection effects will generally override GI - so even if there were any, you probably wouldn't see it with all those mirrors. ;)
Cheating reflections is done by just casting a texture onto a surface. That render comes from a viewport-change draw call to a memory buffer.

GI puts a bounding box around the object, so the intersection test fails if the secondary object is outside the box. Thus the number of hits dramatically drops off. POVRay 1 didn't use bounding boxes. It was painfully slow.

True reflections, as shown, are computationally expensive because rays can keep bouncing ad infinitum. Imagine those infinity mirrors that face each other. Thus you have to calculate everything in the viewing frustum with a trace if the reflection depth is > 1.

Yes, I am also thinking people don't understand how GI will actually work in a real-time application vs. in a 3D program - it's not the same. Extreme optimizations still happen in the real-time counterpart, whereas optimizations are highly frowned upon in a 3D rendering application because they reduce accuracy. In a game - where everything is moving quickly - accuracy is far less of a concern.


GI and AO rays can be spaced quite a bit apart and blurred heavily, then applied over top of the other shaders, and it still looks amazing. With a mirror reflection, "spaced apart" rays are NOT an option - they have to be tight or they won't reflect properly. The tighter the rays have to be, the more of them you need. It's pretty simple.
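
For the curious, a typical "sloppy" GI ray looks like this: a random, cosine-weighted direction in the hemisphere above the surface, a handful of which per pixel get blurred/denoised into soft indirect light. A runnable Python sketch of my own (the surface normal is fixed to +z for simplicity):

```python
import math, random

def cosine_hemisphere_sample():
    """One cheap GI ray: random direction in the hemisphere around +z,
    cosine-weighted so most samples go where they contribute most."""
    r1, r2 = random.random(), random.random()
    phi = 2.0 * math.pi * r1
    x = math.cos(phi) * math.sqrt(r2)
    y = math.sin(phi) * math.sqrt(r2)
    z = math.sqrt(1.0 - r2)
    return (x, y, z)  # assumes the surface normal is +z

# GI: a few of these per pixel, then blur. A mirror gets no such slack --
# one exact reflected ray per pixel; spread or blur those rays and the
# surface stops looking like a mirror.
print([cosine_hemisphere_sample() for _ in range(3)])
```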

I have even played with this with the Arnold renderer in Max - it is a fully biased renderer, which means I can dictate how many rays to send for various effects: GI, subsurface scattering, direct light, bounced light (GI), reflections, etc. But again, real-time RT will be incredibly heavily optimized, and all GI and AO rays will be spaced and blurred. With reflections, even in real time, I'm guessing you'd want a couple of bounces minimum, but again, every bounce is going to space the rays further apart, so you will need more of them to keep the tightness required to actually create a reflected image.

So all this talk I keep hearing of "Reflections take no computations" is BS. Real-time raytracing still has TONS of intense optimizations - it does not calculate accurately like a renderer in a 3D program at all -- not even close.
 

bit_user

Polypheme
Ambassador
secondary illumination is significantly weaker than the primary (direct light). Thus, as the distance increases between surfaces, the effect greatly diminishes and can be safely ignored
You can't say that for all light sources and surface sizes. If my primary light source is a candle, and my secondary light source is sunlight or an explosion, then even tertiary bounces could overpower direct lighting from the primary.

Likewise, if my secondary illumination is bouncing off a planet I'm orbiting, then I could be miles away from it and it could still be the dominant illumination.

With multiple reflections, each surface has to be rendered in multiple passes, each requiring full rays on every point and not random sampling.
I don't know what you mean by "full rays". Do you mean a uniform sampling grid, relative to the incident ray?

I also don't follow what you mean by selectively rendering objects in multiple passes. Are these screen-space passes, in which case it would only apply to the primary rays?
 

bit_user

Polypheme
Ambassador
Reflection effects will generally override GI - so even if there were any, you probably wouldn't see it with all those mirrors. ;)
The effect of GI might be subtle, but once you're used to it, scenes without it have a very "fake" look to them. Even if you're not used to it, turning it on just gives a much more realistic feel.

And while that demo has quite a lot of reflections, most of the surfaces aren't strongly reflective. More than enough to tell that it wasn't rendered with GI.

Go back and watch Nvidia's Star Wars trailer. That's a demo with reflections literally everywhere, but it just wouldn't have that life-like quality without GI.

GI and AO rays can be spaced quite a bit apart and blurred heavily, then applied over top of the other shaders, and it still looks amazing. With a mirror reflection, "spaced apart" rays are NOT an option - they have to be tight or they won't reflect properly. The tighter the rays have to be, the more of them you need. It's pretty simple.
Yeah, but because GI rays are traced forward, you still need a lot more of them. And RTX doesn't simply blur them, but rather uses a sophisticated deep learning algorithm to infer the correct illumination from relatively few samples.

Seriously, guys, did you not see the benchmarks of the different RTX effects? Are you really saying that GI is no big deal, performance-wise?

Okay, just for you, I went and found it:


It's more than just Pascal benchmarks, though. They go through and analyze the performance impact of each different RTX effect, on a range of different hardware (including RTX).

So all this talk I keep hearing of "Reflections take no computations" is BS.
Literally nobody here is saying that! All I'm saying is that GI is harder than reflections. Not (only) based on what I think, but also on the data.

Real-time raytracing still has TONS of intense optimizations - it does not calculate accurately like a renderer in a 3D program at all -- not even close.
I think you're exaggerating. It does use fewer samples and maybe things like TAA. But one thing people like about ray tracing is that it involves way fewer hacks and optimizations than traditional rendering.

I also don't really see how it's relevant to the discussion, since I was never comparing professional renderers, in the first place.
 