News: Nvidia GPU Subwarp Interleaving Boosts Ray Tracing by up to 20%

VforV

Respectable
BANNED
Oct 9, 2019
What I don't get is: why is this not coming to Lovelace?

You would think six months from now is enough time to add this. We only found out about it now because the paper was published, but Nvidia could actually have been working on it for a year or more... so how is the author of this article sure it won't come to Lovelace?
 

Murissokah

Distinguished
Aug 12, 2007
What I don't get is: why is this not coming to Lovelace?

You would think six months from now is enough time to add this. We only found out about it now because the paper was published, but Nvidia could actually have been working on it for a year or more... so how is the author of this article sure it won't come to Lovelace?
This is still academic work. They are just proving that the concept is worth pursuing. This comes before NVidia even considers integrating the technology into their stack. And since it requires microcode changes, there's no chance they'd do it 6 months before release. More like 2 years down the road to see it in a product.
 

VforV

Respectable
BANNED
Oct 9, 2019
This is still academic work. They are just proving that the concept is worth pursuing. This comes before NVidia even considers integrating the technology into their stack. And since it requires microcode changes, there's no chance they'd do it 6 months before release. More like 2 years down the road to see it in a product.
But how do we know they haven't actually been working on this for the past 2 years?

Is it mandatory to submit this at the start of the research? Because if not, they could be in the middle, or at the end...
 

Murissokah

Distinguished
Aug 12, 2007
But how do we know they haven't actually been working on this for the past 2 years?

Is it mandatory to submit this at the start of the research? Because if not, they could be in the middle, or at the end...
It's not a matter of regulation; it's a matter of going through all the necessary steps. Up until now this was merely an idea. As the article states, they have only just published a paper that "...shows early promise in ray tracing microbenchmark studies." If you look at the paper, you'll see that the test data comes from a simulator. This is a proof of concept: they managed to show that the design changes they propose could improve ray tracing performance by an amount significant enough to be worth pursuing.

NVidia is likely to follow up on this because ray tracing performance is one of their top goals, if not the top one. What this means is that they will now have to evaluate the effect these changes will have on the rest of the feature set, and then design and produce prototypes. This is likely to take a year on its own. After that is done, they have to integrate the design into the next architecture. Finally, once all that is done, there's the matter of allocating and ramping up production, which requires billion-dollar deals with third parties (Samsung made the 3000 family GPUs, TSMC will make the 4000s).

The only reason I say this might be on the market in 2 years is that NVidia is eager for this kind of improvement. Otherwise 3 to 4 years would be more reasonable.
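For anyone curious what "subwarp interleaving" actually buys, here's a rough toy model in Python. This is my own sketch, not the paper's simulator or NVidia's design; the cycle counts, the 20% hidden-latency figure, and both function names are made up purely for illustration:

```python
# Toy SIMT model: why divergent ray tracing warps waste cycles, and how
# letting subwarps run independently can claw some of that back.
# Illustrative sketch only, NOT the paper's simulator or NVidia's design.

def serialized_cycles(branch_cycles, mask_counts):
    """Baseline SIMT: a diverged warp executes each taken branch path
    one after another, so total time is the sum over taken paths."""
    return sum(c for c, n in zip(branch_cycles, mask_counts) if n > 0)

def interleaved_cycles(branch_cycles, mask_counts, hidden_fraction=0.2):
    """Subwarp interleaving (idealized): while one diverged subwarp
    stalls (e.g. on a memory load), another issues instructions.
    Modeled here as hiding a fraction of the serialized cost."""
    base = serialized_cycles(branch_cycles, mask_counts)
    taken = sum(1 for n in mask_counts if n > 0)
    return base if taken < 2 else base * (1.0 - hidden_fraction)

# Example: 24 rays hit geometry (long shade path), 8 miss (short path).
base = serialized_cycles([100, 40], [24, 8])    # 100 + 40 = 140 cycles
inter = interleaved_cycles([100, 40], [24, 8])  # 140 * 0.8 = 112 cycles
print(f"baseline {base}, interleaved {inter:.0f}, "
      f"speedup {base / inter:.2f}x")
```

The point of the sketch is just that divergence serializes the branch paths, and anything that lets the idle lanes do useful work during a stall recovers some of that lost throughput.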
 
Reactions: VforV and renz496

d0x360

Honorable
Dec 15, 2016
It's not a matter of regulation; it's a matter of going through all the necessary steps. Up until now this was merely an idea. As the article states, they have only just published a paper that "...shows early promise in ray tracing microbenchmark studies." If you look at the paper, you'll see that the test data comes from a simulator. This is a proof of concept: they managed to show that the design changes they propose could improve ray tracing performance by an amount significant enough to be worth pursuing.

NVidia is likely to follow up on this because ray tracing performance is one of their top goals, if not the top one. What this means is that they will now have to evaluate the effect these changes will have on the rest of the feature set, and then design and produce prototypes. This is likely to take a year on its own. After that is done, they have to integrate the design into the next architecture. Finally, once all that is done, there's the matter of allocating and ramping up production, which requires billion-dollar deals with third parties (Samsung made the 3000 family GPUs, TSMC will make the 4000s).

The only reason I say this might be on the market in 2 years is that NVidia is eager for this kind of improvement. Otherwise 3 to 4 years would be more reasonable.
It will be in their uarch after Lovelace... probably. It really depends on what kind of changes need to be made to the hardware.

These companies plan hardware 4-6 years out. Changes can be made, but they have to be minor ones. I saw a prototype AMD "chiplet" GPU back when the 290x was king of the hill. It was essentially the 295x, which was a dual-GPU card but was seen by the system as CrossFire, so not every game supported both GPUs.

This prototype was for "7nm", but now it's looking like it will be 5nm; that's close enough for something so far off. The PC with the prototype saw the card as a single GPU, as did games, and performance was 30% higher than the unmodified 295x. It was pretty impressive. I expect to see it in RDNA3, which would be fantastic.

AMD already has the current rasterization crown vs the 3090, and at a much lower price... although you can't find any in stock. They still need to catch up to nVidia in ray tracing, but now nVidia needs to catch up to AMD in DX12, Vulkan and rasterization performance while also staying ahead in ray tracing.

It's impressive what AMD has managed with essentially no R&D budget for over a decade in both the CPU and GPU market segments, yet they caught up to Intel and will catch them again... People seem to forget that AMD hasn't launched Zen 4 yet, and Zen 3+ is still the same architecture with some 3D V-Cache stacked on the die, which alone gave them a 20-30% boost in performance. At the same time they have been slowly but steadily catching up to nVidia, and if RDNA2 had proper dedicated ray tracing hardware they probably would have matched nVidia this gen even if you include DLSS, because AMD could also use machine learning for AA and upscaling. The only reason they couldn't was the lack of proper hardware, combined with the rush they were in due to market pressure.

The real question is: if they saw even slight performance gains on current hardware, why can't they implement it now? Even if it's just a 5% gain... that's still a gain, and could be the difference between a locked 60 fps and one that only sometimes hits 60.

I mean, they said they used current hardware to test this, so it obviously works on current hardware. Sure, it doesn't work as well as it will with an architectural change, but it works.
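To put numbers on that "locked 60" point, here's some back-of-the-envelope frame-time arithmetic. All the millisecond figures are hypothetical; the key wrinkle is that a ray tracing speedup only shrinks the RT slice of the frame, not the whole thing:

```python
# Back-of-the-envelope: an RT-only speedup shrinks just the RT slice of
# the frame, but that can still be enough to get under the 16.67 ms
# budget for a locked 60 fps. All numbers are made up for illustration.

def fps(frame_ms):
    return 1000.0 / frame_ms

raster_ms, rt_ms = 9.0, 8.4       # 17.4 ms frame -> ~57 fps, misses 60
rt_speedup = 1.10                 # suppose the RT work gets 10% faster
new_frame_ms = raster_ms + rt_ms / rt_speedup

print(f"before: {fps(raster_ms + rt_ms):.1f} fps, "
      f"after: {fps(new_frame_ms):.1f} fps")
# the whole frame only got ~4.6% faster, yet it now clears 60 fps
```

So even a single-digit whole-frame gain can be the difference between dipping below 60 and holding it, which is exactly the argument for shipping small wins early.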
 
Reactions: VforV

VforV

Respectable
BANNED
Oct 9, 2019
It's not a matter of regulation; it's a matter of going through all the necessary steps. Up until now this was merely an idea. As the article states, they have only just published a paper that "...shows early promise in ray tracing microbenchmark studies." If you look at the paper, you'll see that the test data comes from a simulator. This is a proof of concept: they managed to show that the design changes they propose could improve ray tracing performance by an amount significant enough to be worth pursuing.

NVidia is likely to follow up on this because ray tracing performance is one of their top goals, if not the top one. What this means is that they will now have to evaluate the effect these changes will have on the rest of the feature set, and then design and produce prototypes. This is likely to take a year on its own. After that is done, they have to integrate the design into the next architecture. Finally, once all that is done, there's the matter of allocating and ramping up production, which requires billion-dollar deals with third parties (Samsung made the 3000 family GPUs, TSMC will make the 4000s).

The only reason I say this might be on the market in 2 years is that NVidia is eager for this kind of improvement. Otherwise 3 to 4 years would be more reasonable.
I understand now, thank you.
 
Reactions: Murissokah

Blitz Hacker

Honorable
Jul 17, 2015
What I don't get is: why is this not coming to Lovelace?

You would think six months from now is enough time to add this. We only found out about it now because the paper was published, but Nvidia could actually have been working on it for a year or more... so how is the author of this article sure it won't come to Lovelace?
Production of Lovelace is likely already happening, or about to. Doing a complete redesign on parts that have already been ordered is very unlikely. Most modern processors and GPUs have at least a 6-month lead time, with engineering samples sent to board partners and others so they can design PCBs for the GPUs and memory kits that Nvidia sells. So maybe we will see this implemented post-Lovelace, provided it pans out into tangible real-world RT performance, which hasn't really been shown yet.
 

VforV

Respectable
BANNED
Oct 9, 2019
Production of Lovelace is likely already happening, or about to. Doing a complete redesign on parts that have already been ordered is very unlikely. Most modern processors and GPUs have at least a 6-month lead time, with engineering samples sent to board partners and others so they can design PCBs for the GPUs and memory kits that Nvidia sells. So maybe we will see this implemented post-Lovelace, provided it pans out into tangible real-world RT performance, which hasn't really been shown yet.
I know hardware tech is designed and planned years in advance; what I did not know was how this new research specifically would make it into a product, with regard to regulation, which Murissokah already explained...
 
