News Nvidia RTX 3080 and Ampere: Everything We Know

One thing I'm kind of worried and surprised about is how these compare to consoles. At the moment it seems like the next-gen consoles are going to be way more powerful than next-gen PC GPUs. I know that's kind of normal, and in another generation or two PCs should dominate again, but it feels wrong that we're all hyped for Ampere/Big Navi while the consoles are actually going to dominate for now. Why don't Nvidia/AMD ever flaunt next-gen specs the way Sony/Microsoft do? Whether or not it's really true, we know the PS5/Series X are aiming for 4K60 or better, but we don't know what Ampere/Big Navi are aiming for. And why are consoles more powerful than PCs at launch, when it's the same manufacturer (AMD)? Just a little confusing to me.
Lol
 
Nope, the possibility of a GTX 2000 series is almost zero. Next-gen Ampere GPUs will all be RTX. The reason NVIDIA initially had to cover the low end with the GTX series was that implementing RT cores and Tensor cores on 12nm was expensive and experimental. This generation they have worked it into an easier, more cost-effective implementation, so there is no need for them to launch a GTX series anymore.
Um, including RT and Tensor cores in GA10x series GPUs will still be expensive. Sure, 7nm allows basically 2.5X as many transistors, but the cost per mm² of 7nm is also roughly 75% higher than 12nm. So a 200 mm² TU117 12nm part costs, say, $75 to manufacture (for Nvidia through TSMC -- this is very approximate and intended for illustration purposes only). Shrink that to 7nm and add more cores plus RT and Tensor cores, and you maybe end up with a 150 mm² 7nm part that now costs closer to $100 to manufacture, which would then mean the end user price is going to be at least $50 higher than before.
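To make the arithmetic explicit, here's a quick back-of-the-envelope sketch using the illustrative figures from the post above (the dollar values are guesses from this thread, not real TSMC pricing):

```python
# Back-of-the-envelope die cost comparison, 12nm vs 7nm.
# All dollar figures are illustrative guesses from the discussion above,
# not actual foundry pricing.

COST_PER_MM2_12NM = 75 / 200                    # implied by "200 mm^2 part costs ~$75"
COST_PER_MM2_7NM = COST_PER_MM2_12NM * 1.75     # "~75% higher cost per mm^2"

def die_cost(area_mm2: float, cost_per_mm2: float) -> float:
    """Rough manufacturing cost of a die, ignoring yield differences."""
    return area_mm2 * cost_per_mm2

tu117_12nm = die_cost(200, COST_PER_MM2_12NM)   # ~$75
shrunk_7nm = die_cost(150, COST_PER_MM2_7NM)    # ~$98

print(f"200 mm^2 12nm die: ~${tu117_12nm:.0f}")
print(f"150 mm^2 7nm die:  ~${shrunk_7nm:.0f}")
```

In other words, even though the hypothetical die shrinks by 25%, the higher cost per mm² eats the savings and then some.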
 
  • Like
Reactions: bit_user
One thing I'm kind of worried and surprised about is how these compare to consoles. At the moment it seems like the next-gen consoles are going to be way more powerful than next-gen PC GPUs. I know that's kind of normal, and in another generation or two PCs should dominate again, but it feels wrong that we're all hyped for Ampere/Big Navi while the consoles are actually going to dominate for now. Why don't Nvidia/AMD ever flaunt next-gen specs the way Sony/Microsoft do? Whether or not it's really true, we know the PS5/Series X are aiming for 4K60 or better, but we don't know what Ampere/Big Navi are aiming for. And why are consoles more powerful than PCs at launch, when it's the same manufacturer (AMD)? Just a little confusing to me.
I wouldn't worry too much. I suspect the next-gen console hardware (on the GPU side) will be roughly as fast as an RTX 2070 Super, maybe 2080 Super at most. And it will depend on the workload. If rumors on RT performance for Ampere prove even close to accurate, even RTX 3060 will stomp all over an RTX 2070 Super -- and in ray tracing performance it could be twice as fast.

Take those rumors with a grain of salt, and non-RT workloads will probably see less of a boost. But I'm still expecting at least 30-50% uplift in performance at each product tier. RTX 2060 + 30% is going to be equal to RTX 2070 Super, even in non-RT workloads. 2060 + 50% would be RTX 2080 Super.
 
  1. Why are GA104 and GA106 projected to have fewer SMs than TU104 and TU106?
  2. Why on earth would Nvidia do gaming chips based on a die with a huge number of FP64 cores? It's a huge waste of space and money. Either they will be converted to FP32 or to Tensor/RT cores, which would mean a redesign.
  3. Which leads us to another assumption: that the gaming chip is going to be Hopper, not Ampere.
1: If GA102 has a lot of SMs and provides a massive boost in performance, plus architectural enhancements, lower tiers may not even need more SMs to outperform current products. Also, there's potentially a new GA103 tier that's really the replacement for TU104 -- everything shifts left one slot. So really, TU102 is replaced by GA102, but TU104 is replaced by GA103, and TU106 is replaced by GA104. That's the current working theory, but again, apply heavy doses of skepticism as the pre-Turing-launch leaks were WAY OFF on final specs in so many ways.

2: Not sure if this is in response to someone in the comments, or the article itself, but I've tried to make it abundantly clear that Nvidia won't be doing large levels of FP64 support. It will probably be the same as Turing and Pascal consumer parts, with two FP64 CUDA cores per SM.

3: It's entirely possible that Ampere ends up being purely for data center, just like Volta. Then Hopper or some as-yet-unknown codename replaces Turing. However, there are clear indications from what Jensen has said that Ampere will cover the full range of gaming and data center uses.
 
Um, including RT and Tensor cores in GA10x series GPUs will still be expensive. Sure, 7nm allows basically 2.5X as many transistors, but the cost per mm² of 7nm is also roughly 75% higher than 12nm. So a 200 mm² TU117 12nm part costs, say, $75 to manufacture (for Nvidia through TSMC -- this is very approximate and intended for illustration purposes only). Shrink that to 7nm and add more cores plus RT and Tensor cores, and you maybe end up with a 150 mm² 7nm part that now costs closer to $100 to manufacture, which would then mean the end user price is going to be at least $50 higher than before.
One thing a lot of us are doing is assuming the present cost of 7nm production. But by the time the RTX 3000 series comes out and matures, and the RTX 3050 or whatever lowest-end GPU they produce hits the production line, 7nm production will be much cheaper and manufactured at a far larger scale than it is today. Pricing is simple: if there is little competition to worry about, there will be a price increase compared to the previous gen, but if there is enough competition, NVIDIA will try to push it out at a much more reasonable price.
 
One thing a lot of us are doing is assuming the present cost of 7nm production. But by the time the RTX 3000 series comes out and matures, and the RTX 3050 or whatever lowest-end GPU they produce hits the production line, 7nm production will be much cheaper and manufactured at a far larger scale than it is today. Pricing is simple: if there is little competition to worry about, there will be a price increase compared to the previous gen, but if there is enough competition, NVIDIA will try to push it out at a much more reasonable price.
I don't expect TSMC 7nm prices to come down much if at all in the near term. Maybe by next year when 3050 is most likely to launch, it could be 10-15% cheaper than now, but all indications are that TSMC is at capacity, meaning lots of demand for its 7nm fabs, meaning it has no reason to cut prices. This is supposedly part of why Ampere is a bit delayed and may have some chips fabbed at Samsung -- TSMC wouldn't give Nvidia a discount. Take that with a grain of salt as usual. (Samsung 7nm or 8nm for a 3050 is very probable, though.)

Anyway, yields should improve some, and using a smaller chip will help a lot for costs. Competition should reduce margins as well, but no one is going to make a product that intentionally sells at a loss. GTX 950/960 used a 227 mm² 28nm chip and sold for $159/$199 at launch. Two years later, GTX 1050/1050 Ti were only slightly faster and used a 132 mm² 16nm chip with a price of $109/$139 at launch -- so a big price cut thanks to the smaller chip, but not a big jump in performance. Then three years later the GTX 1650 arrived and used a much larger 200 mm² 12nm chip that's around 30% faster than the 1050 Ti, with a price of $149 (but more often $159). So price mostly stayed the same while performance went up and the chip got larger (probably thanks to yields, at least in part).

The big question is how Nvidia will approach things with Ampere. 7nm reportedly costs 75% more per mm² (give or take) than TSMC 12nm. So unless the chip is a bit more than half the size, costs will go up or at best stay static. Compared to TU117, if Nvidia has to add RT and Tensor cores, I don't think the chip will shrink much at all.
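Quick sanity check on the "a bit more than half the size" claim, assuming the ~75% cost-per-mm² premium mentioned above:

```python
# If 7nm costs ~1.75x as much per mm^2 as 12nm, a 7nm die has to shrink
# below 1/1.75 of the 12nm area just to reach cost parity.
cost_ratio = 1.75
break_even_fraction = 1 / cost_ratio
print(f"7nm die must be under {break_even_fraction:.0%} of the 12nm area")  # ~57%
```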

Frankly, I suspect we may still see another round of non-RTX cards from Nvidia. That's purely a guess, but you need a certain number of Tensor cores and RT cores to make ray tracing viable. The RTX 2060 is at the bottom of that range and it's still a $299 card (and probably selling at cost now, meaning no profit, which means Nvidia wants to move to something that will make a profit). Keeping roughly that level of performance and just shrinking it via 7nm would still end up with a 200 mm² die (give or take). If the size of the chip is the same as TU117 and the cost of the chip is even 50% higher (or just 25% higher) due to 7nm, there's no way it can be a $150 graphics card. It will land at $200 minimum, probably $250 -- call it RTX 3050, with slightly better performance than RTX 2060 but a lower price.

Strip out the RT and Tensor stuff and Nvidia could potentially have an 80-125 mm² chip, which could easily get down to the sub-$150 range, and maybe even serve as a replacement for GT 1030 and MX250. But at that point it will also probably only deliver GTX 1650 levels of performance, and maybe not even that, because Nvidia would much rather sell you a $200 GPU than a $100 GPU, so it will continue to severely limit the performance of its true budget offerings. Looking forward to GT 3030 next year: $100 for less than GTX 1650 performance is my assumption. At least then I won't be disappointed. :-\
 
  • Like
Reactions: bit_user
I don't expect TSMC 7nm prices to come down much if at all in the near term. Maybe by next year when 3050 is most likely to launch, it could be 10-15% cheaper than now, but all indications are that TSMC is at capacity, meaning lots of demand for its 7nm fabs, meaning it has no reason to cut prices. This is supposedly part of why Ampere is a bit delayed and may have some chips fabbed at Samsung -- TSMC wouldn't give Nvidia a discount. Take that with a grain of salt as usual. (Samsung 7nm or 8nm for a 3050 is very probable, though.)

Anyway, yields should improve some, and using a smaller chip will help a lot for costs. Competition should reduce margins as well, but no one is going to make a product that intentionally sells at a loss. GTX 950/960 used a 227 mm² 28nm chip and sold for $159/$199 at launch. Two years later, GTX 1050/1050 Ti were only slightly faster and used a 132 mm² 16nm chip with a price of $109/$139 at launch -- so a big price cut thanks to the smaller chip, but not a big jump in performance. Then three years later the GTX 1650 arrived and used a much larger 200 mm² 12nm chip that's around 30% faster than the 1050 Ti, with a price of $149 (but more often $159). So price mostly stayed the same while performance went up and the chip got larger (probably thanks to yields, at least in part).

The big question is how Nvidia will approach things with Ampere. 7nm reportedly costs 75% more per mm² (give or take) than TSMC 12nm. So unless the chip is a bit more than half the size, costs will go up or at best stay static. Compared to TU117, if Nvidia has to add RT and Tensor cores, I don't think the chip will shrink much at all.

Frankly, I suspect we may still see another round of non-RTX cards from Nvidia. That's purely a guess, but you need a certain number of Tensor cores and RT cores to make ray tracing viable. The RTX 2060 is at the bottom of that range and it's still a $299 card (and probably selling at cost now, meaning no profit, which means Nvidia wants to move to something that will make a profit). Keeping roughly that level of performance and just shrinking it via 7nm would still end up with a 200 mm² die (give or take). If the size of the chip is the same as TU117 and the cost of the chip is even 50% higher (or just 25% higher) due to 7nm, there's no way it can be a $150 graphics card. It will land at $200 minimum, probably $250 -- call it RTX 3050, with slightly better performance than RTX 2060 but a lower price.

Strip out the RT and Tensor stuff and Nvidia could potentially have an 80-125 mm² chip, which could easily get down to the sub-$150 range, and maybe even serve as a replacement for GT 1030 and MX250. But at that point it will also probably only deliver GTX 1650 levels of performance, and maybe not even that, because Nvidia would much rather sell you a $200 GPU than a $100 GPU, so it will continue to severely limit the performance of its true budget offerings. Looking forward to GT 3030 next year: $100 for less than GTX 1650 performance is my assumption. At least then I won't be disappointed. :-\
Okay, a non-RTX GPU in the Ampere lineup. I will only say that the possibility is extremely low. If it happens, it happens.

Let's see how TSMC handles its production capacity and the demands from different manufacturers. Only time will tell. We can only make assumptions to some extent based on the little info we have.
 
Okay, a non-RTX GPU in the Ampere lineup. I will only say that the possibility is extremely low. If it happens, it happens.

Let's see how TSMC handles its production capacity and the demands from different manufacturers. Only time will tell. We can only make assumptions to some extent based on the little info we have.
I don't know about "extremely" low -- Turing skipped the GP108 replacement, so two generations later it's time for something to take on the extreme budget segment at some point. Of course, it might be Hopper or some other name than Ampere, but at some point Nvidia will do a sub-100 mm² 7nm GPU. It might not be for 12-18 months, though, after 5nm stuff starts to ship.
 
  • Like
Reactions: bit_user
I don't know about "extremely" low -- Turing skipped the GP108 replacement, so two generations later it's time for something to take on the extreme budget segment at some point. Of course, it might be Hopper or some other name than Ampere, but at some point Nvidia will do a sub-100 mm² 7nm GPU. It might not be for 12-18 months, though, after 5nm stuff starts to ship.
Okay, 5nm is far in the future; that will not be the coming gen, it will be Hopper. But if we limit the discussion to 7nm Ampere, there is nothing stopping NVIDIA from making a $100-$150 GPU as a base variant once TSMC starts pushing extremely high volumes of 7nm chips on N7+ or N6 for NVIDIA. Presently NVIDIA is limited by N7 production capacity, which is comparatively very low, but N7+ and N6 both give an area reduction, and TSMC has been quoted as increasing its 7nm production capacity in 2020 by 150% over 2019. Which is not bad.
 

regs01
2: Not sure if this is in response to someone in the comments, or the article itself, but I've tried to make it abundantly clear that Nvidia won't be doing large levels of FP64 support. It will probably be the same as Turing and Pascal consumer parts, with two FP64 CUDA cores per SM.
But Ampere has 32 FP64 cores per SM. So what are they going to do with the other 30 if they limit the FP64 cores to two? Leaving them in is a waste of money and space that could be used for something useful, like RT cores. This will give RDNA2 an advantage in area. Removing them would mean a redesign. Also, Ampere has half as many Tensor cores per SM as Turing, though they are four times faster. I wonder if the FP64 cores can be converted for RT needs.

However, there are clear indications from what Jensen has said that Ampere will cover the full range of gaming and data center uses.
I remember him saying something like this, but I don't remember the exact quote.
 

bit_user
But Ampere has 32 FP64 cores per SM. So what are they going to do with the other 30 if they limit the FP64 cores to two? Leaving them in is a waste of money and space that could be used for something useful, like RT cores.
Huh?

It's a different chip. They just put fewer fp64 pipelines in it. Simple as that. It's not as if you have to do something else to use an equivalent amount of area. Ampere defines a micro-architecture, but that doesn't mean all chips with that name must use exactly the same physical layout of the cores.


Removing them would mean a redesign.
They already do that. There's one core design for HPC/training and at least one for gaming/rendering/inference.
 
  • Like
Reactions: JarredWaltonGPU
But Ampere has 32 FP64 cores per SM. So what are they going to do with the other 30 if they limit the FP64 cores to two? Leaving them in is a waste of money and space that could be used for something useful, like RT cores. This will give RDNA2 an advantage in area. Removing them would mean a redesign. Also, Ampere has half as many Tensor cores per SM as Turing, though they are four times faster. I wonder if the FP64 cores can be converted for RT needs.
Don't confuse Ampere GA100 with other Ampere GPUs. GV100 is completely different from TU102/TU104. GP100 is very different from GP102/GP104. The expectation is that GA100 will have lots of FP64 stuff that will be axed in GA102 and above ("above" being the number, not the performance).
 

fynxer
You question the RTX sales of only 15 million in 18 months.

This is Jensen's doing. When 10-series prices increased because of the mining craze, Jensen held on to that new pricing level even after the craze was over in April/May 2018, all the way to the release of RTX in August.

He then transferred those high price levels from the overpriced 10 series, and even put a PRICE MARKUP on top of that, to the RTX series.

Suddenly the RTX 2080 was priced higher than the GTX 1080 Ti when it came out, and the RTX 2080 Ti landed at insane price levels. For example, I bought my GTX 1080 Ti for approximately USD 700 at release here in Sweden, and when the RTX 2080 Ti came out it had a price tag of USD 1500.

Jensen's greed held back the evolution of the gaming market for the last 18 months with overpriced RTX graphics cards. Jensen said repeatedly that he was on the gamers' side while the mining craze was happening, BUT in the end he used those mining-inflated prices against gamers and showed us his true face, and that is pure greed.

Now, with Radeon RDNA2 and Intel working their way into the graphics business, Jensen can forget his greedy overpricing, since the competition in the coming years will not allow Nvidia to control prices anymore.

I recommend NOT buying the RTX 3000 series upon release, BUT waiting a while; as soon as RDNA2 gets a foothold and Intel enters the gaming market, prices will drop significantly.

I still own a 1080 Ti; I totally skipped the RTX 2000 series. My plan is to upgrade in 2021 once I see how RDNA2 and Intel affect the market. I suspect Intel will have sick prices to quickly gain market share, really pressuring both Nvidia and AMD hard.
 
  • Like
Reactions: bit_user
You question the RTX sales of only 15 million in 18 months.

This is Jensen's doing. When 10-series prices increased because of the mining craze, Jensen held on to that new pricing level even after the craze was over in April/May 2018, all the way to the release of RTX in August.

He then transferred those price levels, WITH A MARKUP on top of that, to the RTX series.

Suddenly the RTX 2080 was priced higher than the GTX 1080 Ti when it came out, and the RTX 2080 Ti landed at insane price levels.

Jensen's greed held back the evolution of the gaming market for the last 18 months with overpriced RTX graphics cards.

Now, with Radeon RDNA2 and Intel working their way into the graphics business, Jensen can forget his greedy overpricing, since the competition in the coming years will not allow Nvidia to control prices anymore.

I recommend NOT buying the RTX 3000 series upon release, BUT waiting a while; as soon as RDNA2 gets a foothold and Intel enters the gaming market, prices will drop significantly.
You're missing the meaning. I wonder about the 15 million sales in 18 months not as a figure, but rather how that compares to Pascal's first 18 months -- or Maxwell's first 18 months. I suspect both sold at least that well, and Pascal probably sold much better (but may have been conflated with crypto-mining).

As to the rest of your post ... greed maybe, more likely just a complete lack of competition. GTX 1080 Ti is still basically as fast as anything AMD makes, and the 2080 and 2080 Ti were quite a bit faster than the 1080 Ti. So if your competition can't keep up with a $700 GPU, there's no need to release a faster GPU at lower prices. Especially when the new faster GPU costs more to make, which Turing absolutely does!

GP102 is a 471 mm² chip. That means even TU106 probably costs close to the same amount for the GPU (but less for 6GB of GDDR6 vs. 11GB of GDDR5X, probably). TU104 and TU102 are even larger, and TU102 in particular was never going to get into mainstream pricing. You can't do a 754 mm² chip without charging a lot.

--------------

Let me give you some example numbers. A single wafer from TSMC, with packaging, probably costs $10,000. The maximum number of TU102 chips per wafer is going to be about 68, using a die size of 24.5mm x 30.8mm. Probably at least 5-10 chips are going to be bad, maybe fewer after harvesting partially working dies. Optimistically, $10,000 / 63 ~= $160 per chip.
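For anyone who wants to check the math, here's a rough sketch using a common dies-per-wafer approximation. Real die-yield calculators also model edge exclusion and scribe lines, so the counts will differ a little from the figures above, and the $10,000 wafer cost is an assumption from this post:

```python
import math

def gross_dies_per_wafer(die_w_mm: float, die_h_mm: float, wafer_d_mm: float = 300) -> int:
    """Common approximation: wafer area / die area, minus an edge-loss term.
    Treat the result as a ballpark figure only."""
    die_area = die_w_mm * die_h_mm
    wafer_area = math.pi * (wafer_d_mm / 2) ** 2
    edge_loss = math.pi * wafer_d_mm / math.sqrt(2 * die_area)
    return int(wafer_area / die_area - edge_loss)

WAFER_COST = 10_000  # assumed cost per processed, packaged wafer (illustrative)

tu102 = gross_dies_per_wafer(24.5, 30.8)   # ~69 candidates, same ballpark as the ~68 above
good = tu102 - 6                           # assume a handful of dies are unusable
print(f"TU102: ~{tu102} dies/wafer, ~${WAFER_COST / good:.0f} per good chip")
```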

The problem is that the chip isn't the only cost. The PCB costs some money, the RAM costs money, the heatsink and fan cost money, the VRMs and resistors cost money... you hopefully get the point. Probably the total bill of materials on the RTX 2080 Ti ends up being close to $500. And Nvidia put a ton of R&D money into the architecture that also needs to be recovered, plus the distribution chain needs to make money as well.

So: Nvidia sells the chip for $300 to Asus let's say.
Asus adds a board and cooler and all the other bits and has now spent $600 total.
Asus sells this part to a major supplier for 15% more: $690, maybe even $750 to ensure profits.
The Distributor sells the card to retail outlets with another 15-20% markup: $790 - $900.
The retail outlet sells to the consumer for a 15-20% markup: $910-$1080.

The above is basically a reasonable minimum price structure for the whole supply chain to stay viable.
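Here's the same supply-chain math as a small sketch, using the illustrative numbers above (the markups and dollar figures are assumptions from this post, not real industry data):

```python
def price_through_channel(start_price, markups):
    """Compound a starting price through a chain of percentage markups."""
    price = start_price
    for m in markups:
        price *= 1 + m
    return price

# Board partner's total cost: $300 GPU from Nvidia + ~$300 of board, RAM, cooler, etc.
aib_cost = 600

low = price_through_channel(aib_cost, [0.15, 0.15, 0.15])   # roughly $910
high = price_through_channel(aib_cost, [0.25, 0.20, 0.20])  # $1080
print(f"Estimated retail price for a TU102 card: ${low:.0f} - ${high:.0f}")
```

The same helper with a $400 starting cost and the 15-25% / 15-20% / 15-20% markups reproduces the $610-$720 estimate for the TU104 cards below.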

What about TU104? It's a 545 mm² chip, measuring around 24.3mm x 22.5mm. That means the maximum yield per wafer is about 94 chips, and optimistically 89 can be used after harvesting partially defective dies. So now the GPU cost drops to around $110 instead of $160. Plus less RAM and slightly lower costs elsewhere, since it's not at the same level as TU102 cards.

Nvidia sells this chip for $200 to Asus.
Asus adds a board and cooler and all the other bits and has now spent $400 total.
Asus sells this part to a major supplier for 15-25% more: $460-$500
The Distributor sells the card to retail outlets with another 15-20% markup: $529- $600.
The retail outlet sells to the consumer for a 15-20% markup: $610-$720.

(Tangent: Apple's A11 chip as an alternative only measures about 8.2mm x 10.6mm. It can get around 664 chips per 300mm wafer, which means the cost per chip plummets to around $15-$20. Big chips are expensive. Really!)

You can do the same sort of rough estimates for basically any graphics card. I'm putting in 'generous' profits in the above, because Nvidia GPUs are usually able to sell at a premium. But then my yields on the big chips are probably higher than reality -- TSMC may only get 35 or so good chips per wafer if you're more pessimistic. The point isn't that these numbers are fully accurate, but that larger, higher performance parts have lower yields and increase the total cost dramatically.
 
  • Like
Reactions: bit_user

bit_user
Nvidia put a ton of R&D money into the architecture that also needs to be recovered, plus the distribution chain needs to make money as well.
Nvidia also needs to subsidize all of the R&D it's putting into self-driving cars, which I'm sure isn't yet turning a profit for them.

Beyond that, they've really poured vast amounts of resources into deep learning, to create their entire software and cloud ecosystem, plus training classes, their in-house deep learning algorithms research, etc.

Even Intel is pretty far behind them. Probably the only organization with more invested in deep learning (outside of China, anyway) is Google.

The Distributor sells the card to retail outlets with another 15-20% markup
What about Newegg, Amazon, Walmart, etc.? I'm sure these behemoths don't go through a middleman. Are they just raking in the profits, or why aren't they that much cheaper?

But then my yields on the big chips are probably higher than reality -- TSMC may only get 35 or so good chips per wafer if you're more pessimistic.
That's much too pessimistic, unless you're talking about top-binned chips. But don't forget that the x080 Tis are not fully enabled, so they don't actually need fully functional chips for them.
 
  • Like
Reactions: JarredWaltonGPU
What about Newegg, Amazon, Walmart, etc.? I'm sure these behemoths don't go through a middleman. Are they just raking in the profits, or why aren't they that much cheaper?
Big direct-to-consumer sellers make a much larger profit by cutting out the middleman. This is how Amazon can pay affiliate commissions of up to 8-10% on many items. I do think Amazon also gets some stuff from distributors, though I can't say for sure.
That's much too pessimistic, unless you're talking about top-binned chips. But don't forget that the x080 Tis are not fully enabled, so they don't actually need fully functional chips for them.
Yeah, which is why I didn't use that figure. The difficulty is that TSMC (and Intel, GloFo, SMIC, Samsung, etc.) doesn't really publish detailed information on yields, and AMD/Nvidia/Intel don't go into detail on how many partially defective chips can be sold at lower tiers via harvesting. Done properly, with enough redundancies and such, you could theoretically make big chips that have near 100% usable yield, provided you're willing to disable some portions and run at lower clocks. But there are almost always going to be a few chips that are basically unusable.
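As a very rough illustration of why die size matters so much for yield, here's a textbook Poisson defect model with an assumed, purely illustrative defect density (not real TSMC data):

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float = 0.1) -> float:
    """Fraction of fully functional dies under a simple Poisson defect model.
    The defect density is an assumed, illustrative value."""
    die_area_cm2 = die_area_mm2 / 100
    return math.exp(-die_area_cm2 * defects_per_cm2)

for name, area in [("TU117 (~200 mm^2)", 200), ("TU104 (~545 mm^2)", 545), ("TU102 (~754 mm^2)", 754)]:
    print(f"{name}: ~{poisson_yield(area):.0%} fully functional dies")
# The bigger the die, the smaller the fraction of defect-free chips --
# which is exactly why harvesting partially defective dies matters.
```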

Nvidia at least tends to hang onto big chips that don't meet the requirements for higher SKUs, and then they show up in stuff like the RTX 2060 cards that have TU104 but with only six GDDR6 channels and 30 out of 48 SMs enabled. Better to sell a chip at $100 than to not sell it at all.
 
  • Like
Reactions: bit_user

Deicidium369
Is that what a transplant operation costs? I seriously doubt that's what AD&D (Accidental Death & Dismemberment) insurance would pay for the loss of one.

In the US (and most developed countries, AFAIK), the sale of human organs for transplantation is illegal.
Go to Israel and you can buy a kidney within a couple of days. It's only illegal if the patients (donor and recipient) or the doctor (performing an illegal procedure) rat the others out.
 

Deicidium369
What do you mean by this?
Implementation of ray tracing in a way that it has little to no effect on overall performance.
So in other words, break the laws of physics to do something that is inherently computationally intensive, and do it on a Raspberry Pi 4.

The number of rays is what determines how much compute power is needed. There are no shortcuts -- what needs to be determined is what's good enough for today. That metric slides towards more complexity and fidelity with each generation.

As for determining where ray tracing is appropriate: in a fast FPS shooter (BF5), not applicable; in a game like The Witcher, Skyrim, or GTA5, appropriate.
 

bit_user
Go to Israel and you can buy a kidney within a couple of days. It's only illegal if the patients (donor and recipient) or the doctor (performing an illegal procedure) rat the others out.
Uh... so, you're saying it's only illegal if you get caught? That's not how laws work anywhere, much less in Israel.

I guess you're saying it's illegal, but it's not actively enforced.
 

bit_user
So in other words, break the laws of physics to do something that is inherently computationally intensive, and do it on a Raspberry Pi 4.
No, he's saying that they could include enough hard-wired ray tracing logic that it could be as fast as traditional rasterization. It's not impossible, but it will probably depend somewhat on the title, how efficient its raster-based engine is vs. how many ray tracing effects are utilized in the RT-path.

As for determining where ray tracing is appropriate: in a fast FPS shooter (BF5), not applicable; in a game like The Witcher, Skyrim, or GTA5, appropriate.
It basically all comes down to frame rate. If you can get the FPS up, then go ahead and use it in a twitchy shooter. If not, then it's at least good for games where you can appreciate the scenery.
 
  • Like
Reactions: JarredWaltonGPU
So in other words, break the laws of physics to do something that is inherently computationally intensive, and do it on a Raspberry Pi 4.

The number of rays is what determines how much compute power is needed. There are no shortcuts -- what needs to be determined is what's good enough for today. That metric slides towards more complexity and fidelity with each generation.

As for determining where ray tracing is appropriate: in a fast FPS shooter (BF5), not applicable; in a game like The Witcher, Skyrim, or GTA5, appropriate.
Definitely not breaking the laws of physics. But with more than double the RT cores, a decent IPC increase, and most importantly a simplified or completely different algorithm for processing RT calculations, more work can be done at a time, making the RT experience more immersive and easier to drive. The claims are that RT performance will improve 4x, which is an extremely big jump. To be honest, even if it's 2x or 3x the performance of gen 1, that would still be enough to reduce the performance hit with RT enabled vs. base raster performance.
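To see why even a 2-3x RT speedup shrinks the performance hit, here's a tiny frame-time sketch. The millisecond figures are made-up illustrative numbers, not measurements of any real GPU:

```python
def fps_with_rt(raster_ms: float, rt_ms_gen1: float, rt_speedup: float) -> float:
    """FPS when the ray-tracing work is sped up by a factor while raster time stays fixed."""
    return 1000 / (raster_ms + rt_ms_gen1 / rt_speedup)

raster_ms = 10.0   # hypothetical: 100 fps with RT off
rt_ms = 10.0       # hypothetical gen-1 RT cost: drops to 50 fps with RT on

for speedup in (1, 2, 3, 4):
    print(f"{speedup}x RT speed: ~{fps_with_rt(raster_ms, rt_ms, speedup):.0f} fps")
# 1x ~50 fps, 2x ~67 fps, 3x ~75 fps, 4x ~80 fps -- the RT penalty shrinks quickly.
```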
 
  • Like
Reactions: JarredWaltonGPU