Nvidia Ampere Purportedly 50% Faster Than Turing At Half The Power Consumption

You keep saying RT is like other graphical improvements, but it is not. Compare a PS5 game to a PS4 (non-Pro) game visually and tell me you wouldn't rather have had that sooner than waste time on ray tracing. Textures, polygons, shaders - all of those still need huge improvements, and RT is hybrid rendering; it relies on them for its visual upgrades. Even look at Quake RTX: it is all the non-RTX work that makes it look good, not the RT itself. I could easily make a new Quake that looks better without RTX.

Because it is. RT is like anything else: a new idea (well, not that new) that's almost there, but only for those who are willing to shell out.

Yes, graphics can improve without it, but let's look at CGI in movies. While people will say it's not realistic, movie CGI is vastly superior compared to almost any game, AND it typically uses RT to get there because of the benefits of RT over rasterization.

Again, RT is like any other high-end feature. Tessellation improved polygon detail but drastically killed performance until about 2-3 generations after the first cards that supported it.

If it weren't, Microsoft wouldn't be pushing a new API feature for RT and AMD wouldn't be looking to support it as well. Nvidia was just the first to push it out.
 
You keep saying RT is like other graphical improvements, but it is not. Compare a PS5 game to a PS4 (non-Pro) game visually and tell me you wouldn't rather have had that sooner than waste time on ray tracing. Textures, polygons, shaders - all of those still need huge improvements, and RT is hybrid rendering; it relies on them for its visual upgrades. Even look at Quake RTX: it is all the non-RTX work that makes it look good, not the RT itself. I could easily make a new Quake that looks better without RTX.

Ray tracing (as Nvidia has implemented it) has one main advantage, and that is the ability to render lighting based on environment space as opposed to screen space. Even the Marty McFly path-tracing shader still operates within screen space (but it still looks pretty great).

Working within screen space works with any game engine, but calculating lighting in environment space requires the engine to be tweaked so the RT cores can access the scene data. That's why RTX has to be implemented in the game itself, and why Marty McFly can simply create a ReShade shader that works in almost all games, on any hardware (it costs $1, but that's better than the RTX premium :)).

The difference between the two is that environment-space rendering allows things like seeing something in a reflection that is entirely off screen. Generally, a game engine culls everything it can't draw a pixel for (everything outside of screen space and everything occluded) to maximize speed and efficiency; and if the engine won't draw it, it can't be included in lighting calculations.

This is the limitation that RTX lets us get around, and it is also why reflections in games currently can't really be that great without it. Game engines can be modified to respect environment-space calculations without RTX, but there's a steep performance trade-off. I would assume that CryEngine's RT still works within screen space.
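To make the culling point concrete, here's a tiny C++ sketch (my own simplification, not from any real engine - the Object/insideFrustum names are made up): a screen-space effect can only sample what survived culling, while a world-space ray query is traced against the whole scene.

```cpp
// Illustrative sketch only: why screen-space effects can't reflect what the
// rasterizer already culled, while a world-space (DXR-style) ray query still
// sees the whole scene.
#include <algorithm>
#include <iterator>
#include <vector>

struct Object { float x, y, z; };                    // world-space position

// Crude stand-in for a frustum test: anything "behind" the camera is culled.
bool insideFrustum(const Object& o) { return o.z > 0.0f; }

int main() {
    std::vector<Object> scene = {
        {0.0f, 0.0f,  5.0f},   // in front of the camera: survives culling
        {0.0f, 0.0f, -5.0f},   // behind the camera: culled, yet it should show up in a mirror
    };

    // Rasterizer path: cull first, then draw. A screen-space reflection can
    // only sample from what is left after this step.
    std::vector<Object> visibleForScreenSpace;
    std::copy_if(scene.begin(), scene.end(),
                 std::back_inserter(visibleForScreenSpace), insideFrustum);

    // Ray-query path: the acceleration structure is built from the whole
    // scene, so a reflected ray can still hit the object behind the camera.
    const std::vector<Object>& sceneForRayQuery = scene;

    // visibleForScreenSpace.size() == 1, sceneForRayQuery.size() == 2
    return static_cast<int>(sceneForRayQuery.size() - visibleForScreenSpace.size());
}
```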

Other than that, I agree with everything you said. I think the RTX 20-series cards, apart from the 2080 tier, will never get a true opportunity for proper ray tracing, simply because by the time there are a lot of games that support RTX, the 30-series will be out and games will have their performance targets catered to higher-end hardware ... and the poor 2060 and 2070 will get 30-40 FPS while trying to use RT, and might look like crap because you had to turn all the other settings down just to get that frame rate.

It's sad that Nvidia forced people to buy RTX hardware in order to upgrade and made them pay for it ... with the 16-series and the rumoured GTX 2070, I think they may have realized all they did was screw their customers ... I don't think they care, but at least we're getting a bit of choice now ...
 
GTX 980 was a midrange card? ... I never knew that ...
i didn't say card, i said CHIP.
yep. nvidia's shell game since kepler. the Gx-104 chips used to be in mid range cards but are now sold as high end. x80/x70 cards had the phat chips (100/110) and x60/x50 cards were mid range. they introduced the enthusiast level above high/mid/entry, and there are 106 chips in x70 cards.

but if you want to believe nvidia's marketing, have at it.
e:
but if it really bothers you that much, i'll rephrase that as, "two enthusiast chips vs one high end".

happy now? 😉
 
Wasn't worth the hit to frame rate in 2008, not worth it now. Maybe in 2028.

https://www.youtube.com/watch?v=mtHDSG2wNho
If Nvidia never tried anything new, we'd still be playing at 640x480 in 16-bit color on a TNT2 - rasterized, of course.

RTX Ray-Tracing may not be Pixar quality, but you have to admire them for trying.

I oohed and aahed at the water in The Elder Scrolls III: Morrowind when I first saw it on a GeForce 3 Ti 500 with vertex shaders, the same way I did when I saw ray-traced puddles for the first time.
 
i didn't say card, i said CHIP.
yep. nvidia's shell game since kepler. the Gx-104 chips used to be in mid range cards but are now sold as high end. x80/x70 cards had the phat chips (100/110) and x60/x50 cards were mid range. they introduced the enthusiast level above high/mid/entry, and there are 106 chips in x70 cards.

but if you want to believe nvidia's marketing, have at it.
e:
but if it really bothers you that much, i'll rephrase that as, "two enthusiast chips vs one high end".

happy now? 😉

Getting happier ...

The chips by themselves don't actually do anything, 😉 ... the comparison was between two top-of-the-stack cards. Sure, you could say the AMD card was in a higher tier - it was over $200 more expensive, so yeah ... enthusiast vs high end is valid - but at the card level.
 
...
RTX Ray-Tracing may not be Pixar quality, but you have to admire them for trying.
...

I quite appreciate them getting the ball rolling ... what I don't appreciate is the slimy marketing tactic of forcing users to buy it without the ability to try it first, or to see how well it would perform once games eventually started to support it. "Trust me! It just works!"

There are people who paid extra for ray tracing in a 2060 who will never get the opportunity to really use it - by the time most games support it, the 2060 will be too slow for their demands.

It would have been far less slimy to, rather than make 127 SKUs of RTX cards, make a line of RTX and a line of GTX and let consumers choose. That way, if RTX ends up a dead end because someone else makes RT an open standard, consumers wouldn't have been forced to pay for that mistake. But when has the customer ever mattered to Nvidia? Par for the course, I guess ... Anyway, I'll stop ranting about that. :)
 
Getting happier ...

The chips by themselves don't actually do anything, 😉 ... the comparison was between two top-of-the-stack cards. Sure, you could say the AMD card was in a higher tier - it was over $200 more expensive, so yeah ... enthusiast vs high end is valid - but at the card level.
the fact is one card had two CHIPS whereas the other had one. now say they don't do anything. 🙄

E:
it's funny how you're criticizing NV's marketing of RTX yet drinking their kool-aid of market segmentation.
 
Nvidia has traditionally gotten more real-world game performance per FLOP than AMD. At stock speeds, the Radeon VII has more TFLOPS (13.8) than a 2080 Ti (13.5) despite, on average, losing to a 2080 (10 TFLOPS). If big Navi only reaches 18 TFLOPS, that would put it in roughly 2080 Ti territory. If the 3080 Ti surpasses 20 TFLOPS, even the regular 3080 is going to blow by AMD's best, while the Ti will be out all by its lonesome again.
It's worth mentioning that Navi has significantly improved gaming performance vs FLOPS compared to Vega.
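As a back-of-envelope illustration of that per-FLOP gap (my own arithmetic, assuming the Radeon VII lands roughly level with a 2080 in games, which is about where the post above puts it):

```cpp
// Back-of-envelope arithmetic only. Assumption (mine): treat the Radeon VII
// and the RTX 2080 as roughly equal in games, per the post above.
#include <cstdio>

int main() {
    const double radeon_vii_tflops = 13.8;
    const double rtx_2080_tflops   = 10.0;

    // If the two cards deliver similar frame rates, Nvidia extracts roughly
    // 13.8 / 10.0 ≈ 1.38x as much game performance per TFLOP.
    const double per_flop_advantage = radeon_vii_tflops / rtx_2080_tflops;
    std::printf("Nvidia perf-per-TFLOP advantage: ~%.2fx\n", per_flop_advantage);

    // By that yardstick, an 18 TFLOP big Navi is worth about 18 / 1.38 ≈ 13
    // "Nvidia-equivalent" TFLOPS, i.e. roughly 2080 Ti (13.5 TFLOPS) territory.
    std::printf("18 AMD TFLOPS ~ %.1f Nvidia-equivalent TFLOPS\n",
                18.0 / per_flop_advantage);
    return 0;
}
```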
 
That way, if RTX ends up a dead end because someone else makes RT an open standard, consumers wouldn't have been forced to pay for that mistake.
From my understanding, the recent foray into ray tracing is already part of an "open standard"*, namely DirectX DXR. RTX is Nvidia's proprietary implementation of an accelerator for DXR calculations, kind of like how GPUs as a whole are proprietary accelerators for general graphics API calculations (among other things). I don't see RTX being a "dead end" unless ray tracing as a whole (or at least DXR) turns into a dead end.

* Open in the sense that any GPU manufacturer can work with it.
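A minimal sketch (mine, not from the thread) of what "any GPU manufacturer can work with it" looks like in practice: a D3D12 application just asks the driver which raytracing tier it supports, and the query is vendor-neutral.

```cpp
// Sketch only: asking a Windows D3D12 device/driver whether it exposes DXR.
// Any GPU vendor can report a raytracing tier through this same query.
// Build with: cl /EHsc dxr_check.cpp d3d12.lib
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12 device available.");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5))) &&
        opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        std::puts("DXR is supported (tier 1.0 or better).");
    } else {
        std::puts("DXR is not supported on this device/driver.");
    }
    return 0;
}
```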
 
I don't see RTX being a "dead end" unless ray tracing as a whole (or at least DXR) turns into a dead end.
I could imagine DXR becoming a dead-end if more companies decide to implement Cryengine-style ray-tracing approximation - no point in using an API that relies on costly new hardware when you can get 90% of the effect for a fraction of the GPU-power using methods other than brute-force ray count.
 
the fact is one card had two CHIPS whereas the other had one. now say they don't do anything. 🙄

E:
it's funny how you're criticizing NV's marketing of RTX yet drinking their kool-aid of market segmentation.

Whether you have two chips or a hundred - unless you put them in a card, they don't do anything. Was my comment on that not correct?

What product segmentation kool-aid did I drink? I don't know anything about their chips, but a GTX 980 wasn't a mid-range card in anyone's book; if you're still trying to claim that the chip inside made it a mid-tier card: performance and price dictate segmentation ... not sure what you are trying to get at.

I'm not an Nvidia fanboy, if that's what you are thinking - haven't owned one of their cards in years and years.
 
From my understanding, the recent foray into ray tracing is already part of an "open standard"*, namely DirectX DXR. RTX is Nvidia's proprietary implementation of an accelerator for DXR calculations, kind of like how GPUs as a whole are proprietary accelerators for general graphics API calculations (among other things). I don't see RTX being a "dead end" unless ray tracing as a whole (or at least DXR) turns into a dead end.

* Open in the sense that any GPU manufacturer can work with it.

I could imagine DXR becoming a dead-end if more companies decide to implement Cryengine-style ray-tracing approximation - no point in using an API that relies on costly new hardware when you can get 90% of the effect for a fraction of the GPU-power using methods other than brute-force ray count.

My understanding is that neither Crytek RT nor DXR requires specific hardware at all (and thus I consider them "open"), but RTX games need to have considerable tweaks to the game engine itself to utilize RTX cores.

That said, I haven't researched DXR that much, please correct me if I am wrong.
 
Whether you have two chips or a hundred - unless you put them in a card, they don't do anything. Was my comment on that not correct?

What product segmentation kool-aid did I drink? I don't know anything about their chips, but a GTX 980 wasn't a mid-range card in anyone's book; if you're still trying to claim that the chip inside made it a mid-tier card: performance and price dictate segmentation ... not sure what you are trying to get at.

I'm not an Nvidia fanboy, if that's what you are thinking - haven't owned one of their cards in years and years.
and what is a card w/o a chip? a piece of PCB. if you want to dive into semantics much further, you are doing that on your own.

titanX/980ti high end, 980/970 mid range and 960/950 entry level. if you allow NV to tell you otherwise w/ pricing/performance rhetoric, go educate yourself on chips then.

as i said, it was comparing two high end chips to a single mid range one. i don't care if you agree with that or not.

welcome to my ignore list, have a good day.
 
rough napkin math is 41% more performance at 49% less power on the same node.
small print aside (i.e. how the benchmarks are skewed in their favor), nvidia can do some phenomenal things when they need to - maxwell was the last time they felt compelled to compete w/ AMD. since then they have been sandbagging - HARD.

The 1080 Ti will go down as one of the greatest video cards of all time. While the 20-series won't likely be remembered fondly, it will forever be remembered as the series that brought ray tracing to the masses. The 2080 Ti is a 751 mm² GPU. Think about all the dual-GPU video cards that AMD/ATi has released in its history: only two of them had more die space than the 2080 Ti when combining the area of BOTH dies on the card. AMD has never had a 600 mm² gaming die, and only one 500+ mm² die that I could find. What Nvidia achieved with Turing is quite an engineering feat. You might not like the price/performance of Turing, but Nvidia has not been sandbagging anything since Maxwell.
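For what it's worth, the napkin math quoted above is easy to sanity-check (my arithmetic, the quote's numbers):

```cpp
// Sanity check of the quoted napkin math: +41% performance at -49% power
// works out to roughly a 2.8x jump in performance per watt.
#include <cstdio>

int main() {
    const double perf_ratio  = 1.41;  // 41% more performance
    const double power_ratio = 0.51;  // 49% less power
    std::printf("perf/watt improvement: ~%.2fx\n", perf_ratio / power_ratio);  // ~2.76x
    return 0;
}
```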
 
and what is a card w/o a chip? a piece of PCB. if you want to dive into semantics much further, you are doing that on your own.

titanX/980ti high end, 980/970 mid range and 960/950 entry level. if you allow NV to tell you otherwise w/ pricing/performance rhetoric, go educate yourself on chips then.

as i said, it was comparing two high end chips to a single mid range one. i don't care if you agree with that or not.

welcome to my ignore list, have a good day.

I've never seen anyone so easily triggered for reasons that should trigger no one ... I guess he was having a bad day ...
 
My understanding is that neither Crytek RT nor DXR requires specific hardware at all (and thus I consider them "open"), but RTX games need to have considerable tweaks to the game engine itself to utilize RTX cores.

That said, I haven't researched DXR that much, please correct me if I am wrong.
I don't know which exact games you consider "RTX games", but something like Battlefield V does use DXR.

And no, it doesn't strictly require specific hardware; it just tends to perform like crap without it. For example, you can enable ray tracing on non-RTX 10- and 16-series Nvidia cards; it just usually performs so poorly that it's not worth it, IIRC.

The fact that it benefits from hardware acceleration doesn't make it not "open". As I alluded to, the entire concept of a GPU is about hardware acceleration. It's technically possible to render games in software on your CPU; it's just infeasible/impractical.
 
I don't know which exact games you consider "RTX games", but something like Battlefield V does use DXR.

And no, it doesn't strictly require specific hardware; it just tends to perform like crap without it. For example, you can enable ray tracing on non-RTX 10- and 16-series Nvidia cards; it just usually performs so poorly that it's not worth it, IIRC.

The fact that it benefits from hardware acceleration doesn't make it not "open". As I alluded to, the entire concept of a GPU is about hardware acceleration. It's technically possible to render games in software on your CPU; it's just infeasible/impractical.

Yup. The hardware seems pointless, but it does make a difference. I am sure AMD will at some point design a chip with dedicated hardware as well. Hell, we may eventually see a design where shaders and RT cores are one and the same.

I remember a time when pixel shaders and vertex shaders were separate, until we moved to unified shader architectures with Tesla (Nvidia) and TeraScale (ATI). Technology moves forward, and eventually RT will be just as common as anything else.
 
I don't know which exact games you consider "RTX games", but something like Battlefield V does use DXR.

And no, it doesn't strictly require specific hardware; it just tends to perform like crap without it. For example, you can enable ray tracing on non-RTX 10- and 16-series Nvidia cards; it just usually performs so poorly that it's not worth it, IIRC.

The fact that it benefits from hardware acceleration doesn't make it not "open". As I alluded to, the entire concept of a GPU is about hardware acceleration. It's technically possible to render games in software on your CPU; it's just infeasible/impractical.

What I mean by an "open" standard is that game devs won't have to write separate tweaks for Nvidia RTX GPUs and for an AMD dedicated-hardware GPU because one or both of them requires proprietary programming due to some "specialness" of the hardware - they won't be happy if they have to do that, and innovation will be stifled.

It sounds like DXR might be an API that will make that possible though ... ok, that's good to hear.
 
What I mean by an "open" standard is that game devs won't have to write separate tweaks for Nvidia RTX GPUs and for an AMD dedicated-hardware GPU because one or both of them requires proprietary programming due to some "specialness" of the hardware - they won't be happy if they have to do that, and innovation will be stifled.

It sounds like DXR might be an API that will make that possible though ... ok, that's good to hear.
Well, most games seem to favor one GPU brand over the other in terms of overall performance (how much is incidental vs deliberate one-sided optimization, I couldn't say). Maybe we'll see the same thing for RT performance from one game to another.

But for now, even if devs are optimizing for performance on RTX, it kinda makes sense, given that RTX is the only RT hardware they can optimize for at this point.
 

The PS5 has ray tracing; it was confirmed in the CES 2020 slides.

I know the term "ray tracing" has been confirmed, but if we consider that ray tracing is just a math model, and that ray tracing for audio has been confirmed (can't recall where I read that, but I'm 99% sure it was confirmed for the PS5), I am wondering whether we'll get the same level of RT grunt that Nvidia offers - which is already about as low as you can go without sacrificing so much quality that you'd be better off without it.

I guess what I am saying is that I have some doubts about what level of RT prowess AMD will be able to cram into a low-cost console chip. Nvidia's 2080 costs more than the entire console likely will ...

Is it reasonable to expect that AMD can match Nvidia's RT performance in such a small form factor and on a much smaller cost budget, coming from quite a way behind in said technologies? AMD has surprised us in the past, but this looks like an impossible task, so I'm not going to get too excited ... yet.

I know AMD has been working on RT for many years already with their progressive "ProRender" 3D renderer, which can leverage both CPU and GPU - I've used it. They can use some of that research to help design dedicated HW, and they may have been doing so in tandem. ProRender is a free, public product, so it makes sense that they view it as a research project as much as an actual product.

I am just skeptical about how much of the console's RT will be practical for advanced lighting effects ... some, I am sure, but I think it will be limited.

All that aside, I am a bit excited about audio ray tracing for games. I hate it when an NPC or player is three levels above me but the game engine's audio spatialization only works in 2D, so they sound like they are right next to me - it's so immersion-breaking. Being able to have properly occluded audio in full 3D space will be amazing. Audio quality can "make or break" a game. To me, this will have way more value than shinier reflections or smoother AO.
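A toy C++ sketch of the basic idea (entirely my own illustration - the Box/audioGain names are made up, and a real engine would trace many more paths): cast a ray from the sound source to the listener and muffle the sound when geometry blocks it.

```cpp
// Toy sketch of ray-based audio occlusion: trace a straight line from the
// sound source to the listener and attenuate the sound if level geometry
// blocks it. Crude on purpose; a real engine would use a proper BVH query
// and also handle reflection/diffraction paths.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
struct Box  { Vec3 min, max; };   // axis-aligned slab standing in for a wall/floor

// Sample points along the segment and test them against each box.
bool segmentBlocked(const Vec3& a, const Vec3& b, const std::vector<Box>& walls) {
    const int steps = 64;
    for (int i = 1; i < steps; ++i) {
        const float t = static_cast<float>(i) / steps;
        const Vec3 p{ a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
        for (const auto& w : walls) {
            if (p.x >= w.min.x && p.x <= w.max.x &&
                p.y >= w.min.y && p.y <= w.max.y &&
                p.z >= w.min.z && p.z <= w.max.z)
                return true;
        }
    }
    return false;
}

float audioGain(const Vec3& source, const Vec3& listener, const std::vector<Box>& walls) {
    const float d = std::sqrt((source.x - listener.x) * (source.x - listener.x) +
                              (source.y - listener.y) * (source.y - listener.y) +
                              (source.z - listener.z) * (source.z - listener.z));
    float gain = 1.0f / (1.0f + d);                 // simple distance falloff
    if (segmentBlocked(source, listener, walls))
        gain *= 0.2f;                               // muffled: the NPC is behind a floor slab
    return gain;
}

int main() {
    const std::vector<Box> walls = { {{-10.f, 2.5f, -10.f}, {10.f, 3.0f, 10.f}} }; // a floor
    const Vec3 npcTwoFloorsUp{0.f, 6.f, 0.f};
    const Vec3 listener{0.f, 0.f, 0.f};
    std::printf("gain: %.3f\n", audioGain(npcTwoFloorsUp, listener, walls));
    return 0;
}
```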
 
I highly doubt that the PS5 will be able to deliver the same level of, as you put it, "grunt" that an RTX card has. Consoles as a whole normally still pale in raw power next to a single high-end GPU, and the PS5 will be no exception, especially because it has to fit inside thermal and pricing envelopes that normal desktop gaming systems do not.

However, being a console, everything is built around one base hardware design, so they are able to get more performance per TFLOP than PCs normally do. So while I doubt it will manage the same level of RT, I could still see some RT in it.

I can't say, though, whether this RT will be AMD's actual hardware-based RT or just their software-based RT that utilizes idle shader cores. My bet is the latter, as hardware RT would take time and quite a big design change to implement, and I doubt even Big Navi had that in mind when it was designed. Maybe RDNA 2.0 will bring hardware RT for AMD.