Discussion: Is Ampere's 2X performance claim misleading marketing?

niz

All the evidence points to the fact that when Jensen Huang said "Ampere is 2X as powerful," he was actually ONLY TALKING ABOUT RTX/DLSS GAMING. nVidia apparently has embargoes on which games reviewers can even talk about in connection with Ampere performance. It's mentioned here: https://www.eurogamer.net/articles/digitalfoundry-2020-hands-on-with-nvidia-rtx-3080. It's almost certainly no coincidence that the few games that reviewers can even discuss the performance of on Ampere are all RTX games.
Why would nVidia force comparisons based only on raytracing performance (i.e. stuff nearly all games don't even use)? Logic says it's because that's Ampere's only 2X gain over Turing. Hardly surprising, given that RTX was first introduced on Turing, so it was first-gen. He's almost certainly also trying to make RTX sound like the new 'normal' simply because AMD still doesn't support raytracing or any AI features at all.
As soon as independent reviewers start publishing their own benchmarks, I expect the 3080 will actually show a far more usual generational performance improvement in any game you currently have or actually want to play (in all of nVidia's history that's never been even close to the magical 2X number Jensen kept pushing).
Also, when reading performance reviews we now need to watch out for raw/actual rendering performance vs. AI 'tricks' like rendering at a much lower resolution and then upscaling with DLSS.
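To make that "render low, upscale with DLSS" point concrete, here is a minimal sketch of how much of a 4K frame the GPU actually shades in each mode. The per-axis render scales are the commonly cited DLSS 2.0 figures, used here as assumptions rather than official numbers; the point is simply that DLSS-on results and native-resolution results shouldn't be mixed in the same comparison.

```python
# Rough sketch of how many pixels the GPU actually shades per frame under DLSS
# before the AI upscale to the output resolution. The per-axis render scales
# below are the commonly cited figures for DLSS 2.0 and are assumptions here,
# not official specifications.

DLSS_SCALES = {
    "Native":            1.000,
    "Quality":           0.667,
    "Balanced":          0.580,
    "Performance":       0.500,
    "Ultra Performance": 0.333,
}

def rendered_pixels(out_w, out_h, scale):
    """Pixels shaded at the internal resolution before upscaling."""
    return int(out_w * scale) * int(out_h * scale)

out_w, out_h = 3840, 2160  # 4K output
native = out_w * out_h
for mode, scale in DLSS_SCALES.items():
    px = rendered_pixels(out_w, out_h, scale)
    print(f"{mode:17s} {int(out_w * scale)}x{int(out_h * scale)}"
          f"  ->  {px / native:.0%} of the pixels actually rendered")
```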
 

Phaaze88

Jensen really wants to bury the 1080Ti... he even went as far as demonstrating ray tracing performance on it.
Like, you didn't even give it the hardware to do that, so of course it would look like crap compared to Turing and Ampere.
Take RTX off though, and Turing was really unimpressive compared to the old Pascal beast.

They're trying real hard to push the RTRT thing. As long as the feature yields less fps with it enabled VS off, it's pretty much a given that most users are going to run with it off on the regular.
From time to time, some will turn it on to mess around or whatever, but that's about it.
 
They're trying real hard to push the RTRT thing. As long as the feature yields less fps with it enabled VS off, it's pretty much a given that most users are going to run with it off on the regular.
The same thing could be said about a lot of other "revolutionary" GPU technologies of the past. Hardware T&L was mocked by 3dfx. I think people had mixed reactions to programmable shaders. DX10 was derided until maybe the 2nd or 3rd generation of hardware. And people thought DX11 tessellation wasn't worth turning on. There were also some non-hardware/API-specific things, like HDR rendering and SSAO, that people had mixed feelings about during their initial years.

I'd also wager all of these features still reduce FPS to a degree, if I wanted to poke at that argument. Why run any game at anything other than low quality at the lowest resolution you're willing to tolerate?
 
It's almost certainly no coincidence that the few games that reviewers can even discuss the performance of on Ampere are all RTX games.

Did anyone even watch this video all the way through?

The first game that he runs is Borderlands 3, a game with no ray tracing or DLSS. Result? 80%ish performance uplift over the 2080.

The third game he runs is Battlefield V, but the text at the top of the screen specifically shows "DXR OFF." Result? 70%ish performance uplift over the 2080.

The fifth game that he runs is Doom Eternal, yet another game without DLSS or ray tracing even as an option. This game actually sees the highest performance uplift, running at one point almost twice as fast as the 2080.

Of course Nvidia wants to paint itself in the best light, but to say that "the few games that reviewers can even discuss the performance of on Ampere are all RTX games" is absolutely false. Multiple comparisons were done in this video that show the gen-on-gen increase happening without any Nvidia-specific features being enabled.
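For reference, here's the arithmetic behind those uplift figures as a small sketch. The FPS pairs below are hypothetical placeholders chosen only to roughly match the percentages quoted above, not numbers from the video, and the geometric mean is just the usual way reviewers average performance ratios.

```python
from math import prod

# Hypothetical FPS pairs (RTX 2080 -> RTX 3080). These are placeholder values
# picked to roughly match the uplift percentages discussed above, NOT
# measurements from the Digital Foundry video.
samples = {
    "Borderlands 3 (no RT/DLSS)": (60.0, 108.0),   # ~80% uplift
    "Battlefield V (DXR off)":    (80.0, 136.0),   # ~70% uplift
    "Doom Eternal":               (100.0, 195.0),  # close to 2x
}

ratios = []
for name, (old_fps, new_fps) in samples.items():
    r = new_fps / old_fps
    ratios.append(r)
    print(f"{name}: {r:.2f}x as fast ({(r - 1) * 100:.0f}% faster)")

# Geometric mean is the standard way to average performance ratios.
geomean = prod(ratios) ** (1 / len(ratios))
print(f"Geometric mean: {geomean:.2f}x ({(geomean - 1) * 100:.0f}% faster)")
```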
 

Phaaze88

I'd also wager all of these features still reduce FPS to a degree, if I wanted to poke at that argument. Why run any game at anything other than low quality at the lowest resolution you're willing to tolerate?
Fps junkie, that's why. I'm not one of them though.
There's people out there that get 9900Ks and 2080 Supers or whatever and run Fortnite/CSGO/COD on the lowest settings, 4:3 aspect ratio, and 720p resolution... ROFL!!!

The hardware of today is also far more capable than what came out around the same time as those features. RTRT is no different. I wouldn't be surprised if Ampere 'isn't quite there yet', plus support for it is still rather poor.
RTRT also has mixed reception, and it may stay that way unless Nvidia can really press all the software developers to incorporate it into their games, or it'll fade away into obscurity, like PhysX and others, that I can't remember for some reason.
 
What amazes me is not the performance uplift but the price. Even if the 3070 has zero uplift over the 2080 Ti, for $500 it gives you what people were paying $1200-1500 for just a few days ago, in a cooler, more energy-efficient package with faster RAM. Yes, it has less RAM, but unless your use case is only MSFS 2020, that is mostly irrelevant right now. And the Samsung silicon is way nicer than TSMC's. It puts a hard ceiling on used 2080 Ti values, but honestly, I would not want one of those hot, inefficient, buggy cards even for $350 when a new card gets you a warranty, the box, all the accessories, and zero chance someone ESD'd the card or did parametric damage by overclocking. Also, I like new tech, especially when it provides a nice generational lifespan at a price/performance ratio that has not been seen in years.
 
No. It's not 2x the performance. It's close but no cigar. Although it's very much probably what you'd see from sli'ing the previous gen cards. Except in a single card and without the problems that come with sli.
I expect it's still way above the SLI option. From the more recent SLI reviews I've seen, if the game supports SLI the average boost is about 30%, and then there are the games that don't support it at all.
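As a rough back-of-the-envelope sketch of that point: if SLI only helps in part of your library, its average gain falls well short of a card that is uniformly faster. The library split, the 30% SLI boost, and the ~70% single-card uplift below are all assumptions for illustration, not benchmark data.

```python
# Back-of-the-envelope comparison: SLI scaling in a mixed game library vs.
# one uniformly faster card. Every number here is an illustrative assumption,
# not benchmark data.

def library_average(share_supported, boost_supported, boost_unsupported=0.0):
    """Average speedup across a library where only some games scale."""
    return (share_supported * (1.0 + boost_supported)
            + (1.0 - share_supported) * (1.0 + boost_unsupported))

# Assume half the library supports SLI at ~30% scaling, the rest not at all.
sli_average = library_average(0.5, 0.30)

# Assume a single next-gen card that is ~70% faster in everything.
single_card = 1.70

print(f"SLI, averaged over the library: {sli_average:.2f}x")
print(f"Single faster card, everywhere: {single_card:.2f}x")
```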
 
Honestly, I would not want one of those hot, inefficient, buggy cards even for $350.
FYI those "hot, inefficient and buggy cards" were flagship last week. And they were also leagues above anything else in the market as far as efficiency, temps. and also stability.
 
FYI those "hot, inefficient and buggy cards" were flagship last week. And they were also leagues above anything else in the market as far as efficiency, temps. and also stability.

Zotac is planning a 3070 mini with no loss of performance; they had said a full-performance 2080 Ti card would not be possible in a mini. That should tell you how far apart these generations are.

Saying "And they were also leagues above anything else in the market as far as efficiency, temps. and also stability." is a little intellectually dishonest; they were the ONLY game in town, with no competition, so of course they were going to be the "best." That does not mean the design was any good. In my opinion, looking back, it is pretty clear with 20/20 hindsight (although there were some saying it then) that it was a stop-gap money grab by big green while they developed the real cards; glad I did not get suckered. You all can keep your "best performance cards" and I'll take a new one with equivalent performance, 1/2-1/3 the cost, a better fabrication process, faster RAM, and on and on.

You could say the exact same thing about bronze weapons the day before a dude with an iron sword showed up.
 
The first generation of just about anything is always going to be expensive and full of compromises. When the technology is refined, lessons are learned, and things are figured out well enough to drive the price down, the second generation emerges and makes the first-gen whatever-it-is look horrible. Turing was first-gen RTX; everyone who bought one should have realized that they were basically the beta testers of this stuff.

Honestly, I like my RTX cards just fine, but I'll be happy to put Turing behind us: that way I don't have to listen to people whining about how they are way too expensive and ray tracing is stupid etc etc.
 
RTRT also has mixed reception, and it may stay that way unless Nvidia can really press all the software developers to incorporate it into their games, or it'll fade away into obscurity, like PhysX and others, that I can't remember for some reason.
NVIDIA doesn't have to do anything extra. Sony and Microsoft have been boasting about ray tracing on their new consoles. AMD appears to be committed to it now, judging by that Chrome City video they posted a while ago. Unreal and Unity, two of the most used game engines, have it, and it looks like you just need to push a button and tweak some sliders to get it working. And RTRT is now standardized in Vulkan. All the pieces are in place for RTRT to become a standard feature in high-end gaming.

I think a lot of the initial pushback was simply because of Turing's relatively poor showing. I recall scrolling through the comment train of Quake II RTX's predecessor and most of the people were impressed/excited despite that version looking worse.
 
You could say the exact same thing about bronze weapons the day before a dude with an iron sword showed up.
Yes, that's how progress works. Not sure where you're going with this.
 

Phaaze88

Honestly, I like my RTX cards just fine, but I'll be happy to put Turing behind us: that way I don't have to listen to people whining about how they are way too expensive and ray tracing is stupid etc etc.
The price-to-performance those cards offered was bad, especially for folks running on Pascal.
For people like myself, who have/had the 1080Ti, only one card made for a reasonable upgrade, but even then, the price heavily offset that.
At the end of its generation, the 2080Ti and other Turing cards had not delivered on the performance improvements Nvidia promised, because software 'was on a coffee break'.
The number of games in which Turing shined (50% faster) over Pascal could probably be counted on your hands. On average, the 2080Ti was 20-30% faster than the 1080Ti for like 70% more money, largely because software was trailing behind.
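For perspective, here's the price-to-performance arithmetic behind that complaint as a minimal sketch. The prices are rough launch/street figures and the ~25% average uplift is an assumed midpoint of the 20-30% range above, not measured data.

```python
# Sketch of cost per unit of performance, 1080 Ti vs. 2080 Ti.
# Prices are rough launch/street figures and the ~25% average uplift is an
# assumed midpoint of the 20-30% range; none of this is measured data.

cards = {
    "1080 Ti": {"price_usd": 699,  "relative_perf": 1.00},
    "2080 Ti": {"price_usd": 1199, "relative_perf": 1.25},
}

baseline = cards["1080 Ti"]["relative_perf"] / cards["1080 Ti"]["price_usd"]

for name, c in cards.items():
    perf_per_dollar = c["relative_perf"] / c["price_usd"]
    print(f"{name}: {c['relative_perf']:.2f}x perf for ${c['price_usd']}"
          f"  ->  {perf_per_dollar / baseline:.2f}x the value of the 1080 Ti")
```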

SOFTWARE DEVELOPMENT.
This, right here, is one of the greatest bottlenecks, if not the greatest, to all this fancy PC hardware. Not the CPU, not the RAM, not the power supply; OK, they have some effect, but it's minor compared to my main point.
Everyone should know by now that Nvidia was cherry picking in their announcement. Once independent reviewers show their results, then we'll all know how far software has caught up.

I don't think RTRT is stupid. I do believe it should've spent a little more time in development, or at least been marketed as a side feature of sorts, not the primary one. IMO, it doesn't look like much; running at higher resolutions looks better, plus the performance impact appears to be smaller.
But I play at 1440p though, so that's just me.

All the pieces are in place for RTRT to become a standard feature in high-end gaming.
Well, time will tell on that one.
 
On average, the 2080Ti was 20-30% faster than the 1080Ti for like 70% more money, largely because software was trailing behind.

SOFTWARE DEVELOPMENT.
I think this hits the nail on the head. Turing has a lot of forward-looking features beyond RTRT and tensor cores for AI, such as the INT+FP split and support for Mesh Shaders, Variable Rate Shading/Rendering, and Texture-Space Rendering. As far as I know, the only commercial titles that use VRS are Wolfenstein Youngblood and 3DMark. The other two features I've not yet seen used anywhere outside of tech demos that I don't have access to. The INT+FP split was supposed to be a freebie, but I think only recent games have started using both data types simultaneously to a noticeable enough degree for it to matter.

The only oddity I found was the RTX 2080 was able to obliterate everything before it except the Radeon VII in Wolfenstein II.

EDIT: If anything, I'm looking at the GeForce 20 series like the GeForce 256. I'm pretty sure the 256 didn't do much at the time and by the time the GeForce 2 came out it was looking pretty hokey, but it laid the foundation for what the survivors of the current generation use.
 
Yes, that's how progress works. Not sure where you're going with this.

It's a bit more than progress; there's a lot of anger being expressed because people got used to being able to sell their old cards to pay for most of the new one. Now that NV has absolutely confirmed in their Q&A that their $500 card matches the raster performance of the 2080ti and beats it in RT... who in their right mind is going to pay even $250 for a less efficient, larger, slower-performing card that someone else has had their grubby little hands on, that has no warranty, and that is probably missing the box and accessories? There will always be fools, but most, I think, are smart money, and gaming is a luxury, so waiting for stock is not exactly a hardship. I admit to a bit of schadenfreude, but man is it fun.
 
Who in their right mind is going to pay even $250 for a less efficient, larger, slower-performing card that someone else has had their grubby little hands on, that has no warranty, and that is probably missing the box and accessories?
$250 for basically the same performance as a $500 card? I'd call that a steal. Also, the way the ad is written for whatever listing you look at can tell you a lot about how the card was handled. If it looks like the seller knows what they're talking about and took good pictures of it, they've probably taken care of the card.

Also a missing box is a moot point unless you have a hard-on for boxes. And accessories? What accessories? I'm sure going to miss that poster I never put up, the DVD with the outdated drivers, or the dongle that never got used.
 
$250 for basically the same performance as a $500 card? I'd call that a steal.

That $500 is getting you so much more than performance (I realize that these items may not matter to many people). Form factor: there is going to be a mini-ITX card in Q1 '21, so 2080 Ti+ performance for us SFF people at 1/3 the cost! Samsung's process is demonstrably better than TSMC's, with much higher yields, and the GPU is much more energy-efficient for easier cooling, etc. Oh, and a warranty, which is worth at least $80-100 to me.
 
That $500 is getting you so much more than performance (I realize that these items may not matter to many people). Form factor: there is going to be a mini-ITX card in Q1 '21, so 2080 Ti+ performance for us SFF people at 1/3 the cost!
This is your use case and your parameters when shopping around for a card, but for a lot of other people, $250 for a 2080 Ti is a steal. I don't see a problem with this.

Samsung's process is demonstrably better than TSMC's, with much higher yields, and the GPU is much more energy-efficient for easier cooling, etc.
I don't think it's fair to directly compare 8nm to 12nm in this context.

Oh, and a warranty, which is worth at least $80-100 to me.
AFAIK most warranties aren't transferable (EVGA seems to be the only exception). So the warranty is useless anyway in the used market.
 
