News Nvidia Ampere Purportedly 50% Faster Than Turing At Half The Power Consumption


spongiemaster

Admirable
Dec 12, 2019
What are you on about?

Turing was claimed to be 6x faster than Pascal, and we all know that claim was spot on.[/s]

I'll wait for the reviews.
[Image: Nvidia's Turing announcement performance chart]


Only in ray tracing, which was an accurate statement.
 

spongiemaster

Admirable
Dec 12, 2019
I expect most of this batch of claims will pan out just like when the 2080 was twice as fast (NOT!) as the 1080. (I'd be quite happy to be wrong, however, and would be thrilled for a 2070 Super equivalent card to come along (RTX 3060?) at $300 or so.)
This leak came from an investment firm communicating with its clients. That's not a situation where you want to get caught lying. It carries much more weight than an unnamed source at wccftech.
 
But that's not really what was spread around as a rumour now, is it?

Makes you wonder what, then, is meant by "50% faster" at "half the power consumption" ... there are a lot of ways that could be interpreted.

I'll wait for the reviews, thanks. :)

That's why I said it is probably throughput and not real-world performance. I can easily see Nvidia getting 50% better throughput. A die shrink and uArch update can do that. It just may not translate directly to real-world gains.
 
  • Like
Reactions: joeblowsmynose

spongiemaster

Admirable
Dec 12, 2019
But that's not really what was spread around as a rumour now, is it?

Makes you wonder what, then, is meant by "50% faster" at "half the power consumption" ... there are a lot of ways that could be interpreted.

I'll wait for the reviews, thanks. :)
There was no rumor I remember stating that Turing was 6 times faster than Pascal. Has there ever been a video card 6 times faster than the previous generation? Maybe the Voodoo 1 vs whatever was fastest before it? I think I would have remembered if there was a rumor that Turing would be the biggest jump in performance in the history of the industry.
 

spongiemaster

Admirable
Dec 12, 2019
That's why I said it is probably throughput and not real-world performance. I can easily see Nvidia getting 50% better throughput. A die shrink and uArch update can do that. It just may not translate directly to real-world gains.
I agree, there's no way Nvidia has representative game performance numbers at this point, so either it is an estimate or target based on what they do have, or it is a low-level performance metric.
 

joeblowsmynose

Distinguished
There was no rumor I remember stating that Turing was 6 times faster than Pascal.
...

It wasn't a rumour - it was a headline in a fair few articles. I posted a source link to one of the tech articles that claimed that, in the post you quoted ... the part highlighted in red is a hyperlink.

I usually try to not just make stuff up as I go along ... ;) Although I do get accused of that occasionally ...
 

spongiemaster

Admirable
Dec 12, 2019
It wasn't a rumour - it was a headline in a fair few articles. I posted a source link to one of the tech articles that claimed that, in the post you quoted ... the part highlighted in red is a hyperlink.

I usually try to not just make stuff up as I go along ... ;) Although I do get accused of that occasionally ...

That's a pretty typical example of bad clickbait reporting. That article was a recap of Nvidia's official Turing announcement, and the chart I posted above was part of that announcement, so there was no ambiguity from Nvidia about where Turing was 6x faster. The following is from your article:

"Nvidia is promising “up to 6X the performance of previous-generation graphics cards,” and real-time ray tracing for these cards."

That should have said "6x the performance...IN real-time ray tracing", not "6x the performance...AND real-time ray tracing." By using "and" they falsely implied that Nvidia claimed 6x performance in other gaming scenarios besides ray tracing, which they never did. Again, bad and misleading reporting. The underlined part of the quote is a link to Nvidia's RTX 20 series page, which I removed to make the forum software happy. Scroll down just a little bit and you get a chart with a big "Ray Tracing Performance" title at the top of it showing a 2080 Ti somewhere between 5 and 6x faster than a 1080 Ti. Nowhere else is there any reference to Turing being "x" times faster than Pascal.
 

joeblowsmynose

Distinguished
That's a pretty typical example of bad clickbait reporting. That article was a recap of Nvidia's official Turing announcement, and the chart I posted above was part of that announcement, so there was no ambiguity from Nvidia about where Turing was 6x faster.
...

There was a fair bit of it at the time. Also in perhaps the same set of slides you posted was the claim that the 2080 was 2x faster than the 1080 ... but it isn't anywhere close; the fine print indicated that this only occurred with DLSS on - AKA rendering at a lower resolution and then trying to sharpen it so it doesn't look like crap, as the other guy mentioned. We don't even get any fine print with articles like this.

That's like saying "look! this is 2x faster!" while comparing 1080p resolution to 720p and then calling them both 1080p. It's disingenuous. This is marketing. We need to expect this.

Intel does things like claim that "these are the real world desktop applications people use" - and then use stats that they gathered from their own "spy" software exclusively from laptops and tablets. What does that have to do with desktops?

AMD was supposedly going to give us nearly free 5.0 GHz Zen 2, according to the rumours floating around before AMD's announcement - which made headlines everywhere.

I don't trust headlines at all for these reasons, and I only believe 40% of what goes into a company's own marketing.


I'll wait for reviews before I vehemently believe anything. I'm not saying that a 50% increase in performance AND a 50% reduction in power isn't within the realm of possibility; I am saying that money motivates headlines, whether accurate or not, and there are a lot of ways you can massage data to make that claim without actually having any performance metrics at all.

If there is a precedent, it's that headlines and marketing slides help create stronger investment interest and are crafted to cater to that - it's usually not about the actual data that enthusiasts care about.
 
Last edited:

Gurg

Distinguished
Mar 13, 2013
If this is, as stated, just "50% faster", could it simply be 50% faster than the FE's base clock speed of 1350 MHz? We know that most 2080 Tis will overclock 50% above base clock, to 2025 MHz or higher. When the base clock speeds of high-end Intel and AMD multi-core CPUs are 3.6+ GHz, why would it be a stretch to see an Nvidia GPU hit a 2025 MHz base clock, with additional boost and overclocking headroom, two years after the intro of the 2080 Ti?
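If you read the rumour that way, the arithmetic checks out (a quick sanity check only; the 1350 MHz base clock is the figure above, everything else is just illustration):
[code]
# "50% faster" read purely as clock speed (one possible interpretation, not confirmed)
turing_base_mhz = 1350                 # 2080 Ti FE base clock, from above
ampere_base_mhz = turing_base_mhz * 1.5
print(ampere_base_mhz)                 # 2025.0 MHz, matching the OC figure above
[/code]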

Maybe Nvidia is feeling the heat from Intel's entry into the discrete GPU market, at potential core clock speeds considerably higher than Nvidia's 2080 Ti? What if the discrete Intel GPU is able to use the Intel CPU's integrated graphics as a form of SLI to further boost its performance? 2020 could be a very interesting year as all these pieces unfold.
 
Last edited:

JordonB

Distinguished
Feb 3, 2007
AMD fanboys on panic mode.
Shouldn't talk about fanboys; competition keeps prices low and everyone benefits. AMD is doing great things with CPUs while Intel does vaporware. Nvidia rocks GPUs, but AMD keeps them honest on price. I am a reality fanboy.
 

mradr

Distinguished
Oct 12, 2011
Tessellation made games look better. Ray tracing doesn't. It's "opportunity cost", look it up. The idea is that if you have to run your frame rate at 1/3 for ray tracing, you have to ask how good you could have made it look instead if you applied 3 times the performance using the old techniques. My meaning is, would you rather have a GTX 1050 game with ray tracing, or a game that maxes out the 1080 Ti in visuals, at the same frame rate? I'll take next-gen visuals using the old technique over playing last-gen games with ray tracing every time. It's a huge waste of money as you split your silicon resources (ray tracing related calculations require different silicon that can't speed up traditional rendering). You're building a giant GPU full of matrix multiplication and INT8 performance instead of floating point shading performance.

I still remember the backlash against tessellation in games, with the same arguments you are making now, really.

I think you have it wrong - or you're looking at the wrong small chunk of the total pie in terms of what it "costs" to make a game. RT for lighting effects is "cheap" to add when you break down the "cost" and "time" it takes to do the same thing with the other methods. So your one-time purchase of the RT hardware seems expensive, but overall it costs less in terms of getting good-looking visuals in the long run. So while, yes, it costs the customer some extra money upfront to have said feature, it also has the "opportunity cost" working to lower the cost of creating software, shorten time to release, and improve the visuals, which has greater value over many titles than a single purchase does. The problem is just that it takes time for software to catch up, so the "opportunity cost" doesn't justify having it yet. Which is fine. First-generation products are never the dream upgrade, but they do open the door for what is to come.

As for the cost of the hardware and design - sure, there will be more improvements down the road that will help reduce the cost, and thus more reasons software will "have" to drop the old methods and start using the new ones going forward. We are only looking at the first generation as well. One thing to keep in mind - in terms of scale, RT only needs to keep up with the resolution, since the rays are cast based on pixels or pixel groups. That actually gives a hard limit: they just need to spend the resources and figure out the best method to hit that level of performance. 4K, for example, will be around for a long time, so they only need to make RT fast enough for that for a while, or simply target 8K and sit on that for 20+ years, as they did with 1080p for the longest time.

Another thing to keep in mind - with RT, this is the first glimpse into what is to come. As RT hardware gets more powerful, we will start to see a shift away from today's rendering methods toward RT rendering instead. RT has a key advantage in that it scales a LOT better than current methods. I think we're seeing that if they are claiming a 50% improvement (assuming that's mostly RT and only maybe 20-25% in normal performance, like we usually see from a node shift). At that point, they only need to scale the RT again to meet the pixel count and some target refresh rate. This means you could slowly remove the other half while improving RT rendering methods more and more, with year-over-year increases of 50%, to the point where we could have 8K at 240 Hz or 16K at 60 Hz.
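To put rough numbers on that scaling argument (my own illustrative math, not the poster's: it assumes RT cost scales linearly with pixel throughput and that the claimed 50% gain compounds every year):
[code]
import math

base = 3840 * 2160 * 60            # 4K at 60 Hz, in pixels per second
targets = {"8K @ 240 Hz": 7680 * 4320 * 240,
           "16K @ 60 Hz": 15360 * 8640 * 60}

for name, pixels_per_second in targets.items():
    factor = pixels_per_second / base              # ~16x in both cases
    years = math.log(factor) / math.log(1.5)       # years of compounding 50% gains
    print(f"{name}: {factor:.0f}x the throughput, ~{years:.1f} years at +50%/year")
[/code]
Both targets work out to roughly 16x today's 4K60 pixel throughput, or a bit under 7 years of uninterrupted 50% yearly gains under those assumptions.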

What is fun is that we're only seeing the tip of what RT can do with lighting - there is still more to explore, such as using RT for water effects, wind effects, sound effects, AI, VR... etc. In a way, I think what we will start to see in the future is multi-chip RT cores, because of how it scales and all the future visual and audio effects, performance, and new rendering methods it will provide. You could just create an AI on the fly by attaching an RT "eye" to an object and letting it run around the map.

Sorry for the wall of text; my point is that RT offers more than what tessellation did, and that it has a WAY bigger "opportunity cost" than it costing us an extra $100 when you compare it to saving many millions of dollars in the future for everyone.
 
Last edited:

csm101

Distinguished
Aug 8, 2007
If this turns out to be exactly true, then and only then will I go for a 3080. If not, I'll happily wait another 3 years before getting a new VGA.
 
I wish the RTX 3060 will be faster than the 1080 Ti
Wish granted! You will be able to buy one soon for just $699 USD! :3

Really though, that's pretty much what Nvidia did when they launched the 20-series. Sure, a 2060 is a bit faster than a 1070 Ti on average, but it was priced near the 1070's MSRP, not the 1060's. Going by product names alone, they were offering upward of 50% more performance than the similarly numbered cards from the prior generation, but with pricing taken into account, the performance gains were more like 20%, and after 2.5 years.

Then AMD followed along with their own odd naming scheme, adding an extra zero to product names and similarly placing them in completely different price categories, with similarly mediocre performance gains over their comparably-priced predecessors.
 
  • Like
Reactions: TJ Hooker

AlistairAB

Distinguished
May 21, 2014
Consoles have no bearing on anything PC-wise. If anything, PCs drive console technology, especially now that consoles have basically become ultra-custom PCs.



They both do. Just because you don't think it does doesn't mean it doesn't. Ray tracing absolutely increases quality for multiple items. If we could do real-time RT for everything and get the performance we want, it would be mind-blowing how realistic it would be by comparison.

As said, this is the same as any other performance-heavy technology. Do you think Super Sampling was worth the cost originally? No. But now it's useful, since people have GPUs that can perform at X resolution but their screens only show Y resolution, so they use SS to get better visuals for less of a performance hit than normal AA.

You keep on saying RT is like other graphical improvements, but it is not. Compare a PS5 vs a PS4 (non-Pro) game visually and tell me you wouldn't rather have had that earlier than waste time with ray tracing. Textures, polygons, shaders - all those things still need huge improvements, and RT is hybrid rendering; it relies on them for visual upgrades. Even look at Quake RTX: it is all the non-RTX things that make it look good, not the RT itself. I can easily make a new Quake that looks better without RTX.
 
Pretty sure the 50% faster means the TFLOPS; in games it might be 30% faster at maximum. An OCed 2080 Ti can reach 16.3 TFLOPS. My OCed 2080 Ti does 16000 in Time Spy on air, and a heavily OCed 1080 Ti does 11000, so the 2080 Ti was a ~46% improvement in synthetic benchmarks. Their 50% faster claim is completely legit in synthetic benchmarks. 13 TFLOPS stock 2080 Ti + 50% ≈ 20 TFLOPS. Big Navi from AMD is rumored to be 18 TFLOPS. Ampere is going to be faster by 10-20% in my opinion, but Big Navi will be cheaper. 4K 120 FPS might become a reality with a heavily OCed 3080 Ti :)
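A quick sanity check of that throughput reading, using only the figures quoted in this post plus the 2080 Ti's stock boost-clock FP32 spec (a rough sketch, not an official projection):
[code]
# Rumored 50% uplift applied to stock 2080 Ti FP32 throughput
turing_tflops = 13.45                         # stock 2080 Ti boost spec, the "13 TFLOPS" above
print(round(turing_tflops * 1.5, 1))          # ~20.2 TFLOPS, in line with the ~20 TFLOPS estimate

# Generational gain implied by the Time Spy scores quoted above (both cards overclocked)
timespy_2080ti, timespy_1080ti = 16000, 11000
print(round((timespy_2080ti / timespy_1080ti - 1) * 100))   # ~45%, close to the ~46% claim
[/code]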
 
Last edited:

spongiemaster

Admirable
Dec 12, 2019
Pretty sure the 50% faster means the TFLOPS; in games it might be 30% faster at maximum. An OCed 2080 Ti can reach 16.3 TFLOPS. My OCed 2080 Ti does 16000 in Time Spy, and a heavily OCed 1080 Ti does 11000, so the 2080 Ti was a ~46% improvement in synthetic benchmarks. Their 50% faster claim is completely legit in synthetic benchmarks. 13 TFLOPS stock 2080 Ti + 50% ≈ 20 TFLOPS. Big Navi from AMD is rumored to be 18 TFLOPS. Ampere is going to be faster by 10% in my opinion, but Big Navi will be cheaper. 4K 120 FPS might become a reality with a heavily OCed 3080 Ti :)
Nvidia has traditionally gotten more real-world game performance per FLOP than AMD. At stock speeds, the Radeon VII has more TFLOPS (13.8) than a 2080 Ti (13.5), despite on average losing to a 2080 (10 TFLOPS). If Big Navi only reaches 18 TFLOPS, that would put it in roughly 2080 Ti territory. If the 3080 Ti surpasses 20 TFLOPS, even the regular 3080 is going to blow by AMD's best, while the Ti will be out by its lonesome again.
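The "roughly 2080 Ti territory" estimate follows from a simple per-FLOP normalization using only the numbers in this post (a rough sketch; it assumes Big Navi's per-FLOP game efficiency stays at Radeon VII levels, which is far from guaranteed):
[code]
# Radeon VII (13.8 TFLOPS) roughly matches an RTX 2080 (10 TFLOPS) in games, per the post above
nvidia_perf_per_flop_advantage = 13.8 / 10.0           # ~1.38x more game performance per FLOP

big_navi_tflops = 18.0                                  # rumored
nvidia_equivalent_tflops = big_navi_tflops / nvidia_perf_per_flop_advantage
print(round(nvidia_equivalent_tflops, 1))               # ~13.0, i.e. near the 2080 Ti's 13.5 TFLOPS
[/code]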
 
  • Like
Reactions: Zizo007
Dec 31, 2019
Yeah, it's a little screwy to compare 2GB-4GB cards at that resolution.
780 Ti vs 980 Ti at 1440p, where that 3GB will handle things just fine, especially back then:
https://www.techpowerup.com/review/nvidia-geforce-gtx-980-ti/31.html
[Chart: relative performance at 2560x1440]
https://www.techpowerup.com/review/nvidia-geforce-gtx-980-ti/32.html
[Chart: performance per watt at 2560x1440]
Rough napkin math is 41% more performance at 49% less power, on the same node.
Small print aside (i.e. how the benchmarks are skewed in their favor), Nvidia can do some phenomenal things when they need to - Maxwell was the last time they felt compelled to compete with AMD. Since then they have been sandbagging - HARD.
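For anyone wanting to reproduce that napkin math, it falls out of combining the two TechPowerUp charts linked above. The chart readings below are approximate placeholders (my own reading, not exact values from the review), and "less power" is read as energy per frame at matched work:
[code]
# Approximate 1440p chart readings, GTX 980 Ti = 100% baseline (placeholder values)
gtx_780ti_relative_perf = 0.71            # ~71% of a 980 Ti
gtx_780ti_relative_perf_per_watt = 0.51   # ~51% of a 980 Ti

perf_gain = 1 / gtx_780ti_relative_perf - 1                     # ~0.41 -> "41% more performance"
energy_saved_per_frame = 1 - gtx_780ti_relative_perf_per_watt   # ~0.49 -> "49% less power"
print(round(perf_gain * 100), round(energy_saved_per_frame * 100))
[/code]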

Don't forget:
Nvidia says it “can create the most energy-efficient GPU in the world at anytime”

I'm not saying it's in the bag, but depending on what metric is used, 50% more performance at 50% less power is certainly plausible.
 

Irisena

Commendable
Oct 1, 2019
Wait for proper reviews, and look out for more SUPER <Mod Edit> from Nvidia. I've learned that looking at rumours does me no good and leads to disappointment when the actual thing gets released.
 
Last edited by a moderator: