News Nvidia Ampere Purportedly 50% Faster Than Turing At Half The Power Consumption

AlistairAB

Distinguished
May 21, 2014
229
60
18,760
From the same company that hasn't done anything worth mentioning in the three years since the 1080 Ti launched. It's most likely 50 percent more performance OR half the power consumption at best, not both. Don't fall for it.
 
  • Like
Reactions: TJ Hooker

InvalidError

Titan
Moderator
AMD still can't match the three-year-old 1080 Ti.
Which is fine; only ~5% of PC gamers can be bothered to spend ~$1,000 on a GPU. The primary market for $1,000+ GPUs is the datacenter at $3,000+ a pop; consumers get the surplus and rejects. If AMD does not have sufficient confidence to go after the datacenter market with a given architecture, then there won't be high-end consumer parts either, as those don't make economic sense to pursue on their own.
 
  • Like
Reactions: alextheblue
Wasn't worth the hit to frame rate in 2008, not worth it now. Maybe in 2028.

View: https://www.youtube.com/watch?v=mtHDSG2wNho

Nothing is worth the early adoption cost for the majority, but it is worth it in the end. Would you say the same about tessellation? AMD pushed that hard, and at first it killed performance while increasing visual fidelity. Pretty much any modern technology gets pushed by one side and eventually adopted by both.

The big difference is that real-time ray tracing would be the biggest bump to graphics in a very long time.

As for the 50% performance figure, I guess it depends on the metric. A lot of the time these claims refer to throughput (i.e., TFLOPS) rather than real-world performance. Consider that we are dropping from 12nm to 7nm: taking TSMC alone, 7nm is roughly 2x the density of 12nm, which could easily fit 50% more logic into the same size die. Samsung's 7nm is EUV-based, which should put it at around 3x the density of TSMC's 12nm.

So using less power while increasing performance could easily be possible. Will that raw throughput translate into real-world performance gains? Not always. Still, this is their second round with RTX, and with more logic in the same area I think Ampere will be quite a bit better for RT.
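
As a rough back-of-the-envelope sketch of that density argument (all the figures here are just the rough numbers above, treated as assumptions, with Python used purely for the arithmetic):

# Rough sketch of the density argument (illustrative figures only).
density_gain_tsmc = 2.0       # ~2x density claimed for TSMC 12nm -> 7nm
density_gain_samsung = 3.0    # ~3x claimed for Samsung 7nm EUV vs TSMC 12nm
logic_growth = 1.5            # the "+50% more logic" target

# Die area needed for 1.5x the logic, relative to the original 12nm die:
print(f"TSMC 7nm:        {logic_growth / density_gain_tsmc:.0%} of the original area")
print(f"Samsung 7nm EUV: {logic_growth / density_gain_samsung:.0%} of the original area")
# Whatever area and power budget is left over can go toward clocks or more units,
# which is how "more performance at lower power" becomes at least plausible.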
 
Why would a "datacenter" want an ultra high performance GPU?

GPGPU purposes. You know, the same thing that years back caused GPUs to be snatched up for Bitcoin and Litecoin mining.

Now, with heavy investment in AI and machine learning, a GPU tasked with GPGPU work is a better option than a CPU for those jobs. CPUs are great at juggling many different kinds of work, but GPUs are far better at hammering through massively parallel number-crunching very fast.

https://searchdatacenter.techtarget.com/feature/Data-center-GPU-market-set-for-growth-AI-workflows
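
As a toy illustration of why that kind of work maps onto GPUs so well (NumPy and CuPy are my own example here, not anything from the link above):

import time
import numpy as np
# import cupy as np   # with CuPy installed and a CUDA GPU, the same code runs on the GPU

# One big matrix multiply: thousands of independent multiply-adds,
# exactly the kind of data-parallel arithmetic AI/ML workloads are built on.
a = np.random.rand(2048, 2048).astype(np.float32)
b = np.random.rand(2048, 2048).astype(np.float32)

start = time.time()
c = a @ b
print(f"2048x2048 matmul: {time.time() - start:.3f} s")

A CPU chews through this a chunk at a time across a handful of cores; a GPU throws thousands of small cores at it at once, which is the whole point of using them for GPGPU work.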
 
  • Like
Reactions: mr_tan

Gurg

Distinguished
Mar 13, 2013
515
61
19,070
Which is fine; only ~5% of PC gamers can be bothered to spend ~$1,000 on a GPU. The primary market for $1,000+ GPUs is the datacenter at $3,000+ a pop; consumers get the surplus and rejects. If AMD does not have sufficient confidence to go after the datacenter market with a given architecture, then there won't be high-end consumer parts either, as those don't make economic sense to pursue on their own.
The 1080 Ti and 2080 Super were both priced in the low-to-mid $700s.
 
  • Like
Reactions: throwawayaccnt

toyo

Distinguished
Sep 24, 2011
36
0
18,530
If true, I will probably upgrade my 1070 Ti with it, provided it lands at the "correct" price of maybe 550-570 euros for a 3070/3080 that offers 2080 Ti performance levels or a bit below.
 

fla56

Distinguished
Jul 19, 2006
8
3
18,515
This will just be 50% better ray tracing, which, let's be honest, is badly needed!

Ultimately it's the consoles that determine the GFX outcomes, though, so look forward to better approaches.
 
Last edited:
  • Like
Reactions: alextheblue

InvalidError

Titan
Moderator
The 1080 Ti and 2080 Super were both priced in the low-to-mid $700s.
Close enough to $1,000 in my book. The point was that AMD and Nvidia design their higher-end chips for HPC first. Gamers complained about Nvidia "wasting" silicon on tensor cores in gaming GPUs, but Nvidia wants to be a major player in self-driving cars and AI research in general; putting "useless" tensor cores in consumer GPUs is a cheap bid to lock down those future markets by giving developers a range of relatively inexpensive platforms to experiment on.
 
  • Like
Reactions: alextheblue

AlistairAB

Distinguished
May 21, 2014
229
60
18,760
Nothing is worth the early adoption cost for the majority, but it is worth it in the end. Would you say the same about tessellation? AMD pushed that hard, and at first it killed performance while increasing visual fidelity. Pretty much any modern technology gets pushed by one side and eventually adopted by both.

The big difference is that real-time ray tracing would be the biggest bump to graphics in a very long time.

As for the 50% performance figure, I guess it depends on the metric. A lot of the time these claims refer to throughput (i.e., TFLOPS) rather than real-world performance. Consider that we are dropping from 12nm to 7nm: taking TSMC alone, 7nm is roughly 2x the density of 12nm, which could easily fit 50% more logic into the same size die. Samsung's 7nm is EUV-based, which should put it at around 3x the density of TSMC's 12nm.

So using less power while increasing performance could easily be possible. Will that raw throughput translate into real-world performance gains? Not always. Still, this is their second round with RTX, and with more logic in the same area I think Ampere will be quite a bit better for RT.

Tessellation made games look better. Ray tracing doesn't. It's "opportunity cost"; look it up. If you have to cut your frame rate to a third for ray tracing, you have to ask how good the game could have looked if you had spent three times the performance on the old techniques instead. In other words, would you rather have a GTX 1050-class game with ray tracing, or a game that maxes out a 1080 Ti's visuals at the same frame rate? I'll take next-gen visuals using the old techniques over last-gen games with ray tracing every time. It's a huge waste of money, because you split your silicon budget (ray-tracing calculations need dedicated silicon that can't speed up traditional rendering). You're building a giant GPU full of matrix-multiplication and INT8 hardware instead of floating-point shading performance.
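
To put rough numbers on that trade-off (illustrative figures only, not benchmarks):

# Illustrative frame-time math for the "one-third frame rate" trade-off above.
base_fps = 60.0
rt_fps = base_fps / 3                      # ray tracing cutting the frame rate to a third
print(f"Frame time without RT: {1000 / base_fps:.1f} ms")
print(f"Frame time with RT:    {1000 / rt_fps:.1f} ms")
# The question is whether those extra ~33 ms per frame buy more visual quality
# than spending the same budget on higher-fidelity traditional rendering.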
 
  • Like
Reactions: below20hz
This will just be 50% better ray tracing, which, let's be honest, is badly needed!

Ultimately it's the consoles that determine the GFX outcomes, though, so look forward to better approaches.

Consoles have no bearing on anything PC-wise. If anything, PCs drive console technology, especially now that consoles have basically become ultra-custom PCs.

Tessellation made games look better. Ray tracing doesn't. It's "opportunity cost"; look it up. If you have to cut your frame rate to a third for ray tracing, you have to ask how good the game could have looked if you had spent three times the performance on the old techniques instead. In other words, would you rather have a GTX 1050-class game with ray tracing, or a game that maxes out a 1080 Ti's visuals at the same frame rate? I'll take next-gen visuals using the old techniques over last-gen games with ray tracing every time. It's a huge waste of money, because you split your silicon budget (ray-tracing calculations need dedicated silicon that can't speed up traditional rendering). You're building a giant GPU full of matrix-multiplication and INT8 hardware instead of floating-point shading performance.

They both do. Just because you don't think it does, doesn't mean it doesn't. Ray tracing absolutely increases quality for multiple effects. If we could do real-time RT for everything and still get the performance we want, the jump in realism would be mind-blowing.

As I said, this is the same as any other performance-heavy technology. Do you think supersampling was worth the cost originally? No. But now it's useful: people have GPUs that can render at a higher resolution than their screens actually display, so they use supersampling to get better visuals for less of a performance hit than normal AA.
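
A minimal sketch of the supersampling idea, assuming NumPy and using random data in place of a real rendered frame:

import numpy as np

scale = 2                           # render at 2x the display resolution in each axis
display_h, display_w = 1080, 1920
frame = np.random.rand(display_h * scale, display_w * scale, 3)  # stand-in for the rendered frame

# Average each scale x scale block of rendered pixels down to one display pixel.
output = frame.reshape(display_h, scale, display_w, scale, 3).mean(axis=(1, 3))
print(output.shape)  # (1080, 1920, 3)

Each display pixel ends up as the average of several rendered samples, which is why edges look smoother than with no AA, and why the technique only became practical once GPUs had resolution headroom to spare.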
 

joeblowsmynose

Distinguished
From the same company that hasn't done anything worth mentioning in the three years since the 1080 Ti launched. It's most likely 50 percent more performance OR half the power consumption at best, not both. Don't fall for it.

What are you on about?

Turing was claimed to be 6x faster than Pascal, and we all know how spot-on that claim was.[/s]

I'll wait for the reviews.
 
I expect most of this batch of claims will pan out just like when the 2080 was twice as fast (NOT!) as the 1080. I'd be quite happy to be wrong, however, and would be thrilled for a 2070 Super-equivalent card to come along (RTX 3060?) at $300 or so.