Apparently Ampere is more than ready for Big Navi.
Nvidia Ampere Purportedly 50% Faster Than Turing At Half The Power Consumption : Read more
It's what usually happens when they don't have much competition for 3 years. They decided to take a break to fiddle around with ray tracing. 😀
From the same company that hasn't done anything in 3 years since the 1080 Ti launched that is worth mentioning...
Don't be surprised if it's a 50% increase in ray-tracing performance and minor improvements in general performance / power consumption.
Focusing on the future is the right thing to do.
For sure they'll focus a lot on performance improvements when ray tracing is enabled.
Focusing on the future is the right thing to do.
AMD still can't match the three-year-old 1080 Ti. Nvidia is obliterating AMD.
Which is fine; only ~5% of PC gamers can be bothered to spend ~$1000 on a GPU. The primary market for $1000+ GPUs is the datacenter at $3000+ a pop; consumers get the surplus and rejects. If AMD does not have sufficient confidence to go after the datacenter market with a given architecture, then there won't be high-end consumer parts either, as those don't make economic sense to pursue specifically.
AMD still can't match the three-year-old 1080 Ti.
Wasn't worth the hit to frame rate in 2008, not worth it now. Maybe in 2028.
View: https://www.youtube.com/watch?v=mtHDSG2wNho
The primary market for $1000+ GPUs is datacenter at $3000+ a pop, consumers get the surplus and rejects.
Why would a "datacenter" want an ultra high performance GPU?
GPGPU purposes. You know, the same thing that caused GPUs to be snatched up for Bitcoin and Litecoin mining a few years back.
The 1080 Ti and 2080 Super were both priced in the low-to-mid $700s.
Which is fine; only ~5% of PC gamers can be bothered to spend ~$1000 on a GPU. The primary market for $1000+ GPUs is the datacenter at $3000+ a pop; consumers get the surplus and rejects. If AMD does not have sufficient confidence to go after the datacenter market with a given architecture, then there won't be high-end consumer parts either, as those don't make economic sense to pursue specifically.
Close enough to $1000 in my book. The point was that AMD and Nvidia design their higher-end chips for HPC first. Gamers complained about Nvidia "wasting" silicon on tensor cores in gaming GPUs, but Nvidia wants to be a major player in the self-driving car business and in AI research in general, and putting "useless" tensor cores in consumer GPUs is a relatively cheap bid to lock down future markets by giving developers a range of inexpensive platforms to experiment on.
The 1080 Ti and 2080 Super were both priced in the low-to-mid $700s.
Nothing is worth the early adoption for the majority, but it is worth it in the end. Would you say the same about tessellation? Because AMD pushed that hard, and at first it killed performance while increasing visual fidelity. Pretty much any modern technology gets pushed by one side and then eventually adopted by both sides.
The biggest difference is that real-time ray tracing would be the biggest bump to graphics in a very long time.
As for the 50% performance figure, I guess it depends on the metric. A lot of the time these claims refer to throughput (i.e., TFLOPS) rather than real-world performance. Considering we are dropping from 12nm to 7nm, and taking just TSMC, that is about 2x the density of 12nm. That could easily put 50% more logic into the same size die. Samsung's 7nm is EUV-based, which should put it at roughly 3x the density of TSMC's 12nm.
So using less power while increasing performance could easily be possible. However, will that raw throughput translate to real-world performance gains? Not always. Still, this will be their second round with RTX, and with the ability to pack more logic into the same area, I think Ampere will be quite a bit better for RT.
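To put rough numbers on that density argument, here's a minimal back-of-the-envelope sketch. The die size and transistor count are approximate Turing (TU104 / RTX 2080) figures, and the 2x / 3x density multipliers are just the rough claims from the post above, not official foundry numbers:

```python
# Back-of-the-envelope sketch of the density argument above.
# Die size and transistor count are approximate TU104 (RTX 2080) figures;
# the density multipliers are the rough claims from the post, not official data.

TURING_DIE_MM2 = 545          # mm², approximate
TURING_TRANSISTORS_B = 13.6   # billions, approximate

density_gain = {
    "TSMC 7nm vs 12nm": 2.0,         # ~2x density claimed above
    "Samsung 7nm EUV vs 12nm": 3.0,  # ~3x density claimed above
}

for node, gain in density_gain.items():
    # Option A: keep the die size and pack in more logic
    same_area_transistors = TURING_TRANSISTORS_B * gain
    # Option B: target 50% more logic and see how much smaller the die gets
    die_for_150pct_logic = TURING_DIE_MM2 * 1.5 / gain
    print(f"{node}: ~{same_area_transistors:.0f}B transistors in {TURING_DIE_MM2} mm², "
          f"or ~{die_for_150pct_logic:.0f} mm² for 50% more logic")
```

Either way, there's plenty of area headroom for 50% more logic before you even touch clocks or power.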
This will just be 50% better ray tracing, which, let's be honest, is badly needed!
Ultimately, it's the consoles that determine graphics outcomes, though, so look forward to better approaches.
Tessellation made games look better. Ray tracing doesn't. It's "opportunity cost"; look it up. The idea is that if you have to run your frame rate at 1/3 for ray tracing, you have to ask how good you could have made the game look instead if you applied 3 times the performance using the old techniques. My meaning is: would you rather have a GTX 1050 game with ray tracing, or a game that maxes out the 1080 Ti in visuals, at the same frame rate? I'll take next-gen visuals using the old technique over playing last-gen games with ray tracing every time. It's a huge waste of money, as you split your silicon resources (ray-tracing-related calculations require different silicon that can't speed up traditional rendering). You're building a giant GPU full of matrix multiplication and INT8 performance instead of floating-point shading performance.
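To make that opportunity-cost point concrete, here's a minimal frame-budget sketch. The 90 fps baseline is a made-up example number, and the 1/3 hit is just the figure used in the post, not measured data:

```python
# Illustrative frame-time math for the "opportunity cost" argument above.
# The baseline fps is hypothetical and the 1/3 hit is the post's own figure.

def frame_time_ms(fps: float) -> float:
    """Convert frames per second into milliseconds per frame."""
    return 1000.0 / fps

base_fps = 90.0            # hypothetical raster-only frame rate
rt_fps = base_fps / 3      # assumed frame rate with ray tracing enabled

raster_budget = frame_time_ms(base_fps)   # ~11.1 ms per frame
rt_budget = frame_time_ms(rt_fps)         # ~33.3 ms per frame
extra_ms = rt_budget - raster_budget      # time RT eats that could instead buy
                                          # more geometry, shadows, post-processing

print(f"Raster-only: {raster_budget:.1f} ms/frame at {base_fps:.0f} fps")
print(f"With RT:     {rt_budget:.1f} ms/frame at {rt_fps:.0f} fps")
print(f"Opportunity cost: {extra_ms:.1f} ms of traditional rendering work per frame")
```

Whether those ~22 ms per frame are better spent on rays or on more of everything else is exactly the trade-off being argued here.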
From the same company that hasn't done anything worth mentioning in the 3 years since the 1080 Ti launched. It is most likely 50 percent more performance OR half the power consumption at best. Don't fall for it.