Nvidia Turing & Volta MegaThread

[Image: NVIDIA GeForce RTX 2080 Ti graphics card]

Welcome to the Volta & Turing MegaThread! I know it's strange to mash two different architectures into one thread, but given how similar the Volta and Turing GPUs are, plus the fact that Turing isn't a successor to Volta, it's much easier to discuss both architectures in a single thread.

VOLTA:

The Volta architecture was designed around the need for a cheaper and simpler way to power AI/deep learning compute tasks. Traditional deep learning workloads require vast server farms that cost hundreds of thousands of dollars to build and tens of thousands of dollars a month to power.

Running these resource-hungry AI algorithms on regular CPUs and GPUs isn't the most effective way to process AI, since general-purpose processors are designed to handle many different kinds of tasks rather than one specific workload. This is where Tensor cores come into play: these cores are designed specifically for deep learning and machine learning compute tasks. The result is a core that runs FAR more efficiently and performs better than traditional CPUs (in AI computing); a single server with several Volta GPUs can now process as much data as an entire row of servers stretching from wall to wall in a large warehouse.
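To make the "one specific workload" point concrete, here is a minimal NumPy sketch of the operation a Tensor core is built around: a small matrix fused multiply-accumulate, D = A x B + C. The 4x4 tile size and FP16-in/FP32-accumulate precision match Volta's published Tensor core spec, but this is only an illustration of the math, not how you would actually program the hardware. Deep learning workloads are dominated by exactly this kind of dense matrix math, which is why a core dedicated to it beats general-purpose CUDA cores at AI computing.

```python
import numpy as np

# Illustration of the matrix fused multiply-accumulate (FMA) that a Volta
# Tensor core performs in hardware: D = A x B + C on small tiles, with
# FP16 inputs and FP32 accumulation. (NumPy sketch only; real code would
# go through cuBLAS/cuDNN or CUDA's WMMA API to reach the Tensor cores.)
A = np.random.rand(4, 4).astype(np.float16)   # FP16 input tile
B = np.random.rand(4, 4).astype(np.float16)   # FP16 input tile
C = np.zeros((4, 4), dtype=np.float32)        # FP32 accumulator tile

D = A.astype(np.float32) @ B.astype(np.float32) + C  # FP32 accumulate

print(D)
```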

However, Volta didn’t exactly, takeoff, as many expected, there are only two variants of the Volta core, one is a Quadro variant and one is in a prosumer variant, the Titan V.


[Image: Nvidia Titan card product shot]

Titan V:

Price: (Around) $3000

CUDA Cores: 5120
Streaming Multiprocessors (SMs): 80
Texture Units: 320
Base Clock: 1200 MHz
Boost Clock: 1455 MHz
Memory Clock: 850 MHz (1.7 Gbps effective)
Memory Bandwidth: 652.8 GB/s
Memory Bus: 3072-bit
VRAM: 12GB HBM2


TURING:

While Volta may have been focused almost exclusively on machine learning, Nvidia's latest architecture, Turing, shifts the focus back to the pure graphical performance of a GPU, this time in the form of real-time ray tracing.

Turing is a special architecture compared to all (known) previous Nvidia architectures: it has been in the making for over 10 years (yes, that means Nvidia has been developing Turing since the days of Fermi). You might be asking, is ray tracing really that important? Nvidia seems to think so.

Without going too deep: ray tracing is incredibly intensive to run on normal GPUs since it simulates actual light rays. It can take a day, several days, or even a WEEK to generate ONE ray-traced image. This is why the movie industry has been practically the only place to take advantage of ray tracing, since studios can afford to wait that long for a single ray-traced frame.

Turing, on the other hand, can generate ray-traced images in REAL TIME. Through Nvidia's new RT cores and optimizations for ray tracing, real-time ray tracing can now be done.

This is why we are now seeing Nvidia pushing ray tracing into video games.
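For a rough feel of why ray tracing is so heavy (and of what the RT cores actually offload), here is a back-of-the-envelope sketch; the frame size, ray count, and triangle count are made-up assumptions, not measurements of any real game.

```python
import math

# Back-of-the-envelope ray tracing cost per frame.
# All numbers below are illustrative assumptions.
width, height  = 1920, 1080     # 1080p frame
rays_per_pixel = 2              # e.g. primary ray + one shadow/reflection ray
triangles      = 1_000_000      # scene geometry

rays = width * height * rays_per_pixel

# Naive tracing tests every ray against every triangle...
naive_tests = rays * triangles

# ...while a bounding volume hierarchy (BVH) cuts that to roughly
# log2(triangles) box/triangle tests per ray. Traversing the BVH and
# running those intersection tests is the work Turing's RT cores offload.
bvh_tests = rays * math.log2(triangles)

print(f"naive intersection tests per frame: {naive_tests:.1e}")
print(f"with BVH traversal (~log2 n):       {bvh_tests:.1e}")
```

Even with the BVH shortcut, the rough numbers above still work out to tens of millions of traversal/intersection operations per frame, 60 times a second, alongside all the normal shading work; that is the load the dedicated RT cores are there to absorb.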

Of course, ray tracing isn't the only thing Turing and the RTX gaming cards are good at. These cards also get a big bump in CUDA cores compared to their Pascal predecessors and include an as-yet-unspecified number of Tensor cores (most likely for Nvidia's new deep-learning anti-aliasing tech, DLSS).

So for the first time ever in a GPU, we have three different kinds of cores designed for three separate functions, all somehow working towards one goal. Pretty interesting stuff.

So far Nvidia has launched three RTX gaming cards, the RTX 2080 Ti, RTX 2080, and RTX 2070, plus three more Turing cards in the Quadro family for the enterprise space.


Note: Turing GPUs list GPU Boost 4.0 on the spec sheet, something Nvidia hasn't shed light on (yet).


[Image: GeForce RTX 2080 Ti]

RTX 2080 Ti:

Price: Around $1150

CUDA cores: 4352
Base Clock: 1350 MHz
Boost Clock: 1545 MHz (1635 MHz OC on FE)
VRAM: 11GB GDDR6
Memory Speed: 14Gbps
Memory Bus: 352-bit
Memory Bandwidth: 616GB/s
TDP: 250-260W
Power Connectors: Dual 8-pin

[Image: GeForce RTX 2080]

RTX 2080:

Price: Around $750

CUDA cores: 2944
Base Clock: 1515 MHz
Boost Clock: 1710 MHz (1800 MHz OC on FE)
VRAM: 8GB GDDR6
Memory Speed: 14Gbps
Memory Bus: 256-bit
Memory Bandwidth: 448GB/s
TDP: 215-225W
Power Connectors: 8-pin + 6-pin (FE; AIBs can use different configurations)

[Image: GeForce RTX 2070]

RTX 2070:

Price: Around $550

CUDA cores: 2304
Base Clock: 1410 MHz
Boost Clock: 1620 MHz (1710 MHz OC on FE)
VRAM: 8GB GDDR6
Memory Speed: 14Gbps
Memory Bus: 256-bit
Memory Bandwidth: 448GB/s
TDP: 175-185W
Power Connectors: Single 8-pin (FE; AIBs can use different configurations)
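As a side note, the memory bandwidth figures in the spec lists above follow directly from bus width and effective data rate; here is a quick sketch that checks them (the per-pin rates are the ones quoted above).

```python
# Memory bandwidth (GB/s) = bus width (bits) x effective data rate (Gbps per pin) / 8
# Quick check of the figures quoted in the spec lists above.
cards = {
    "Titan V (HBM2)":      (3072, 1.7),   # -> 652.8 GB/s
    "RTX 2080 Ti (GDDR6)": (352, 14.0),   # -> 616.0 GB/s
    "RTX 2080 (GDDR6)":    (256, 14.0),   # -> 448.0 GB/s
    "RTX 2070 (GDDR6)":    (256, 14.0),   # -> 448.0 GB/s
}

for name, (bus_bits, gbps_per_pin) in cards.items():
    print(f"{name}: {bus_bits * gbps_per_pin / 8:.1f} GB/s")
```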


 
I find it odd they released the 2080ti alongside the 2080 and 2070... It just feels... Odd...

Oh welp, let's hope it's just a coincidence and they'll still release something better for a refresh later on.

Cheers!
 
I think the 2080 Ti was ready for prime time because Nvidia delayed the RTX release while the 10 series was still selling strong. I imagine they never really stopped R&D and kept going until the Ti was ready, then launched them all together. This also might pace them well for the next wave, as AMD, while not really competitive right now, has 7nm in the pipeline. Nvidia can define ray tracing as the new direction and further entrench itself in the market because AMD has no equivalent. Then, as AMD drops 7nm, Nvidia drops 7nm with ray tracing: advantage extended. Someone mentioned another reason, which is that the premium pricing of Turing GPUs props up the 10 series, which is still in demand because Turing is out of reach for a great many gamers. I'm sure that after waiting and then seeing the prices for Turing, many people said to heck with that and bought a 10 series after all.
 


But I think the question will be the usual "is it worth it or not?" If the 2070, for example, is capable of matching the 1080 Ti in classic games (no ray tracing, no DLSS), then 1080 Ti-level performance has officially been brought down from $700 to $500.
 
If the 2070 performed at the level of the 1080 Ti, then yes. But according to the leaks, the 2070 only performs a bit better than the 1080, not at 1080 Ti level.

If the 2080 performed near the level of the Titan V, then yes. But according to the leaks and Nvidia's own chart, the 2080 only performs a bit better than the 1080 Ti.

If the 2080 Ti performed at double (2x) the 1080 Ti, then yes. But according to the leaks, the 2080 Ti is only about 50% more powerful, i.e. 1.5x the 1080 Ti.
 
The 2080 and 2070 might be expensive, but I'm not sure about the 2080 Ti. I think the majority of people want the 2080 Ti to be priced at exactly the same price as the 1080 Ti while offering 50-60% more performance. But the thing is, right now the competition cannot even touch the 1080 Ti, so there is no pressure for Nvidia to drop the 1080 Ti's price, and it is more logical for Nvidia to charge even more for the 2080 Ti. Ultimately, though, even if Nvidia did price the 2080 Ti at $700, that wouldn't be good for consumers down the road either.
 


The day the Pascal series is either out of stock or no longer selling, Nvidia will have to lower prices on the RTX series, especially the RTX 2080 Ti (to around $800; we cannot expect Nvidia to drop its price to $700 at any point). Until then we have to wait. All of this is caused by two main factors. The first is that AMD is unable to compete with Nvidia. The second, but equally important, is cryptocurrency mining, because of which Nvidia had to delay the Turing launch by months just to clear already-manufactured Pascal GPUs. If both of these factors had been addressed, the RTX 2070 and 2080 would have launched long ago at much more meaningful prices of $450 and $650 respectively, and the RTX 2080 Ti would now be launching at $750.
 
The problem here is how you put the price into the wider picture of things. Currently, the price of the 2080 Ti (or anything above $400, really) equals weeks (months?) of groceries, maybe a full month's rent, or even a nice 3-day vacation somewhere with your family/friends. Out of those, you have to make choices about what you can live with and without. Given that, and being a tad obvious (needed at times), it seems like nVidia is touting the cards for people that:

1.- Don't need to choose and can afford everything.
2.- Have little understanding of life costs and/or priorities.
3.- Are willing to get credit for one.
4.- Need the extra FPS at all costs and/or the eye candy.
5.- Are raging fanbois.

There might be a few more, but that list is pretty damn good and it even applies to every single expensive piece of tech out there. Including Teslas!

People need to take a stand here on what they consider affordable for themselves and what they're willing to put up with. I'm not going to put my hands in the fire for any big corporation, so I won't even give nVidia the benefit of the doubt on pricing. If people don't buy the cards, nVidia (and the whole chain behind it) will get the message instantly and get reasonable with pricing.

Cheers!
 
Yes, and trust me, I thought of that as well, but then Nvidia can get away with saying that the RTX 2080 Ti was made for the crowd who direct their budget toward Titan cards and that there will simply be no Turing Titan; or, even easier, they can simply produce a smaller number of 2080 Ti cards. Only if people stop buying the RTX 2080 (which I doubt will happen) can we expect a price cut.
 
Nah, let the price go sky high for a generation or two. If Nvidia drops Turing prices after the Pascal inventory depletes, that is still going to give AMD a lot of trouble. AMD needs the room to put itself in a better position first, and Nvidia charging crazy prices gives AMD that opportunity. That's why I said that if Nvidia had priced the 2080 Ti at $700 from the get-go, it would not be good for us in the long run either.
 
Until we have actual answers from direct comparisons between Pascal and Turing, nobody is going to have the true answers.
If we get 30% and above 1080 Ti performance in rasterization, then the card is definitely worth the price; if not, I'm sticking with my 980 Tis lol
 
[Image: leaked NVIDIA GeForce RTX 2080 Time Spy benchmark result]

People are assuming it to be the RTX 2080, but based on the limited info given there I could argue it's the RTX 2070. Both the RTX 2080 and the 2070 pack 8GB of GDDR6 memory and have similar clock speeds, so why consider it the RTX 2080 when it could just as well be the RTX 2070?
 


That's a positive way to take the hit. Well, it works for me.

We will have to wait two more weeks to get exact info on the performance of the cards.
 


I'd heard it said that Nvidia hasn't given out any 2070s for review, so that makes it likely that the benchmark is for the 2080. If it is the 2080, only marginally faster rasterized performance than the 1080 Ti is disappointing.
 


It has not been given out, but this could be an internal testing leak. I'm saying that because the possibility of it being the 2080 is as low as it being the 2070. The argument against it being the 2070 is simply that the 2070 hasn't been given out yet. The 2080, on the other hand, is only weeks away from launch, and I doubt it still lacks finished drivers, which would explain it showing up as "Generic VGA" instead of as a 2080. The 2070 is still far from its launch date and could well be under testing on unfinished drivers that show it as Generic VGA.

If this had been posted a few weeks earlier, I would probably have agreed blindly. But right now RTX 2080s are already being shipped to retail outlets all over the world, so I expect them to have finalized, working drivers rather than under-developed drivers unable to identify the card as a 2080. There is no reason for anyone to keep testing on older drivers when Nvidia already has new drivers preinstalled on the RTX 2080s that are being shipped.
 

mjbn1977

Here are my 2 cents in regards to all the speculation and rumors about the RTX line. I'm neither for nor against it and have a neutral stance on pricing. I'm following every little piece of information about the RTX launch and the performance leaks. A few thoughts:

1. Nvidia researched this technology for 10 years. They could have launched it last year, or they could have launched it next year, but they decided to launch it this year. Considering that they skipped a year between GeForce releases, that tells me they finally got it to a point where it will make an impact on the graphics market. It was not ready last year, otherwise we would have seen it (it also has to do with the availability and cost of GDDR6).

2. Nvidia is doing pretty well as a company overall. So there is no desperate financial need to throw a product on the market that is not ready yet.

3. Nvidia is very well aware that most players want to play games at 60fps+, and they know that if they can't deliver that experience with the new technology, their loyal customer base will get a little upset.

4. Price: yeah, it sucks, but there are a lot of factors to consider: GDDR6 RAM, the second-largest chip Nvidia has ever built (that costs extra money), the 12nm process (lower yield than 14nm), new import duties on some electrical components, and the R&D invested over the last 10 years into Tensor cores and ray tracing. I don't love the high price, but I understand their pricing to some degree. Don't get me wrong, I would still prefer to pay $600 for an RTX 2080.

5. Will ray tracing make gaming slower? Probably yes, but I speculate that the technology is good enough to deliver at least 60fps at 1080p and 1440p. As soon as there are optimized drivers and game developers have played around with the technology, it will get much better than the demos shown at Gamescom. Ray tracing will make games look significantly better, but if you play online competitively and you need 144fps, ray tracing might not be for you.

6. Don't forget, if you play a game that uses the RT cores for ray-traced soft shadows or reflections (or both), the CUDA cores have a lot of extra resources left over for other work.

7. The same will most likely be true for the new AI anti-aliasing that the Tensor cores are supposed to handle. If that turns out to look nice and game developers integrate it into more games, not doing AA on the CUDA cores will open up HUGE resources. Just play a game, turn off all AA, and see how that affects your FPS.

This is all just speculation, and it could turn out better or worse. But I really don't think Nvidia would throw a product on the market that does not deliver. Nvidia can't just live off preorders...

Just be patient and wait until mid-September. Read the reviews.

This all probably sounds very optimistic. Yeah, I don't have any reason not to be optimistic. But even so, I will NOT go ahead and pre-order an RTX card without reading at least a couple of reviews when they come out.

Again, take all performance leaks with a grain of salt. We're talking about unofficial benchmarks without optimized drivers. NEVER underestimate optimized drivers.
 
I just watched a HW News video by GamersNexus which included info on the overstock of 10-series GPUs. So it would make sense that Turing is so expensive, since Nvidia wants to clear out its 10-series cards first.

At least it isn't too big of an issue, at least for now; performance-wise the 10 series still looks very good even with the 20 series coming.
 
MERGED QUESTION
Question from hmonkey20 : "What does everybody think about the RTX?"



I think that until real benchmarks come out, everyone is hyping up ray tracing for nothing. Yes, it's a cool technology, but it will probably be a while before it's taken advantage of well.
 
The most benefit is for game developers, not so much for gamers. For the foreseeable future, hybrid rendering is the way to go, though game engines need to adapt first before it can really shine. Just like the 4A devs (of the Metro series) said, what we see right now is still a naive implementation of RT.