Nvidia Volta Megathread

Still speculation at this time. Ray tracing is now part of DirectX, and it would be weird if only the high end were compliant with the latest DirectX spec. Some people speculate that ray tracing hardware will be useless on midrange and below since it will be taxing even for high end hardware, so the x60 not having RT cores would make sense. But looking at how effective the RTX 8000 is versus even four Titan Vs at ray tracing, even x60-level GPUs will probably pack quite a punch for ray tracing stuff. Not to mention no game will be fully ray traced like that Star Wars demo, so even a 2060 should be very capable of handling games like Metro Exodus. And even if the 2060 lacks RT cores, performance should be similar to a 1080 or slightly slower, for a much cheaper price.
 


Rename it to Volta and Turing, since the first half is about Volta while the recent comments are about Turing. Creating a new thread is a waste of time when a lot of useful info has already been discussed in this one.
 
Turing could be Volta plus more effective RT tech, or something totally new. We know the Pascal GP100 SM config is very different from the rest of the Pascal chips, so Volta might end up being a 100% compute design while Turing is more about rendering performance plus some compute in the form of tensor cores. Anyway, I see something interesting with Turing that many people seem to ignore since the focus is mostly on RT: its tensor core performance. The tensor cores support FP16, INT8 and INT4, and Nvidia's slides specifically mention 125 TFLOPS of FP16 on the tensor cores, meaning Turing will offer more than double its FP32 performance in FP16. Also, with the tensor cores Nvidia will probably offer something similar to AMD's RPM. It will be interesting to see more unveiled at the Gamescom event.
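To put that tensor core bit in concrete terms, here is a minimal CUDA sketch of the WMMA API that exposes them, assuming a Volta/Turing-class GPU (compute capability 7.0+) and CUDA 9 or later. One warp performs a single 16x16x16 FP16 matrix multiply with FP32 accumulation, which is the operation behind the quoted FP16 TFLOPS figures; the kernel name and matrix contents are just placeholders of mine.

#include <cuda_fp16.h>
#include <mma.h>
#include <cstdio>

using namespace nvcuda;

// One warp computes C = A*B + C for a 16x16x16 tile on the tensor cores.
__global__ void wmma_16x16x16(const half *a, const half *b, float *c) {
    // Fragments live in registers and are owned collectively by the warp.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);               // C = 0
    wmma::load_matrix_sync(a_frag, a, 16);           // load 16x16 FP16 tile of A
    wmma::load_matrix_sync(b_frag, b, 16);           // load 16x16 FP16 tile of B
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // FP16 multiply, FP32 accumulate
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}

int main() {
    half *a, *b; float *c;
    cudaMallocManaged(&a, 16 * 16 * sizeof(half));
    cudaMallocManaged(&b, 16 * 16 * sizeof(half));
    cudaMallocManaged(&c, 16 * 16 * sizeof(float));
    for (int i = 0; i < 16 * 16; ++i) {
        a[i] = __float2half(1.0f);
        b[i] = __float2half(1.0f);
    }

    wmma_16x16x16<<<1, 32>>>(a, b, c);  // a single warp of 32 threads
    cudaDeviceSynchronize();
    printf("c[0] = %.1f (expect 16.0)\n", c[0]);
    return 0;
}

Compile with something like nvcc -arch=sm_70; WMMA simply does not exist on earlier parts, which is why the FP16 numbers jump so much on these chips.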
 
The tensor cores are supposed to accelerate AI applications. The core of Nvidia's hybrid rendering with Turing is this: rasterization, ray tracing, compute and AI, where the AI part is supposed to clean up the final image of the whole scene. The actual pipeline is still quite complicated; hopefully we can get more details at Nvidia's next event.
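Just to illustrate where that cleanup pass sits in a frame, here is a small CUDA sketch under my own assumptions: a noisy one-sample-per-pixel ray-traced buffer goes through a post-pass filter before compositing. Turing's real denoiser is supposed to be a neural network running on the tensor cores; the simple 3x3 box filter below is only a stand-in to show the shape of that step, and all names and sizes are made up.

#include <cstdio>

// Placeholder "AI cleanup" pass: average a 3x3 neighbourhood of the noisy
// ray-traced buffer. The real Turing denoiser would be a trained network,
// not a box filter; this only shows where such a pass slots into the frame.
__global__ void denoise_pass(const float *noisy, float *clean, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    float sum = 0.0f;
    int n = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < w && ny >= 0 && ny < h) {
                sum += noisy[ny * w + nx];
                ++n;
            }
        }
    clean[y * w + x] = sum / n;
}

int main() {
    const int w = 64, h = 64;
    float *noisy, *clean;
    cudaMallocManaged(&noisy, w * h * sizeof(float));
    cudaMallocManaged(&clean, w * h * sizeof(float));
    for (int i = 0; i < w * h; ++i) noisy[i] = (i % 7 == 0) ? 1.0f : 0.0f;  // fake 1 spp noise

    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    denoise_pass<<<grid, block>>>(noisy, clean, w, h);  // raster + ray passes would run before this
    cudaDeviceSynchronize();
    printf("denoised centre pixel: %f\n", clean[(h / 2) * w + (w / 2)]);
    return 0;
}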
 
So, a 2080 Ti from the get-go?

https://videocardz.com/newz/exclusive-msi-geforce-rtx-2080-ti-gaming-x-trio-pictured

https://videocardz.com/newz/palit-geforce-rtx-2080-ti-and-rtx-2080-gamingpro-series-unveiled

Not sure if Nvidia decided to move the Ti series down in product positioning (meaning the 2080 Ti would use TU104 instead of TU102), or if it is as simple as the name implies: release the fastest Turing (TU102) from the get-go with at least a 60% performance increase over the existing 1080 Ti. Nah, too good to be true. Does Nvidia intend to surprise us or to let us down? If they really bring the Ti series down from the "fastest" performance bracket... what kind of hype is that?
 
If the RTX 2080 Ti is around 50-60% more powerful, then the Titan will be 70-75% more powerful than the GTX 1080 Ti. That is a crazy jump in performance; I don't think it will be that huge. The RTX 2080 Ti is rumored to be priced at $800.
 
Big jump or not, the 1080 Ti is already unchallenged and considered a very fast GPU even for 4K. A mainstream user like me who is still on a 1080p monitor will most likely only get something like a 2060 at best. Even my 970 still plays everything I want to play right now quite easily (60 FPS), even at high or very high presets. Right now the high end is really a "fun fair" for those enthusiasts with money in hand...
 
It's a little bit chicken and egg, isn't it? To convince gamers to switch to 4K displays, you need a GPU capable of maxing out settings at 4K. To sell maxed-out 4K-capable GPUs, you need gamers to switch to 4K displays...

So, cue 4K@144Hz panels to convince the everyday gamer that they'll just have to 'settle' for 4K@60Hz.
 
Expecting a true 4K 144Hz capable card isn't realistic until 2020 at least. To be honest, the majority of the crowd will be satisfied if the RTX 2080 Ti can handle 4K at 60 fps without any compromise in settings, even in graphically intensive games.
 
High refresh panels have more benefits than just being capable of higher FPS. Plus, most tests you see around the web don't even go into the specific settings that allow higher frame rates. We all know there are specific settings in all games that just trash performance in exchange for really subjective eye-candy benefits.

I'll jump in a heartbeat once 4K@120Hz+ panels are common enough to be ~$250. Maybe ~$350 if 27"+.

And that is why I think nVidia is missing the point completely with that pricing (if true). Neither side is ready for mass adoption until they go *under* $400.

Cheers!
 
Hmm, to be honest I'm interested to know what Nvidia intends to do with the NVLink connector. Some people expect NVLink to be exclusive to the professional market, but it seems the connector will still exist on consumer cards. With multi-GPU setups becoming less and less exciting due to lack of support from the games themselves, I wonder what Nvidia has planned to re-ignite interest in multi-GPU setups.
 
Since nVLink is pretty much a "ring bus on a cable" for the GPUs, they might be trying to one-up what "multi-GPU" means. As in, I wouldn't be surprised if they straight out expose the 2 GPUs to the system as one "single GPU" config. That would be the only feasible way for them to make "multi-GPU" relevant again.

Cheers!
 
AFAIK Nvidia already has the tech to make such a thing happen, and they already have an actual commercial product doing it: the DGX-2, where all 16 Tesla V100s are connected via NVSwitch and the system sees one massive GPU instead of 16. It is supposed to be transparent to the operating system and to any application using that "single" GPU. The only concern this kind of networking raises is latency. So far, what has been confirmed is that the new NVLink on Turing will let the cards' VRAM be added into one shared pool instead of each card needing its own copy of the resources. The next step is most likely figuring out how to connect two different dies together, like AMD did with Infinity Fabric.
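For anyone curious what that pooling looks like from the software side today, here is a minimal CUDA sketch assuming two GPUs that show up as device 0 and device 1 with a peer link between them (NVLink or PCIe). Once peer access is enabled, a kernel on GPU 0 can dereference a pointer that physically lives in GPU 1's VRAM; the names and values below are just placeholders of mine.

#include <cstdio>

// Kernel launched on GPU 0 that reads a value resident in GPU 1's VRAM.
__global__ void touch(const float *remote, float *out) {
    out[0] = remote[0] + 1.0f;
}

int main() {
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    if (!can01 || !can10) {
        printf("no peer access between GPU 0 and GPU 1 on this system\n");
        return 1;
    }

    // Allocate and fill a buffer on GPU 1.
    cudaSetDevice(1);
    float *buf1;
    cudaMalloc(&buf1, sizeof(float));
    float seed = 41.0f;
    cudaMemcpy(buf1, &seed, sizeof(float), cudaMemcpyHostToDevice);

    // Switch to GPU 0, map GPU 1's memory, and read it from a kernel.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);
    float *out0;
    cudaMalloc(&out0, sizeof(float));
    touch<<<1, 1>>>(buf1, out0);
    cudaDeviceSynchronize();

    float result = 0.0f;
    cudaMemcpy(&result, out0, sizeof(float), cudaMemcpyDeviceToHost);
    printf("read-over-the-link result = %.1f (expect 42.0)\n", result);
    return 0;
}

Whether the consumer NVLink actually exposes a combined memory pool to games is still unknown; peer access like this is just the building block that already exists.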
 


That was my point precisely...

They already have that working in the "pro" world, so the next logical step is to give it a "consumer-ey" spin and sell it. That is the most interesting aspect of seeing a full-fledged nVLink in a consumer card (fingers crossed).

Cheers!
 