Nvidia Volta Megathread

Status
Not open for further replies.
by Paul Lilly — Saturday, December 09, 2017
Leaked NVIDIA TITAN V Benchmarks Show Volta GPU Demolishing All Competitors

[Image: NVIDIA TITAN V graphics card]

Make no bones about it, the new TITAN V is a monster graphics card. We know this from the spec sheet alone—the NVIDIA TITAN V is rocking a Volta GPU underneath the hood, along with a whole bunch of redesigned Tensor cores for deep learning workloads. Looking beyond the spec sheet, a user on Reddit has assembled a collection of links to some early, unofficial benchmarks of the TITAN V, and boy-oh-boy is it fast.

We say "unofficial" because there is always the chance these benchmarks are fake. We do not know how many of these cards NVIDIA might have shipped out to reviewers, but however many it is, they are surely under an NDA. That said, it is not uncommon for benchmarks to leak out ahead of schedule. Assuming the impressive numbers are legit (which we'll get to in a moment), performance will only get better in time as NVIDIA tweaks its drivers.

To quickly recap, the TITAN V sports a 21.1-billion transistor GV100 GPU that is manufactured on a 12nm FFN high-performance process, customized by NVIDIA. It has 5,120 CUDA cores, 640 Tensor cores, a 1,200MHz base clock and 1,455MHz boost clock, and a whopping 12GB of HBM2 memory running at 1.7Gbps, with a 3,072-bit interface for an effective 653GB/s of memory bandwidth. That's quite the bark, now let's look at its bite.
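The quoted bandwidth figure is easy to sanity-check: effective bandwidth is just the per-pin data rate times the bus width. A quick back-of-the-envelope in Python:

```python
# Sanity-check the quoted Titan V memory bandwidth from the spec sheet.
# Effective bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8.
data_rate_gbps = 1.7     # HBM2 effective data rate per pin
bus_width_bits = 3072    # memory interface width
bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gb_s:.1f} GB/s")  # 652.8 GB/s, matching the ~653GB/s quoted
```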
[Benchmark charts: Ashes of the Singularity, Ashes of the Singularity DX12, Gears of War 4, Rise of the Tomb Raider, Superposition 1080p, Superposition 8K, and 3DMark Fire Strike]

The TITAN V was put through its paces in several different benchmarks, including 3DMark Fire Strike. Running at stock clocks and using NVIDIA's latest 388.59 WHQL drivers, the card posted a graphics score of 32,774. That listing is no longer showing up on Futuremark's website. However, it does still show performance numbers with the card overclocked by 170MHz, which bumped the score up to nearly 36,000 points (35,991, to be precise).

To put those numbers into perspective, a TITAN Xp based on NVIDIA's Pascal GPU architecture typically scores around 28,000, as does the GeForce GTX 1080 Ti. The TITAN V is a good clip faster, especially when overclocked.
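A quick bit of arithmetic shows how those scores translate into relative gains (the ~28,000 figure is the article's ballpark for Pascal, not a measured result):

```python
# Percent improvement of the leaked Titan V Fire Strike graphics scores
# over the article's ~28,000 ballpark for a Titan Xp / GTX 1080 Ti.
def percent_gain(new: int, old: int) -> float:
    """Relative improvement of `new` over `old`, in percent."""
    return (new / old - 1) * 100

pascal_baseline = 28_000
print(f"stock (32,774): +{percent_gain(32_774, pascal_baseline):.1f}%")  # roughly +17%
print(f"OC (35,991):    +{percent_gain(35_991, pascal_baseline):.1f}%")  # roughly +28.5%
```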
Unigine's Superposition benchmark also yielded some impressive numbers. At stock speeds, the TITAN V scored 5,222 in the 8K preset, and 9,431 in the 1080p Extreme preset. The latter is particularly interesting: famed overclocker Kingpin had previously taken a GeForce GTX 1080 Ti, stripped off the heatsink, bathed the card in liquid nitrogen (LN2), and overclocked it to 2,581MHz, which resulted in a score of 8,642 in the 1080p Extreme preset. The TITAN V scored nearly 800 points higher.

The TITAN V was also put through its paces in some gaming benchmarks, not just synthetic ones. They include Ashes of the Singularity, Ashes of the Singularity DX12, Rise of the Tomb Raider (1440p), and Gears of War 4. Here is a look at those:
In short, this is a very strong showing for the TITAN V, and bodes well for NVIDIA's Volta GPU, which should trickle down into consumer-based gaming cards sometime next year. Not that you can't plunk down $3,000 for a TITAN V and use it for gaming, but this card is really aimed at scientists and researchers.
What's also interesting to note is a screenshot of EVGA's Precision X OC seemingly supporting the TITAN V. We've reached out to EVGA to see if this is truly the case and will update when we hear back. In the meantime, we should mention the clocks on the TITAN V. Its 1,200MHz base clock and 1,455MHz boost clock are slower than both the TITAN Xp (1,417MHz / 1,480MHz) and GeForce GTX 1080 Ti (1,480MHz / 1,582MHz). The fact that the TITAN V was able to achieve a comfortable lead with a clockspeed disadvantage (at stock) is a testament to Volta and the work that NVIDIA has put into the GPU.

Of course, NVIDIA is not in a rush to bring Volta to the consumer market, as AMD has not fully caught up with Pascal (Vega comes close). The silver lining to that is it gives NVIDIA time to tweak things and flesh out better drivers for when Volta does infiltrate the mainstream gaming sector. Based on what we've seen here, we can hardly wait.
 
Here are early Titan V benchmarks
By Jarred Walton 5 hours ago
How does a $2,999 Titan V compare to a $699 GTX 1080 Ti?

[Chart: Titan V benchmark results compared with GTX 1080 Ti]

The difficulty in analyzing the current results is that we don't have fully identical systems. The Titan V benchmarks were done with a Core i7-6700K, apparently running stock, and that immediately raises some concerns about CPU bottlenecks. Videocardz for its part used a Core i7-6800K, a different architecture running on a different platform, with different CPU clocks and core counts. Both the Titan V and Videocardz GPUs are also listed as overclocked, though specific details aren't given. For my tests, I used an i7-8700K, running stock, with an MSI GTX 1080 Ti Gaming X (factory overclocked), giving a third data point.

The benchmarks we can easily compare are the 3DMark and Unigine tests, which aren't always the best predictors of gaming performance. I tried running Rise of the Tomb Raider and Ashes of the Singularity benchmarks as well, but my results indicate either a significant difference in settings or some other factor, so I've omitted the game performance for now.

While Videocardz shows only modest improvements in performance, my GTX 1080 Ti numbers are quite a bit lower. Firestrike Ultra is only 8 percent faster, but Firestrike Extreme shows a 26 percent improvement, Time Spy a 33 percent increase, Heaven gives a 32 percent improvement, and Superposition gives 23 percent at 8K optimized and a whopping 63 percent increase at 1080p extreme.

Obviously there's still the issue of real value, as even a 60 percent improvement is nothing compared with a price that's four times as high. And while Nvidia says the Titan V isn't for gaming, we're still trying to get a card for testing, just to provide some concrete numbers. I also hope to run a few deep learning tests, to show what the Titan V can do with the right workload. But if you're interested in playing games, I can only recommend holding off to see what the actual consumer models of Volta look like.
 
I swear to God, if the next GTX x60 card beats my 1080 (like the 1060 beating the 980), I will be severely pissed for spending $800 AUD.

I don't want to regret buying a 1080, but the possibility of a mid-range Volta card beating a high-end Pascal makes me want to regret it.
 


So you don't want progress to be made in the GPU world to justify your purchase?

That is some wonderful logic you have there. Plus, why would you be angry about having enjoyed your 1080 when the "2060" hasn't even been announced yet?

On the other hand, the sad reality is that Nvidia can charge you whatever it wants, because AMD is not close enough to force Nvidia to lower prices. And no one really knows where Volta's performance will land yet, so that doesn't mean it's going to leapfrog Pascal.

Cheers!
 
The 1080 released in 2016. In 2018, it is to be expected that the new x60 part will roughly reach 1080 performance (it might be a bit slower); that has always been the case. Just look at the 1060 itself. Even the 960 is roughly in the same ballpark as the 680.

 
Is there even any reason to expect a $200-300 consumer Volta to be better at gaming than a 10xx card of equivalent MSRP, though?
As far as I can tell, Volta's innovations seem to be mostly about Tensor cores and AI thingamabobs.
 


The Titan V's 5,120 CUDA cores are 42.9% more than the 1080 Ti's 3,584 CUDA cores. Does the performance match?

Overclocked NVIDIA TITAN V benchmarks emerge
Published: 9th Dec 2017, 15:20 GMT |

[Chart: overclocked TITAN V benchmark results]

It doesn't look like we are getting the increase we would expect, but maybe we will see a significant change in a gaming version.
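Putting that in numbers makes the shortfall concrete. Below, the core counts come from the spec sheets and the ~17% lead uses the leaked, unofficial stock Fire Strike scores discussed in this thread:

```python
# Compare the Titan V's raw CUDA-core advantage with its observed
# Fire Strike graphics-score lead (leaked, unofficial numbers).
titan_v_cores, gtx_1080_ti_cores = 5120, 3584
core_advantage = (titan_v_cores / gtx_1080_ti_cores - 1) * 100
print(f"core-count advantage: {core_advantage:.1f}%")      # ~42.9%

titan_v_score, pascal_score = 32_774, 28_000               # stock Fire Strike
observed_lead = (titan_v_score / pascal_score - 1) * 100
print(f"observed Fire Strike lead: {observed_lead:.1f}%")  # ~17%
# Performance clearly isn't scaling linearly with core count; lower
# clocks and driver maturity likely account for much of the gap.
```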
 
That's because Nvidia still isn't talking about the gaming aspect of Volta. Though this time around, Nvidia will probably use a different name for its gaming architecture instead of sharing the same name like it did with Pascal.
 
Yeah, I thought Volta would never be a gaming card, because it was specifically designed for AI and cloud computing purposes. I heard the gaming architecture will be named Ampere, but take that with a grain of salt.
 


If we don't even know the name for sure yet, seems like it could be a couple of years before we see consumer models on the market?
 
But the real question is when we can expect to see Volta-based GeForce graphics cards. The GTX 10-series launched a mere month after Pascal’s release at GTC 2016. It's already been seven grueling months since Nvidia unveiled Volta and we haven't heard a peep about GeForce offerings.


Now that the Titan V is in the wild, I'd love to see Nvidia launch a powerful Volta-based GTX 1080 Ti successor during its CES keynote. But nothing in the market is pushing Nvidia to release Volta GeForce cards right now. AMD's much-hyped Radeon Vega cards wound up being unable to beat the GTX 1080—not the Ti version, but the vanilla GTX 1080—in most scenarios, and current GTX 10-series cards are still flying off the shelves thanks to cryptocurrency miners. Hell, many GTX 10-series options are selling for more now than they did when they launched a year and a half ago.

We’ll hear more about the next-gen GeForce lineup someday, but whether that’ll be sooner or later is very much up in the air.
You can read more about this article in PCGAMER
 
There are Navi rumors for late 2018 starting to circulate. Nvidia should release its 2000-series line in a few months, and we typically see ~30% performance uplift generation over generation. If the competition between Intel and AMD is any indication, we need a competitive atmosphere to push innovation to new, loftier heights. It will be ~2 years before we can expect anything competitive from Intel, maybe earlier, we hope! When it comes to competition, the more minds you throw at something, the better our chances of a favorable outcome on the consumer side.


 
We also need to hope >1080p and high-refresh adoption rates take off. Otherwise, a theoretical 30% faster low- or mid-range card would be more than enough for most people. We should probably be thankful for machine learning; otherwise, the R&D costs for GPUs alone probably wouldn't have justified the last round of cards.
 


Surely it is not late 2019? They said Vega would launch in October 2016, but in the end actual Vega only launched almost a year after that. In any case, if Navi is to launch by the end of 2018, then we should see some kind of prototype running actual games by mid 2018.

As for Intel, I can't say anything yet, but this Intel+AMD combo might push Nvidia not to slack off.
 


Why would we need higher resolution? I can't say I've used a >1080p or a >24" monitor, but at this point I don't feel like higher pixel density would accomplish anything anti-aliasing doesn't, while taking vastly more resources.

Pixel density issues might become apparent when using 27"+ monitors, but I suspect they don't work well for gaming, because of eye field of view issues. So you'll have to sit further from the monitor, making its larger size pointless.

Agreed on refresh rates, though. High refresh rate makes tearing and input lag a non-issue.

Besides, there could be other advantages to having a card with surplus power. I'm sure game devs would appreciate not having to optimize like crazy for once. It would also mean smoother framerates in general, as well as power efficiency. There's always room to grow, assuming it costs the same, of course.
 
I'm sure game devs would appreciate not having to optimize like crazy for once.

Honestly, I would prefer for devs to do proper optimization. Take VRAM usage, for example: from my observation, VRAM usage starts exploding when a new generation of consoles arrives. It would be fine if you could upgrade your hardware on a regular basis, but I think not many people can do that. CDPR is one developer that still does proper optimization: while the majority of triple-A games use a bajillion gigabytes of VRAM even at 1080p (some games use up to 4GB), The Witcher 3 barely exceeds 2GB of VRAM usage even at 4K, per TPU's testing. And there is another thing to consider as well: DRAM pricing is quite expensive right now. I saw an article a few months ago saying GPU pricing might go up because of DRAM pricing, so GPUs with more VRAM capacity might end up more expensive. Maybe Nvidia should develop something similar to AMD's HBCC, but without the need for HBM, and make it work automatically, unlike AMD's HBCC.
 


Technically, if people stop at 1080p, GPU makers could stop designing new cards. A 1070 or Vega 56 is generally considered overkill at 1080p in all but a few titles, let alone something like a 1080 Ti. If not for non-gaming applications (machine learning, cluster computing, mining, etc.), why design a 1080 Ti successor if nobody needs it, while simultaneously driving down prices on your other cards?
 


I understand the issue, I just don't see increasing the resolution to put more [unnecessary] stress on the GPU as a solution.
 


It's trading one type of stress for another. AA has always been more expensive than increasing resolution for GPUs, for the simple reason that a lot of AA (the best-quality kinds) actually implies "super-sampling", which is rendering at a higher resolution and then downscaling to the current resolution. Other methods require edge detection and boxing of polygons (as I remember) to apply anti-aliasing. And the least expensive ones are based on post-processing (FXAA), which is just blurring, and the result looks like crap.

Higher DPI for monitors actually has no side effects other than the raw processing requirement, and none of the hidden costs of those techniques that require extra coding on the dev's side. The problem is that Windows has crappy high-DPI support, and only in Windows 10 have they improved it a little.
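The super-sampling point is easy to illustrate with pixel counts: to a first approximation, 4x SSAA at 1080p shades as many pixels as native 4K, so it can't be cheaper than simply raising the resolution.

```python
# Pixel-count comparison: 4x super-sampling at 1080p vs. native 4K.
w1080, h1080 = 1920, 1080
pixels_1080p = w1080 * h1080        # 2,073,600 pixels
pixels_ssaa_4x = pixels_1080p * 4   # render 2x in each dimension
pixels_4k = 3840 * 2160             # 8,294,400 pixels
print(pixels_ssaa_4x == pixels_4k)  # True: identical shading workload
```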

Cheers!
 