Nvidia’s New Titan V Pushes 110 Teraflops From A Single Chip

Status
Not open for further replies.

bit_user

Polypheme
Ambassador
That's actually cheaper than I thought it'd be. The V100 dies are enormous, and in extremely high demand. I guess enough of them had a bad memory channel that they decided selling them as Titans wouldn't cannibalize their Tesla market too badly.

BTW, for comparison, the Quadro P100 sells for about $6k (although I believe it's fully-functional).
 

bit_user

Polypheme
Ambassador

Probably, but its drivers might not be very well optimized for gaming. The few gaming benchmarks I could find on the Quadro P100 were a bit underwhelming.
 
“With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.”
Nvidia’s Titan series graphics cards were never meant for gamers.
With the Titan V, gamers likely won’t be as enticed to drop one into their PC. The cards boast incredibly high Tensor compute performance, but it’s unclear how that would translate to gaming performance. {...} If you want a Titan V, get ready to pony-up a whopping $2,999.

That is a painful price for unknown performance. I'd dare say it will trounce everything currently on the market for gamers, but will the Tensor units get in the way? Will this hint at what's possible with the GTX-30 series? (The GTX-20 series seems to be hinted at as just a Pascal refresh.)
 

Rock_n_Rolla

Distinguished
Sep 28, 2009
5120 Cudas / 640 Tensors / 3,072 bit mem bus / 15 TFLOPs in SP / 1.7Gbps on HBM2 / 250 Watts TDP

-- It will probably perform about the same as a GTX 1080, or somewhere close to the 1080 Ti, since its specialized drivers are streamlined for AI / DL / data crunching.

I BET Nvidia will release a compatible driver for it sometime soon, specifically designed for typical desktop use and gaming. If that happens, then since it's a Volta, there's a high possibility it will beat the 1080 Ti...

... And playing Crysis with it on its very highest custom settings will be a walk in the park. IMO
 

extremepenguin

Distinguished
May 21, 2009
What I want to know about this card is how the drivers stack up in Citrix when it's used as a GPU resource for hosted programs. We currently use Teslas for that, and they are not cheap. If we could pull some of the older Teslas and replace them with these, it would be a huge win for me, both licensing- and performance-wise.
 

extremepenguin

Distinguished
May 21, 2009
As for those asking if it can play Crysis: I am sure somebody is already working with OpenAI to accomplish just that, if only to make the question go away. The answer will be yes, and it plays it better than you.
 

mynith

Distinguished
Jan 3, 2012
This time they're right though, it's not designed for gaming. It'll do it well, I'm sure, but a lot of functionality goes unused. I'm not sure what you'd use tensors for in game development other than in the vertex shader.
 

bit_user

Polypheme
Ambassador

I have it on pretty good authority that they're semi-decoupled from the normal execution pipeline.

The main impact that the tensor units and fp64 units have on gaming is just consuming die area (and making it more expensive) with hardware that games wouldn't use. Even for normal fp16 calculations, you still don't use the tensor units - they're hard-wired for computing tensor products, which shaders don't normally do.
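Concretely, the "tensor product" each Volta tensor core is hard-wired for is a 4×4 matrix multiply-accumulate (D = A·B + C) with fp16 inputs and fp32 accumulation. A rough numerical sketch of that operation in numpy (just a model of the arithmetic, not actual CUDA/WMMA code):

```python
import numpy as np

# Each Volta tensor core performs one 4x4 matrix multiply-accumulate
# per clock: D = A @ B + C, with fp16 inputs A, B and fp32 accumulation.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)

# Inputs are widened to fp32 before accumulating, mirroring the
# hardware's mixed-precision path.
D = A.astype(np.float32) @ B.astype(np.float32) + C
```

Shaders doing ordinary fp16 math never issue this fixed-shape matrix op, which is why the tensor units sit idle in games.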

It would be interesting to see which is faster - Quadro P100 or Titan V. The Quadro has more memory bandwidth, but the Titan V has a completely new ISA and more "cores".
 

bit_user

Polypheme
Ambassador

Moreover it's fp16, while vertex shaders normally operate in fp32.

I'm sure a few clever game developers could find a good use for the tensor units, but there's no incentive to use them if it's a hardware feature almost nobody has. It seems very unlikely desktop Volta GPUs will have them, although Nvidia has announced they'll appear in embedded SoCs.
 

kyotokid

Distinguished
Jan 26, 2010
...$3,000? Crikey, one would be better off saving $500 and getting a P5000 with 16 GB of GDDR5X. Not all of us are into scientific modelling; some of us just need as much VRAM as we can get for CG rendering. When it comes to rendering big scenes, VRAM trumps CUDA cores and floating-point precision speed.
 
KYOTOKID,
Well, one would hope you'd look at BENCHMARKS for the applications you use before deciding on which card to get.

Obviously the Tensor Core applications are significant if the software can properly utilize them.
 

kyotokid

Distinguished
Jan 26, 2010
...true.

The Titan V is actually being marketed by Nvidia more for its compute/deep learning capability (sort of an "entry level" Tesla) rather than graphics production in spite of it having video outputs.

As I work on very involved scenes and plan to render in large format resolution for gallery print purposes, even the "biggest" Quadro card available (the 24 GB P6000) would get chewed up and spat out. This is why I am designing a dual Xeon multi core production/rendering system into which I can throw a boatload of physical memory (128 - 256 GB).
 

DDWill

Distinguished
Aug 3, 2009
Why do I get the feeling that this is going to be another Titan Z scenario?

When the Titan Z was launched it was seriously overpriced, and it made more business sense to get two or more single-GPU cards for a lot less money and get better performance. Eventually it dropped to under half the original launch price (which is what it should have sold for in the first place), just a few months before being phased out by retailers, at which point it sold like hot cakes, most likely to miners and render farms….

Don’t get me wrong, I am seriously interested in this card, but from a business point of view it doesn’t make sense.

I can currently buy 4x Titan Xp for just under £4K and get 48 TFLOPS of SP compute for Iray / V-Ray RT rendering.

Or I can get 2x Titan V for £5.4K and get 30 TFLOPS of SP compute…. It's obvious which option to go for..
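The arithmetic behind that comparison, using the poster's own quoted prices and FLOPS figures (street prices, not official):

```python
# Comparing SP compute per pound, using the figures quoted above
# (prices are the poster's quotes, not official pricing).
titan_xp = {"cards": 4, "cost_gbp": 4000, "tflops_sp": 48}
titan_v  = {"cards": 2, "cost_gbp": 5400, "tflops_sp": 30}

for name, cfg in (("4x Titan Xp", titan_xp), ("2x Titan V", titan_v)):
    per_1k = cfg["tflops_sp"] / cfg["cost_gbp"] * 1000
    print(f"{name}: {per_1k:.1f} TFLOPS per £1000")
# → 12.0 vs 5.6 TFLOPS per £1000: the Xp farm wins on raw SP throughput
```

Of course this only measures fp32; the Titan V's fp64 and tensor throughput don't show up in this metric at all.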

I'm waiting on some Octane, Iray, V-Ray RT, and Blender benchmarks, but it looks like I'm waiting until next year to see what Nvidia has planned for the GeForce-range Voltas.

Shame also that there is no SLI or NVLink…

There is still no single GPU on the market that can handle some of the more demanding games at ultra settings on a 4K screen and keep the frame rate above 60fps, so until there is, SLI still needs to be there as an option for the extreme gamer.

I also hate that 4-way SLI is now dead. My priority will always be work over gaming when I build a new workstation, but if I am going to build a 4x GPU system for rendering, why can't I also take advantage of all that power when I do get the time to game?

Multiple-GPU setups for rendering make sense if you need a large number of images rendered noise-free in one day: e.g. if a single GPU renders a noise-free image in 40 minutes, 4x GPUs will do it in 10. For gaming, SLI has diminishing returns, but why not, if you already need that power for pro apps..
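That near-linear scaling holds because each sample/frame in a path-traced render is independent work. A tiny sketch of the idealized model, using the example figures above (the `efficiency` knob is a hypothetical stand-in for scheduling/transfer overhead):

```python
# GPU rendering scales close to linearly: every sample is independent.
SINGLE_GPU_MINUTES = 40  # one GPU: 40 min per noise-free frame, as above

def render_time(n_gpus, minutes_per_frame=SINGLE_GPU_MINUTES, efficiency=1.0):
    """Idealized render time with n GPUs; efficiency < 1 models overhead."""
    return minutes_per_frame / (n_gpus * efficiency)

print(render_time(4))  # → 10.0 minutes, matching the 4x-GPU example
```

Gaming with SLI diverges from this model precisely because frames are *not* independent: alternate-frame rendering adds sync and latency costs that the simple division above ignores.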

I also understand that Volta had some serious R&D cha-ching thrown into it, and that Nvidia don't want to put a dent in their Quadro / Tesla sales, so they are keeping the price high. But I already think the Titan Xp is overpriced: all you get is an extra 1GB of RAM and an extra 256 CUDA cores, yet you have to pay an extra 450 for it. And now the Titan V is nearly 3x the price of the Titan Xp, so you are paying almost 200% extra for only a 30-50% performance increase per card.

Shame, but why do I still want them?? :-/
 

bit_user

Polypheme
Ambassador

This card is mainly for AI and fp64. They say as much. So, what Nvidia card can you get that costs less than $1500 and has more than half the performance in either metric? Your error is in thinking of this as a fp32 or gaming card.

Maybe Nvidia eventually does make some cheaper, smaller chip for AI, but unlike Titan Z, there's currently nothing else with those juicy tensor cores.


Its price really has to do with it being an incredibly large chip, fabbed on the latest process (12 nm). Because of that, I don't ever expect to see the price drop by much. This chip was never made for consumers. In that sense, it truly hearkens back to the original Titan.
 

DDWill

Distinguished
Aug 3, 2009
I am fully aware of the target audience of this card, and my specific use for these cards would be production rendering, not gaming, like my post said, so no mistake on my end. I've managed around 10 hours of gaming this year due to client work. I am also fully aware of the immense cost that went into building it, the die size, and why Nvidia placed it at the price point it launched at, but that price point will definitely put off some of the target audience (GPU rendering) who know full well there are faster, cheaper configurations available.

I have been reading up on it for the last few days; however, even in Jensen Huang's Titan V launch speech, one of the areas demonstrated was GPU rendering. So if Nvidia want to target people like me and larger studios for GPU rendering, then they should be aware that, from a business perspective, there are cheaper configurations out there with better performance. As I mentioned before: 4x Titan Xp, or even 1080 Tis, over 2x Titan V.

On average I complete around 30 to 40 hours of rendering per month on a workstation with 4x GPUs. So I am always looking at the next best thing, and I research thoroughly. Less time rendering means more time creating, meaning more time for more clients, etc. These cards would pay for themselves very quickly even if I went for the Titan V option. I will wait for productivity benchmarks, but most likely I will wait until next year to see what's available before upgrading my aging Titans...
 

bit_user

Polypheme
Ambassador

That was probably a nod to this:

https://blogs.nvidia.com/blog/2017/12/07/nvidia-optix-ai-denoiser/

Otherwise, as you say, it's not their most cost-effective GPU rendering solution.
 

kyotokid

Distinguished
Jan 26, 2010
...as I saw on the Nvidia site, the Titan V is NVLink compatible. However, at a total of $7,200 for two Titan Vs plus the dual NVLink widgets to pool memory for 24 GB, it is far less cost-effective for rendering than just dropping in a single $5,000 Quadro P6000.

It will be interesting to watch which direction Volta development for the Quadro line goes in. Might we actually see 32 GB of HBM2 (the architecture supports an 8-Hi stack configuration) on the P6000's successor? Most likely, when Volta reaches the GTX line it will use GDDR6 memory in the top-end cards, as HBM2 would make them too expensive.

When Nvidia removed the GTX designation from the Titan after the Maxwell series, it pretty much said this was no longer going to be gaming/consumer hardware, particularly as they used the same GP102 processor as the Quadro P6000 and then unlocked all the SMs, giving it the same number of cores and FP performance as the far more expensive Quadro card, albeit with half the memory and different drivers.
 

DDWill

Distinguished
Aug 3, 2009
Thanks for the link BIT_USER,

Looks like I have more reading to do.

So I'm guessing a GPU without Tensor cores will have to use CUDA cores for OptiX 5.0, but the Titan V will use its Tensor cores for OptiX, freeing up the CUDA cores. Up to 12x faster is also impressive, as was the demonstration with the car interior in the launch video..

Definitely more reading to do.

The other problem with Nvidia is knowing the right time to buy. I would have been gutted if I had purchased a Titan X Pascal only for Nvidia to bring out the fully loaded GP102 Titan Xp a short time later for the same price, then gutted again when Nvidia dropped the price for the Star Wars special, but at that point the next architecture is around the corner, so you may as well wait... This happens with every new architecture, making consumers hesitant each time. It would be far better to release a full-fat version and drop prices as yields improve. Yes, it would annoy some who paid full whack for the card at launch, but at least they would be safe knowing they have the best-in-class card until the next architecture, with no hesitancy to buy.

Plus, it's about time Nvidia addressed the heat issue with their Titan cards. As they don't let them loose to third-party vendors, there's no hybrid or water-block option without voiding the warranty. When GPU rendering, the cards get hot fast, as they stay at a constant 100% load until the render completes, unlike gaming, where the load constantly varies or is distributed using SLI. It's already proven that the current air cooler is not up to the job, meaning ramping the fans up to 85% before rendering.

Hopefully Nvidia will address this next year...

Sorry for any typos. Typed on my mobile with predictive text... and too lazy to edit :)
 

kyotokid

Distinguished
Jan 26, 2010
...still they run cooler than AMD's Vega series (particularly the Frontier).

However, I do agree that making the newer (post-Maxwell) Titan series an Nvidia-exclusive product is a mistake, particularly with the two-card purchase limit. Even pro-grade Quadro cards are available from third-party manufacturers.
 

bit_user

Polypheme
Ambassador

For a business, I wouldn't think timing would be such an issue. If you can put together the ROI case for a hardware upgrade, then it remains unchanged even if the same HW becomes cheaper or something faster comes out after you buy.

I get that if you pay a lot for the best, it's annoying if they turn around and release a 6% faster version right after, but I wouldn't be gutted about it. As for price drops: as long as you get some use out of your hardware before a price cut, at least you got more out of it than if you'd waited. Time = money.

BTW, I hadn't noticed the price drop on the Star Wars edition... $1138, heh... as if they were afraid we didn't believe they were really geeks.
 

kyotokid

Distinguished
Jan 26, 2010
...considering the government has contracted to have a supercomputer built at Oak Ridge with 4,600 nodes, each of which will have 6 Tesla V100s, 512 GB of DDR4 (+96 GB of HBM2 from the Teslas), and 2 IBM POWER9 CPUs using NVLink boards (at around $8,000 per V100, that is roughly $48,000 per node just for the GPUs, yielding a total of $220.8 million)...

...yeah, no sweat springing for grants so a lab or college can pick up a couple of $3,000 Titan Vs.
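The back-of-the-envelope total checks out (all figures as quoted in the post, with the per-V100 price being the poster's estimate):

```python
# Sanity-checking the Oak Ridge GPU cost estimate quoted above.
nodes = 4_600
v100_per_node = 6
price_per_v100 = 8_000  # USD, the poster's estimate, not an official price

per_node = v100_per_node * price_per_v100  # $48,000 per node, GPUs only
total = nodes * per_node                   # $220,800,000 across all nodes
print(f"${per_node:,} per node, ${total / 1e6:.1f}M total")
# → $48,000 per node, $220.8M total
```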
 