Nvidia Lists Quadro RTX 6000 GPU at $6,300

Hetzbh

Prominent
Aug 1, 2017
4
0
510
Yeah, go buy an RTX 2080 with double the RAM (ECC, at that), a lowered clock (see the cooling solution), at 6.5X the price. Run! :)
 

KD_Gaming

Commendable
Nov 7, 2016
41
1
1,530
@hetzbh, are you dumb? Did you even read past the article title? These far outperform a regular card in workstation programs.
 

kinggremlin

Distinguished
Jul 14, 2009
574
41
19,010


Your first question was answered when he claimed "double the RAM," as if 8 times 2 were 24. Also, the RTX 6000 uses the same core as the 2080 Ti, not the 2080, and even accounting for that mistake, 11 times 2 is still not 24.

Unlike the old days, I don't think this will outperform the 2080 Ti by much, as the 2080 Ti isn't artificially crippled. You're paying for more RAM and certified drivers, which are required by many enterprise software packages.
 

kyotokid

Distinguished
Jan 26, 2010
246
0
18,680
...if you are into creating 3D content and rendering large scenes, the 6000 will definitely outperform the 2080 Ti. That 24 GB (which becomes 48 with a second card and NVLink) over the Ti's 11 GB is a major benefit. Also, Quadro drivers are optimised for CG production rather than gaming.

You likely won't see full memory pooling in the consumer RTX cards either, as NVLink there only acts as a "beefed up" version of SLI. Also, as I recently read, Iray will not take advantage of the RTX cores, since OptiX Prime (which it depends on) does not support them.

For less than the cost of a single RTX 6000 you can get two RTX 5000s with NVLink and have 32 GB of VRAM, which is a heck of a lot for most 3D artists' purposes.
 

pokeman

Honorable
Oct 30, 2014
25
0
10,530
If the RAM is slower, why use the 6000 for the ray-tracing showcase on Tomb Raider at the RTX Turing announcement?
 

AgentLozen

Distinguished
May 2, 2011
527
12
19,015


This Quadro card looks like it possesses the full complement of Turing resources. I'm pretty confident Quadro drivers are different from GeForce drivers, but this card should outperform the 2080 Ti as long as the drivers perform the same.
 

CatalyticDragon

Honorable
Jun 30, 2015
19
5
10,515
If you can live with less VRAM, a Titan V is half the price and perhaps a better card. FP64 performance on the V isn't gimped (1:2) like it is on this card (1:32). OK, sure, it 'only' has 110 Tensor TFLOPS vs 130, but many people can live with that for half the price.
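To put rough numbers on that ratio difference (my own back-of-envelope math using the commonly quoted peak FP32 figures, so treat it as illustrative only):

```python
# Illustrative peak-FP64 comparison from the FP64:FP32 ratios above.
# FP32 peaks are the commonly quoted boost-clock figures (approximate).
titan_v_fp32 = 14.9         # TFLOPS, Titan V
quadro_rtx6000_fp32 = 16.3  # TFLOPS, Quadro RTX 6000

print(titan_v_fp32 / 2)          # ~7.5 FP64 TFLOPS at a 1:2 ratio
print(quadro_rtx6000_fp32 / 32)  # ~0.5 FP64 TFLOPS at a 1:32 ratio
```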

This whole RTX / Turing thing is a bit of a farce.
 

kyotokid

Distinguished
Jan 26, 2010
246
0
18,680
...for CG production rendering, VRAM is your friend; the more the better. Once a scene exceeds it, the render engine either crashes or shunts to the CPU, which is much, much slower in comparison. I tend to create very involved scenes in large format and extremely high quality for gallery-quality prints. The larger the render resolution, the more memory the process needs. If I could afford it, I'd get a pair of RTX 5000s and the NVLink bridge, which would give me 32 GB, along with 6,144 CUDA, 768 Tensor and 96 RTX cores.

Quadro drivers also allow you to set the card to bypass the Windows 10 WDDM layer, which locks out about 18% of the VRAM on Nvidia consumer cards for rendering purposes.
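To illustrate what that lockout costs in practice (a simple calculation using the ~18% figure above):

```python
# What the ~18% WDDM reservation means for a consumer card's usable VRAM.
def usable_vram_gb(total_gb, wddm_reserved=0.18):
    return total_gb * (1 - wddm_reserved)

print(f"{usable_vram_gb(11):.1f} GB")  # 2080 Ti: ~9.0 GB left for rendering
print(f"{usable_vram_gb(8):.1f} GB")   # 2080:    ~6.6 GB
```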

As I understand it, the beta of Octane 4 does make use of the RTX cores. Octane also offers Out of Core memory: once the load exceeds VRAM, it sends just the excess texture load to system memory instead of dumping the whole process to the CPU (like Iray does), so even a single RTX 5000 would be sufficient for most jobs I am planning. With Iray, once the card detects the file is too large, it immediately dumps the process to the CPU, and all those GPU cores become worthless.
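The difference between the two failure modes is easy to sketch. This is just my own pseudocode of the behavior described above, not Octane's or Iray's actual logic, and the scene numbers are made up:

```python
# Sketch of the two out-of-memory strategies described above (illustrative
# only, not real Octane/Iray code). Geometry must stay in VRAM; textures
# are what can spill.

def iray_style(scene, vram_gb):
    # All-or-nothing: if the scene doesn't fit, the whole job leaves the GPU.
    if scene["geometry_gb"] + scene["textures_gb"] > vram_gb:
        return "CPU render (all GPU cores idle)"
    return "GPU render"

def octane_style(scene, vram_gb):
    # Out-of-core: keep geometry resident, spill only excess textures
    # to system RAM and stream them over PCIe as needed.
    if scene["geometry_gb"] > vram_gb:
        return "fails: geometry alone exceeds VRAM"
    spill = max(0.0, scene["geometry_gb"] + scene["textures_gb"] - vram_gb)
    return f"GPU render, {spill:.1f} GB of textures paged from system RAM"

scene = {"geometry_gb": 6.0, "textures_gb": 14.0}  # hypothetical 20 GB scene
print(iray_style(scene, vram_gb=16))    # CPU render (all GPU cores idle)
print(octane_style(scene, vram_gb=16))  # GPU render, 4.0 GB paged
```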
 

CatalyticDragon

Honorable
Jun 30, 2015
19
5
10,515
Yes, that's true, but for really high-end stuff, say Pixar frames, data sets run to hundreds of GB or even TB. That's why films are still often rendered on CPU. Another solution is to use the VRAM as a cache instead of hoping your data always fits. AMD's HBC does this; perhaps more renderers should take advantage of it. Operating systems have done virtual memory for many decades, after all. AMD's SSG product puts 2TB of NAND flash on the card, which is pretty remarkable by all accounts too, at least for video production.

But yes, of course there will be use cases for 24GB of RAM on a card. I just suspect there will be _more_ cases where a faster card at half the price with 'only' 12GB of RAM is the better buy.

You say Octane Render 4 supports RTX cores, but RTX cores aren't special. They are the same Tensor cores found on Volta chips like the one in the TITAN V. And the TITAN V already smashes OctaneBench, beating the Titan Xp and whipping the 1080 Ti by over 40%. The V has more compute and more Tensor cores, so I don't see how this Quadro will beat it in rendering except, perhaps, in edge cases where a renderer craps out due to memory limits. We shall wait and see.

In that case I feel sorry for the person who needs to spend another $3,000 because their data set is 12.1 GB.
 

kyotokid

Distinguished
Jan 26, 2010
246
0
18,680


...that is my point. A large-format render, say 16,000 x 12,000 pixels, running at very high quality with a host of effects like reflections, volumetrics, GI, and extra ray bounces will take up a lot of memory even on a GPU card. The point of that extra headroom is reducing the risk of the process crashing or dumping to the CPU.
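To give a feel for the numbers (my own rough estimate, assuming float32 buffers and a few typical AOVs; not figures from any particular engine):

```python
# Rough output-buffer footprint for a 16,000 x 12,000 render.
# Assumes float32 channels; the AOV list (depth, normals, albedo)
# is illustrative.
width, height = 16_000, 12_000
bytes_per_channel = 4     # float32

beauty_channels = 4       # RGBA beauty pass
aov_channels = 1 + 3 + 3  # depth + normals + albedo

total = width * height * (beauty_channels + aov_channels) * bytes_per_channel
print(f"{total / 1024**3:.1f} GiB")  # ~7.9 GiB before any geometry or textures
```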

Now this is just for static images, not animations, which run about 172,000 frames for a two-hour feature. That's a crap tonne of rendering, which is why film studios have those warehouse-sized render farms. For a film like Brave, which used volumetrics, a good deal of displacement, SSS, AO, GI, and complex layered strand structures (Merida's hair), it was a monumental render job. Sure, the characters were "toon" based, but the surrounding environment had a high degree of realism. All of that translates to long render times, even on dual-Xeon HCC servers, to get the level of fine detail seen in the film when projected on a large screen (and a single HCC Xeon can cost as much as a high-end pro-grade GPU and still be left in the dust).
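For reference, that frame count is just the 24 fps film rate times the runtime:

```python
fps = 24                       # standard film frame rate
runtime_seconds = 2 * 60 * 60  # two-hour feature
print(fps * runtime_seconds)   # 172,800 frames
```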

OK, I'm not Pixar or Dreamworks, but what I intend to produce visually is beyond what a consumer card can handle. Were the Titan V priced lower it would have been more attractive; however, Nvidia decided to make it a "baby Tesla-Quadro hybrid" and priced it accordingly ($1,000 more than the 16 GB P5000), but without the advantages of the high-end Volta cards such as Quadro/Volta drivers and NVLink support. In effect, like its predecessor the Titan Xp ($500 more than the 1080 Ti but with little advantage in performance), it is overpriced for what's in the box. Without linking ability, the move to Volta and HBM2 was sort of a wash, as you cannot pair cards and pool memory (which is one of the major advantages of the Volta architecture). For $3,000 they could have easily plopped in a fourth stack of HBM2 to increase the VRAM to 16 GB and/or given it full NVLink support. I look at the Titan V as sort of a dead end compared to the Volta and RTX Quadros (and we have yet to see any update to the Titan series).

I have the same concern over the 2080/2080 Ti. For the boost in price they could have easily topped the Ti out at 12 GB and the standard 2080 at, say, 10 GB.

As to AMD, yes, SSG may be great for video production, but it does little to enhance actual CG rendering performance, as it is not true VRAM and uses a different compression routine. AMD cards are also only supported by a few render engines at this time, most notably their own ProRender, Unity (primarily for game development, not fine-art-quality rendering), and the open-source LuxRender (which continues to have teething pains). The Vulkan API should change this, as it replaces OpenCL (which ended development at ver. 2.1), which could mean compatibility with more render engines than the above (Otoy is already testing this). This would be a major break, as Vulkan compatibility will be enabled on older AMD cards via driver updates, so Octane 4 (which is Vulkan compliant) would support both Nvidia and AMD GPUs. The Vega WX9100 is half the price of the Titan V, has just about as many streaming processors (granted, no Tensor equivalent), and carries 16 GB of HBM2 memory. So if a render takes a few minutes longer, yet I can still get the quality and image resolution I'm after (as well as a lower chance of the process "dumping"), that's all I really need.

Nvidia's Iray is looking more and more like a dead end, as they apparently have no plans to upgrade it to take advantage of RTX capability. The fact that few games even embrace ray tracing (and those that do suffer from frame-rate lag when running in 4K) has left the gaming community (the primary segment of consumer GPU sales) wondering about the value of and need for these cards.

Oh, I agree, GPU cores (whether CUDA, Tensor, RTX, or Stream Processors) are a major factor in render speed, but once the scene exceeds VRAM, all those cores become moot and you are back down to the few CPU cores your system has. Yes, you may have more physical memory available, but that will fill up fast, as system memory is not as efficient as VRAM for rendering purposes. Hence the same scene will take up more physical memory (and if it exceeds available system memory, the render drops to even slower swap mode or crashes).
 

Eximo

Titan
Ambassador
Well, they really couldn't have made a 2080 with 10GB; that would require a different memory bus. They could double it to 16GB more easily with higher-density memory chips (like those used on the Quadros, though I am not sure those chips exist as non-ECC; you would have to check Micron/Samsung/Hynix to see).

The reason the RTX 2080 Ti only has 11GB, just like the 1080 Ti, is that the chip itself has a non-functional portion of the memory bus. The Quadros and Titans always get the fully functional chips. They could make one, but then it would reduce their inventory of flawless chips.
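For anyone wondering how the odd 11GB figure falls out of that, here's the arithmetic (my own illustration; it assumes the 1 GB GDDR6 chips these cards shipped with, one per 32-bit channel):

```python
# How memory bus width maps to capacity on these cards.
# Each GDDR6 chip sits on its own 32-bit channel; Turing GeForce boards
# used 1 GB (8 Gb) chips, so capacity = active channels x 1 GB.
def capacity_gb(bus_width_bits, gb_per_chip=1):
    channels = bus_width_bits // 32  # one chip per 32-bit channel
    return channels * gb_per_chip

print(capacity_gb(384))  # 12 GB: full TU102 bus (the Quadro RTX 6000 uses 2 GB chips -> 24 GB)
print(capacity_gb(352))  # 11 GB: 2080 Ti, one channel disabled
print(capacity_gb(256))  #  8 GB: 2080 (TU104)
```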
 

kyotokid

Distinguished
Jan 26, 2010
246
0
18,680
Well, why not use the same memory bus as the Ti with two flawed channels instead of one, and you'd have 10?

For a $500 increase over the Pascal series, along with the exorbitant prices for Quadro/Tesla cards, I think they could afford to add one more working memory chip to the Ti.

The "real" reason was that the 2080 Ti might have slightly dented sales of the Titan. But how many enthusiasts can really afford a single $3,000 GPU that still uses GeForce drivers and is pretty much a standalone unit? (Particularly when the 16 GB Quadro RTX 5000 is $700 less, uses Quadro drivers that allow bypassing Windows WDDM, and has full NVLink capability, while the aforementioned Vega WX9100 has more VRAM, almost the same specs apart from the Tensor cores, and is half the cost.)
 

Eximo

Titan
Ambassador


That would basically be the 2080 Ti again. They are different physical chips targeting different market segments. So you would end up with an even more crippled 2080 Ti with two missing memory chips instead of one.

Again, they can't just add another chip to the 2080 Ti; it is missing the hardware to make it function. If they decide to make a Titan this generation, they will use the fully functional chip and it will have the 12GB.
 

kyotokid

Distinguished
Jan 26, 2010
246
0
18,680


...if they do decide to update the Titan (though by the previous argument, why go to Turing and GDDR6 if Volta and HBM2 are so much better?), $3,000 is again a hefty price for most enthusiasts to pay. I would rather spend half that for a WX9100, or even $1,000 less for a P5000, to get the increased memory.

Also, concerning the two flawed chips, I was referring to upgrading the 2080, not the 2080 Ti. And if the Ti is "missing hardware" that keeps it from having 12 GB, that doesn't quite add up: 11 is a slightly odd number in hardware terms, so why didn't they just stop at 10 instead, which would have made the Titan Xp a little more worth its price?

The major difference between the 1070 and 1080 was more CUDA cores for about $200 more. Both had the same GPU chip and 8 GB of VRAM. Hence the 1070 (and its refresh, the 1070 Ti) was so popular, as well as being among the prime cards snapped up by miners.

True, the 2070 is a bit more lacking, as it doesn't have NVLink support, while its list price is closer to the 2080's ($599, which was the MSRP of the 1080 at release). That, along with the fact that it is now strictly a standalone card, I feel will hurt its sales more.

I could have understood a $100 increase, but crikey, $500 from the 1080 Ti to the 2080 Ti is fairly steep ($200 for the 2070 as well, for what you get). In a sense, the pricing of the 2080 Ti makes it the new "Turing Titan," but with one less GB of VRAM.

For those with higher VRAM needs who are still mindful of budget, Nvidia has fallen behind AMD. Nvidia expects you to pay the high price for their professional-grade cards if you need more memory (and the Titan V pretty much falls on that side of the fence price-wise, even though it doesn't benefit from the pro-grade drivers) rather than offering serious enthusiasts a good mid-range GPU with more VRAM (16 GB).
 

Eximo

Titan
Ambassador
The RTX 2080 is also missing some hardware, but in the form of shader modules rather than the memory bus. Same with the 2080 Ti; it doesn't have its full complement compared to the Quadros. But to the point of upgrading a 2080: you aren't that far off from the 2080 Ti, and a pared-down version of that would make more sense as a 10GB card. You'd just need more chips where another memory channel failed.

I agree on the pricing, though it is early days and Pascal chips will still be out in the wild for a good while yet.

I did sit down and look at the prices, and we should actually be more amazed that prices have stayed as low as they have for the last ten years or so. People used to pay $600 for little 25W cards that would be outperformed today by the cheapest smartphone graphics. If you include inflation (about 50%; $100 back then equals about $150 today), those guys were paying huge sums of money. $1,200 for a flagship GPU actually isn't all that bad in comparison, especially when you consider what you can do with it.
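Putting that in numbers (roughly, using the ~50% figure above):

```python
inflation = 1.5              # ~50% cumulative inflation since the late 2000s
old_flagship = 600           # dollars, for one of those 25W cards back then
print(old_flagship * inflation)  # 900.0 -> ~$900 in today's money, vs. $1,200 now
```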

I've also never seen laptop prices as low as they are right now. We'll see how this tariff thing goes in the US, which may be exactly what Nvidia has prepared themselves for. The margins on selling graphics cards aren't that great.

 

kyotokid

Distinguished
Jan 26, 2010
246
0
18,680

...indeed, the "trade war" with China has me concerned as well. Just when the mining craze crashed and brought prices for the 10xx series back to earth, costs for just about everything else look set to increase.

As to the 2080 Ti, I have scenes that challenge even the 12 GB of my older Titan X. 16 GB is the minimum I would consider for the work I do. Sadly, NVLink on the RTX GeForce cards will not allow memory stacking as it does on the Quadro RTX series. The thought of two 2080 Tis linked together for 22 GB would have made their price more attractive.

The one saving grace could be moving to Octane 4, as it will also have a subscription track (easier on a tight budget than having to plunk down $600 in one lump sum). Octane has Out of Core rendering, which keeps the geometry in VRAM and shunts the excess texture load to system memory, so the entire process does not dump from the GPU as it does in Iray. Unlike hybrid GPU/CPU rendering, it is also still fairly fast. I'll be watching the ongoing development, and if Vulkan does indeed make AMD cards compatible, that will change the game.

With the rumblings I'm hearing (as I previously mentioned) of Iray not supporting the ray-tracing cores, I am a bit concerned (there is some thought that Nvidia might even pull the plug on further development of their render engine, as it does not have as wide a base due to being proprietary to CUDA). As with many in the gaming community, that pretty much makes the cost of moving up to Turing and RTX seem sort of pointless. Sure, they still render faster than Pascal or Maxwell GPUs, but this is one of the first new generations that did not offer a boost in VRAM over the previous one. For me, that makes it a very hard sell. AI denoising is fine if you seriously need speed, as when rendering an animation sequence, but it comes at a cost in final render quality if, like me, you only produce static rendered images.