CatalyticDragon :
Yes, that's true, but for really high-end work, say Pixar frames, data sets run to hundreds of GB or even TB. That's why film rendering is still often done on CPU. Another solution is to use the VRAM as a cache instead of hoping your data always fits; AMD's HBC (High Bandwidth Cache) does exactly this, and perhaps more renderers should take advantage of it. Operating systems have done virtual memory for many decades, after all. AMD's SSG product puts 2TB of NAND flash right on the card, which by all accounts is pretty remarkable too, at least in video production.
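For what it's worth, Nvidia has an analogous mechanism in CUDA's unified (managed) memory: the driver pages an oversubscribed allocation in and out of VRAM on demand, much like HBC. A minimal sketch of the idea (the 48 GiB figure is just an illustration; oversubscription needs a Pascal-or-newer GPU and, in practice, Linux):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Touch every element of a large scene buffer so pages get faulted
// into VRAM on demand and evicted when the "cache" is full.
__global__ void touch(float *data, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 1.0001f;
}

int main() {
    // 48 GiB of "scene" on, say, a 12 GB card: managed memory lets the
    // driver page it between system RAM and VRAM, VRAM acting as cache.
    const size_t n = 12ull << 30;                // 12 Gi floats = 48 GiB
    float *scene = nullptr;
    if (cudaMallocManaged(&scene, n * sizeof(float)) != cudaSuccess) {
        fprintf(stderr, "managed allocation failed\n");
        return 1;
    }
    touch<<<(unsigned)((n + 255) / 256), 256>>>(scene, n);
    cudaDeviceSynchronize();
    cudaFree(scene);
    return 0;
}
```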
But yes, of course there will be use cases for 24GB on a card; I just suspect there will be _more_ cases where a faster card at half the price with 'only' 12GB is the better buy.
You say Octane Render 4 supports RTX cores, but as far as I can tell those aren't anything special; they're the same tensor cores found on Volta chips like the one in the TITAN V. And the TITAN V already smashes OctaneBench, beating the Titan Xp and whipping the 1080 Ti by over 40%. The V has more compute and more tensor cores, so I don't see how this Quadro beats it in rendering, except perhaps in edge cases where a renderer craps out against memory limits. We shall wait and see.
In that case I feel sorry for the person who needs to spend another $3,000 because their data set is 12.1 GB.
...that is my point. A large-format render, say 16,000 x 12,000 pixels at very high quality with a host of effects like reflections, volumetrics, GI, and deep ray bounces, will eat a lot of memory even on a GPU card. The point of the extra headroom is to reduce the risk of the process crashing or dumping to the CPU.
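To put rough numbers on that (the channel count and AOV count below are my own back-of-envelope assumptions, not any particular renderer's actual buffer layout): a single full-float RGBA buffer at that resolution is already about 3 GB, and a production render carries several such AOVs before geometry or textures even enter the picture.

```cuda
// Back-of-envelope VRAM math for a 16,000 x 12,000 frame.
// Assumptions are mine (RGBA, 32-bit float, 8 AOVs), not any
// particular renderer's actual buffer layout.
#include <cstdio>

int main() {
    const double pixels   = 16000.0 * 12000.0; // 192 megapixels
    const int    channels = 4;                 // RGBA
    const int    bytes    = 4;                 // 32-bit float per channel
    const int    aovs     = 8;                 // beauty, normals, depth, GI, ...

    double one_buffer_gb = pixels * channels * bytes / 1e9; // ~3.07 GB
    printf("one float RGBA buffer: %.2f GB\n", one_buffer_gb);
    printf("with %d AOVs:          %.2f GB\n", aovs, one_buffer_gb * aovs);
    // ...and that's before geometry, the BVH, and textures are loaded.
    return 0;
}
```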
Now this is just for static images, not animation, where a 2-hour feature at 24 fps comes to roughly 172,800 frames (2 h x 3,600 s x 24 fps). That's a crap-tonne of rendering, which is why film studios have those warehouse-sized render farms. For a film like Brave, which used volumetrics, a good deal of displacement, SSS, AO, GI, and complex layered strand structures (Merida's hair), it was a monumental render job. Sure, the characters were "toon"-styled, but the surrounding environment had a high degree of realism. All of that translates to long render times, even on dual HCC Xeon servers, to get the level of fine detail seen when the film is projected on a large screen (and a single HCC Xeon can cost as much as a high-end pro-grade GPU and still be left in its dust).
OK, I'm not Pixar or Dreamworks, but what I intend to produce visually is beyond what a consumer card can handle. Were the Titan V priced lower it would have been more attractive; however, Nvidia decided to make it a "baby Tesla/Quadro hybrid" and priced it accordingly ($1,000 more than the 16 GB P5000), but without the advantages of the high-end Volta cards, such as the Quadro/Volta drivers and NVLink support. In effect, like its predecessor the Titan Xp ($500 more than the 1080 Ti with little advantage in performance), it is overpriced for what's in the box. Without the ability to link cards, the move to Volta and HBM2 was something of a wash, as you cannot pair cards and pool memory (one of the major advantages of the Volta architecture). For $3,000 they could easily have added a fourth HBM2 stack to bring the VRAM to 16 GB and/or given it full NVLink support. I look at the Titan V as something of a dead end compared to the Volta and RTX Quadros (we have yet to see any update to the Titan series).
I have the same concern over the 2080/2080 Ti: for the boost in price, they could easily have topped the Ti out at 12 GB and the standard 2080 at, say, 10 GB.
As to AMD: yes, SSG may be great for video production, but it does little to enhance actual CG rendering performance, since the on-board NAND is not true VRAM (which uses a different compression routine). AMD cards are also only supported by a few render engines at this time, most notably their own ProRender, Unity (primarily for game development, not fine-art-quality rendering), and the open-source LuxRender (which continues to have teething pains). The Vulkan API should change this, as it replaces OpenCL (whose development has effectively stalled), which could mean compatibility with more render engines than the above (Otoy is already testing this). This would be a major break, as Vulkan support will be enabled on older AMD cards via driver updates, so Octane 4 (which is Vulkan compliant) would support both Nvidia and AMD GPUs. The Vega WX9100 is half the price of the Titan V, has just about as many stream processors (granted, no Tensor equivalent), and carries 16 GB of HBM2. So if a render takes a few minutes longer yet I can still get the quality and resolution I'm after (with a lower chance of the process "dumping"), that's all I really need.
Nvidia's Iray increasingly looks like a dead end, as they apparently have no plans to upgrade it to take advantage of RTX capability. The fact that few games even embrace ray tracing (and those that do suffer frame-rate drops at 4K) has left the gaming community (the primary segment of consumer GPU sales) wondering about the value of, and need for, these cards.
Oh, I agree: GPU cores (whether CUDA, Tensor, RT, or stream processors) are a major factor in render speed, but once the scene exceeds VRAM, all those cores become moot and you are back down to the handful of CPU cores your system has. Yes, you may have more physical memory available, but it fills up fast, as system memory is not as efficient as VRAM for rendering purposes; the same scene takes up more space in system RAM (and if it exceeds what's available, the job drops to even slower swap, or crashes).
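That cliff is easy to see from the renderer's side: there's really only one check standing between "thousands of cores" and "a handful". A toy sketch (cudaMemGetInfo is a real CUDA call; the three helper functions are hypothetical placeholders, not any engine's API):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical stand-ins for a renderer's internals -- placeholders,
// not any real engine's API.
static size_t estimate_scene_footprint() { return 13ull << 30; } // pretend: 13 GiB
static void render_on_gpu() { puts("rendering on GPU: thousands of cores"); }
static void render_on_cpu() { puts("falling back to CPU: a handful of cores"); }

int main() {
    size_t free_b = 0, total_b = 0;
    if (cudaMemGetInfo(&free_b, &total_b) != cudaSuccess) {
        render_on_cpu();                  // no usable device at all
        return 0;
    }
    // The whole argument in one branch: the moment the scene stops
    // fitting in free VRAM, the core count stops mattering.
    if (estimate_scene_footprint() <= free_b) render_on_gpu();
    else                                      render_on_cpu();
    return 0;
}
```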