So finally we get the specs, and as a 3D CG artist, I say *ho hum*. More CUDA cores and the addition of Tensor cores, but still the same amount of VRAM as the 10xx series (I knew all that talk of a 16 GB card was nothing but smoke, just like it was with the much-hyped "8 GB 980 Ti" years ago). OK, so a render job finishes 25% or so faster, unless of course it exceeds the card's VRAM, in which case it's in the CPU slow lane the rest of the way.
Yeah, Nvidia wasn't about to make the same mistake they made with the Maxwell generation, where the $1,000 Titan X had similar specs and performance to the $5,000 Quadro M6000. To separate the two, they doubled the memory on the Quadro to 24 GB but kept the price the same as the 12 GB version.
Next there was all the hype and speculation over "NVLink" for the 20xx series. Turns out that unlike the linking technology of the same name for the Quadro/Tesla lines (which supports full memory pooling), it is nothing more than a souped-up version of SLI, as the bridges are still being sold in 2-, 3-, and 4-card models. Hence you will not be able to get 22 GB of VRAM by linking two 2080 Tis together, as many had hoped. You are still limited to 8 or 11 GB, whereas with the RTX Quadros you can get 32, 48, or 96 GB of combined VRAM (for the 5000, 6000, and 8000 respectively) by linking two cards together. (Of course, those NVLink connectors also cost a bit more.)
So basically, for people like myself, this is pretty much a wash because, as I mentioned above, for what we do VRAM is the single most important attribute when it comes to rendering large, involved scenes. If the scene dumps from card memory, those 576 Tensor and 4,300 CUDA cores become useless. Better have a fast high-core-count (HCC) CPU and a generous amount of memory on the board to pick up the ball when it gets dropped.
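Since falling out of VRAM mid-render is the failure mode that matters here, one practical habit is a pre-flight check before submitting a big job. Here's a minimal Python sketch: it reads free memory per card from `nvidia-smi` (the `--query-gpu=memory.free --format=csv,noheader,nounits` flags are standard NVIDIA driver tooling) and compares it against your own estimate of the scene's footprint. The `scene_fits` helper, the 512 MiB headroom figure, and the scene-size estimate itself are my assumptions, not anything a renderer reports exactly.

```python
import subprocess

def parse_free_vram(csv_text):
    """Parse nvidia-smi CSV output (one MiB value per line) into ints."""
    return [int(line) for line in csv_text.splitlines() if line.strip()]

def query_free_vram_mib():
    """Free VRAM per GPU in MiB, via nvidia-smi (NVIDIA driver required)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"], text=True)
    return parse_free_vram(out)

def scene_fits(scene_mib, free_mib, headroom_mib=512):
    """Rough pre-flight check: the estimated scene size plus a guessed
    reserve for the driver/framebuffer must fit in free VRAM, otherwise
    the render falls back to the CPU slow lane."""
    return scene_mib + headroom_mib <= free_mib
```

For example, a ~9 GB scene clears an 11 GB 2080 Ti with room to spare, but an ~11 GB scene does not once you allow any headroom; that's exactly the cliff the extra CUDA cores can't help with.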