You question the RTX sales of only 15 million in 18 months.
This is Jensen's doing: when 10-series prices increased because of the mining craze, Jensen held on to that new pricing level even after the craze ended in April/May 2018, all the way to the release of RTX in August.
He then transferred those price levels, WITH A MARKUP on top, to the RTX series.
Suddenly the RTX 2080 was priced higher than the GTX 1080 Ti at launch, and the RTX 2080 Ti landed at insane price levels.
Jensen's greed has held back the evolution of the gaming market for the last 18 months with overpriced RTX graphics cards.
Now, with Radeon RDNA2 and Intel working their way into the graphics business, Jensen can forget his greedy overpricing, since competition in the coming years will not allow Nvidia to control prices anymore.
I recommend NOT buying the RTX 3000 series upon release BUT waiting a while; as soon as RDNA2 gets a foothold and Intel enters the gaming market, prices will drop significantly.
You're missing the meaning. I'm not questioning the 15 million sales in 18 months as a figure, but rather asking how that compares to Pascal's first 18 months -- or Maxwell's first 18 months. I suspect both sold at least that well, and Pascal probably sold much better (though its numbers may have been conflated with crypto-mining demand).
As to the rest of your post ... greed maybe, more likely just a complete lack of competition. GTX 1080 Ti is still basically as fast as anything AMD makes, and the 2080 and 2080 Ti were quite a bit faster than the 1080 Ti. So if your competition can't keep up with a $700 GPU, there's no need to release a faster GPU at lower prices. Especially when the new faster GPU costs more to make, which Turing absolutely does!
GP102 is a 471 mm² chip. That means even TU106 (445 mm²) probably costs close to the same amount for the GPU itself (though 6GB of GDDR6 probably costs less than the 1080 Ti's 11GB of GDDR5X). TU104 and TU102 are even larger, and TU102 in particular was never going to reach mainstream pricing. You can't build a 754 mm² chip without charging a lot.
--------------
Let me give you some example numbers. A single wafer from TSMC, with packaging, probably costs $10,000. The maximum number of TU102 chips per wafer is going to be about 68, using a die size of 24.5mm x 30.8mm. Probably at least 5-10 of those chips are going to be bad, maybe fewer after harvesting partially working die. Optimistically, $10,000 / 63 ~= $160 per chip.
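If you want to sanity-check that math, here's a rough Python sketch. The die count uses the classic dies-per-wafer approximation, which ignores scribe lines and edge exclusion, so a real yield calculator will report slightly fewer candidates; the wafer cost and bad-die count are just my assumptions from above, not disclosed numbers.

```python
import math

def dies_per_wafer(die_w_mm: float, die_h_mm: float, wafer_d_mm: float = 300.0) -> int:
    """Approximate candidate dies on a round wafer: wafer area / die area,
    minus an edge-loss term for partial dies around the circumference."""
    die_area = die_w_mm * die_h_mm
    wafer_area = math.pi * (wafer_d_mm / 2) ** 2
    edge_loss = math.pi * wafer_d_mm / math.sqrt(2 * die_area)
    return int(wafer_area / die_area - edge_loss)

def cost_per_good_die(wafer_cost: float, candidates: int, bad: int) -> float:
    """Spread the whole wafer cost over the dies that actually work."""
    return wafer_cost / (candidates - bad)

# TU102: 24.5mm x 30.8mm, $10,000 wafer, 5 bad dies (the optimistic case).
tu102 = dies_per_wafer(24.5, 30.8)                 # ~69 candidates; close to the ~68 above
print(round(cost_per_good_die(10_000, tu102, 5)))  # ~$156, i.e. roughly the $160 above
```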
The problem is that the chip isn't the only cost. The PCB costs some money, the RAM costs money, the heatsink and fan cost money, the VRMs and resistors cost money... you hopefully get the point. Probably the total bill of materials on the RTX 2080 Ti ends up being close to $500. And Nvidia put a ton of R&D money into the architecture that also needs to be recovered, plus the distribution chain needs to make money as well.
So: let's say Nvidia sells the chip to Asus for $300.
Asus adds a board and cooler and all the other bits and has now spent $600 total.
Asus sells this part to a distributor for 15-25% more: $690, maybe even $750 to ensure profits.
The distributor sells the card to retail outlets with another 15-20% markup: $790-$900.
The retail outlet sells to the consumer for a 15-20% markup: $910-$1080.
The above is basically a reasonable minimum price structure for the whole supply chain to stay viable.
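If you want to play with those markups, here's a minimal sketch of the chain; every dollar figure and percentage is just my illustrative assumption from above, and the function name is mine.

```python
def supply_chain_price(chip: float, board_and_bits: float, markups: list[float]) -> float:
    """Cascade each link's percentage markup onto the running price."""
    price = chip + board_and_bits
    for m in markups:
        price *= 1 + m
    return price

# RTX 2080 Ti: $300 chip + $300 of board/cooler/etc. (the $600 AIB cost above),
# then AIB -> distributor -> retail markups at the low and high ends.
print(round(supply_chain_price(300, 300, [0.15, 0.15, 0.15])))  # ~$912, the ~$910 floor
print(round(supply_chain_price(300, 300, [0.25, 0.20, 0.20])))  # $1080, the ceiling
```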
What about TU104? It's a 545 mm² chip, measuring around 24.3mm x 22.5mm. That means the maximum number of chips per wafer is about 94, and optimistically 89 can be used after harvesting partially defective die. So now the GPU cost drops to around $110 instead of $160 ($10,000 / 89 ~= $112). Plus there's less RAM and slightly lower costs elsewhere, since these cards aren't at the same tier as TU102 cards.
Nvidia sells this chip for $200 to Asus.
Asus adds a board and cooler and all the other bits and has now spent $400 total.
Asus sells this part to a distributor for 15-25% more: $460-$500.
The distributor sells the card to retail outlets with another 15-20% markup: $529-$600.
The retail outlet sells to the consumer for a 15-20% markup: $610-$720.
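Plugging the TU104 assumptions into the same sketch functions from above gives roughly the same chain (again, the approximation runs a touch high versus the ~94 candidates quoted, but the conclusion holds):

```python
# TU104: 24.3mm x 22.5mm, same $10,000 wafer, 5 bad dies assumed.
tu104 = dies_per_wafer(24.3, 22.5)                 # ~100 candidates
print(round(cost_per_good_die(10_000, tu104, 5)))  # ~$105 per good chip, i.e. the ~$110 above

# $200 chip + $200 of board/cooler (the $400 AIB cost), same markup chain:
print(round(supply_chain_price(200, 200, [0.15, 0.15, 0.15])))  # ~$608, i.e. the ~$610 floor
print(round(supply_chain_price(200, 200, [0.25, 0.20, 0.20])))  # $720, the ceiling
```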
(Tangent: by way of comparison, Apple's A11 chip only measures about 8.2mm x 10.6mm. That works out to around 664 chips per 300mm wafer, which means the cost per chip plummets to around $15-$20. Big chips are expensive. Really!)
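The same die-size sketch for the A11 lands somewhat above the ~664 quoted (a real calculator's edge exclusion trims the count further), but the takeaway is identical: small dies make the per-chip cost collapse.

```python
# Apple A11: ~8.2mm x 10.6mm on the same $10,000 wafer assumption.
a11 = dies_per_wafer(8.2, 10.6)   # ~741 candidates with this crude formula
print(round(10_000 / a11))        # ~$13 per chip before subtracting any bad dies
```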
You can do the same sort of rough estimate for basically any graphics card. I'm putting 'generous' profits in the above, because Nvidia GPUs are usually able to sell at a premium. Then again, my yields on the big chips are probably higher than reality -- if you're more pessimistic, TSMC may only get 35 or so good TU102 chips per wafer. The point isn't that these numbers are fully accurate, but that larger, higher-performance parts have lower yields and increase the total cost dramatically.