spongiemaster
I don't think you've moved on from the early 2000s. There is no longer a cost-per-transistor improvement with new nodes. Nvidia is very likely paying TSMC significantly more for its custom 4N node than it was paying Samsung for the 8nm node used for Ampere. GPU die sizes don't typically shrink with new nodes either; they're pretty much the same size, with more transistors packed in for better performance. So there is no cost benefit from squeezing more dies onto each wafer.

I don't think you understand how computer prices work. They don't go up with time; they go down. Memory and storage have done nothing but drop in price as the years have passed. Boards have tracked with inflation, and CPUs have gone up a little, but nothing close to inflation. GPUs are the exception, and most of their pricing has been due to shortages, crypto demand, and anti-competitive behavior (price fixing). The first two issues have worked themselves out. AI may be using GPU-like silicon, but off-the-shelf GPUs are not useful for commercial AI.
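To make the dies-per-wafer arithmetic above concrete, here's a minimal sketch. The wafer prices and the 600 mm² die area are purely illustrative assumptions (not actual TSMC or Samsung pricing); the point is only that if the die doesn't shrink, the die count per wafer stays flat, so a pricier wafer means a pricier die.

```python
import math

WAFER_DIAMETER_MM = 300  # standard 300 mm wafer

def dies_per_wafer(die_area_mm2: float, diameter_mm: float = WAFER_DIAMETER_MM) -> int:
    """Common approximation: wafer area / die area, minus edge loss."""
    r = diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * diameter_mm / math.sqrt(2 * die_area_mm2))

die_area = 600.0  # mm^2, roughly a large flagship GPU die (GA102/AD102 class)

# Hypothetical wafer prices; only the ratio between them matters here.
for node, wafer_cost in [("older node (8nm-class)", 5_000),
                         ("newer node (4nm-class)", 15_000)]:
    dpw = dies_per_wafer(die_area)  # same die size -> same die count per wafer
    print(f"{node}: {dpw} dies/wafer -> ${wafer_cost / dpw:,.0f} per die")
```

With the same ~600 mm² die you get about 90 gross dies per wafer in both cases, so the cost per die scales almost directly with whatever the wafer actually costs.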
The costs of developing a GPU have also skyrocketed since the early days of 3D acceleration. All the low-hanging fruit has long since been picked, and it is no longer just about rasterized performance. While Nvidia continues to improve rasterization, their focus is shifting to ray-tracing cores and tensor cores. The software stacks are also incomparably more complex: DLSS, media encoders and decoders, super-resolution video playback, Reflex, and everything else Nvidia is developing costs money to build. It all adds up.