Not if it comes with a linear cost scaling with performance.
Why do you assume it will? Nvidia claimed the Tensor cores in their RTX 3000 line offered 4x the performance of the previous generation, so they cut the number in half. I recall seeing a corresponding claim about less die area being devoted to them as well, though I don't remember where.
AMD touted a number of improvements and optimizations in the ray tracing cores of its RX 7000 series, which again seemed to deliver performance gains disproportionate to the number of transistors they required.
That's the beauty of fixed-function accelerators for complex tasks - you can often find such disproportionate optimizations. I think this will keep us in new GPUs for some time to come.
Once we get there, $500 will perpetually buy you about the same overall performance.
Once we get there, companies will be focusing on all sorts of ISA tweaks to squeeze out more performance per mm^2 and performance per W.
If your new features add 20% die space, you will need to pay 20+% more to get those features.
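The "20+%" is the key part: under a simple Poisson yield model, a 20% larger die costs more than 20% more per working chip, because yield drops as area grows. A minimal sketch of that arithmetic, with entirely made-up wafer cost and defect density figures:

```python
import math

# Illustrative assumptions only - not real foundry data.
WAFER_COST = 10_000.0   # assumed cost of one processed wafer, in dollars
WAFER_AREA = 70_000.0   # assumed usable area of a 300 mm wafer, in mm^2
DEFECT_DENSITY = 0.001  # assumed defects per mm^2 (i.e. 0.1 per cm^2)

def cost_per_good_die(die_area_mm2: float) -> float:
    """Cost of one working die under a simple Poisson yield model."""
    yield_rate = math.exp(-DEFECT_DENSITY * die_area_mm2)
    dies_per_wafer = WAFER_AREA / die_area_mm2  # ignores edge losses
    return WAFER_COST / (dies_per_wafer * yield_rate)

base = cost_per_good_die(100.0)    # original die
bigger = cost_per_good_die(120.0)  # +20% area for new fixed-function blocks
print(f"cost increase: {bigger / base - 1:.1%}")  # a bit over 22%
```

With these numbers the 20% area bump costs about 22% more per good die, and the gap widens as dies get larger or processes get defect-prone.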
We could see FPGA-like techniques come to the fore, where dies contain a significant pool of reconfigurable logic, so it can be shared between different fixed-function tasks.
We'll be solidly in a replacement economy, where most people keep their PC for 10-15 years.
Or quantum computing unlocks new materials that keep the upgrade treadmill going for at least a decade longer than you think. Even once density has ceased to increase, there's probably still room for efficiency improvements and cost reductions, which could then unlock greater area.
As for stuffing more GPUs in there, I doubt AMD and Nvidia have any interest in going back to SLI/CF with their grossly inconsistent results and endless support woes.
That's mainly because GPUs have been improving at such a pace that new models present an easy path to adding performance. When new models cease to be much faster than older ones, adding more GPUs will be a tempting way to increase performance. And with CXL, the logistics and programming model might be simpler, as well.