To be honest, the 30 series' perf/$ is still underwhelming compared to what the 10 series offered at launch relative to the previous generation, once you consider the length of time between releases (and ignoring the 20 series, which generally offered little to no perf/$ improvement at launch).
A 1080 Ti offered ~63% higher perf/$ than a 980 Ti (75% higher perf, $700 vs $650 MSRP) 21 months (1.75 years) later. Or ~36% perf/$ increase per year.
A 3080 offers ~70% higher performance than a 1080 Ti at the same MSRP, 42 months (3.5 years) later. Or 20% perf/$ increase per year.
So the annualized perf/$ increase for the 3080 is little more than half what it was for the 1080 Ti.
For an additional comparison, the 980 Ti offered ~52% better perf/$ than a 780 Ti (41% higher perf, $650 vs $700 MSRP), 19 months (~1.6 years) later. Or ~33% perf/$ increase per year, still far better than for the 3080.
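For anyone who wants to check the math or plug in their own numbers, here's the arithmetic as a quick Python sketch. The perf ratios and MSRPs are just the rough figures quoted above, not exact benchmark data, and the per-year number is simple division rather than a compounded rate:

```python
# Quick sketch of the perf/$ arithmetic above. Inputs are the rough relative
# performance figures and MSRPs quoted in this post, not exact benchmark data.

def perf_per_dollar_gain(perf_ratio, old_price, new_price, months):
    """Return (overall perf/$ gain, simple per-year gain) for a generational jump."""
    gain = perf_ratio * (old_price / new_price) - 1.0  # e.g. 1.75 * 650/700 - 1
    per_year = gain / (months / 12.0)                  # simple division, not compounded
    return gain, per_year

comparisons = [
    # label, perf ratio, old MSRP, new MSRP, months between launches
    ("780 Ti  -> 980 Ti ", 1.41, 700, 650, 19),
    ("980 Ti  -> 1080 Ti", 1.75, 650, 700, 21),
    ("1080 Ti -> 3080   ", 1.70, 700, 700, 42),
]

for label, perf, old_p, new_p, months in comparisons:
    gain, per_year = perf_per_dollar_gain(perf, old_p, new_p, months)
    print(f"{label}: {gain:+.1%} perf/$ overall, {per_year:+.1%} per year")
```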
Well, I did say "more in line". : P It's at least not hugely underwhelming like the 20-series was at launch. And the performance jump of the 10-series was atypically large, so it's probably not fair to treat that as the norm. It's also going to get harder to increase performance over time as process node shrinks become more difficult. At least the yearly performance gains in the graphics card space are still significantly larger than what we see for CPUs. Sure, we've at least been getting more cores the last few years, but lots of software doesn't really scale to additional CPU cores, unlike what we see with graphics rendering hardware.
Another thing to consider is that we're only looking at rasterized performance here, while these newer cards have added hardware for raytracing. A lot of these raytraced lighting effects still don't really seem worth the performance hit, but others arguably do if there's performance headroom to spare, and it's certainly true that the performance advantage over the 10-series is much larger when such effects are enabled. And I fully expect such effects to become the norm for "ultra" graphics settings in the coming years.
Yes, plenty of people do. The size of the gap between the 3080 and 3090 defies logic even when adjusting for the halo tax. A 20GB 3080 isn't the answer; paying more for no more performance isn't attractive. If AMD could release a 16GB card that performs within single digits of the 3090 and costs $850-900, there would definitely be a market for it, and it would basically kill off the 3090 as a gamer's option. The problem with anything AMD releases is that it's likely going to be way slower in ray tracing, which actually matters now, and they don't have any known competitor to DLSS, which looks like a killer feature going into the next generation.
I guess my main point there was that the performance advantage of the 3090 over the 3080 is small enough to be barely noticeable, at least in today's games, where the difference in VRAM doesn't really matter. Sure, if the 6900 XT managed to perform close to the 3090 at a significantly lower price, there would be a market for that, but ultimately it would still be competing more with the 3080 than anything else.
As for raytracing performance on RDNA2 hardware, not much is known about it yet, so I would hardly say that it will be "way slower" than Nvidia's solution. It could be faster, for all we know, or perhaps faster for some effects and slower for others. The 30-series didn't actually improve the relative performance hit of raytracing much over the 20-series. If certain RT effects cut performance by 50% on a 2080 Ti, they seem to cut it by around 45% or so on a 3080. There's more performance to start with, but the impact of enabling the effects is still nearly as large. As such, if a 3070 performs similarly to a 2080 Ti without raytracing, then it should perform fairly similarly with it enabled as well, likely not gaining more than 5-10% with a heavy raytracing load. So, if AMD were aiming to be competitive with the performance hit of RT on the cards Nvidia released two years ago, then they would be competitive with the new cards as well, as not much seems to have changed as far as the overall performance impact of RT is concerned.
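To put toy numbers on that reasoning (purely illustrative figures, assuming roughly a 30% raster lead for the 3080 and the ~50%/~45% hits mentioned above):

```python
# Toy illustration of the point above: if the relative RT hit barely changes
# between generations, a card matching a 2080 Ti in raster should land close
# to it with RT enabled too. All numbers are rough/illustrative, not benchmarks.

cards = {
    # name: (relative raster performance, fraction of performance lost to heavy RT)
    "2080 Ti":                        (100, 0.50),
    "3080 (~30% faster raster)":      (130, 0.45),
    "3070 (assumed ~2080 Ti raster)": (100, 0.45),
}

for name, (raster, rt_hit) in cards.items():
    with_rt = raster * (1.0 - rt_hit)
    print(f"{name:32s} raster = {raster:3d}, with RT = {with_rt:5.1f}")

# With these assumptions the 3070 ends up ~10% ahead of the 2080 Ti with RT
# enabled (55 vs 50) despite identical raster performance, in line with the
# 5-10% guess above.
```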
And as someone pointed out, Radeon Image Sharpening is arguably competitive with DLSS, even if it might not be quite on the same level as DLSS 2.0. When it comes down to it, these are all just upscaling and sharpening techniques. DLSS uses AI to guess at details to fill in and recover some sharpness, while RIS uses a non-AI method, albeit one that's definitely more sophisticated than a simple sharpening algorithm. And again, there's the possibility AMD has made further improvements on that front, as not a whole lot is known about the specifics of their new cards yet.
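For anyone curious what a "simple sharpening algorithm" even looks like, here's a bare-bones unsharp mask in Python as a point of reference. To be clear, this is just a toy illustration of boosting local contrast; both RIS and DLSS do considerably more than this.

```python
# Toy example of a plain unsharp-mask sharpener on a grayscale image. This is
# NOT what RIS or DLSS actually do; it's just the baseline idea of boosting
# local contrast that those techniques go well beyond.
import numpy as np

def box_blur(img, radius=1):
    """Tiny box blur: average each pixel with its neighbours (edge-padded)."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def unsharp_mask(img, amount=0.5):
    """Sharpen by adding back the difference between the image and its blur."""
    blurred = box_blur(img)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# A small gradient as stand-in image data; real use would operate on an actual frame.
img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
print(unsharp_mask(img))
```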