Toss out the past two years, because they're a glaring exception to the rule of how new GPU launches go. Historically, the RTX 2070 basically matched GTX 1080 Ti performance for less money, less power, and more features. The RTX 3070 basically matched RTX 2080 Ti performance while using less power, and the MSRP was half as high. The GTX 1070 beat the GTX 980 Ti for a significantly lower price. And the GTX 970 was more or less tied with GTX 780 Ti performance. You're looking at the "history" of the past 24 months, and I'm looking at every other GPU launch going back more than a decade.
I can attest to the GTX 980 Ti ~ GTX 1070 and RTX 2080 Ti ~ RTX 3070 iso-performance from practical experience, which is why I find those tables rather useless that compare GPUs across generations only by their position within the range, without also adding a performance-per-watt comparison.
I'm usually not interested in getting the fastest available GPU; rather, I have a set of use cases that I want to match at the lowest price.
After dozens of passive EGA/VGA cards, my first accelerated GPU was an IBM 8514 clone from ATI, the original Mach 8, and I've owned at least one of pretty much every generation since, often ATI/AMD and Nvidia units of the same generation side-by-side.
I've always been a fan of high resolutions and big screens, which usually turned into a performance issue for the GPUs after display upgrades. But never was the performance abyss as bad as when I went from a multi-monitor setup with a primary 24" 1920x1200 screen to a 4k 43" screen 3-4 years ago. That screen has given me a large workspace without visible pixels, which may be very hard to improve upon (I can't see myself going for a bigger or higher-resolution screen on a desktop), but it drove up the requirements for gaming performance by a factor of almost four and beyond what high-end GPUs could deliver at reasonable frame rates.
While I've always used my computers professionally, the GPUs were mostly for a bit of gaming after hours. That changed when my work drifted towards machine learning on a CUDA base. There I'd been pretty happy with a GTX 1080 Ti mini, because it allowed me to do quite a bit of CUDA work that actually paid for the hardware ...until I did the 4k switch. Gaming at 4k was painful, but lower resolutions would show pixels on the 43" screen that sits at arm's length.
The next card, an RTX 2080 Ti, didn't improve gaming performance nearly as much as it should have, but its ability to trade floating-point precision for speed in machine learning made it a worthwhile investment anyway.
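To illustrate what I mean by trading precision for speed, the pattern I'm referring to is a mixed-precision training loop along these lines; this is only a minimal PyTorch sketch, and the model, sizes and data are placeholders rather than my actual workload:

    # Minimal sketch of trading precision for speed with PyTorch automatic
    # mixed precision; model, sizes and data are illustrative placeholders.
    import torch
    from torch import nn

    device = "cuda"
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    scaler = torch.cuda.amp.GradScaler()   # keeps fp16 gradients from underflowing

    for step in range(100):
        x = torch.randn(64, 1024, device=device)
        y = torch.randint(0, 10, (64,), device=device)
        optimizer.zero_grad(set_to_none=True)
        with torch.cuda.amp.autocast():    # matmuls run in fp16 on the tensor cores
            loss = loss_fn(model(x), y)
        scaler.scale(loss).backward()      # backward pass on the scaled loss
        scaler.step(optimizer)             # unscales gradients, then steps
        scaler.update()

Most of the speedup comes from the autocast region, where the heavy matrix math runs in reduced precision on the tensor cores while the weights stay in fp32.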
Still, its ~30 FPS performance at 4k in one of my family's favorite games, ARK: Survival Evolved, was rankling. And the other pet project, M$ Flight Simulator on an HP Reverb G1 VR headset, was downright atrocious, even with a Ryzen 5950X underneath.
So here I was a couple of weeks ago, doing a similar assessment of whether I should wait for an RTX 40 card or grab a high-end RTX 30 before stocks ran out.
For starters RTX30 delivers even more lower precision data formats for machine learning and 24GB RAM means much bigger models can be trained and operated than with the 11GB the RTX 2080ti offers (Incidentally, that 20GB RTX3080 at $600 is at a great value spot there, which few seem to appreciate). Clearly a lot of the current models just don't fit into a consumer GPU's RAM any more. But since a double RAM 4080 doesn't seem to be in the books until very late in the 40 life-cycle, that meant I'd have to aim at the top.
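To put the 11 GB vs 24 GB difference in perspective, here's a back-of-the-envelope sketch for the weight footprint alone; the parameter counts are purely illustrative, and training needs gradients, optimizer state and activations on top of this:

    # Rough VRAM needed just to hold model weights at different precisions
    # (parameter counts illustrative, not any specific model).
    def weight_gib(n_params, bytes_per_param):
        return n_params * bytes_per_param / 2**30

    for n_params in (1.5e9, 6e9, 13e9):
        print(f"{n_params/1e9:4.1f}B params: "
              f"{weight_gib(n_params, 4):5.1f} GiB fp32, "
              f"{weight_gib(n_params, 2):5.1f} GiB fp16 (weights only)")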
450 watts seems to be entry level for the 4090, and truly crazy numbers are being tossed around for peak usage, which turned out to be the clincher: going from 200/250 watts on the GTX 1080 Ti/RTX 2080 Ti to 350 watts on the RTX 3090 was within the heat-dissipation margin of my current chassis/ventilation and power supply (which still has to feed 200 watts of CPU, RAM, storage, etc.), as well as within the amount of heat I was willing to tolerate in my home lab. At 500 watts or more for the GPU alone, it very likely can no longer sit under my desk in summer.
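For what it's worth, the budget I was reasoning with looks roughly like this (the figures are the approximate ones above, not measurements):

    # Rough power/heat budget behind the decision (approximate board figures
    # from the text above, not measured values).
    rest_of_system_w = 200  # CPU, RAM, storage, fans
    for gpu, gpu_w in [("GTX 1080 Ti", 200), ("RTX 2080 Ti", 250),
                       ("RTX 3090", 350), ("RTX 4090", 450)]:
        print(f"{gpu:<12}: ~{gpu_w + rest_of_system_w} W to dissipate under the desk")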
So I went with a 3090, whose price for some reason hasn't dropped more than €40 in Europe over the last two months.
Buying with the intent to return is not something I condone. It's dishonest at best. Find someone with an upgrade policy if that's what you really want, but you'll pay for that as well.
I'd tend to agree, and yet at this point I don't entirely.
I'll freely admit, I was speculating on the legally prescribed 14-day return window, too. But the primary motivation was to find out whether it would deliver the gaming/simulator performance uplift I wanted for ARK at 4k and Flight Simulator on the HP Reverb in VR.
It failed to satisfy my gaming expectations, still falling short of a stable 60 Hz in most of my mainstream titles, but it was at least noticeably better, no longer dropping to 30 Hz.
In the case of Microsoft's Flight Simulator, it seems its abysmal performance, especially in VR mode, simply cannot be cured. I've got pretty much every other flight simulator, too, and they just race along at whatever frame rates the monitor and the HP Reverb headset support (I always cap frame rates at the display's capability).
But I was also quite ready to return and repurchase the card if prices went into free fall during that window.
Obviously the vendors are speculating with existing inventory to squeeze the last bit of revenue out of consumers for costs they have already incurred. At that point, I believe it becomes almost fair for consumers to join that game.