Well, it comes down to nVidia seeing that consumers were willing to pay through the nose for mediocre performance during the mining craze, and now they're using malicious segmentation to try to permanently set market prices. Normally competition prevents this from happening, but AMD will likely do the same thing; there's simply too much money to be had with this sort of soft price fixing. Your analogy was pretty accurate, though likely lost on most who aren't gearheads.
For those left wondering, displacement isn't so much the size of the engine as the volume of air moved during a full cycle. Higher displacement means more fuel can be combusted, which means more power per stroke and more power at the wheels. More modern engines get more power out of the same volume of air, but those same technologies would extract even more power from a larger displacement as well. GPUs absolutely drink memory bandwidth; the more the better, until the compute units are saturated. This is why memory overclocking usually does more for lower-tier cards than core overclocking, and why GPU manufacturers use memory bandwidth as a class divider.
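If you want to put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The bus widths and memory speeds are the commonly published specs for those cards, used here just for illustration, and the overclock example is hypothetical:

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) * per-pin data rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# Published spec numbers, for illustration only.
cards = {
    "RTX 3060 (192-bit, 15 Gbps GDDR6)": (192, 15.0),
    "RTX 3060 Ti (256-bit, 14 Gbps GDDR6)": (256, 14.0),
    "RTX 4060 Ti (128-bit, 18 Gbps GDDR6)": (128, 18.0),
}

for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gb_s(bus, rate):.0f} GB/s")

# A hypothetical 10% memory overclock on the narrow 128-bit card adds ~29 GB/s of raw
# bandwidth, which is why memory OC tends to move the needle more on bandwidth-starved cards.
print(f"4060 Ti at +10% memory clock: {bandwidth_gb_s(128, 18.0 * 1.1):.0f} GB/s")
```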
nVidia's new design allows a 50-class card to perform like a previous-generation 60-class card, so they just decided to call it a 60-class card and charge accordingly.
I think there's way, WAY more going on than just "Nvidia + mining." The AI situation is a real and present danger to gaming GPUs. As in, no matter what anyone on here thinks, AI is making vast amounts of profits for Nvidia. Look how much Nvidia stock went up in just the past couple of weeks. And AI needs lots of VRAM. AI also isn't going away. Jensen isn't talking hyperbole when he discusses all the areas that AI is likely to impact, HARD, in the coming decades. We're only just getting started.
Nvidia probably didn't want to put "too much" VRAM on lower tier cards, because then people and businesses might be tempted to run lower priced GPUs instead of paying for the professional stuff with double the VRAM. I think that's also why the RTX 4060 Ti 16GB isn't releasing until July, to provide more of a buffer. (I'm actually rather surprised that there will be a 4060 Ti 16GB, even if it's $100 extra... but even 16GB is pretty paltry for a lot of the AI LLMs.)
Memory capacity being tied to bus width is a problem, though. Nvidia only wanted to do a 128-bit bus on the 4060 and 4060 Ti, not to screw over consumers, but because it felt that's all it needed. The larger cache generally overcomes the reduced bus width and raw bandwidth. AMD did the exact same thing with RDNA 2 (and is doing it with RDNA 3 as well). But a 128-bit bus mostly limits capacity to 8GB, or 16GB if you put chips on both sides of the board, and that jump from 8GB straight to 16GB is too big for a "mainstream-budget" class GPU.
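For anyone wondering where those 8GB/16GB numbers come from, here's a minimal sketch, assuming 32-bit GDDR6 channels and 2GB (16Gb) chips, the largest density shipping on GPUs:

```python
# Assumptions: each GDDR6 package has a 32-bit interface, and the largest density
# on consumer GPUs is 2GB (16Gb) per chip. "Clamshell" mounts a second chip per
# channel on the back of the board, doubling capacity.
GDDR6_CHANNEL_BITS = 32
MAX_GB_PER_CHIP = 2

def capacity_options(bus_width_bits: int) -> tuple[int, int]:
    chips = bus_width_bits // GDDR6_CHANNEL_BITS
    normal = chips * MAX_GB_PER_CHIP
    clamshell = normal * 2
    return normal, clamshell

for bus in (128, 192, 256):
    normal, clam = capacity_options(bus)
    print(f"{bus}-bit bus: {normal}GB, or {clam}GB clamshell")
# 128-bit -> 8GB or 16GB, 192-bit -> 12GB or 24GB, 256-bit -> 16GB or 32GB
```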
RTX 3060 was a really good compromise solution: 192-bit bus, 12GB VRAM. If the pandemic and supply chain stuff, coupled with Ethereum mining, hadn't been a thing, I'm pretty certain all the rumors of a 20GB RTX 3080 refresh would have happened (maybe that would have been the 3080 Ti), and a 16GB RTX 3070 Ti. RTX 3060 getting 12GB was evidence Nvidia was moving in that direction. But then demand was all jacked up from mining and shortages, and so Nvidia decided to stick with lower capacities on everything except the 3060. 3070 Ti with 8GB was still one of the worst decisions of 2021 IMO.
Fundamentally, I think that in 2023, ~$400 GPUs should have 12GB of VRAM. That's a good balance between capacity and complexity. But it requires a 192-bit bus (or 3GB non-binary chips, which don't exist for GPUs), so we're left with 8GB or 16GB on a 128-bit bus as the only options. 8GB is too little, and 16GB is too much, at least for the amount of actual GPU horsepower we're talking about. Or, alternatively, 32MB of L2 cache (for Nvidia) isn't enough for the target market. A 48MB L2 cache would have been better, or maybe even 64MB, to overcome the 128-bit limitations, but that would have required a fundamental reworking of the architecture, and it probably would have been more expensive than just doing a 192-bit interface. Dropping from GDDR6X at 21~24 Gbps to GDDR6 at 18 Gbps was also a compromise.
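To make the cache-versus-bus-width tradeoff concrete, here's a very rough toy model. It assumes L2 hits never touch DRAM, and the hit rates are made-up illustrative values, not measured numbers:

```python
# Toy model only: if a fraction of memory requests hit in L2, the DRAM only has
# to service the misses, so the cache makes the narrow bus "look" wider.
# Hit rates below are hypothetical, purely for illustration.

RAW_GB_S = 128 * 18 / 8  # 128-bit bus at 18 Gbps GDDR6 = 288 GB/s

def effective_bandwidth(raw_gb_s: float, l2_hit_rate: float) -> float:
    """Bandwidth a cacheless design would need to serve the same request stream."""
    return raw_gb_s / (1.0 - l2_hit_rate)

for hit_rate in (0.40, 0.50, 0.60):
    print(f"L2 hit rate {hit_rate:.0%}: ~{effective_bandwidth(RAW_GB_S, hit_rate):.0f} GB/s effective")

# At higher resolutions the working set outgrows a 32MB L2, hit rates drop, and
# effective bandwidth falls back toward the raw 288 GB/s, which is where a bigger
# L2 (or just a 192-bit bus) would have helped.
```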
The other big problem is Nvidia's continued pushing of DLSS 3 Frame Generation as a "higher FPS" solution. I don't mind having DLSS 3, but it's really just smoothing out the frames sent to the screen. It can look better on high-refresh-rate displays, but it doesn't usually feel better. The charts Nvidia shows, where it's 50% or even 100% "faster" with FrameGen, are super misleading in that sense.
It's all damn irritating, basically. I know all the reasons why Nvidia chose this route, and they're not even wrong reasons, from a business perspective and mostly from a performance perspective. That's why this is a solid C- in terms of my score. It did the homework, half-heartedly, and left out some critical items. It had a fundamentally flawed premise. Maybe it should have been a 3-star (D-). Maybe a 10% jump is too big and we need to be more granular. If I were still writing for PC Gamer, it would have been a 65 instead of a 3.5-star. Ironically, PC Gamer actually scored it a 79, which is far too generous IMO.