After a few years of having systems with and without V-cache side by side, my personal experience is that it's overrated, most of the time.
Of course, I prefer playing at 4K with ultra settings on an RTX 4090, and at that point everybody seems to agree that the main bottleneck is the GPU.
But I wonder how many mainstream gamers will actually care about 400 vs. 200 FPS?
The main attraction of buying a V-cache chip was the assurance that you'd get the best no matter what, while it was certainly good enough for a bit of browsing and office work.
I know, because that's why I bought one, too: a 5800X3D to replace a 5800X, with a 5950X alongside it for some time.
It's also why I kept buying 5800X3Ds for the kids even after it had officially become last-gen tech: it was still in the lead pack and more than good enough, since the main bottleneck remained the GPU.
And when I do a full Linux kernel compile or an Android build, I get a cup of coffee anyway; a few seconds more or less don't really matter, while eight extra cores mean I won't need to drink two. As it turned out, the 5950X really wasn't bad enough at gaming to notice, but those extra cores were really cheap (at one point in time), and sometimes just as useful as V-cache could be. So of course I went with a 7950X3D next, to get both.
So while Intel knew their CPUs were plenty good enough for gaming even without V-cache, AMD could wield those top-performer laurels against them just as relentlessly as Intel had once used its #1 spot to relegate AMD to second fiddle. And #2 simply isn't a good place to be, as AMD knows full well from long suffering. Running hot and burning out didn't help Intel, either.
Yet I just can't see Intel clawing back that #1 slot even with that cache, because they won't be able to sustain it, given their cost structure. AMD didn't just get where they are today because they managed to beat Intel once: they showed that they could beat Intel consistently, generation after generation, even while using the same socket for the longest time.
And they did that at a price that didn't break the bank: I've seen estimates of $20 in extra production cost for a V-cache CCD.
V-cache did as much as double the performance on some specific HPC workloads, and I've also heard EDA mentioned. That's where it originated: the consumer part was a skunkworks project that turned into a gaming-crown guarantee, while V-cache EPYCs helped pay for the R&D and the production scale.
And that may be what's missing again at Intel: the ability to scale their variant of V-cache far and wide for economy. Otherwise they risk another Lunar Lake, a great performer with the help of niche technology, but not a money maker across the board because it's too expensive to make.
What most people do not appreciate is that AMD won, and keeps winning, the x86 battle not just on a performance lead, but on price/performance at the production level. And without similar or even lower production cost for better performance, Intel doesn't stand a chance of catching up.