Most of the innovations on the CPU side now are in the packaging. That's something AMD has nothing to do with, since they don't manufacture anything after the failure of their spun-off manufacturing arm. AMD's only real innovation over the last decade is 3D V-Cache, except that wasn't their innovation. It's TSMC's.
If it were as you say, they wouldn't have been way out in front of everyone else in their use of chiplets and 3D cache.
AMD partnered with TSMC to co-develop these things. AMD also had to do all the design work to make chiplet communication efficient and to avoid significant penalties when using 3D cache. From Intel's first experiments with chiplet-based CPUs, especially with their superior EMIB technology, we can now appreciate that it's not exactly trivial to get the design right.
They don't have to, anyone can license the 3D stacking technology from TSMC.
Yeah, now that they've had early partners, like AMD, who helped them refine it. Even then, it's not like you just click a checkbox in your chip layout tool.
Intel has chosen not to as they don't see a profitable enough market for such a niche product.
That's backwards logic: "Intel isn't doing it, so it must not be smart or worthwhile." I think the real answer is that Intel is still trying to master chiplets, as we can see from Arrow Lake's underwhelming performance. They'd better figure out how to do that right before adding any more complexity.
Then we move over to the GPU side and things get really ugly. Real-time ray tracing in modern games, G-Sync, DLSS, Reflex, frame generation, ray reconstruction...
AMD's GPU situation mirrors where their CPU situation was about 5 years ago: they had fallen behind their competition and were steadily catching up. The big difference is that Nvidia never took its foot off the gas the way Intel did (due to its 10 nm problems).
With that said, AMD used HBM on their GPUs long before Nvidia did. AMD went big with on-die caches in RDNA2, yet it took Nvidia until the RTX 4000 series to match them on it. AMD went 64-bit with CDNA, which Nvidia didn't match until Hopper. Speaking of CDNA, AMD's Matrix Cores supported all data types well before Nvidia's Tensor Cores went there.
GPUDirect is somewhat of a gimmick, in that it only works in very specific hardware setups. For its part, AMD had SSG back in 2016.
The irony being that Nvidia was/is trashed for most of these features when they were released.
No, pretty much just DLSS. And that was pretty bad until version 2.0.
Ray tracing on RTX 2000 was pretty much a gimmick, like it was on RDNA2, except even fewer games supported it back then.
G-Sync was good, but Nvidia kept it as a proprietary standard for way too long, and even when it opened up the certification, it still didn't let partners implement the full feature set in monitors that didn't have Nvidia's chip.
AMD gets credit for innovations that other companies brought to market first.
That's not what I see, but maybe we run in different crowds.