they’ve had less impact in the GPU space … data center / AI parts notwithstanding, the purchase of ATI didn’t turn into the same trajectory in GPUs …
I'll agree that AMD did flounder about after the ATI acquisition. They made many promising moves, like in GPU compute and (as I mentioned) with Fury. However, I think those efforts were deeply undermined by the way the company was circling the drain financially. Their GPUs were good enough to keep them in the console business, which provided a vital financial lifeline when they needed it most.
But Nvidia is a force to be reckoned with. They have been continually innovating, and their attempts to break into the phone/tablet and embedded markets forced them to jump on the efficiency train early, which served them well. You can see the effect it had on Maxwell and Pascal, for instance. Nvidia is also quite ruthless in their business practices, which complemented their engineering prowess and gave them the fuel needed to realize their GPU compute ambitions in a way AMD couldn't. And it was that success in GPU compute which put them in prime position to benefit from the deep learning revolution, which they recognized and exploited boldly and very effectively.
I'm not sure it's really fair to judge AMD using the measuring stick of Nvidia, because I think AMD innovated faster in its GPUs than just about anyone else would've. Let's not forget there were reasons ATI and Nvidia were the lone survivors of the first decade of hardware-accelerated 3D.
Lastly, if you use AMD vs. Intel as the standard for comparing AMD vs. Nvidia, you're missing one glaring detail. In the fight against Intel, AMD was able to ride TSMC's meteoric rise, at a time when Intel's foundries couldn't seem to do anything right. I don't know if you recall the bad old years when Intel got stuck on 14 nm for about 6 years, which is why they kept launching Skylake derivatives. TSMC propelled Zen 2 and Zen 3 just when Intel was at its most vulnerable. Well, AMD had no such advantage vs. Nvidia, who also used TSMC during that period. In fact, one of the reasons RDNA2 was able to catch (and slightly pass) Ampere was Nvidia's ill-fated decision to move over to Samsung.
As tough as it's been to watch AMD struggle against Nvidia, I think it's been for the best. It's meant that neither company could afford to be lazy, and that's put them in a position where even the (once) mighty Intel couldn't really touch them. At one time, this would've been unthinkable! Not to mention the Chinese upstarts gunning for them now.
Yeah, there have been a few moments here or there, but not enough to move the needle, and that's no good for the average consumer in this space.
Let's face it: Nvidia is the Ferrari of GPUs. They have a brand cachet that buttresses them when they make the rare misstep, and it has helped support stronger pricing than they've sometimes deserved.
I wouldn't count AMD out, but I'm not anticipating them leapfrogging Nvidia any time soon. There's always a chance they'll do something radical, like their rumored RDNA 4 flagship (now definitely cancelled) that would have employed die-stacking. It's not outside the realm of possibility that they pull another rabbit out of their hat, like they did with Infinity Cache.
I know AI-based rendering is a fraught topic among gamers, but they did announce a partnership with Sony to create a truly next-gen neural rendering architecture, where both companies can utilize the resulting IP. Let's see what comes of that.