News Intel Arc GPU performance momentum continues — 2.7X boost in AI-driven Stable Diffusion, largely thanks to Microsoft's Olive

The question we always have to ask is: what was the tradeoff? Model optimization usually comes at the expense of accuracy. In the case of something like generative AI, how do they characterize & quantify that loss in accuracy?
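One rough way to put a number on it: render the same prompt with the same seed and sampler settings through both the baseline and the Olive-optimized pipeline, then compare the outputs. Here's a minimal sketch, assuming you've already saved the two renders as baseline.png and optimized.png (hypothetical file names). PSNR is a crude pixel-level proxy — with diffusion models even tiny numeric differences can shift the image at a fixed seed — so perceptual metrics like LPIPS or FID over many samples are the more standard way to characterize the loss:

```python
import numpy as np
from PIL import Image

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two uint8 images (higher = closer)."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 20 * np.log10(255.0) - 10 * np.log10(mse)

# Hypothetical file names: same prompt, same seed, same sampler settings,
# rendered once with the baseline model and once with the optimized one.
baseline = np.asarray(Image.open("baseline.png").convert("RGB"))
optimized = np.asarray(Image.open("optimized.png").convert("RGB"))

print(f"PSNR: {psnr(baseline, optimized):.2f} dB")
```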
 
Every couple of weeks, my faith in the Arc series of cards is reinforced.

Buying this card was a great decision.

While I don't think it's quite ready for prime time yet, I'm very happy that Intel is going hard in the paint this time around. Focusing on the abandoned low-to-mid-tier segments was a pretty smart move. I'm very interested to see what Battlemage looks like when it comes out.
 
I'm 99% certain AMD does indeed have some sort of "AI Accelerators" in RDNA 3, they just haven't given them some fancy acronym or name.
Yeah, it seems like they're just bolted onto the execution pipeline, since they use the existing VGPRs (Vector General Purpose Registers) and you access them with instructions like WMMA (Wave Matrix Multiply Accumulate).
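To make the "bolted on" point concrete, here's a rough numpy emulation of what one of those instructions computes: a 16x16x16 wave-level matrix multiply-accumulate, D = A·B + C, where on real hardware all three tiles sit spread across the wave's ordinary VGPRs rather than in dedicated tensor-core storage. This is just an illustration of the per-instruction semantics (tile size and fp16-in/fp32-out types roughly match the V_WMMA_F32_16X16X16_F16 variant), not actual GPU code:

```python
import numpy as np

def wmma_f32_16x16x16_f16(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> np.ndarray:
    """Emulate one RDNA 3-style WMMA op: D = A @ B + C on 16x16 tiles.

    A and B are fp16 tiles, C/D are fp32 accumulators. On real hardware the
    tiles live in the wave's VGPRs; here we just model the arithmetic.
    """
    assert a.shape == b.shape == c.shape == (16, 16)
    return a.astype(np.float32) @ b.astype(np.float32) + c

# Toy usage: accumulate one 16x16 tile of a larger matmul.
a = np.random.rand(16, 16).astype(np.float16)
b = np.random.rand(16, 16).astype(np.float16)
acc = np.zeros((16, 16), dtype=np.float32)
acc = wmma_f32_16x16x16_f16(a, b, acc)
print(acc.shape, acc.dtype)  # (16, 16) float32
```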

You can read a little about them in Jarred's excellent RDNA 3 writeup.

To be honest, that + the ray tracing improvements of RDNA 3 are the only reasons I haven't jumped on a discounted RX 6800 card. If you want to do anything with AI, you're going to be much better off with RDNA 3.
 