Arc A750 vs RX 6600 GPU faceoff: Intel Alchemist takes on AMD RDNA 2 in the budget sector

🤦 There's nothing even remotely "pseudo-matrix" about the WMMA AI acceleration units in RDNA 3. WMMA literally stands for "Wave Matrix Multiply-Accumulate". 😑

Aka, they are fully dedicated matrix multiply-accumulate units (multiplication plus addition, which is exactly what the workload needs), just integrated into the existing CUs/floating-point ALUs to save on transistor count and die size instead of built as completely discrete blocks à la Nvidia/Intel (although of course at a performance deficit if those SPs/ALUs are needed for other compute work at the same time).
AMD's "Ray Accelerators" are integrated in a very similar way (although RDNA 4 is beefing them out to similar capability to Nvidia/Intel's fully separate ASIC's).

Each strategy has very different pros and cons, but all three vendors very much have proper dedicated hardware for low-precision matrix math acceleration on their current GPU architectures for AI workloads, with absolutely nothing "pseudo" about it. 🤷
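
To make the multiply-accumulate point concrete, here's a minimal scalar sketch in C++ of what a single 16x16x16 WMMA-style tile operation computes (the function name and flat tile layout are mine for illustration; on RDNA 3 the whole loop nest maps to one wavefront-wide instruction such as V_WMMA_F32_16X16X16_F16):

```cpp
// Scalar reference for one 16x16x16 WMMA-style tile op: D = A * B + C.
// A and B would be fp16 on real hardware; C/D are fp32 accumulators.
#include <array>
#include <cstdio>

constexpr int N = 16;
using Tile = std::array<float, N * N>;  // row-major 16x16 tile

Tile wmma_reference(const Tile& A, const Tile& B, const Tile& C) {
    Tile D{};
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            float acc = C[i * N + j];                // start from accumulator
            for (int k = 0; k < N; ++k)
                acc += A[i * N + k] * B[k * N + j];  // multiply + add, nothing else
            D[i * N + j] = acc;
        }
    return D;
}

int main() {
    Tile A{}, B{}, C{};
    A[0] = 2.0f; B[0] = 3.0f; C[0] = 1.0f;  // D[0][0] = 2*3 + 1 = 7
    Tile D = wmma_reference(A, B, C);
    std::printf("D[0][0] = %g\n", D[0]);
}
```

Note the inner loop is all multiplies and adds; that's the "MMA", and it's why these are multiply-accumulate units rather than general math blocks.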

And the claim that XeSS often looks better on Intel Arc than FSR does on Radeon because Intel has fully dedicated tensor units for matrix math versus AMD's WMMA-extended CUs is a pile of absolute NONSENSE, considering no existing version of FSR actually uses AI acceleration yet... 🤦
(AMD doesn't want to restrict compatibility to only GPUs with native matrix math support [Turing/Alchemist/RDNA 3 onwards], or at least not yet.)

Also, critically, AMD can get away with this kind of "extend the core CUs/ALUs instead of adding full-blown separate blocks" strategy far more successfully than its competitors could, thanks to its absolutely BEST IN CLASS asynchronous compute capabilities.

Being first to implement async compute by many, MANY years has left AMD's ACE (Asynchronous Compute Engine) units head and shoulders above the competition's, letting AMD recapture a bunch of otherwise unused/wasted compute throughput each clock cycle for things like hardware ray tracing or AI/matrix math acceleration. That makes the performance hit from ALU sharing less significant than it would otherwise be. (Although of course it's still very much there!)
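
For anyone curious what async compute looks like at the API level, here's a minimal Vulkan sketch (the helper name is my own) that finds a compute-only queue family; command buffers submitted to a queue from that family can execute concurrently with the graphics queue, which is exactly the kind of scheduling AMD's ACEs are built for:

```cpp
// Minimal sketch: locate a dedicated async-compute queue family on a GPU.
// Assumes a VkPhysicalDevice has already been enumerated from a VkInstance.
#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

// Returns the index of a queue family with COMPUTE but not GRAPHICS,
// or UINT32_MAX if the device only exposes combined queues.
uint32_t find_async_compute_family(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i) {
        const VkQueueFlags flags = families[i].queueFlags;
        if ((flags & VK_QUEUE_COMPUTE_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT))
            return i;  // dedicated compute family: async-capable
    }
    return UINT32_MAX;  // only combined graphics+compute queues available
}
```

Submit compute passes (denoising, upscaling, etc.) to a queue from that family with vkQueueSubmit and the hardware scheduler can overlap them with rendering; how much idle ALU time actually gets recaptured is where AMD's ACE implementation shines.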
 
Wow, this thread is right on the edge of going down that bad track....

I admit that I was staunchly against recommending Arc at release. Now that the work has been put in, I feel it's in the place it should have been at launch rather than two years later. Just the same, Intel has gathered its eggs back into the basket and is offering a product that is absolutely workable for folks who don't want to spend what the other makers' budget-to-midrange cards cost. IMO they have strengthened their spot at the bottom a bit, and hopefully we will see better and more mature products out of them in this next round.

I admittedly cannot understand all the vitriol toward the entire RX line being presented in this thread. If you don't care about the all-holy RT feature, the RX cards perform admirably from the upper low end to perhaps the lower high end. A lot of grunt for the money spent, and good VRAM allotments on some of them as well.

I am taking a bit of a wait-and-see approach to the rumors that the 4000-series cards may have a GPU degradation issue.
 