While I don't care at all about gaming, I'd be interested in seeing more detail on the performance boosts of Intel's GPUs in creator features and apps that use the AI matrix operations.
From one of the other Tom's articles, I see Intel's new GPUs have 16 1,024-bit matrix (XMX) units per Xe core, and the current-gen Arcs go up to 32 Xe cores. I'd be interested in some reviews of how well these units are actually being utilized by non-gaming apps.
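For anyone who wants to poke at this themselves, here's a rough sketch of the kind of test I mean, using Intel's Extension for PyTorch, whose "xpu" device is supposed to route FP16 matmuls through the XMX units. I haven't run this on Arc hardware myself, so treat the device name and the synchronize call as assumptions about that stack, not gospel:

# Rough matmul throughput probe for Arc's XMX units via Intel Extension
# for PyTorch. Assumes the "xpu" device and torch.xpu.synchronize() that
# IPEX provides; the numbers only mean anything relative to each other.
import time
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device

N = 4096
a = torch.randn(N, N, dtype=torch.float16, device="xpu")
b = torch.randn(N, N, dtype=torch.float16, device="xpu")

# Warm up so one-time kernel compilation doesn't pollute the timing.
for _ in range(3):
    torch.matmul(a, b)
torch.xpu.synchronize()

iters = 20
start = time.perf_counter()
for _ in range(iters):
    torch.matmul(a, b)
torch.xpu.synchronize()
elapsed = time.perf_counter() - start

# A dense N x N matmul is roughly 2*N^3 floating-point ops.
tflops = (2 * N**3 * iters) / elapsed / 1e12
print(f"FP16 matmul: ~{tflops:.1f} TFLOPS")

Comparing that number against the plain FP32 path, and across driver releases, would tell us how much of the XMX hardware creator apps can actually reach.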
I'd also like to see their hardware encoding performance for formats other than AV1. Is the claimed 50x boost for AV1 just a fluke, or can we expect that kind of boost for other encoding formats too?
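If any reviewers are reading: even a quick pass over ffmpeg's QSV encoders would answer this. Something like the sketch below, driving ffmpeg from Python; h264_qsv, hevc_qsv, and av1_qsv are the names ffmpeg uses for Intel's hardware encoders, but availability depends on your ffmpeg build and driver, and input.mp4 is just a hypothetical test clip:

# Quick-and-dirty comparison of Intel QSV hardware encoders via ffmpeg.
# Assumes an ffmpeg build with QSV enabled and a test clip at input.mp4;
# measures wall-clock encode time per codec at the same target bitrate.
import subprocess
import time

ENCODERS = ["h264_qsv", "hevc_qsv", "av1_qsv"]  # ffmpeg's QSV encoder names

for enc in ENCODERS:
    cmd = [
        "ffmpeg", "-y", "-hide_banner", "-loglevel", "error",
        "-i", "input.mp4",            # hypothetical test clip
        "-c:v", enc, "-b:v", "8M",    # same target bitrate for each codec
        f"out_{enc}.mp4",
    ]
    start = time.perf_counter()
    result = subprocess.run(cmd)
    elapsed = time.perf_counter() - start
    status = "ok" if result.returncode == 0 else "failed"
    print(f"{enc}: {elapsed:.1f}s ({status})")

Running the same clip through the software encoders (libx264, libx265, and an AV1 encoder like SVT-AV1) would show whether the 50x figure is something special about AV1, or just the usual gap between hardware and software encoding.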