While I don't care at all about gaming, I'd be interested in seeing more detail about the performance boosts of Intel's GPUs on creator features and apps that use the AI matrix operations.
From one of the other Tom's articles, I see Intel's new GPUs have 16 1Kbit matrix units per Xe core, and the current-gen Arcs will go up to 32 cores. I'd be interested in some reviews of how well these are being utilized by non-gaming apps.
I'd also like to see their hardware encoding performance for formats other than AV1. Is the claimed 50x boost for AV1 just a fluke, or can we expect that kind of boost for other encoding formats?
Intel doesn't need to make anything higher than a 3070. As long as they make good drivers and set decent prices, they just need to pump a ton of them out. So honestly this doesn't really matter, because that should be the lowest-end SKU.