With jlake3's nice list here and the picture from that HP workstation I'm ready to make a first set of predictions about Strix Halo:
1. It almost certainly will come to mini-PCs in time. It’s not even out yet.
Yup, the HP Z2 proves that point
2. It has quad-channel unified memory, and thus cannot be a drop-in option for AM5 based desktops.
Actually I believe it's 8 channels at 32-bit rather than 4 channels at 64-bit, but that's LPDDR5 vs. DDR5 for you...
The basic truth is that a socket like AM5 can't simply double its number of RAM channels, so any Strix Halo product will most likely be soldered BGA, which you can still put into a desktop or tower chassis and give a few slots.
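To put rough numbers on the channel discussion, here is a minimal back-of-the-envelope sketch. The transfer rates are my own assumptions (LPDDR5X-8000 for Strix Halo, DDR5-6000 for a typical AM5 build), not confirmed specs; the point is that 8x32-bit is the same 256-bit bus as "4 channels at 64-bit", and either way it's double what the AM5 socket wires out:

```python
# Peak theoretical bandwidth: bytes/s = bus width in bytes * transfers/s.
# MT/s figures below are illustrative assumptions, not confirmed specs.
def bandwidth_gbs(channels, bits_per_channel, mts):
    """Peak theoretical bandwidth in GB/s."""
    return channels * bits_per_channel / 8 * mts / 1000

# Strix Halo: 8 LPDDR5X channels x 32-bit, assumed 8000 MT/s -> 256-bit bus
halo = bandwidth_gbs(8, 32, 8000)

# AM5 desktop: 2 DDR5 channels x 64-bit at 6000 MT/s -> 128-bit bus
am5 = bandwidth_gbs(2, 64, 6000)

print(f"Strix Halo: {halo:.0f} GB/s, AM5 dual-channel: {am5:.0f} GB/s")
```

That's roughly 256 GB/s vs. 96 GB/s, which is exactly why this can never be a drop-in AM5 part.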
3. It’s equal to a 4070 laptop with a 65w TGP, but that configuration is easily surpassed in gaming by midrange desktop dGPUs.
And there the main question is: how much will you be charged for the "premium" of having an APU instead?
In a desktop: who'd want to pay extra?
In a laptop: the more portable the more you'll have to pay.
So AMD's current strategy is to recycle Apple's Mx argument and charge a premium for an "AI workstation" that is essentially cheaper to make than a CPU/GPU combo with the same performance... while those exact combinations can currently be bought for $600-1000.
For gaming, Strix Halo won't get better than a Lenovo LOQ with an RTX 4070 and a Phoenix APU; for portability you might get twice the mobile endurance at iso-performance. But that could be 1 hour instead of 30 minutes: nothing about Strix Halo will enable mobile gaming for eight hours at full APU power.
4. The complexity and large iGPU and extremely fast unified RAM and the fact it can’t drop into AM5 means it would likely be very expensive for a desktop APU, and desktops needing good graphics performance but not willing to use a dGPU and being willing to pay a premium for that is a slim niche, so it’s not AMD’s launch priority. Someone will fill it eventually and people will probably complain it’s a poor value.
Actually, mass production seems quite capable of absorbing the complexity overhead, much of which is just shifted from the mainboard and dGPU into the APU.
And AMD's genius is in making sure that not every part is completely new and bespoke: IP is largely recycled and silicon dies might see lots of reuse. Mostly I guess AMD plans to sell quite a lot of these, since Intel has nothing to counter.
So after accounting for the higher APU price vs. Strix Point, the mainboards shouldn't be much more expensive than an APU+dGPU combo, and the laptop's extra power dissipation headroom doesn't carry a linear cost if form factor and weight aren't pushed to extreme limits.
RAM production cost should be far more reasonable than what vendors will want to charge. And those first pictures hint at soldered RAM, which vastly reduces production cost while vastly inflating the initial sales price.
Unfortunately, adding LPCAMM2 module support would significantly increase mainboard and memory module production cost, only to then have to compete on a cut-throat (consumer friendly) RAM market.
That will only happen if consumers refuse to fall for soldered RAM.
Are these systems really in any way "Workstations for transformative AI performance"?
For training, anything this puny has zero mass appeal.
In LLM inference their speed advantage would really just be twice the [CPU] RAM bandwidth for any model that oversteps dGPU VRAM boundaries. That would be a sweet spot only if there were anything sweet or valuable to have there. After spending a couple of hours with the biggest DeepSeek R1 I could fit on my RTX 4090, I am less convinced than ever. It told me Marie Antoinette had no biological mother... few things are as certain as everyone needing a biological mother.
And even then I wonder if it's worth paying twice the price of a current gamer laptop or getting half the gaming performance of a dGPU system: 6 vs 3 token/sec, that's a lot like 6 vs 3 FPS for gaming.
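The 6-vs-3 figure is easy to sanity-check: batch-1 LLM decode is memory-bandwidth-bound, since every token has to stream the full set of weights through the memory bus. A minimal sketch, where the model size and both bandwidth figures are my own illustrative assumptions:

```python
# Batch-1 decode reads (roughly) every weight once per token,
# so tokens/s ~= memory bandwidth / model size.
# All numbers below are illustrative assumptions, not measurements.
def tokens_per_sec(bandwidth_gbs, model_gb):
    """Rough upper bound on bandwidth-limited decode speed."""
    return bandwidth_gbs / model_gb

model_gb = 40  # e.g. a ~70B-parameter model quantized to ~4.5 bits/weight

halo = tokens_per_sec(256, model_gb)        # assumed unified-memory APU
spilled = tokens_per_sec(128, model_gb)     # assumed dGPU box spilling to RAM

print(f"{halo:.1f} vs {spilled:.1f} tokens/s")
```

That lands near 6 vs. 3 tokens/s: a real 2x, but 2x of something that is painful either way.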
Especially if you're not that into gaming or AI, or your current hardware will do nicely.
And then there is Nvidia's new AI NUC...
Well, let's just say: the more you guys buy this stuff this year, the cheaper it will be for me next year.