News AMD's Gorgon Point APU line-up breaks cover — Allegedly aiming for a 2026 launch

Medusa Halo will be a fairly big evolution over Strix Halo on the iGPU side, but it'll still be RDNA 3.5 as far as I know.

There won't be RDNA 4 for laptop devices; they'll go directly to UDNA around 2027.
 
Gorgon Point appears to be a mix of Strix Point and Krackan SKUs.

At the very bottom is the Ryzen AI 3, a quad-core Zen 5 part with only 2 CUs of RDNA 3.5. That would be a massive cut even for Krackan, but there's no way it could be a new die like the missing-in-action "Sonoma Valley" because of the PCIe lanes. So it's probably Krackan.

Another interesting point is how the 12-core and the 8-core get the NPU clocked up by 10%, but the 10-core doesn't. Those could be the top Strix Point and Krackan dies respectively, but I don't know if Krackan has that many PCIe lanes. The 8-core can probably be dual-sourced from Strix Point dies, although the leak doesn't mention the big/little split (which can't be 3/5 for Krackan).

I have no idea where the drop-off in cache from 22 MB to 14 MB comes from, other than 8 MB being disabled for "reasons". The figure combines L2 + L3, and Krackan 6-cores do have 22 MB of L2+L3.
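For what it's worth, the arithmetic does line up with the "8 MB disabled" guess. A quick sanity check (assuming 1 MB of L2 per core, which holds for both Zen 5 and Zen 5c, and that the leaked figures really are combined L2 + L3):

```python
# Rough cache math for the 22 MB -> 14 MB drop. Purely a sanity check, not a spec.
cores = 6
l2_per_core_mb = 1                       # Zen 5 and Zen 5c both carry 1 MB L2 per core
l2_mb = cores * l2_per_core_mb           # 6 MB of L2
l3_mb = 22 - l2_mb                       # 16 MB of L3 implied by the 22 MB figure

l3_disabled_mb = 8                       # the "8 MB disabled for reasons" guess
print(l2_mb + l3_mb)                     # 22 MB, matches Krackan 6-cores
print(l2_mb + (l3_mb - l3_disabled_mb))  # 14 MB, matches the odd SKU in the leak
```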

Finally, I hope this refresh is in parallel with Medusa Point and is not pushing it back significantly.

Medusa Halo will be a fairly big evolution over Strix Halo on the iGPU side, but it'll still be RDNA 3.5 as far as I know.

There won't be RDNA 4 for laptop devices; they'll go directly to UDNA around 2027.
Some of these may use a more advanced node, so it would be interesting to see RDNA 3.5 on two different nodes.

Strix Point is bandwidth starved, so I hope we see some Infinity Cache on Medusa Point.
 
Following a two-year cadence, we anticipate Zen 6 to launch sometime later in 2025.
This should be 2026 then, no?

I'm not sure why anyone thinks that Zen 6 is coming out this year when there's no reason for it to, unless Intel were going to completely dominate the market with PTL (hint: they're not). AMD has followed the two-year cadence for a while now, and Zen 5 launched in 24Q3.

I could see a Strix Halo Zen 6 successor using RDNA4, but all leaks have pointed towards Zen 6 APUs using RDNA 3.5. It makes sense to continue using RDNA 3.5 since memory bandwidth holds these parts back already and there really isn't a way around it without adding silicon.
 
This should be 2026 then, no?
Definitely, even Wikipedia says 2026: https://en.wikipedia.org/wiki/Zen_6

Now if you check the exact number of months between desktop gens, I think it's usually a bit less than 2 years, like around 18-22 months on average. And there are extra mobile gens thrown in that aren't refreshes. But Gorgon Point looks like a bog standard refresh and there's no reason it can't come out before the end of 2025.
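If you want to check the cadence yourself, here's a rough calculation. The desktop launch months are from memory, so treat them as approximate:

```python
# Rough gap (in months) between desktop Zen generation launches.
from datetime import date

launches = {
    "Zen (Ryzen 1000)":   date(2017, 3, 1),
    "Zen 2 (Ryzen 3000)": date(2019, 7, 1),
    "Zen 3 (Ryzen 5000)": date(2020, 11, 1),
    "Zen 4 (Ryzen 7000)": date(2022, 9, 1),
    "Zen 5 (Ryzen 9000)": date(2024, 8, 1),
}

names = list(launches)
for prev, curr in zip(names, names[1:]):
    a, b = launches[prev], launches[curr]
    months = (b.year - a.year) * 12 + (b.month - a.month)
    print(f"{prev} -> {curr}: {months} months")
# Gaps of roughly 28, 16, 22 and 23 months: about two years apart,
# a bit less on average if you also count Zen+ (April 2018).
```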

Signs point to Zen 6 + RDNA 3.5 for Medusa Point, possibly with Infinity Cache, and possibly with an "RDNA 3.5+" FP8 enhancement to have full support for FSR4. I'm not sure that Medusa Halo can't use RDNA 4, though. But we're still in early 2025, so let them cook and let the leaking continue. I want to know if Sonoma Valley ever sees the light of day.
 
Medusa Halo will be a fairly big evolution over Strix Halo on the iGPU side, but it'll still be RDNA 3.5 as far as I know.

There won't be RDNA 4 for laptop devices; they'll go directly to UDNA around 2027.
If it's still RDNA 3.5, how is it going to be a big improvement on the iGPU side? CPU side, sure, being Zen 6 and using 12-core CCDs. I read they are bringing Infinity Cache to more and more products too.

Gorgon Point sounds like a total nothingburger. Medusa Point will be out in 2026 as well (Zen 6 is NOT releasing this year in any form), so you'd be insane not to wait. Medusa Halo, though, would more likely be early 2027.
 
If it's still RDNA 3.5, how is it going to be a big improvement on the iGPU side? CPU side, sure, being Zen 6 and using 12-core CCDs. I read they are bringing Infinity Cache to more and more products too.
Medusa Point: same 16 CUs on a more advanced process node (Strix Point is N4P), faster LPDDR5X memory support, and hopefully at least 16 MiB of Infinity Cache.

Medusa Halo: 48 CUs at the top instead of 40 CUs (+20%), possibly a more advanced process node, LPDDR6 memory support, and possibly a 384-bit bus width option.

AMD's APUs are almost always bandwidth starved, so Infinity Cache arriving in the mainstream APUs could be a game changer. Better power efficiency from a new node is always helpful, especially when these APUs are used at around 10-30 Watts.
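To put rough numbers on "bandwidth starved": peak bandwidth is just bus width times data rate. The Strix figures below are the known configurations; the 384-bit LPDDR6 line is purely the speculation from the list above, with an assumed data rate:

```python
# Peak memory bandwidth in GB/s = bus width (bits) / 8 * data rate (MT/s) / 1000.
def bandwidth_gbps(bus_bits: int, mt_per_s: int) -> float:
    return bus_bits / 8 * mt_per_s / 1000

print(bandwidth_gbps(128, 7500))   # Strix Point, 128-bit LPDDR5X-7500:  120 GB/s
print(bandwidth_gbps(256, 8000))   # Strix Halo,  256-bit LPDDR5X-8000:  256 GB/s
print(bandwidth_gbps(384, 10667))  # hypothetical Medusa Halo, 384-bit LPDDR6: ~512 GB/s
```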

The "Halo" APUs are good for two things: LLMs/AI, and replacing or competing with overpriced mobile dGPUs. Until the AI bubble pops, their graphical prowess doesn't actually matter that much. They need more TOPS, more memory, and more bandwidth. Maybe that will be the purpose of an "RDNA3.5+": deliver FSR4 support for APUs, and lots more TOPS for Medusa Halo.
 
This will be great for cheaper home LLM inferencing, with 192 GB max RAM and 92 GB allocated to the iGPU. Use a Framework or similar to add a PCIe slot and a used 100 Gbps RoCE Ethernet card, then add more of those until the cluster can hold your choice of LLM model in GPU memory; get it working and you're all set with a functional, though somewhat sluggish, setup.
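Back-of-the-envelope sizing for such a cluster, using the 92 GB iGPU allocation mentioned above; the quantization size and KV-cache overhead are rough assumptions, not measurements:

```python
# Estimate how many "Halo" boxes are needed to hold a quantized LLM entirely in
# iGPU-allocated memory. Rough sizing only: ignores activations, uneven splits
# and networking details.
import math

def boxes_needed(params_b: float, bytes_per_param: float,
                 gpu_mem_per_box_gb: float = 92, kv_overhead: float = 1.15) -> int:
    weights_gb = params_b * bytes_per_param   # e.g. ~0.5 bytes/param for a 4-bit quant
    total_gb = weights_gb * kv_overhead       # crude allowance for KV cache etc.
    return math.ceil(total_gb / gpu_mem_per_box_gb)

print(boxes_needed(70, 0.5))    # 70B model at ~4-bit: 1 box
print(boxes_needed(405, 0.5))   # 405B model at ~4-bit: 3 boxes
print(boxes_needed(671, 0.5))   # 671B model at ~4-bit: 5 boxes
```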
 