News Mysterious AMD Ryzen AI MAX+ Pro 395 Strix Halo APU emerges on Geekbench — processor expected to officially debut at CES 2025

Seems the GPU would be severely held back by the shared memory bus. Unless there is some onboard memory, like the old AMD boards, this will show good specs but come up short, at least for gaming.
 
Somehow I think we're still missing a crucial thing or two about the Strix Halo...

So far all we hear is essentially a Dragon Range successor with a beefier IOD, but that doesn't add up.

40 CUs should require the equivalent of 300 GB/s of bandwidth to stretch their legs, and even with that it's still a little low on graphics power to complement those 16 CPU cores: for pure gaming I'd trade twice the CUs for half the CPU cores.

And I can't quite see the wisdom of stuffing these ingredients, in this particular spread, into a single box.

So
a) where does it get the bandwidth it seems to need?
b) why bother spending so much on an iGPU that's still no match for those CPU cores in gaming?
c) how does all of that work out in a mobile use case?
d) is there a server angle we're missing?

For a) HBM seems out because of pricing and perhaps heat. A stack or two of Lunar Lake-style LPDDR5 RAM on the die carrier might deliver the bandwidth, but where to put it? Strix Halo isn't AM5, so there could be a rather roomier die carrier, perhaps even with two or more stacks, but all that effort doesn't seem to offer deployment scale... unless consoles were to suddenly use it exclusively. But I can't see those going for 16 cores anytime soon.
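As a rough sanity check on a), here's a quick back-of-envelope sketch (Python; the bus widths and data rates are illustrative assumptions, not confirmed Strix Halo specs) of what a few plausible memory configurations would deliver:

```python
# Peak theoretical bandwidth = bus width (bytes) x transfer rate (MT/s).
# All widths and data rates are illustrative assumptions, not confirmed specs.

def bandwidth_gbps(bus_width_bits: int, transfer_rate_mtps: int) -> float:
    return bus_width_bits / 8 * transfer_rate_mtps / 1000

configs = {
    "dual-channel DDR5-5600 (128-bit)":    (128, 5600),
    "quad-channel LPDDR5X-7500 (256-bit)": (256, 7500),
    "quad-channel LPDDR5X-8533 (256-bit)": (256, 8533),
    "single HBM2e stack (1024-bit, 3200)": (1024, 3200),
}

for name, (width, rate) in configs.items():
    print(f"{name}: {bandwidth_gbps(width, rate):.0f} GB/s")

# dual-channel DDR5-5600 (128-bit):    90 GB/s
# quad-channel LPDDR5X-7500 (256-bit): 240 GB/s
# quad-channel LPDDR5X-8533 (256-bit): 273 GB/s
# single HBM2e stack (1024-bit, 3200): 410 GB/s
```

So a plain dual-channel setup falls far short of the ~300 GB/s mark, while a 256-bit LPDDR5X interface or an HBM stack gets into the right ballpark.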

Btw: did I mention that I'd just love to still have [SO-]DIMM sockets for normal RAM expansion and pure CPU use?

b) that's the biggest puzzle for me: it's just too much CPU power for what the iGPU can deliver. And without extra memory channels, not even AI workloads would make those CUs attractive. And with an iGPU this size, adding a dGPU just seems very wasteful. Some might still want that, but that's too niche for AMD. I even wonder if they might significantly reduce the PCIe lanes coming off that IOD if desktop/workstation isn't really a focus area.

c) How much of that power would be usable at 15/30/45/60/90/120 Watts? What would the devices look like? And how would that translate into the vast numbers that AMD typically requires to build a bespoke chip?

d) My impression is that AMD starts all designs with a server angle first and only grudgingly serves the less lucrative markets: so are we missing a smart server angle for Strix Halo?
 
Looks like it will have quad-channel memory, so bandwidth is not that bad.

B: Looks like it is not a gaming APU; the leaks suggest it will be marketed for laptop workstations, which require a lot of CPU power.

C: The leaks suggest it will max out at 120 Watts, which is perfect for mid-size workstations, which are neither ultra-thin nor bulky.

D: Server CPUs don't have iGPUs.
 
Looks like it will have quad-channel memory, so bandwidth is not that bad.
Hmm, how's that? Doing it externally, like a Threadripper, would be way too expensive on a desktop motherboard, so it would have to be split: two channels on-die and two external.

I'd love something like that, but can they sell such a thing in the volumes that justify it?
B: Looks like it is not a gaming APU; the leaks suggest it will be marketed for laptop workstations, which require a lot of CPU power.
That's too niche a market for AMD, I'd say: I just can't see it scaling to where they'd get the return they need.
C: The leaks suggest it will max out at 120 Watts, which is perfect for mid-size workstations, which are neither ultra-thin nor bulky.
True, such a machine would be attractive for some, if the price was right. But you can build something like that much more easily using a dGPU and a normal SoC.
D: Server CPUs don't have iGPUs.
One might argue that the MI300 parts are APUs. And with that IOD they might be able to build GPGPU machines... if they play it really smart.

Again, I think we're missing something, there has to be a scale solution for that IOD that's not "laptop workstation".
 


The problem with that is, after all of it, the extra cost may be as much as or more than just a separate GPU.
Quad-channel memory alone will cost more, not just for the extra memory but also for PCB design and testing.

Don't get me wrong. I am curious to see what it can do and how it will play out. But this seems more like a chip made for a standalone gaming system or some other high-end task that isn't well served, which maybe I am just not seeing.
 
Seems the GPU would be severely held back by the shared memory bus. Unless there is some onboard memory, like the old AMD boards, this will show good specs but come up short, at least for gaming.
It has a 256-bit LPDDR5X interface, so well over double the bandwidth of a typical desktop CPU.
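As a quick check of that claim (assuming LPDDR5X-8000 against a typical dual-channel DDR5-5600 desktop; both data rates are assumptions, not confirmed specs):

```python
# Peak theoretical bandwidth = bus width (bytes) x transfer rate (MT/s).
def bandwidth_gbps(bus_width_bits: int, transfer_rate_mtps: int) -> float:
    return bus_width_bits / 8 * transfer_rate_mtps / 1000

strix_halo = bandwidth_gbps(256, 8000)  # assumed LPDDR5X-8000: ~256 GB/s
desktop = bandwidth_gbps(128, 5600)     # dual-channel DDR5-5600: ~90 GB/s
print(f"{strix_halo:.0f} GB/s vs {desktop:.0f} GB/s -> {strix_halo / desktop:.1f}x")
# 256 GB/s vs 90 GB/s -> 2.9x
```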

Overall performance may be comparable to the desktop RX 7600 in some situations.
Strix Halo is not a desktop product; it's for thin-and-light gaming laptops and workstations. MacBook Pro competitors.
 
I think the purpose is not to save money, since it likely will aim at workstations.

But to save space. A dedicated GPU needs its own VRM and package, which takes space, plus its own RAM dies; by integrating all of that, you get a more compact design and thus more room for additional SSDs or a larger cooler.

Also, a laptop can be made with LPCAMM2, so that all of the memory can be replaced, unlike GPU VRAM. (Most decent workstations still have replaceable RAM; the ThinkPad P1 already uses LPCAMM2.)