News Intel's rumored 'Nova Lake-AX' allegedly packs insane specs but might never launch — reportedly featuring 28 CPU cores, 48 Xe3 GPU cores, and an upg...

Admin

That kind of GPU horsepower demands serious bandwidth, and Intel seems ready. Nova Lake-AX is rumored to support LPDDR5X memory at speeds up to 9,600 MT/s, possibly even 10,667 MT/s, across a wide 256-bit bus. That would match and even exceed AMD's Strix Halo, which also uses a 256-bit interface but tops out at 8,000 MT/s. Apple's M-series chips use unified memory at up to 8,533 MT/s in the M4 Max, though the narrower buses further down the lineup limit actual throughput.
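For reference, peak theoretical bandwidth is just transfer rate times bus width in bytes. A quick sketch in Python, using the rumored/quoted figures above (these are leaked numbers, not confirmed specs):

```python
def peak_bandwidth_gb_s(rate_mt_s: int, bus_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s: transfers/s * bytes per transfer."""
    return rate_mt_s * 1e6 * (bus_bits / 8) / 1e9

# Rumored figures from the article -- not confirmed specs.
configs = {
    "Nova Lake-AX (LPDDR5X-10667, 256-bit)": (10_667, 256),
    "Nova Lake-AX (LPDDR5X-9600, 256-bit)":  (9_600, 256),
    "Strix Halo (LPDDR5X-8000, 256-bit)":    (8_000, 256),
}

for name, (rate, bus) in configs.items():
    print(f"{name}: {peak_bandwidth_gb_s(rate, bus):.1f} GB/s")
```

At the top rumored speed that works out to roughly 341 GB/s versus 256 GB/s for Strix Halo, i.e. about a third more.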
Even if it's not cancelled or "paused", it may end up competing with LPDDR6 devices that make 10,000 MT/s look less impressive.
 
I wonder if Strix Halo itself actually has a proper market; I'm not seeing it storming the shelves or the top-seller rankings anywhere.

And IMHO the main problem is price.

Strix Halo should be cheaper than a conventional CPU + dGPU combination, given what it delivers to ordinary consumers in productivity and gaming, and given the BOM cost of devices based on it.

Yet the market is still filled with last-generation hardware, say an 8-core Zen APU paired with an RTX 4060, at well below €1000. That delivers better gaming, and it lasts long enough at 2D desktop work on battery to let single-laptop owners survive outside their office, home, or dorm.

I know it's technically very different, but who's ready to pay for a difference that doesn't provide practical value to the vast majority of the market?

Entry-level LLM inferencing on notebooks isn't a mass market, if it's anything more than marketing hallucinating itself (or letting LLMs do the hallucinating for them). Especially if you charge HBM prices for getting 128 GB of LPDDR5X soldered in.

And paying in excess of €3000 for RTX 5050 performance on a desktop?

AMD is trying to grab market share from Apple, I guess, using what is basically a console design... without a console actually using it to provide the scale that would make it cheap.

But Strix Halo doesn't seem to deliver 24 hours on a single charge, nor does it deliver whatever Fruity Cult fans look for when they buy a top-end Mx laptop.

Strix Halo is designed to be cheaper than that CPU + dGPU combo, lowering component count and using commodity DRAM, but they aren't selling it that way.

That low-value, tiny niche market doesn't get any bigger by Intel trying to catch up: they'd be foolish to chase a halo product, given their current state. AMD could lower the price towards their production cost, which in Intel's case would be far higher, whether they use their own fabs or not, again for lack of scale.

It doesn't surprise me at all that Strix Halo is already seeping into the AliExpress super-NUC market, where it might sell like hotcakes once prices come into range, perhaps only because AMD has its successors pushing out of the fabs and it has surplus.

So why is AMD not selling Strix Halo nearer to production cost already?

I have plenty of speculation, very little insight.
 
Where is the article on AMD's rumored insane Zen 6 specs: 12-core CCD, 24 cores, 48 threads, 48 MB cache per CCD, 7 GHz core speed?
I want to comment "AMD's Sandy Bridge moment".
The 12-core Ryzen 7 10800X3D will boast at least 144 MB of cache, and so will the 8-core Ryzen 5 10600X3D.
It will be interesting to see the effect of the cache boost from 96 MB to 144 MB (or even 192 MB) on gaming performance. The cache amount hasn't changed since the 5800X3D.
 
Where is the article on AMD's rumored insane Zen 6 specs: 12-core CCD, 24 cores, 48 threads, 48 MB cache per CCD, 7 GHz core speed?
I want to comment "AMD's Sandy Bridge moment".
The 12-core Ryzen 7 10800X3D will boast at least 144 MB of cache, and so will the 8-core Ryzen 5 10600X3D.
It will be interesting to see the effect of the cache boost from 96 MB to 144 MB (or even 192 MB) on gaming performance. The cache amount hasn't changed since the 5800X3D.
They seem to have the option of using 0-2 V-Cache tiles per CCD, and AMD is probably testing these combinations for EPYC and desktop configurations right now.

But without hands-on time or direct access to that data, you might as well let an LLM generate as many articles as you want, with similar-quality results.
 
1) Intel has a LOT of heavy lifting to do just to CATCH UP to AMD in CPU performance. Nova Lake sounds great ON PAPER, but we won't know how the new architecture actually performs until it's released and we have real benchmarks.

2) Intel will be forced to use TSMC if they want to compete at all; their own foundries are not up to the task.

3) Even IF it's twice as fast as AMD (it won't be), I still wouldn't consider it, considering how they gaslit their customers over the failures that are still happening with their Raptor Lake CPUs!

They lost TRUST. They will have a very hard time earning it back.
 
I wonder if Strix Halo itself actually has a proper market; I'm not seeing it storming the shelves or the top-seller rankings anywhere.
Do you mean the Strix Halo Framework Desktop that I pre-ordered months ago (just a couple of hours after it was presented) and that hasn't shipped yet?
The waiting list is long, but some brands prefer to wait until drivers/BIOS/software are a bit less rough (and/or until AMD actually sends them the chips; you know how AMD does tons of paper launches, especially on laptop hardware).
 
Yeah, I was about to say... The Framework Desktop is back ordered several months out.

As for the Intel iGPU configuration, wow, that is spicy. 384 EU would put it at 6x what Lunar Lake has, and that chip beats AMD's Z1 Extreme with its 12 CU, especially in low-power mode, but only in Windows (LNL doesn't work well in Linux for now). Strix Halo's iGPU, by comparison, is only 40 CU, or 3.33x.

I know specs don't equate to a 1:1 performance gain, but I think the iGPU will be starved for memory bandwidth, even with 10,667 MT/s memory.
384 EU is also larger than the rumored B770 at 256 EU / 32 Xe cores, with a 256-bit memory bus using GDDR6 (=644.6 GB/s?).
If I did my math right, 10,667 MT/s × 256-bit = 341.3 GB/s.
So a 1.5x bigger GPU, but almost half the memory bandwidth.
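That math checks out (note the unit is GB/s, bytes, not Gbps). A small sketch; the 20 Gbps GDDR6 per-pin rate is my assumption to roughly reproduce the quoted B770 figure, not a confirmed spec:

```python
def bandwidth_gb_s(rate_mt_s: float, bus_bits: int) -> float:
    """Bus bandwidth in GB/s from transfer rate (MT/s) and bus width (bits)."""
    return rate_mt_s * 1e6 * (bus_bits / 8) / 1e9

# Rumored B770: 256-bit GDDR6. Assuming 20 Gbps/pin = 20,000 MT/s.
gddr6 = bandwidth_gb_s(20_000, 256)
# Rumored Nova Lake-AX top config: 256-bit LPDDR5X-10667.
lpddr5x = bandwidth_gb_s(10_667, 256)

print(f"GDDR6 (assumed 20 Gbps/pin): {gddr6:.1f} GB/s")
print(f"LPDDR5X-10667:               {lpddr5x:.1f} GB/s")
print(f"ratio: {lpddr5x / gddr6:.2f}")
```

The ratio comes out around 0.53, so "almost half the memory bandwidth" for a GPU 1.5x the size is about right.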
 
The way I see it, integrated CPU+GPU devices are likely to disrupt the current profitability of mobile and desktop CPUs, along with mid-range GPUs.

The idea that tighter integration wins on both performance and cost isn't deep. However, given Intel's recently stated policy that projects not expected to turn a profit will be cancelled, it's possible that no matter how obvious the next innovation is, Intel will miss it.