News Lunar Lake's integrated memory is an expensive one-off — Intel rejects the approach for future CPUs due to margin impact


Kamen Rider Blade

Distinguished
Dec 2, 2013
What do you even mean by that?
It can solve the "Low Profile" issue for NoteBooks while giving you flexibility in installing Memory Modules.

First of all, there are different CAMM form factors for LPDDR5 vs. regular DDR5.
Yes I know!

Secondly, CAMMs don't magically erase the downsides of using regular DDR5 vs. LPDDR5.
Obviously, but NoteBook OEMs/ODMs get the flexibility of choice when designing their MoBos: they can make boards with connectors for either type, and then decide which systems ship with regular DDR5 vs. LPDDR5 modules by selling each SKU with the MoBo that supports the appropriate memory type.
 
Jul 31, 2024
Intel is sacrificing on-package memory (which is not magical but improves power efficiency which is great for mobile) because they couldn't pass along the costs to the OEMs or consumers. This news is very bad. Intel can't even innovate properly anymore. Adamantine cache is also considered to be dead.

There were some weak arguments against Lunar Lake's on-package memory. 16 GB and 32 GB serve most users. Adding 24 GB or 48 GB options could help (24 GB is a gaming sweet spot). Soldered memory is already ubiquitous in the industry.

It will be interesting to see if AMD's mobile bets pay off. They want to make mobile dGPUs but struggle to get adoption. Strix Halo is a big, innovative chip that will require some hard work and marketing to become a permanent fixture on roadmaps. It could become a big deal or end up being a one-off like Lunar Lake if it doesn't go well.
I certainly hope AMD has foreseen at least two or three generations of this type of product. Let's hope it does not get canned after a first gen delivery.
 

subspruce

Proper
Jan 14, 2024
Well, at least they didn't get low volume on low margins. Wouldn't one normally price a niche-market product for higher margin on lower volume? It just seems like Pat is contradicting himself. I mean, if anything, most of the margin drop is due to using TSMC, yeah?

They must be banking on LPCAMM2 being a big leap forward to say they won't do this again.
LPCAMM2 is like the best form factor of the best memory type.
 

SunMaster

Commendable
Apr 19, 2022
What report are they quoting? Here's a hint: they aren't because there isn't one. They've been using nothing but anonymous sources for all of their hit pieces. This reporting hasn't been backed up by any other media outlets either. You feel free to believe that in 2021 TSMC was giving Intel a 40% discount on N3 wafers though.

Journalists protect their sources all the time. They're entitled to do so. They're not entitled to make up stories. As time unfolds, a lot of the information used gets verifiable. It's a huge news agency; if an organization like that were caught making up facts, it would be easy to discover as stories unfold. These companies live by their reputation.

As Reuters is humongous, it's inevitable that there will, over time, be news/stories that turn out not to be true, so it's not like Reuters can't be wrong. But you're presenting a conspiracy-like idea that Reuters is running some vendetta-like campaign against Intel. That would ruin a reputable news agency's reputation.

I encourage you to produce facts showing that Reuters is lying (intentionally, about anything). Elon Musk claiming Reuters is lying is unfortunately not proof, nor is any unsubstantiated MAGA claim.
 

Pierce2623

Prominent
Dec 3, 2023
It'd be interesting if Intel worked with multiple DRAM suppliers and gave customers a choice. Then, you could conceivably have the DRAM vendor doing some of the legwork to sell these CPUs and the competition between them might help offset some of the higher costs associated with the on-package solution.


Yeah, but they didn't give Raptor Lake a real brand name, either. It was just launched as 13th Gen, except of course not all of the 13th Gen-branded CPUs even had a Raptor Lake die!


Yeah, maybe they need to better distinguish their different product segments. Like, they could use Lytrium to brand their "thin & light" CPUs, Econium for lower core-count value CPUs, Rushnium for high-clocking CPUs with mid core counts good at gaming and moderate creative tasks, and Worknium for high core-count workstation-oriented models (okay, dumb names, but you get the point). You could still have 3/5/7/9 tiers within each, but it would be a less confusing situation than having, like, an i5 HX model that's faster than an i9 U model, for instance.

I think if you put the product line first, instead of as a suffix, then it's less surprising if like a Worknium 5 is faster than a Lytrium 9. People would be like "duh, of course it's faster - it's a Worknium. Lytriums are for thin & light."


I do take issue with their use of "Pro" for one of their product tiers. Plus, uh, "Max"? You're going to name a Mac CPU tier something that's a homonym of "Macs"? Call me unimpressed. And then Ultra, when you already have Max? Isn't Max already the maximum? Why is there a tier above that?

But yeah, it's simple and once people know it, they probably don't have much trouble remembering.


I have to agree with this and your point about them creating too many SKUs. I think Apple has only like 2 SKUs based on each die, and those are actually different dies!
I tend to agree with everything you said here.
 

JRStern

Distinguished
Mar 20, 2017
Lunar Lake was a joint effort between Intel and TSMC with CPU, GPU and NPU tiles produced by TSMC.
TSMC was giving a 40% discount to Intel.
Pat Gelsinger disparaged TSMC to acquire clients for Intel Foundry, so TSMC has canceled the discount.
It's why some months ago Intel was labeling Lunar Lake as a flagship and now it's labeled as an experiment that will not be reproduced.
Does TSMC own the rights to integrated memory? I can't think that is true, but maybe some drone at Intel signed a stupid piece of paper?
 

JRStern

Distinguished
Mar 20, 2017
SSDs have a best-case read latency on the order of about 10 microseconds. An electrical signal can travel about 1.5 km, in that amount of time. So, please explain to us what you think the benefit would be of having NAND flash chips so closely integrated with the CPU.
Same as with DRAM.
You also have to use more power and take more delay from the interface circuits needed to bring data in from even an SSD on the memory bus. Save a microsecond or three per access, and it adds up.
Especially since the memory bus is now in the package.
 

bit_user

Titan
Ambassador
Same as with DRAM.
No, recent CPU reviews have shown DRAM latencies in the ballpark of 80 ns. That's two orders of magnitude faster than SSDs! DRAM also has an order of magnitude higher bandwidth (or more, depending), which makes energy efficiency a more pressing concern.

You also have to use more power and take more delay from the interface circuits needed to bring data in from even an SSD on the memory bus. Save a microsecond or three per access, and it adds up.
You're being ridiculous. "A microsecond or three" is 150 to 450 meters! That's about 492 or 1476 feet, in case that helps.

Do the math. If you can't or won't do some basic math, then you should just leave the engineering to engineers.
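For anyone who wants to run the numbers, here's a quick back-of-envelope sketch (the ~0.5c trace speed is a rule-of-thumb assumption; the 80 ns and 10 µs figures are the same ballpark values quoted above):

```python
# Back-of-envelope: how far a signal travels during typical DRAM/SSD latencies.
# Assumes ~0.5c propagation speed on copper traces (rule of thumb, not measured).
C = 3.0e8                  # speed of light in vacuum, m/s
SIGNAL_SPEED = 0.5 * C     # rough signal speed in PCB traces, m/s

def distance_travelled(latency_s: float) -> float:
    """Distance a signal covers during the given latency, in metres."""
    return SIGNAL_SPEED * latency_s

dram_latency = 80e-9       # ~80 ns load-to-use DRAM latency (ballpark from reviews)
ssd_latency = 10e-6        # ~10 us best-case SSD read latency

print(f"DRAM 80 ns -> {distance_travelled(dram_latency):7.1f} m")
print(f"SSD  10 us -> {distance_travelled(ssd_latency):7.1f} m")
print(f"SSD/DRAM latency ratio: {ssd_latency / dram_latency:.0f}x")
# Roughly 12 m vs. 1500 m, and a ~125x ratio -- a few centimetres of extra trace
# length is noise next to the SSD's own internal latency.
```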
 
Same as with DRAM.
You also have to use more power and take more delay from the interface circuits needed to bring data in from even an SSD on the memory bus. Save a microsecond or three per access, and it adds up.
Especially since the memory bus is now in the package.
Apple has been integrating basically everything into their chips to a degree which wouldn't make sense for any other manufacturer, but the only thing to do with NAND that is integrated is the controller. Whatever hypothetical benefit there could be, if integrating any form of NAND hasn't passed Apple's cost/benefit requirements, I feel like it's safe to say it isn't worth it.
 

bit_user

Titan
Ambassador
Apple has been integrating basically everything into their chips to a degree which wouldn't make sense for any other manufacturer, but the only thing to do with NAND that is integrated is the controller.
Don't some Macs use an M.2 slot? Do they speak PCIe across it? I believe I've heard their M.2 drives aren't NVMe, but I'd be really surprised if whatever they used wasn't at least based on PCIe. That would mean the SSD should at least have some type of simple controller on it.
 
Don't some Macs use an M.2 slot? Do they speak PCIe across it? I believe I've heard their M.2 drives aren't NVMe, but I'd be really surprised if whatever they used wasn't at least based on PCIe. That would mean the SSD should at least have some type of simple controller on it.
Some definitely use M.2 (or at least did in the early generations) and I'm pretty sure it's all PCIe (and I think it uses NVMe), but the controller is on the SoC. That's why all SSD upgrades involve replacing the NAND (whether soldered or on a new card) and you can't use off-the-shelf SSDs.
 

bit_user

Titan
Ambassador
Who is filling poor Pat's head with the idea consumers want AI on their CPUs?
Corporate customers? At my job, we have a subscription to some sort of LLM service, but maybe they want to be able to inference the model on client PCs to save some money.

To be honest, if there were AI integrated into my code editor and it were good enough at stuff like highlighting where it thinks it sees bugs in the code I'm editing, I'd consider using it.
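
For the curious, a minimal sketch of what that could look like, assuming a locally hosted, OpenAI-compatible chat endpoint (the kind llama.cpp or Ollama can expose); the URL, port, and model name below are placeholders, not anything shipped by Intel or Microsoft:

```python
# Minimal sketch: ask a locally hosted LLM to flag suspicious lines in a source file.
# Assumes an OpenAI-compatible /v1/chat/completions endpoint; URL and model name
# are placeholders for illustration only.
import json
import urllib.request

API_URL = "http://localhost:8080/v1/chat/completions"  # hypothetical local server
MODEL = "local-code-model"                              # placeholder model name

def review_source(path: str) -> str:
    with open(path, "r", encoding="utf-8") as f:
        code = f.read()
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "system",
             "content": "You are a code reviewer. List lines that look buggy and say why."},
            {"role": "user", "content": code},
        ],
    }
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(review_source("example.py"))
```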
 

upsetkiller

Commendable
Sep 19, 2022
And offending them

That’s some market power right there. For TSMC to take back the 40% discount and Intel to suck it up despite the increased cost.
Why is everyone saying this as if it's a fact? It's just a rumour that has not been confirmed. Besides, any such discounts are agreed upon way before production ever starts, and one cannot simply back out of an agreement :) especially not with a US government darling company, and especially not if you are a foreign entity under US boots.
 

Kondamin

Proper
Jun 12, 2024
Corporate customers? At my job, we have a subscription to some sort of LLM service, but maybe they want to be able to inference the model on client PCs to save some money.

To be honest, if there were AI integrated into my code editor and it were good enough at stuff like highlighting where it thinks it sees bugs in the code I'm editing, I'd consider using it.
It's going to take a couple of generations before those tiny NPUs are going to be anywhere near useful for running LLMs, never mind the extra memory those machines will need.

Far cheaper for corporate to have your and your colleagues' coding monitored by a big central server that sees proper utilization, vs. a local one that will be sitting idle while waiting for its human.
 

bit_user

Titan
Ambassador
It's going to take a couple of generations before those tiny NPUs are going to be anywhere near useful for running LLMs, never mind the extra memory those machines will need.
I'm pretty sure NPUs with Microsoft's minimum 45 TOPS can already inference at a usable rate.

Far cheaper for corporate to have your and your colleagues' coding monitored by a big central server that sees proper utilization, vs. a local one that will be sitting idle while waiting for its human.
Distributed computing is usually cheaper and easier to scale than when it's centralized.
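
As a rough sanity check (every figure below is an assumption for illustration, not a measurement of any particular NPU or laptop): local token generation for small quantised models tends to be memory-bandwidth bound, so an upper bound is simply bandwidth divided by model size.

```python
# Crude upper bound on local LLM token rate, assuming generation is memory-bandwidth
# bound (each generated token streams the full set of weights from DRAM).
# All numbers are illustrative assumptions, not measurements.
def tokens_per_second(mem_bandwidth_gb_s: float, model_size_gb: float) -> float:
    return mem_bandwidth_gb_s / model_size_gb

lpddr5x_bandwidth = 120.0  # GB/s, rough figure assumed for a current thin-and-light
model_8b_int4 = 4.5        # GB, ~8B-parameter model quantised to ~4 bits per weight

print(f"~{tokens_per_second(lpddr5x_bandwidth, model_8b_int4):.0f} tokens/s upper bound")
# ~27 tokens/s -- comfortably readable, provided the NPU's compute keeps pace.
```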
 

Kondamin

Proper
Jun 12, 2024
I'm pretty sure NPUs with Microsoft's minimum 45 TOPS can already inference at a usable rate.


Distributed computing is usually cheaper and easier to scale than when it's centralized.
Those are small, reduced models; as I understand it, LLMs become smarter the bigger they are.
And the bigger they are, the more memory they need.

I'm sure there will be an inflection point at some stage, but a single big server doing inferencing for a number of clients is probably going to be cheaper than having your software engineers do their coding on workstations with 4x 5090s.
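
To put a toy model behind that intuition (every number below is a placeholder assumption, not a real price): the shared server tends to win because a per-seat accelerator sits idle for most of the working day.

```python
# Toy comparison: per-engineer cost of shared vs. local inference hardware, adjusted
# for how much of the day the hardware does useful work. All inputs are placeholder
# assumptions purely to illustrate the utilisation argument.
def effective_cost(hardware_cost: float, utilisation: float) -> float:
    """Cost per unit of useful compute: idle hardware makes each useful hour pricier."""
    return hardware_cost / utilisation

ENGINEERS_PER_SERVER = 200   # assumed sharing ratio for a central inference box
shared = effective_cost(200_000 / ENGINEERS_PER_SERVER, utilisation=0.60)
local = effective_cost(10_000, utilisation=0.05)  # beefy workstation, mostly idle

print(f"shared server, per engineer: ~{shared:,.0f}")
print(f"local workstation, per seat: ~{local:,.0f}")
```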
 

JRStern

Distinguished
Mar 20, 2017
The interface circuitry accounts for the delay.
Same as on DRAM.

Don't criticize what you don't understand.
No, recent CPU reviews have shown DRAM latencies in the ballpark of 80 ns. That's two orders of magnitude faster than SSDs! DRAM also has an order of magnitude higher bandwidth (or more, depending), which makes energy efficiency a more pressing concern.


You're being ridiculous. "A microsecond or three" is 150 to 450 meters! That's about 492 or 1476 feet, in case that helps.

Do the math. If you can't or won't do some basic math, then you should just leave the engineering to engineers
 

bit_user

Titan
Ambassador
Those are small, reduced models; as I understand it, LLMs become smarter the bigger they are.
And the bigger they are, the more memory they need.
Given how Nvidia and AMD made a bunch of noise about fitting like 192 GB on their H200 and MI300X accelerators, that seems to be about the size of current leading-edge models. Any larger and inferencing would involve multiple of those GPUs, which introduces a nonlinearity in cost scaling.

At present, desktop platforms can support 192 GB. This will go up to 256 GB, once 32 Gb DDR5 chips are on the market.
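
The 192 GB to 256 GB step falls straight out of the die density, assuming the typical non-ECC layout of sixteen x8 DRAM packages per two-rank UDIMM and four DIMM slots (those layout details are assumptions for the arithmetic, not something stated above):

```python
# Max desktop DRAM capacity as a function of DDR5 die density, assuming two-rank
# non-ECC UDIMMs with sixteen x8 packages per module and four DIMM slots.
GBIT_TO_GBYTE = 1 / 8   # 8 bits per byte

def max_capacity_gb(die_density_gbit: int, dies_per_dimm: int = 16, slots: int = 4) -> float:
    return die_density_gbit * GBIT_TO_GBYTE * dies_per_dimm * slots

print(f"24 Gbit dies -> {max_capacity_gb(24):.0f} GB")  # 192 GB, today's practical ceiling
print(f"32 Gbit dies -> {max_capacity_gb(32):.0f} GB")  # 256 GB, once 32 Gb chips ship
```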
 

bit_user

Titan
Ambassador
The interface circuitry accounts for the delay.
How and why would they need to delay data by "1 or 3 microseconds"? Do you have a source on that?

Same as on DRAM.
DRAM latency comes from a few places, but being off-package isn't one of them.

Don't criticize what you don't understand.
I asked you to explain and if this is the best you can do, then it's clear there's a lack of understanding on your end.