News TSMC's N2 process reportedly lands orders from Intel — Nova Lake is the likely application

The other way around: I think the next gen not having the on-package memory might lead them into putting HTT back.
It would be too easy an option for them not to think about.
I still don't understand what you're on about here, because the benefits of on-package memory are lower power and latency, not performance. Adding SMT back doesn't help with either of those things, and Intel already decided that the benefits of removing it outweighed the benefits of keeping it.
They would only need to design larger dies if they need to fit more cores into future CPUs.
Yes... exactly... unless you think 8p/16e is the configuration of the future and it will be able to keep up with whatever AMD is doing.
If the next gen is going to use multiple dies, it will be to shrink the dies, not to make them bigger or keep them as big as they are now. There is next to zero reason for Intel to go up to 50+ cores on a desktop CPU, let alone a mobile one (since you are so keen on treating desktop as only a byproduct of mobile). Keeping the same core counts they have now while making the dies smaller means higher yield on 18A, especially in the beginning, which makes all the sense in the world.
It seems like you're either mixing a bunch of things up or just don't quite understand what I was saying.

What I'm saying is that Intel will likely make some sort of compute tile (all rumors are pointing towards NVL being 8p/16e) that makes the most sense for the majority of their client desktop parts and high-performance mobile volume. If they double up the compute tile for, say, the top two desktop SKUs, this likely beats whatever AMD has in that segment. Going up to 16p/32e doesn't matter if doubling the tile is more beneficial to the company than making a separate 12p/24e compute tile would be.
 
I still don't understand what you're on about here, because the benefits of on-package memory are lower power and latency, not performance. Adding SMT back doesn't help with either of those things, and Intel already decided that the benefits of removing it outweighed the benefits of keeping it.
More performance per power and per area, not just lower power; there is a difference.
And without the on-package memory, having HTT will give better perf/power/area than not having HTT.
Yes... exactly... unless you think 8p/16e is the configuration of the future and it will be able to keep up with whatever AMD is doing.
Yeah, ultimately, you were talking about the next gen, not an undefined future.
What I'm saying is that Intel will likely make some sort of compute tile (all rumors are pointing towards NVL being 8p/16e) that makes the most sense for the majority of their client desktop parts and high-performance mobile volume. If they double up the compute tile for, say, the top two desktop SKUs, this likely beats whatever AMD has in that segment. Going up to 16p/32e doesn't matter if doubling the tile is more beneficial to the company than making a separate 12p/24e compute tile would be.
If AMD goes to a 12-core CCD, as the rumors say, they will probably go bankrupt, and if Intel goes to dual 8p/16e they will probably also go bankrupt.
I'm not ruling it out, but unless they both roughly double the price of the CPUs I don't see it happening, and with the tariffs doubly so.

AMD barely makes any money from desktop as it is; it would make sense for AMD to make Zen 6 server CPUs only, since those sell well for them.
And since AMD can't do it, Intel won't have to do it either.
 
the benefits of on-package memory are lower power and latency, not performance.
Common misconception: on-package doesn't benefit latency. The speed of an electrical signal in copper is somewhere on the order of 0.15 meters per nanosecond. Simply moving the memory on-package doesn't make a dent in DRAM latency.
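Back-of-the-envelope, in Python (the trace lengths are just illustrative guesses, not measured values):

```python
# Rough propagation-delay estimate: how much latency does moving the DRAM
# on-package actually save? (Trace lengths below are illustrative guesses.)

SIGNAL_SPEED_M_PER_NS = 0.15  # ~half the speed of light, in copper traces

def round_trip_ns(trace_length_m: float) -> float:
    """Time for a signal to travel out to the DRAM and back."""
    return 2 * trace_length_m / SIGNAL_SPEED_M_PER_NS

on_package = round_trip_ns(0.02)  # ~2 cm of routing on the package
dimm = round_trip_ns(0.10)        # ~10 cm out to a DIMM/CAMM slot

print(f"on-package round trip:  {on_package:.2f} ns")   # ~0.27 ns
print(f"off-package round trip: {dimm:.2f} ns")         # ~1.33 ns
print(f"difference: {dimm - on_package:.2f} ns vs ~100 ns total DRAM latency")
```

Even with generous assumptions, the wire-length savings is around a nanosecond against a total DRAM latency of roughly 100 ns.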

The top spec Lunar Lake has LPDDR5X-8533 memory, so the immediate benefits it's getting are frequency & power. That's in contrast to the current top spec LPCAMM2, which is only 7500 MT/s.
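Quick sketch of what that frequency gap means for peak bandwidth (assuming a 128-bit bus like Lunar Lake's LPDDR5X interface; this is back-of-the-envelope math):

```python
# Peak theoretical bandwidth at each transfer rate, assuming a 128-bit bus.

BUS_WIDTH_BITS = 128

def peak_gb_per_s(transfer_rate_mt_s: int) -> float:
    """Peak bandwidth in GB/s: transfers per second * bytes per transfer."""
    return transfer_rate_mt_s * (BUS_WIDTH_BITS / 8) / 1000

print(f"LPDDR5X-8533: {peak_gb_per_s(8533):.1f} GB/s")  # ~136.5 GB/s
print(f"LPCAMM2-7500: {peak_gb_per_s(7500):.1f} GB/s")  # ~120.0 GB/s
```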

Adding SMT back doesn't help with either of those things
If memory bandwidth goes down, then memory contention goes up, latency goes up, and SMT could help hide that a little bit. That would be the theory. But I'm sure it's not enough to outweigh the benefits Intel got by removing it. Don't forget that SMT isn't the only trick for dealing with DRAM latency!
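The theory is basically Little's law: to keep a given bandwidth flowing, you need enough requests in flight to cover the latency, and extra hardware threads are one source of them. A rough sketch with made-up numbers:

```python
# Little's law: bytes in flight = bandwidth * latency.
# If latency climbs (e.g. from contention), you need more outstanding cache
# lines to sustain the same bandwidth -- SMT is one way to generate them.
# Numbers below are illustrative, not Lunar Lake measurements.

CACHE_LINE_BYTES = 64

def lines_in_flight(bandwidth_gb_s: float, latency_ns: float) -> float:
    """Cache-line requests that must be outstanding to sustain the bandwidth."""
    # GB/s is numerically equal to bytes/ns, so the units cancel cleanly.
    return bandwidth_gb_s * latency_ns / CACHE_LINE_BYTES

print(f"{lines_in_flight(50, 100):.0f} lines at 100 ns")  # ~78
print(f"{lines_in_flight(50, 140):.0f} lines at 140 ns")  # ~109, more parallelism needed
```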

BTW, just switching from regular DDR5 to LPDDR5 increases latency by like 40-50 ns. So, if SMT were that critical for latency-hiding, they wouldn't have gotten rid of it in the first place.
 
Common misconception: on-package doesn't benefit latency. The speed of an electrical signal in copper is somewhere on the order of 0.15 meters per nanosecond. Simply moving the memory on-package doesn't make a dent in DRAM latency.
They definitely lowered latency using it, and I assume (haven't had time to dive in) that JEDEC LPDDR5/X scales latency similarly to desktop memory, so there isn't much difference as clocks go up. Of course, this could be something they engineered specifically for LNL, given that the memory specification was always going to be identical.
 
They definitely lowered latency using it, and I assume (haven't had time to dive in) that JEDEC LPDDR5/X scales latency similarly to desktop memory, so there isn't much difference as clocks go up.
Running LPDDR memory at a higher frequency seems like it might offset the overhead of how it multiplexes commands and addresses over the same pins. So, the ability to run at up to 8533 MT/s could provide a latency benefit, in that sense. I suppose the timings could also be tuned more tightly, since they know the exact DRAM chips that will be used.
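Rough sketch of that idea (the cycle count and clock speeds are illustrative assumptions, not values from the LPDDR5/LPDDR5X specs):

```python
# If issuing a multiplexed command/address takes a fixed number of
# command-clock cycles, a faster command clock shrinks that cost in ns.
# Cycle count and clock speeds here are illustrative assumptions only.

def command_time_ns(ca_clock_mhz: float, cycles_per_command: int) -> float:
    """Wall-clock time to get one command across the CA bus."""
    return cycles_per_command * 1000.0 / ca_clock_mhz

slow = command_time_ns(ca_clock_mhz=750, cycles_per_command=4)   # ~5.3 ns
fast = command_time_ns(ca_clock_mhz=1067, cycles_per_command=4)  # ~3.7 ns
print(f"saved per command: {slow - fast:.1f} ns")
```

A nanosecond or two per command is small, but it's the kind of thing that shows up in an end-to-end latency measurement.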

As this confirms, the DRAM latency on Lunar Lake is still much more typical of LPDDR5 than regular DDR5. It looks to be about 128ish ns, compared to the 99.5 ns of Arrow Lake.

https://substack-post-media.s3.amazonaws.com/public/images/68e5a5cd-50f4-4250-89ea-2213603b5bc4_1853x995.png

Source: https://chipsandcheese.com/p/analyzing-lion-coves-memory-subsystem

In fact, it's basically about the same as the 128 ns managed by Ryzen AI (Strix Point).
 
Running LPDDR memory at a higher frequency seems like it might offset the overhead of how it multiplexes commands and addresses over the same pins. So, the ability to run at up to 8533 MT/s could provide a latency benefit, in that sense. I suppose the timings could also be tuned more tightly, since they know the exact DRAM chips that will be used.
Yeah, I'm not sure where the improvement comes from; I just know Intel compared LNL and MTL and showed something small, like a 5-6% reduction in latency. It seems unlikely this was memory-speed related, but since there's only one speed, it's also impossible to really test. I mostly assumed it was design optimizations, since the type of memory and the path were always going to be identical.
 
Yeah, I'm not sure where the improvement comes from; I just know Intel compared LNL and MTL and showed something small, like a 5-6% reduction in latency.
Oh, well, Meteor Lake had the memory controller on a different die than most of the CPU cores. Remember, one change in Lunar Lake is that they put it on the same tile as the CPU cores and GPU.

Meanwhile, Arrow Lake still has basically the same SoC architecture as Meteor Lake and its memory latency regressed vs. Raptor Lake.