News: TSMC's N2 process reportedly lands orders from Intel — Nova Lake is the likely application

The article said:
To some extent, even Arrow Lake is dual-sourced with Arrow Lake-U (for low-power devices) using the Intel 3 process.
Not really. What you're talking about is a Meteor Lake refresh, involving a port from Intel 4 to Intel 3. Other than the name, it has nothing more in common with the rest of the Arrow Lake lineup than Meteor Lake did.
 
Dual sourcing with N2 seems like a really odd strategy in general, unless it's sufficiently better than 18A, even without BSPDN, and the comparative cost makes sense. It also seems unlikely that high-end products (at least non-mobile) would be made using a TSMC node, given the mediocre high-clock scaling their nodes have had (unless there's a problem here with 18A, which would be bad for Intel). I'm also working on the assumption that Intel will not start making Xeons externally.

I can certainly see a reasonable argument for Intel sourcing other parts of the CPU from TSMC since they have nothing that covers N7/N5 and derivatives. It just seems like Intel 3 and 18A should be able to reasonably cover the compute side of whatever is being made.
 
I'm chalking this up as a risk management strategy. If 18A does continue to improve yields and ramp prod numbers, along with being sufficiently competitive with TSMC N2, they could go all-in for the successor to Nova Lake (I forget what that is). If it doesn't, they have TSMC to fall back on, or even "default" to, as they will with Nova Lake. Indeed, production capacity is probably key as well, with 18A being prioritized for server chips and probably not expected to match the needed prod numbers for mid- to high-end Intel desktop this go-around.
 
I'm chalking this up as a risk management strategy. If 18A does continue to improve yields and ramp prod numbers, along with being sufficiently competitive with TSMC N2, they could go all-in for the successor to Nova Lake (I forget what that is).
Keep in mind the timeframe, though. Nova Lake is due out in 2026. I assume mid or late in the year, but I could be wrong about that. The Nova Lake CPU tiles are pretty much the only ones that seem like they'd need/warrant N2. So, this is really confounding, especially if Panther Lake ships on time, using 18A, and has similar-size tiles to the CPU tiles in Nova Lake.

Could we be seeing something like a repeat of Meteor Lake? So, like maybe Intel couldn't get clock frequencies high enough on 18A, and that's why it's viable for Panther Lake but not Nova Lake? However, instead of simply cancelling it, they're moving Nova Lake to TSMC N2?

Otherwise, I think it's very worrying news for their server CPUs.

If it doesn't, they have TSMC to fall back on, or even "default" to, as they will with Nova Lake.
I think layout is too expensive for them to do this simply as a hedge. I believe it's something like a couple $B. I think they wouldn't undertake it unless/until they're reasonably certain they can't use 18A, for whatever reason.
 
Keep in mind the timeframe, though. Nova Lake is due out in 2026. I assume mid or late in the year, but I could be wrong about that. The Nova Lake CPU tiles are pretty much the only ones that seem like they'd need/warrant N2. So, this is really confounding, especially if Panther Lake ships on time, using 18A, and has similar-size tiles to the CPU tiles in Nova Lake.

Could we be seeing something like a repeat of Meteor Lake? So, like maybe Intel couldn't get clock frequencies high enough on 18A, and that's why it's viable for Panther Lake but not Nova Lake? However, instead of simply cancelling it, they're moving Nova Lake to TSMC N2?

Otherwise, I think it's very worrying news for their server CPUs.
It's a random offhand comment from a leaker on X that was made two months ago.
Take your salt pill and stop rubbing your hands together and twirling your mustache.
 
I'm chalking this up as a risk management strategy. If 18A does continue to improve yields and ramp prod numbers, along with being sufficiently competitive with TSMC N2, they could go all-in for the successor to Nova Lake (I forget what that is). If it doesn't, they have TSMC to fall back on, or even "default" to, as they will with Nova Lake. Indeed, production capacity is probably key as well, with 18A being prioritized for server chips and probably not expected to match the needed prod numbers for mid- to high-end Intel desktop this go-around.
Nova Lake's successor is Razer Lake, IIRC, but it is supposed to use 14A, not 18A. Nova Lake will barely be able to paper launch in 2026, so if 18A isn't mature and yielding well by then, it would be a huge indictment of Intel's foundry business. They will pay a fortune for TSMC's N2, which sees another ~30% price rise per wafer over N3. Apple is already mulling price rises for the iPhone 18 due to using N2. They would be better using N2 for Clearwater Forest, where pricing isn't as crucial as it is for desktop.
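To put that ~30% per-wafer premium in per-chip terms, here's a back-of-envelope sketch. The wafer prices and die size below are placeholder assumptions for illustration, not actual TSMC figures:

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Crude gross-die count with a standard edge-loss correction
    (ignores yield and scribe lines for simplicity)."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Placeholder assumptions -- not actual TSMC pricing.
n3_wafer_usd = 20_000
n2_wafer_usd = n3_wafer_usd * 1.30   # the ~30% per-wafer rise cited above
die_mm2 = 120                        # hypothetical CPU-tile size

dies = gross_dies_per_wafer(die_mm2)  # ~528 for these inputs
print(f"~{dies} gross dies per 300 mm wafer at {die_mm2} mm^2")
print(f"N3: ~${n3_wafer_usd / dies:.0f}/die   N2: ~${n2_wafer_usd / dies:.0f}/die")
```

The premium per die is the same ~30% either way; whether that's tolerable depends on the margins of the product absorbing it, which is exactly the desktop-vs-server pricing point here.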
 
They would be better using N2 for Clearwater Forest, where pricing isn't as crucial as it is for desktop.
I was agreeing with you, until this part. Five years ago, sure. But, today, Intel's server business is in trouble. AMD is beating them on data center revenue[1] and they're having to slash prices in order to stay relevant[2]. But it's not just AMD they have to worry about. There's also that other 3-letter company, starting with an A.[3]
  1. https://www.tomshardware.com/pc-com...er-amd-outsells-intel-in-the-datacenter-space
  2. https://www.tomshardware.com/pc-com...ashes-prices-of-xeon-6-cpus-by-up-to-usd5-340
  3. https://www.tomshardware.com/pc-com...-50-percent-of-data-center-cpu-market-in-2025

I think servers and laptops are Intel's two most important markets. With the rise of mini PCs, which are based on laptop chips, the mainstream socketed desktop PC is less important than ever, and only ranks above dGPUs and workstations in importance to the company.
 
The June VLSI highlights pre-release shows 18A with a 25% performance improvement vs. Intel 3.

The decisions of Intel 18A vs. TSMC processing were likely made a couple of years ago.

Looks like new automotive chips are using 18A, so capacity is going to be an issue for a while. Also, assuming 18A is healthy, AWS and MSFT probably have some reserved capacity.
 
The decisions of Intel 18A vs. TSMC processing were likely made a couple of years ago.
No, the decision to book capacity at TSMC might have been made a couple of years ago.
Using TSMC instead of 18A isn't even certain; in fact, it's so far from certain that it's ridiculous that we are even talking about it.
Even Intel using N2 capacity is not a given. It would be a pretty bad decision to use the most expensive node when they are selling fine on older nodes.
If we've learned anything from Intel, it's that they would use N3B again and just refine it a bit: boost the clocks a bit, add a bit of cache, maybe re-enable HTT. No need to go for a new node.

Now, Intel is for sure going to use TSMC in some capacity, the same as they have been doing for many years now, but whether they go for another CPU gen where the compute tiles are made at TSMC is very much in question.
 
Every wafer of a TSMC advanced node that Intel reserves and uses productively for their products also cuts out their competition's ability to create products. This situation will continue until TSMC has surplus capacity on the advanced nodes. This could even lift the prospects of Intel's foundry operations.

We could still be in a market where demand outstrips the productive capacity of all fab operators combined. That's especially true with the AI boom not looking like it will slow.

Both TSMC and Intel report a lack of production capacity on older nodes. Maybe there will be a market for the Russian fabs working a decade behind, as long as they can produce something.
 
Using TSMC instead of 18A isn't even certain; in fact, it's so far from certain that it's ridiculous that we are even talking about it.
Even Intel using N2 capacity is not a given. It would be a pretty bad decision to use the most expensive node when they are selling fine on older nodes.
If we've learned anything from Intel, it's that they would use N3B again and just refine it a bit: boost the clocks a bit, add a bit of cache, maybe re-enable HTT. No need to go for a new node.
JayNor is Intel. If he's not refuting it, then it's probably true.

Also, there's zero chance of Intel reversing course on HTT in client chips.
 
Every wafer of a TSMC advanced node that Intel reserves and uses productively for their products also cuts out their competition's ability to create products.
Intel using N3B didn't keep AMD off it. If anything, it just steered them in the direction of using it where it offered the maximum value. Their N4P products are still competitive enough and likely better for their bottom line than if they'd done them on N3.

We could still be in a market where demand outstrips the productive capacity of all fab operators combined. That's especially true with the AI boom not looking like it will slow.
The AI boom is going to be using the very best nodes, because current hardware prices are more than enough to support that. This leaves open the n-1 nodes for CPUs.

Both TSMC and Intel report a lack of production capacity on older nodes. Maybe there will be a market for the Russian fabs working a decade behind, as long as they can produce something.
China is eating everybody's lunch, on older nodes. It's so bad that GloFo is talking about merging with UMC and restarting efforts to tackle newer nodes.
 
Every wafer of a TSMC advanced node that Intel reserves and uses productively for their products also cuts out their competition's ability to create products.
While this can be true (in cases like Apple buying entire first runs), it very much isn't the case with N2. Apple isn't buying the first run, and it seems like everyone is using it for competitive advantage rather than moving everything over. I'd be surprised if Zen 6 was on N2, but we already know the next Epyc will have at least some N2. It seems likely that Intel's decision to use N2 for some products is predominantly related to volume.
 
Care to explain your reasoning?
Because zero is a very bold statement.
Intel made a very clear and logical argument for removing HTT in client processors. I can dig up the slides from Lunar Lake, if you want. There's no reason to think the underlying facts have changed and, unless they do, Intel isn't about to reach a different conclusion.

For myself, I'm a little sad to see it go, but I understand and accept their argument. I do look forward to seeing how SMT continues to evolve in AMD's architectures.
 
Intel made a very clear and logical argument for removing HTT in client processors. I can dig up the slides from Lunar Lake, if you want.
Lunar Lake is only the mobile part; it makes a little more sense to not use HTT there because of battery life. (A 30% increase in perf/power/area is not bad at all.)
You could even argue that they removed HTT to fit the additional memory into the package, or to compensate a little bit for the cost of that memory.
That memory was a one-off, though, so future gens will have to deal with the loss of performance that comes with the loss of that mem. Take a really wild guess what could help there...
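To make that tradeoff concrete, here's a toy perf-per-area model. The ~30% multithreaded uplift is the figure quoted above; the ~10% core-area cost of SMT is purely an assumption for illustration, not an Intel-published number:

```python
# Toy model: does SMT pay for itself in performance per area?
SMT_AREA_COST = 0.10   # assumed extra core area for SMT (illustrative)
SMT_MT_UPLIFT = 0.30   # the ~30% multithreaded uplift quoted above

def perf_per_area(mt_fraction, smt=False):
    """mt_fraction: share of the workload that can feed the second thread."""
    uplift = SMT_MT_UPLIFT * mt_fraction if smt else 0.0
    area = 1.0 + (SMT_AREA_COST if smt else 0.0)
    return (1.0 + uplift) / area

for mt in (0.0, 0.5, 1.0):
    print(f"MT share {mt:.0%}: no-SMT {perf_per_area(mt):.2f}, "
          f"SMT {perf_per_area(mt, smt=True):.2f}")
```

On these made-up numbers, SMT only wins on perf/area when most of the workload can actually use the second thread, and it always loses on purely single-threaded work, which is roughly the shape of the argument in those Lunar Lake slides.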

Client is also a lot more than just mobile.
 
Lunar Lake is only the mobile part; it makes a little more sense to not use HTT there because of battery life. (A 30% increase in perf/power/area is not bad at all.)
Mobile is the client market that's growing. The absolute size of the desktop market has been pretty much flat. As a relative share, it's currently at 20% of the total client PC market, and that's only predicted to decline.

So, if Intel is going to evolve its client architecture to favor one or the other, then mobile definitely wins out. And a smaller, leaner Intel is not one that's going to make a distinct core for desktop that differs from both mobile and server.

You could even argue that they removed HTT to fit the additional memory into the package,
First, the area impact isn't that big. Second, they said they reinvested those transistors into improvements in single-thread performance.
 
Mobile is the client market that's growing. The absolute size of the desktop market has been pretty much flat. As a relative share, it's currently at 20% of the total client PC market, and that's only predicted to decline.

So, if Intel is going to evolve its client architecture to favor one or the other, then mobile definitely wins out. And a smaller, leaner Intel is not one that's going to make a distinct core for desktop that differs from both mobile and server.


First, the area impact isn't that big. Second, they said they reinvested those transistors into improvements in single-thread performance.
How does any of that change the fact that Lunar Lake had integrated memory that the next gen won't have?!
Without that mem, they will have to get more performance from somewhere, so the chances of HTT coming back are higher than zero, because that is a super easy way to increase performance again and something that Intel already owns and has already perfected.
 
Without that mem, they will have to get more performance from somewhere,
The only benefit Lunar Lake got from it was higher frequency. However, LPCAMM2 can already reach fairly respectable frequencies (I'm reading the current spec is 7500 MT/s). Presumably, that standard will be updated to incorporate a CKD, like what's featured in CUDIMMs. And then there's the new SOCAMM form factor.

The other way on-package memory can benefit performance is in the ability to support a much wider interface, such as in Apple's Max and Ultra M-series SoCs. However, Lunar Lake didn't do that - it retained the traditional 128-bit interface.
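For reference, the peak-bandwidth arithmetic behind those numbers is just bytes per transfer times transfer rate. A quick sketch; the 8533 MT/s figure is Lunar Lake's LPDDR5X speed, and the 512-bit Max-class width is approximate:

```python
def peak_bandwidth_gbs(bus_width_bits, transfer_rate_mts):
    """Peak DRAM bandwidth in GB/s: (bus width in bytes) * (MT/s) / 1000."""
    return bus_width_bits / 8 * transfer_rate_mts / 1000

configs = [
    ("Lunar Lake-style 128-bit @ 8533 MT/s", 128, 8533),
    ("LPCAMM2 128-bit @ 7500 MT/s",          128, 7500),
    ("Apple Max-class 512-bit (approx.)",    512, 8533),
]
for name, bits, mts in configs:
    print(f"{name}: ~{peak_bandwidth_gbs(bits, mts):.0f} GB/s")
```

So the on-package memory bought Lunar Lake maybe ~14% more peak bandwidth over the current LPCAMM2 spec, while a genuinely wider interface would be a ~4x jump.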
 
How does any of that change the fact that Lunar Lake had integrated memory that the next gen won't have?!
Without that mem, they will have to get more performance from somewhere, so the chances of HTT coming back are higher than zero, because that is a super easy way to increase performance again and something that Intel already owns and has already perfected.
I'm not sure why you think the removal of SMT had anything to do with the on-package memory in LNL. There appear to have been two versions of Lion Cove, and if the implementation with SMT made sense for client use, we'd likely have seen it in ARL. The thing that has to be kept in mind here, though, is that Intel uses desktop processors to fill out mobile. In that power-restricted market, ARL appears to be much closer to a universal upgrade over RPL, unlike desktop. On the desktop side, ARL without SMT is mostly better than RPL, except for latency-sensitive workloads and those that can leverage the much higher single-core boost, neither of which has anything to do with SMT.

As long as mobile makes up the majority of the market, it seems unlikely that desktop will get its own compute tile. I could see the potential for desktop sharing a compute tile with enterprise LCC, but those come out less often and haven't been on the leading-edge core since pre-GC, so this seems unlikely.

All of this is also likely why there's talk of NVL top desktop SKUs using two compute tiles. It makes sense that it would be better for Intel to use two compute tiles rather than designing a larger compute tile. If they design a larger compute tile for desktop, they have to manage volume very carefully or use it for the whole stack, and all of the lower SKUs cost them more. Packaging has to be modified either way, so there's nothing to gain or lose here.

Unless Intel abandons the hybrid architecture for client parts I just don't see SMT returning.
 
I'm not sure why you think the removal of SMT had anything to do with the on package memory in LNL.
It's the other way around: I think that the next gen not having the memory might lead them into putting HTT back.
It would be too easy of an option for them not to think about it.
All of this is also likely why there's talk of NVL top desktop SKUs using two compute tiles. It makes sense that it would be better for Intel to use two compute tiles rather than designing a larger compute tile.
They would only need to design larger dies if they need to fit more cores into future CPUs.
If the next gen is going to use multiple dies, it is going to be to shrink the dies, not to make them bigger or keep them as big as they are. There is next to zero reason for Intel to go up to 50+ cores on a desktop CPU, let alone a mobile CPU, since you are so keen on thinking that desktop is only a byproduct of mobile. But keeping the same number of cores as they have now while using smaller dies, which means higher yield from their 18A, especially in the beginning, makes all the sense in the world.
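That smaller-dies-means-higher-yield arithmetic is easy to sketch with a simple Poisson defect model. The defect density below is a placeholder for an early-ramp node, not a claim about actual 18A numbers:

```python
import math

def poisson_yield(die_area_mm2, d0_per_cm2):
    """Poisson model: P(zero killer defects) = exp(-area * defect density)."""
    return math.exp(-(die_area_mm2 / 100) * d0_per_cm2)

D0 = 0.20  # assumed defects/cm^2 -- illustrative only

for area in (50, 100, 200):
    print(f"{area:>3} mm^2 die: ~{poisson_yield(area, D0):.0%} yield")
```

With these inputs, two ~100 mm^2 tiles at ~82% each waste less silicon than one ~200 mm^2 die at ~67%, which is the whole case for keeping compute tiles small early in a ramp.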