News Intel might cancel 14A process node development and the following nodes if it can't win a major external customer — move would cede leading-edge ma...

The only way I can see this turning around for Intel (as a manufacturing company) is if this is a bid to get the US government to bail them out in a big way. But I don't think even that would work, as the problems appear to be systemic on Intel's foundry side. Basically, the government would have to agree to pour tens of billions per year into the problem to keep leading-edge manufacturing by Intel in the US. It would be shocking to actually see that happen — whether you think it's a good idea or bad, I just can't imagine there will be enough consensus among the various political factions to make that happen over the long term.

Mind you, I personally think putting $50 billion per year into leading-edge silicon manufacturing in the US would be a far better way to spend tax dollars than much of what seems to happen. I just don't believe you'd ever get legislation to make such a move happen that wouldn't end up with a ton of bloat, pork, etc. Politicians just can't help themselves.

Two perhaps more palatable ideas have been suggested by Stratechery. First, the reality:

To summarize, there is no market-based reason for Intel Foundry to exist ….

[Step 1: spin off Foundry and let it run independently; as even Lip-Bu Tan said, nobody wants to fab at their competitor's fab]

the best idea at this point is a new company that has the expertise and starting position of Intel Foundry. Critically, though, it shouldn’t be at all beholden to x86 chips, have hundreds of thousands of employees, or the cultural overhang of having once led the computing world.

[Step 2: have the federal government offer purchase guarantees, not subsidies]

That is why a federal subsidy program should operate as a purchase guarantee: the U.S. will buy A amount of U.S.-produced 5nm processors for B price; C amount of U.S. produced 3nm processors for D price; E amount of U.S. produced 2nm processors for F price; etc. This will not only give the new Intel manufacturing spin-off something to strive for, but also incentivize other companies to invest; …

[Conclusion]

And, if the U.S. is going to pay up, that means giving that foundry the best possible chance to stand on its own two feet in the long run. That means actually earning business from Apple, Nvidia, AMD, and yes, even the fabless Intel company that will remain. The tech world has moved on from Intel; the only chance for U.S. leading edge manufacturing is to do the same.
 
Kudos to Gina Raimondo and the Biden admin for having the foresight to implement milestone-based payments for the CHIPS Act, to avoid a corporate giveaway and burning taxpayer cash.

As predicted, Intel takes free taxpayer grants while making humongous layoffs, prolonging fab delays, cancelling nodes, delaying promises, and outsourcing to foreign rivals, which is why milestone payments are so important. The CHIPS Act also diversified by including TSMC and Samsung, just in case Intel faceplants. Good foresight there.
 
If you don't build factories unless you are sure of demand, you are already out of the game.
You need to lead, and technological leadership creates demand. If you wait for orders before building factories, you will get no orders, because the competition already took them.
Intel could have retooled existing fabs that already have EUV machines for low-run 18A to ensure yields before committing to a huge fab expansion. Instead, Gelsinger's unlimited-budget approach put the cart before the horse, and now those fab expansion plans are on hold because Intel overcommitted to the ridiculous "5 nodes in 4 years" schedule, which has crippled the IFS foundry. Yeah, if you cancel half the nodes and outsource the rest, you technically did achieve 5N4Y.
 
I don't think Intel will generate $30b in profits on internal 18A products to offset this expenditure—for many years, if not a decade.
Since you are capable of looking up numbers, look up how much gross profit Intel made last year, under the worst conditions: paying to build fabs while also paying TSMC to make their chips.
Now imagine that Intel is done building fabs uncontrollably, only builds as many as it needs, and no longer has to overpay for production at an outside fab.

If the coming years are as bad as last year it's gonna take two years, if their new gen is a hit it's gonna be one year.

And yes, the thing to look at is gross profit, because beyond that we are back to investing in future products. If Intel spends all of that money on R&D, building fabs, and whatever else, they might still have a negative net income.
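A sketch of that payback arithmetic, using placeholder gross-profit figures (the real numbers are in Intel's filings; these values are assumptions purely to show the shape of the argument):

```python
# Illustrative payback-period math: how long to recoup ~$30B of excess
# capex purely out of gross profit. All dollar figures are hypothetical.
excess_capex = 30.0    # $B to recoup (the thread's estimate)
gp_bad_year = 15.0     # assumed annual gross profit in a bad year, $B
gp_good_year = 30.0    # assumed annual gross profit if the new gen is a hit, $B

print(f"Bad years:  ~{excess_capex / gp_bad_year:.0f} years to recoup")   # ~2
print(f"Good years: ~{excess_capex / gp_good_year:.0f} years to recoup")  # ~1
```

With those assumed figures the math lands on the "two years if it's bad, one year if the new gen is a hit" claim above.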
 
Intel could have retooled existing fabs that already have EUV machines for low-run 18A to ensure yields before committing to a huge fab expansion. Instead, Gelsinger's unlimited-budget approach put the cart before the horse, and now those fab expansion plans are on hold because Intel overcommitted to the ridiculous "5 nodes in 4 years" schedule, which has crippled the IFS foundry. Yeah, if you cancel half the nodes and outsource the rest, you technically did achieve 5N4Y.

I think Intel has done the right thing shooting for 18A. There is a big difference between 18A (and TSMC's N2) and prior node 'leaps' like N5 and N3. They also appear to have fixed something with their SRAM density on 18A that they couldn't or didn't fix on prior nodes.

A big thing people fail to note when comparing Intel to TSMC nodes is that there are different libraries, and Intel tends to rule the HP (High Performance) library. TSMC has ruled the HD (High Density) side that attracts foundry business.

For example, Intel 7 HP density is actually very close to TSMC N5 HP library density.

The HP libraries are important for servers, GPUs, and high end desktop/workstation.

On prior nodes TSMC has also had a significant advantage in SRAM density, but that should be (mostly) gone with 18A.

i.e.

Logic Density (HP Library):
  • Intel 18A: ~180 MTr/mm². Uses RibbonFET GAA transistors and PowerVia BSPDN, achieving ~30% density improvement over Intel 3 (~124 MTr/mm²).
  • TSMC N2: ~130 MTr/mm². Uses nanosheet GAA transistors, offering ~1.05x density scaling over N3E (~124 MTr/mm²).
Logic Density (HD Library):
  • Intel 18A: ~238 MTr/mm². Leverages RibbonFET GAA transistors and PowerVia BSPDN, achieving ~30% density improvement over Intel 3 (~200–250 MTr/mm², estimated).
  • TSMC N2: ~260 MTr/mm². Uses nanosheet GAA transistors, offering ~1.2x density scaling over N3E (~215 MTr/mm²).
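Taking the estimates above at face value, the lead flips depending on the library. A quick ratio check (the MTr/mm² figures are the estimates quoted above, not vendor-confirmed):

```python
# Density ratios implied by the (estimated) MTr/mm^2 figures above.
hp = {"Intel 18A": 180, "TSMC N2": 130}   # high-performance library
hd = {"Intel 18A": 238, "TSMC N2": 260}   # high-density library

hp_ratio = hp["Intel 18A"] / hp["TSMC N2"]
hd_ratio = hd["TSMC N2"] / hd["Intel 18A"]
print(f"HP: Intel 18A is ~{hp_ratio:.2f}x the density of N2")  # ~1.38x
print(f"HD: TSMC N2 is ~{hd_ratio:.2f}x the density of 18A")   # ~1.09x
```

So on these numbers, Intel's HP advantage (~38%) is much larger than TSMC's HD advantage (~9%), which is the point about who wins which library.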
 
The writing was on the wall with the valuation collapse and it became pretty much inevitable the second they forced Gelsinger out. The supposed investors don't want to build a company's future if it interferes with making money now.

While bad leadership certainly caused the situation Intel found themselves in, this is at best equally bad. The fabs will likely be sold off and we'll have another GloFo, except I suspect there will be a lot more job losses, because it would cost money to convert DUV fabs to be usable.
As an Intel shareholder, I can tell you it is not making ANY money for me now, nor has it in some time.
 
That is fair; I don't mean 18A fabs are literally empty, but Intel put all that capex into fabs designed for 18A and yet has zero major external customers.

Intel annual capital expenditures:

Pre-Gelsinger average is roughly $15.3B / year. Using that as 100%:

2018-2020 avg: $15.3 B / yr - 100%
2021: $20.3B - 133%
2022: $24.8B - 162%
2023: $25.8B - 169%
2024: $23.9B - 156%

That is an excess of roughly $30 billion on which Intel has not only made next to no profit, but also hasn't landed even one significant customer.
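The quoted capex figures do support the ~$30B excess claim; a quick sanity check:

```python
# Sanity check of the "excess ~$30B" figure from the capex table above.
baseline = 15.3  # pre-Gelsinger (2018-2020) average capex, $B/yr
capex = {2021: 20.3, 2022: 24.8, 2023: 25.8, 2024: 23.9}  # $B/yr

excess = sum(v - baseline for v in capex.values())
for year, v in capex.items():
    print(f"{year}: ${v}B ({v / baseline:.0%} of baseline)")
print(f"Total excess over baseline, 2021-2024: ~${excess:.1f}B")  # ~$33.6B
```

The percentages match the table (133% / 162% / 169% / 156%), and the four-year excess comes to about $33.6B, i.e. "roughly $30 billion".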

Intel confirms significant capex went to 18A / 18A-P, but does not break it down (e.g., how much went to Intel 4 / Intel 3, which also needed some capex):



I don't think Intel will generate $30b in profits on internal 18A products to offset this expenditure—for many years, if not a decade.

MS, Amazon, US Gov: unfortunately, Intel also confirmed all of these are insignificant contracts in the same 10-Q:



The 10-Q is dated July 24, 2025, so it is the latest update.

//

My overall point, as you note: Intel is not in a financial position anymore to keep building expensive new fabs tooled for expensive new nodes ad infinitum without confirmed customers.
You present some good data.
I think Tan is making it clear they are pulling back on the capital expenditures.
Hopefully not back to the pre-Gelsinger 2020 Skylake 14+++ levels, but somewhere in between would be healthy. Maybe just a few extra billion a year instead of 8.4.
Pre-Gelsinger capex spending levels got them into a mess and they fell behind in manufacturing. Since then they have caught up fast. They could probably slow down a bit if 14A leads; 18A looks competitive with the latest the rest of the world has.
 
That is fair; I don't mean 18A fabs are literally empty, but Intel put all that capex into fabs designed for 18A and yet has zero major external customers.

Intel annual capital expenditures (pre-Gelsinger 2018-2020 average of roughly $15.3B/yr = 100%):

2021: $20.3B - 133%
2022: $24.8B - 162%
2023: $25.8B - 169%
2024: $23.9B - 156%

[...]

My overall point, as you note: Intel is not in a financial position anymore to keep building expensive new fabs tooled for expensive new nodes ad infinitum without confirmed customers.
Is this adjusted for inflation? There was a rather large event between 2018-2024.
 
Far too many US companies have CEOs from the marketing department.
Tan has a master's degree in nuclear engineering from MIT and went on to pursue a PhD in the subject, but quit after the Three Mile Island disaster basically killed the US nuclear energy industry. That's when he apparently turned to the dark side and got an MBA at USF.

Compare that to Germany, where most CEOs come from R&D. It isn't surprising that AMD's resurgence came about when they hired Lisa Su, an engineer, to be CEO.
Gelsinger was an engineer.

And the engineer didn't help, did he?
Gelsinger didn't do everything perfectly, but Intel had a run of misfortune and he simply got overwhelmed.

I doubt there are many others who would've survived the same time period. On the flip side, if Intel had dodged a few of those potholes, Gelsinger might still be there and executing on his plans.

Yes, he is an engineer, but nuclear engineering has (AFAIK) nothing to do with anything Intel has in their portfolio of products.
Yes, but... having a foundation in science and engineering gives someone a basis for problem-solving and data-driven thinking that most purely business types don't have. Tan stayed involved in the tech industry, further exposing him to R&D. So, I think he has the necessary background, if not an ideal one.

Lisa Su on the other hand has her degree in electrical engineering with computer chip design being part of it.
True, but I think that also has downsides. I think her deep background in hardware design & fabrication contributed to somewhat of a blind spot for software.
 
Intel half attempted to do this with their hybrid design, but it hasn't done enough yet.
Somewhat counterintuitively, their hybrid design enabled Intel to make the P-cores ever bigger and more power-hungry. If you can rely on the E-cores for efficiently scaling multithreaded performance, then you can optimize the P-cores for single/lightly-threaded performance, without having to balance efficiency so much.
 
IMO, Intel could and should still focus on advanced nodes as there's plenty of market capacity for this and decent margin, just not leading-edge.
The big money is in leading-edge nodes. If they're not competitive with TSMC, then I think the issue is that they won't recoup the R&D cost and the whole business model gets upended.

So, it's basically: compete or go home. That's not to say they need to beat TSMC, hands down, but at least be close enough that they can charge a similar amount.
 
The big money is in leading-edge nodes. If they're not competitive with TSMC, then I think the issue is that they won't recoup the R&D cost and the whole business model gets upended.

So, it's basically: compete or go home. That's not to say they need to beat TSMC, hands down, but at least be close enough that they can charge a similar amount.
If you are not close, you are dead in the water. It's a feast or famine type of industry.
 
The big money is in leading-edge nodes. If they're not competitive with TSMC, then I think the issue is that they won't recoup the R&D cost and the whole business model gets upended.

So, it's basically: compete or go home. That's not to say they need to beat TSMC, hands down, but at least be close enough that they can charge a similar amount.
I think looking at GloFo v Samsung v TSMC is a good reflection of this. GloFo has decided to specialize and I don't think they have any EUV nodes (I think they have some machines for specific layers though). Samsung has tried to compete, got too aggressive (feels like Intel 10nm) and then didn't seem to adjust expectations to fill fabs running say 5nm. TSMC has just kept executing with minimal delays and problems likely due to a more conservative node to node approach and relying on node refinements as time passes.

Intel is in a bind where they could probably make Intel 16/12/3/18A into a relatively profitable business over time. If they stop innovating though the only thing that will be viable is continuing to milk those nodes as long as it's viable. This path I think leads to selling IFS as soon as financially viable.
 
Sure, but he is not a bean counter. He is a science guy too. He was the CEO of Cadence, the company responsible for the electronic design automation software used to create electronics and chips.
He quit his studies in science and engineering, then went and got an MBA. Since then, he's been focused on the business side of things. So, while he has a background in science and engineering, none of his professional life has been on that side of the house.

He is no Bob Swan, and even there, Bob was probably the best CEO Intel had in the last 10 years. He was not a clown and wanted to focus on preventing the competition from gaining ground in the server market.
Swan only looks good because he was the beneficiary of good timing. He missed crucial opportunities to make investments and instead opted to engage in more share buybacks and dividends. That lack of investment helped create the predicament Intel is in today.

Pat went nuclear and decided to rival TSMC in chip manufacturing to gain government subsidies, which was the last nail in the coffin.
There was no other way to finance the R&D needed to develop new nodes. Either Intel needs external customers for its fabs, or it has to get out of the manufacturing business.

tip for Intel: ... Try leading in innovation and investing in R&D instead of stock buybacks.
True, but we're way beyond that. Gelsinger initially reduced the dividends and then shut them off completely. I don't know when the last stock buyback was, but I think those might've been the first to get axed.
 
If Intel knew how to do it, they wouldn't have kept pushing power and had the whole 13th/14th-gen issue.
Their newest lineup is cut back on power, and that is exactly why it is not an improvement.
No, power isn't the reason for Arrow Lake's lackluster performance. A big part of the problem is that they went all-in on the chiplet architecture of Meteor Lake, which it turns out wasn't fully baked.

If Arrow Lake kept the monolithic architecture of Raptor Lake, it wouldn't have such poor L3 and memory latency. Those are some of the main issues affecting its gaming performance.

In multi-threaded performance, it does manage better, but there are two reasons it doesn't mop the floor with Zen 5 on that front. The first is the loss of Hyperthreading, which is a small deficit, but does serve to water down the other improvements in the cores. The second is that they're still lacking AVX-512, which helps Ryzen 9000's averages, when you look at aggregate benchmark scores.

Cores cost money per core. If anyone thinks these next-gen CPUs are going to be the same price or cheaper, please pass that pipe, I need some dreams fulfilled too.

18A supposedly costs 2.5 times more to produce the same wafer. 52 cores to beat 24? Man, this is some bad logic.
Of those 52 cores, only 16 are P-cores. The rest are E-cores, which are probably about half to 1/3rd the transistor count of a Zen 5 core.

That said, I think you're right that the 52-core version will probably cost more than AMD's 24-core model.
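A rough way to frame that comparison in "Zen 5 core equivalents", treating a P-core as roughly one Zen 5 core for sizing and using the 1/3-to-1/2 E-core transistor ratio guessed above (both are assumptions, not measured figures):

```python
# Rough "Zen 5 core equivalent" count for a 16P + 36E part, assuming an
# E-core is 1/3 to 1/2 the transistor count of a Zen 5 core.
p_cores, e_cores = 16, 36
for ratio in (1 / 3, 1 / 2):
    equivalents = p_cores + e_cores * ratio
    print(f"E-core ratio {ratio:.2f}: ~{equivalents:.0f} core equivalents")
```

So by silicon spent, the 52-core part is closer to a 28-to-34 "big core" design than a true 52-core one, which is why comparing it head-on to a 24-core Zen 6 isn't as lopsided as the raw core counts suggest.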
 
Of those 52 cores, only 16 are P-cores. The rest are E-cores, which are probably about half to 1/3rd the transistor count of a Zen 5 core.

That said, I think you're right that the 52-core version will probably cost more than AMD's 24-core model.
I would not be surprised if the 52-core model cost 800-1200 dollars. It's going to take a lot of binning to get that many good cores, but we shall see. Maybe Intel fixes up some of its architecture, tightens some of the latencies, and adds AVX-512 back in. If they then have competitive core-for-core (P-core) performance with Zen 6, Intel might claw back a bit of its desktop market. It all comes down to price at the end of the day. If they come out swinging and the 52-core model is 750 dollars and it smashes the 24-core Zen 6 CPU, they will gain some market share back. The lion's share of the market is in the sub-$350 segment though, and who knows what that will look like at this point.
 
They should rather "spin out" their board of directors, which has made the strangest decisions in the past 10-15 years and is directly responsible for the current desolate state of Intel.
I wish that were a thing; I would be 100% on board with that! Sadly, they will all ride off into the sunset on their golden saddles, laughing all the way to the bank after driving the company into the ground.
 
I would not be surprised if the 52-core model cost 800-1200 dollars. It's going to take a lot of binning to get that many good cores, but we shall see. Maybe Intel fixes up some of its architecture, tightens some of the latencies, and adds AVX-512 back in. If they then have competitive core-for-core (P-core) performance with Zen 6, Intel might claw back a bit of its desktop market. It all comes down to price at the end of the day. If they come out swinging and the 52-core model is 750 dollars and it smashes the 24-core Zen 6 CPU, they will gain some market share back. The lion's share of the market is in the sub-$350 segment though, and who knows what that will look like at this point.
Zen 6 is supposed to be a 12-core CCX; with TSMC increasing prices all over the world and the added tariffs in the US, how surprised would you (not) be if AMD's 24-core CPU were close to twice the price of the 9950X?
Also, good luck to AMD trying to make a sub-$350 CPU with a 12-core CCX... if there are only enough bad CCXs that fail binning at 12 cores for them to make cheaper CPUs out of, then the full 24-core part will be super expensive.
 
Zen 6 is supposed to be a 12-core CCX; with TSMC increasing prices all over the world and the added tariffs in the US, how surprised would you (not) be if AMD's 24-core CPU were close to twice the price of the 9950X?
Also, good luck to AMD trying to make a sub-$350 CPU with a 12-core CCX... if there are only enough bad CCXs that fail binning at 12 cores for them to make cheaper CPUs out of, then the full 24-core part will be super expensive.
I would not be surprised to see 500-600 dollar 12-core single-CCD CPUs from AMD, or 1000-1200 dollar 24-core two-CCD CPUs.

@TerryLaze @bit_user @thestryker I have a question: could AMD not make lower-core-count chiplets on a cheaper process to fill in those budget CPUs, or even two different chiplet sizes on the currently planned node for the cheaper, lower-core-count CPUs? To my understanding, cost is determined by die area, so the smaller-core-count chiplet would not cost more because it would be a smaller area. Packaging would become more complex though.
 
Could AMD not make lower-core-count chiplets on a cheaper process to fill in those budget CPUs, or even two different chiplet sizes on the currently planned node for the cheaper, lower-core-count CPUs? To my understanding, cost is determined by die area, so the smaller-core-count chiplet would not cost more because it would be a smaller area. Packaging would become more complex though.
Yeah, Intel definitely did that already, and I think AMD did as well, although I think they did it with APUs, so I guess it depends on whether you count that.

The thing is that AMD would have to spend more to make additional masks for lower-end CPUs, and at the same time reduce the number of high-end CPUs they could potentially make by splitting their orders at TSMC. Both of these things would increase costs, and my guess is that they would increase far more than just making the whole order the top CCX; in the best case you sell all of them as top-end and make much more money.
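That trade-off (extra mask-set NRE vs. cheaper small dies) can be sketched with a toy cost model. Every number below is made up for illustration; real wafer costs, NRE, and defect densities are confidential:

```python
import math

# Toy model: does a second, smaller CCX design pay for its own mask set?
# All figures are hypothetical.
wafer_cost = 20_000          # $ per wafer
mask_set_nre = 30_000_000    # $ one-time cost for an extra mask set
wafer_diameter = 300         # mm
defect_density = 0.001       # defects per mm^2 (simple Poisson yield model)

def good_dies_per_wafer(die_area_mm2):
    """Approximate good dies per wafer: the standard geometric
    dies-per-wafer formula times a Poisson yield term."""
    d = wafer_diameter
    dpw = (math.pi * (d / 2) ** 2 / die_area_mm2
           - math.pi * d / math.sqrt(2 * die_area_mm2))
    return dpw * math.exp(-defect_density * die_area_mm2)

big, small = 80.0, 45.0      # mm^2: full CCX vs a hypothetical smaller CCX
cost_big = wafer_cost / good_dies_per_wafer(big)
cost_small = wafer_cost / good_dies_per_wafer(small)
saving_per_die = cost_big - cost_small

# Volume of small dies needed before the extra mask set pays for itself:
breakeven_units = mask_set_nre / saving_per_die
print(f"Cost/die: big ~${cost_big:.2f}, small ~${cost_small:.2f}")
print(f"Break-even volume for the extra mask set: ~{breakeven_units:,.0f} dies")
```

The small die is cheaper per unit, but the mask-set NRE only amortizes at millions of units, which is the volume argument above: unless the low-end part sells in huge numbers, ordering only the top CCX wins.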
 
Yeah, Intel definitely did that already, and I think AMD did as well, although I think they did it with APUs, so I guess it depends on whether you count that.

The thing is that AMD would have to spend more to make additional masks for lower-end CPUs, and at the same time reduce the number of high-end CPUs they could potentially make by splitting their orders at TSMC. Both of these things would increase costs, and my guess is that they would increase far more than just making the whole order the top CCX; in the best case you sell all of them as top-end and make much more money.

I really don't think AMD is going to go all N2. They'll use N3E or something similar for a chunk of their Zen 6 is my bet. N2 is going to be expensive as will 18A, just look at how expensive Lunar Lake is.
 
I really don't think AMD is going to go all N2. They'll use N3E or something similar for a chunk of their Zen 6 is my bet.
That would make a lot of sense for AMD for sure.
N2 is going to be expensive as will 18A, just look at how expensive Lunar Lake is.
Lunar Lake is expensive because it's made at TSMC; Intel has to cover TSMC's margins on top of its own. 18A has already been very expensive for Intel, but they did get a couple of customers, so it may be OK-ish: they only need to cover their own margins on 18A.
Intel might also just swallow the extra cost to get people to buy Intel again and to advertise the new node to more customers (if it's good).
 
Could AMD not make lower core count chiplets on a cheaper process to fill in those budget CPUs, or even two different chiplet sizes on the currently planned node for the cheaper CPUs with lower core counts? To my understanding, the cost is determined by the die area so the smaller core count chiplet would not cost more because they would be a smaller area. Packaging would become more complex though.
They need enough volume for it to be worth doing. I think consumer chiplet-based CPUs are sort of piggybacking off the server chiplet-based CPUs, but if you made special desktop-only chiplets, then you'd have to sell a lot of them, to make it worthwhile.

That's why I think they have mostly a 2-pronged strategy of designing chiplets for server and desktops (and high-end laptops), plus designing monolithic APUs for laptops and desktop. If/when their mainstream laptop APUs adopt their chiplet architecture, then you might find a bit more creativity in the desktop market.

IMO, it would be really interesting to see them bring their C-core chiplets to the desktop. They wouldn't be cheap and I probably wouldn't buy one, but I'd be curious to see what sort of multithreaded performance they can achieve with a 32-core/64-thread CPU in an AM5 socket.
 
