News Intel Foundry Roadmap Update - New 18A-PT variant that enables 3D die stacking, 14A process node enablement

I for one am happy that we keep hearing more and more about the positive progress of Intel's processes. I'm really starting to believe they have a shot at recovering their competitiveness at the leading edge. I'm holding on to some skepticism, but I'm hopeful.

I see two main areas of concern: Intel's ability to deliver the technology on time, and, more importantly, the ability to deliver the technology for use by their clients. Intel's focus on building foundry processes where it has been its own main customer has limited its know-how in delivering solutions for potential adopters. I hope they aren't just focusing on catching up to and surpassing TSMC in technology, but are really focusing on a strategy to address their ability to deliver products, packaging, and design efficiency suitable for potential adopters. In fact, I believe this part will be the biggest determinant of whether they right the ship ... and the key to their success. Engineers make ....ty users, everyone knows this ... while I believe they will produce the right tech, I question the "user experience" and "user interface". And I haven't heard any plans to address this problem in their scheming. It's great that they are becoming an engineering company again ... but is it great that they are becoming "an engineering company" again?
 
I'm always excited about new tech and what Intel has in store for us with 18A and beyond, but really they need the rubber to hit the road, and Panther Lake just has to be on time and a great step up over Lunar Lake and Arrow Lake H. 18A needs to shine in raw performance, efficiency, and efficiency scaling against N3. Further delays to Panther Lake would be devastating for Intel, and let's be clear: Panther Lake has already slipped from a hard launch in 2025 to Q1 2026, giving them maybe two quarters of lead time before AMD launches Zen 6 Medusa.

Intel is great with the presentation slides, but they need products from their own fabs for sale ASAP. People have stopped listening to the hype.
 
Intel's focus on building foundry processes where it has been its own main customer has limited its know-how in delivering solutions for potential adopters.
This is an odd way of putting what has actually happened. All of Intel's DUV nodes were designed with proprietary tools Intel would never share. This is by far the biggest barrier for contract foundry work. All of Intel's EUV nodes are designed using industry standard tools which eliminates that issue entirely.

They obviously still have to show they can deliver at volume with expected defect ratios. PTL should show this for 18A and CWF will be equally important on the packaging front.
 
I keep hearing that 18A is "faster, but less dense." I know the site is almost exclusively interested in whatever is faster! 😁 What is the use case for something that's maybe not quite as fast, but more dense? Just the size of the chip? More space for more things on the chip? Maybe cheaper to make?
All other things being equal, faster transistors are better, lower-leakage (more power-efficient) transistors are better, and higher-density transistors are not better (except where chip size is a concern). However, not all things are equal. Higher-density transistors allow more to be put into a processor. More transistors means more specialized logic, more parallel logic, or better power management, any of which can make up for the lower performance or higher leakage.
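
To put rough numbers on that trade-off (the core counts and clocks below are invented purely for illustration, not figures for any real node), consider a well-parallelized workload where per-core IPC is similar on both nodes:

```latex
% Hypothetical core counts and clocks, for illustration only.
\[ \text{Throughput} \approx N_\mathrm{cores} \times \mathrm{IPC} \times f \]
\[ \text{Faster, less dense node: } 8 \times 5.5\,\mathrm{GHz} = 44\ \text{core-GHz} \]
\[ \text{Slower, denser node: } 12 \times 4.5\,\mathrm{GHz} = 54\ \text{core-GHz} \]
```

Even though each individual core clocks lower, the denser option can come out ahead on multithreaded throughput, which is the "more parallel logic" case above.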

Compare N4P and N3B. N4P is higher performance but also higher leakage (more power consumption) and lower density. Apple M3 is built with N3B, and it has really good power efficiency and generally performs well, but at a really low clock speed. AMD Zen 5 is built on N4P. It can reach really high clock speeds and often beats M3 on desktop, but it generally performs worse in laptops. N3B is the newer and more advanced node, but desktop CPUs built on it probably won't beat desktop CPUs built on N4P. TSMC has a higher-performance N3P node coming to address this.
 
I hope they aren't just focusing on catching up to and surpassing TSMC in technology, but are really focusing on a strategy to address their ability to deliver products, packaging, and design efficiency suitable for potential adopters. In fact, I believe this part will be the biggest determinant of whether they right the ship ... and the key to their success.
Intel is working with Arm to make sure Arm designs will be able to use 18A, and is working on getting people to use RISC-V with Intel fabs as well.
Also, both Nvidia and Broadcom are testing 18A for future products.

https://www.intc.com/news-events/pr...ntel-foundry-and-arm-announce-multigeneration
https://www.tomshardware.com/pc-com...tel-foundrys-18a-process-node-for-gaming-gpus
 
I'm always excited about new tech and what Intel has in store for us with 18A and beyond, but really they need the rubber to hit the road, and Panther Lake just has to be on time and a great step up over Lunar Lake and Arrow Lake H. 18A needs to shine in raw performance, efficiency, and efficiency scaling against N3. Further delays to Panther Lake would be devastating for Intel, and let's be clear: Panther Lake has already slipped from a hard launch in 2025 to Q1 2026, giving them maybe two quarters of lead time before AMD launches Zen 6 Medusa.

Intel is great with the presentation slides, but they need products from their own fabs for sale ASAP. People have stopped listening to the hype.
Producing products from their own fabs only for themselves isn't enough to re-invigorate them … IMHO, producing products for others and finding some serious acquisitions to address their AI failures is paramount to their success. I liken Intel to Apple circa 2000: fix your base problems first (i.e., really bring the Mac back to life), and then transform, the way moving to the forefront of the personal-computing-anywhere movement with the iPhone brought Apple back from the dead. So yes, really fix their current core products and fab processes, but they also need a future plan that prepares for current and future revolutions in innovation.
 
Intel is working with Arm to make sure Arm designs will be able to use 18A, and is working on getting people to use RISC-V with Intel fabs as well.
Also, both Nvidia and Broadcom are testing 18A for future products.

https://www.intc.com/news-events/pr...ntel-foundry-and-arm-announce-multigeneration
https://www.tomshardware.com/pc-com...tel-foundrys-18a-process-node-for-gaming-gpus
Let's see. I hope their approach has changed. Nvidia has been flirting with Intel and Samsung for the better part of a decade. If Intel learns how to be a partner, the sky's the limit.
 
This is an odd way of putting what has actually happened. All of Intel's DUV nodes were designed with proprietary tools Intel would never share. This is by far the biggest barrier for contract foundry work. All of Intel's EUV nodes are designed using industry standard tools which eliminates that issue entirely.

They obviously still have to show they can deliver at volume with expected defect ratios. PTL should show this for 18A and CWF will be equally important on the packaging front.
I see your point, but I think it's more nuanced. The EUV node standardization is an ingredient, not the cake. Everyone knows you need flour, eggs, salt, and sweeteners to bake a cake ... but some bakers are better than others, and Intel has lacked the proper mindset to be a top baker. Imagine a woman with all the beauty, success, and ingredients to be a great partner who is simply horrible at relationships; that's Intel. They're going to the gym, they got a new career ... whatever, but they just haven't learned how to be in a relationship, and thus they are bad partners and never expanded their foundry business. That's just my opinion. Nvidia didn't become synonymous with AI just because of their GPUs.
 
whatever, but they just haven't learned how to be in a relationship, and thus they are bad partners and never expanded their foundry business.
They never expanded their foundry business because they never had one. 18A will be the first time they have a foundry business, and that hasn't even started production for Intel yet.
Customers not trusting them is a real concern, but on the other hand Intel has been partners with several OEMs for decades now, so they know how to be good partners.
 
I'm interested to see this develop, but 2028 feels a bit late to the '3D-die stacking party' considering AMD launched their first one in 2022 and have been continuously improving the technology. I feel like their lead is too big for Intel to steal the gamer market back unless they find a good balance between price, core count, and 3D die stacking performance.

I thought they would have been looking at getting this technology off the ground ASAP since at least 2023, when the 7800X3D was released and made Raptor Lake look bad for the gaming market.
Don't get me wrong, I'm glad they're doing it and it's always good to have competition, but I worry that this will follow the same path as Intel's Arc, where it is a good product (after the driver fixes) but too late to the party to realistically make a splash in the market.
 
Y'all cowards don't even make X3D CPUs
They are good for gaming because all of that extra cache is quite important to games. As you progress from frame to frame, very little changes, so the chance of the data you need being in the cache is quite high. That most certainly does NOT mean that high-cache chips are the best for general computing, though.
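
As a rough back-of-the-envelope sketch of why that matters (the hit rates and latencies below are made up for illustration, not measured from any real CPU), the effect shows up in average memory access time:

```latex
% AMAT = hit time + miss rate x miss penalty -- illustrative numbers only.
\[ \mathrm{AMAT} = t_\mathrm{hit} + m \cdot t_\mathrm{miss} \]
\[ \text{Small L3 } (m = 0.20):\quad 10\,\mathrm{ns} + 0.20 \times 80\,\mathrm{ns} = 26\,\mathrm{ns} \]
\[ \text{Large L3 } (m = 0.05):\quad 12\,\mathrm{ns} + 0.05 \times 80\,\mathrm{ns} = 16\,\mathrm{ns} \]
```

A game that re-touches the same working set every frame sits at the high-hit-rate end of that math; a workload that streams through data much larger than any cache gets far less benefit, which is why the extra cache doesn't automatically help general computing.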

I game, but 80% of the tasks I do on my PC are NOT games. That makes just going out and buying a 9800X3D a much more critical decision. Tom's makes a lot of absolute gaming performance (using a 4090 at 1080p), resulting in astronomical frame rates that are, in my opinion, well beyond usefulness. How much is one willing to sacrifice, for instance, to go from 190 to 205 FPS in a 1080p game? In my case absolutely nothing, but that's what they have to do to differentiate the processors. So I'm not being critical here; it just is what it is. As we see, the minute you move away from a $3,000 GPU paired with an $800 OLED monitor, things begin to immediately even out. My brand-new monitor won't even quite reach the lower end of that. I doubt there are many playing 1080p games on a 4090/5090 (unless e-sports are still played at 1080p?)

Then again, I'm old enough to remember when we struggled with everything and 30 FPS was pretty much the holy grail. There was an entire era of PC gaming when the ultimate rig had multiple video cards and only exquisitely expensive monitors could exceed 72 Hz. I'm making buying decisions now based on 1440p and factoring in other things as well; the FPS difference from one processor and GPU to another is much smaller, and my needs go beyond just gaming. So I'm very much interested in these new Intel chips and how they perform in other areas, because they will almost certainly do well enough in gaming to satisfy me.

Anyway, if all you do with a PC is play games, high-cache chips are probably for you, but Intel (and AMD for that matter) have a much wider market.
 
It is part of the 5 nodes in 4 years plan. Intel 3/4/20A were all taking resources too.
That plan runs out this year with 18A, and the roadmap only shows one node beyond 18A.
I feel like their lead is too big for Intel to steal the gamer market back unless they find a good balance between price, core count, and 3D die stacking performance.
Intel wants the gamer market about as much as they want the console market...
As long as they can sell tons of CPUs to OEMs, they don't care.
(Basically, it's not so much the additional material cost; it's the extra production time and floor space it takes to add such a big cache to any of their CPUs.)
 
Oh that one is easy. Let the old timer tell you! 😜

In the old days, "nm" was the measure, in nanometers, of the separation of different components, mainly transistors, on a chip. I say the old days because "nm" ceased to have any real meaning around the turn of the century (the 300 nm nodes). So that is the standard that was set, and then broken, long ago.

Notice in the article Tom's says something like "2nm equivalent." That's because 1.8 or 2.0 nm, or whatever, aren't really accurate anymore. It's marketing.

A half decade ago or so there was a big hullabaloo about how TSMC had advanced to "7nm" while Intel was stuck on "10nm". But if you looked at the chips, while not exactly the same, they were in fact quite comparable.

So, in the beginning, the naming scheme was about the size of features on the chips. The smaller the size, the more features you could put on the chip. But, as I said, it's really misleading now. I think we're actually still somewhere around 17-20 actual nanometers, even though the branding has gotten away from all the companies involved.
Thanks for the added history. I think I got confused when the article was talking about 16nm and 12nm, then switched to 20A, 18A, and so on.
 
For the longest time, the nm metric described the distance between the contact points on the transistor - basically the size you could make the entire transistor. Then at some point in the 2000s it kind of morphed into the smallest structure you could fab. Then it turned into the width of the smallest line you could lithographically imprint. Now it's just lost all meaning whatsoever. Intel was the last holdout that actually measured from contact to contact. We stopped doing that with 7nm and just started going with the flow. If everyone else is going to lie, may as well follow suit.
Well, I don't agree with lying just because everyone else is lying. But I also think what you mean is: market the product the way competitors do, because they're getting away with it and it's helping them.
 
I see your point, but I think it's more nuanced. The EUV node standardization is an ingredient, not the cake. Everyone knows you need flour, eggs, salt, and sweeteners to bake a cake ... but some bakers are better than others, and Intel has lacked the proper mindset to be a top baker. Imagine a woman with all the beauty, success, and ingredients to be a great partner who is simply horrible at relationships; that's Intel. They're going to the gym, they got a new career ... whatever, but they just haven't learned how to be in a relationship, and thus they are bad partners and never expanded their foundry business. That's just my opinion. Nvidia didn't become synonymous with AI just because of their GPUs.
This is why I very much favored Gelsinger in this. His point was that the chip design team was dictating the manufacturing process. Separating the two allowed the foundry more freedom. It needed that freedom to attract outside customers.

There are a LOT of chips out there these days, not just x86. Intel was pigeonholed into x86 production of its own chips, which was fine when x86 pretty much stood alone and was 80% of the chip market. Nothing is going to bring those days back. So to increase revenue and grab a slice of that larger market, Intel really needed to separate the two. Remember, with very few exceptions, Intel never made chips for anyone but Intel.

One of the things you see in the article is Intel working with various design houses and software vendors to make it easier to work with these new processes.
 
Anyway, if all you do with a PC is play games, high-cache chips are probably for you, but Intel (and AMD for that matter) have a much wider market.
Some non-gaming workloads DO benefit from additional L3 cache. It's why AMD is selling Genoa-X Epyc. The biggest, the 9684X, is their most expensive 96-core processor at over $14k.
 
Another article on here talked about Intel fabs changing to be able to do that! Funny how Intel's pivot to P and E cores didn't pan out but AMD's increased V-Cache did.
Didn't pan out? When Intel first introduced P and E cores on desktop with the 12900K, Intel went from 8 cores to 16 and, on a monolithic die, suddenly had the ability to compete with a three-die chiplet CPU, the AMD 5950X.
 
This is why I very much favored Gelsinger in this. His point was that the chip design team was dictating the manufacturing process. Separating the two allowed the foundry more freedom. It needed that freedom to attract outside customers.

There are a LOT of chips out there these days, not just x86. Intel was pigeonholed into x86 production of its own chips, which was fine when x86 pretty much stood alone and was 80% of the chip market. Nothing is going to bring those days back. So to increase revenue and grab a slice of that larger market, Intel really needed to separate the two. Remember, with very few exceptions, Intel never made chips for anyone but Intel.

One of the things you see in the article is Intel working with various design houses and software vendors to make it easier to work with these new processes.
Agreed, the era of x86 being the king of workloads is gone, whether it's GPUs, other ISAs, Arm, etc. ... the world is changing, and Gelsinger had the right ideas but not the right panache. Intel needs a dynamic leader ... you are right that they are starting to take the right steps, but I'm not sure they have the cultural foundation to make it happen.
 
They never expanded their foundry business because they never had one. 18A will be the first time they have a foundry business, and that hasn't even started production for Intel yet.
Customers not trusting them is a real concern, but on the other hand Intel has been partners with several OEMs for decades now, so they know how to be good partners.
They've had a foundry business for years, just not a very good one, and it was never taken seriously. Nvidia was evaluating them and Samsung to make their chips more than a decade ago. Rumor has it Nvidia walked away from the discussions saying Intel had no clue what they were doing. Again, this is hearsay, and we all know Nvidia continued largely with TSMC. Intel got so bad at the foundry game, in terms of being prepared, that they themselves have had to rely on TSMC.
 
They've had a foundry business for years, just not a very good one, and it was never taken seriously. Nvidia was evaluating them and Samsung to make their chips more than a decade ago. Rumor has it Nvidia walked away from the discussions saying Intel had no clue what they were doing. Again, this is hearsay, and we all know Nvidia continued largely with TSMC. Intel got so bad at the foundry game, in terms of being prepared, that they themselves have had to rely on TSMC.
Around 10 years ago Nvidia offered to license its GPU designs for use in other companies' processors - Apple, Intel, or MediaTek could've made their chips with Nvidia iGPUs. That never happened. So is Nvidia a bad partner?
 
I'm interested to see this develop, but 2028 feels a bit late to the '3D-die stacking party' considering AMD launched their first one in 2022 and have been continuously improving the technology.
Intel was first to that party with Lakefield. The most likely reason they haven't used it in volume products since is cost. The article is referring to a specific node capability, which Intel 3 already has.
 