News Intel outlines plan to break free from TSMC manufacturing — 70% of Panther Lake at Intel fabs, Nova Lake almost entirely in-house

I thought they just started using them a couple of years ago, if that.
Intel really needs to figure out what the [] they are doing.
Intel has been a major client of TSMC for years. As recently as 2019, before TSMC was fabbing CPUs for Intel, Intel was still a bigger client than AMD was. These international mega corporations aren't run by fanboys. As long as the checks are clearing, they will do business together.

 
Intel has been a major client of TSMC for years. As recently as 2019, before TSMC was fabbing CPUs for Intel,
Yes, it wasn't CPUs TSMC was making for them. If you recall, Intel had a major fab crunch back then, because they had expected their 10 nm production lines to be handling a fair bit of product volume. Due to the delays in getting a working 10 nm node, all of their products had backed up onto their 14 nm production, which was further stressed by the need for them to add more cores (hence, larger dies) in order to try and keep pace with AMD.

So, what they did was to take some simpler, low-value chips and outsource those to TSMC, such as their chipsets. As you say, they weren't making full CPUs at TSMC before 2024, though. Not least because Intel had all their own proprietary tools, so it would've been a substantial undertaking to port a large, complex chip like a CPU over to TSMC.

Also, it occurs to me that Altera would've had at least some legacy chips still being fabbed by TSMC. Maybe they also diverted some newer chips, due to the delays in 10 nm. Probably also Habana.

Oh, and did Ponte Vecchio have TSMC tiles? Yes, it did!

These international mega corporations aren't run by fanboys. As long as the checks are clearing, they will do business together.
Back then, Intel wasn't a serious player in the foundry business. The conflict of interest for TSMC was minimal. Basically, helping Intel just meant that AMD's sales would possibly be lower, which meant fewer orders from AMD. However, if Intel's orders were enough to offset that, then it wouldn't make much difference to TSMC.

In much more recent times, there was that whole kerfuffle about TSMC tearing up its favorable pricing agreement with Intel. They used the excuse of some careless words from Gelsinger, claiming he was spreading FUD, but I'm sure TSMC wanted out of that agreement one way or another.

Edit: updated to reflect correction by @spongiemaster .
 
In fact, these aren't only Intel's problems. TSMC is starting to admit problems with new process technologies, because the silicon dead end is getting closer. Each new process node, and each new factory to run it, requires more and more investment, and eventually there will be only one monster left, no matter how much anyone wants competition. It's simply getting expensive, enormously expensive on the scale of the entire planet. And there is practically no progress.

Look at the embarrassment of Nvidia's 5090 vs. the 4090. The new card consumes 575 W, the old one 450 W: a difference of about +28%. Meanwhile, as recent tests across several games showed, the total average performance difference was only +33-35%. In effect, people are being offered a new version of the 4090 at a monstrous (simply insane) price and the same consumption, since running a 5090 at a 450 W TDP leaves a ridiculous +5% difference in performance.

This is exactly what once led Intel toward collapse. They sold new series with a 5-10% difference (while AMD was fumbling with the development of the first Zen) and rested on their laurels for years, instead of investing in R&D, which really means investing in human capital. In the end, that's what they lost: human capital, not money. The loss of human capital is an order of magnitude worse than the loss of money; without human capital, a company is already doomed. All of this is described in detail, and superbly, in analytical articles on a number of sites.

TSMC could not even find the engineers it needed in the US across several states. They had to import hundreds of highly skilled Taiwanese to train local staff and send some of them to Taiwan for internships. This is a public disgrace for the US, which is still formally the world's technological leader, now mainly in terms of its pool of retained patents and the remnants of its top minds. Thanks to the greed of capital (and the government officials who watched and approved all of this for decades), entire layers of human capital have been lost in the country, and they will NOT be recovered in 10-15 years. Only ASML's dependence on US suppliers, and TSMC's dependence on ASML, gave the US the lever that forced TSMC to start building factories there, and even then not on the most advanced node: they hedged their bets.
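For concreteness, here is a minimal sketch of the perf-per-watt arithmetic behind that "+5%" figure, using only the numbers claimed in this post (575 W vs. 450 W, +33-35% average performance); these are the post's claims, not independent benchmark data:

```python
# Perf/W arithmetic from the claims above: 575 W vs. 450 W board power,
# +33-35% average gaming performance. Post's claims, not benchmarks.

def perf_per_watt_change(perf_gain: float, power_gain: float) -> float:
    """Relative change in performance per watt, new part vs. old part."""
    return (1 + perf_gain) / (1 + power_gain) - 1

power_gain = 575 / 450 - 1                 # ~ +27.8% board power
for perf_gain in (0.33, 0.35):             # claimed performance uplift
    delta = perf_per_watt_change(perf_gain, power_gain)
    print(f"+{perf_gain:.0%} perf at +{power_gain:.1%} power "
          f"-> perf/W: {delta:+.1%}")

# Prints roughly +4% to +6%, which is where the "+5% at 450 W" reading
# comes from, under the rough assumption that performance tracks perf/W
# when the new card is capped to the old card's power limit.
```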

It seems to me that a much more important question is where Intel will find the first-rate human capital needed to get out of the silicon dead end. This is definitely not a question of money. Look how much money China is pouring into merely trying to draw level with TSMC, and so what? Where are they now? Meanwhile, their problems with a semi-market economy are multiplying quickly. The same applies to TSMC and ASML. We are rapidly approaching stagnation across the entire electronics market, because it is no longer possible to cheaply and compactly increase performance per watt of consumption.

Now is the very moment when the country (or group of countries) that pulls off a technological revolution, at the moment of such a dead end in silicon, will remake the entire world order for itself and become the unconditional new leader. And naturally, high-quality human capital will never appear in totalitarian countries in quantities sufficient to create the broad base needed as a foundation for a new breakthrough and mass production. That requires a consciousness emancipated from childhood, which is what leads to creative breakthroughs. But it also requires high-quality school education in the natural sciences, where the failure of the United States, according to PISA studies, is also obvious. The hope that some "AI" will become a new genius for humanity within the next 50 years is definitely pointless, no matter how much those in power secretly dream of it.

Since humanity has no new leader, and there is nowhere to find one now, we are heading into a technological dead end for the next 15-20 years. There will be breakthroughs, but in other areas: horizontal ones, not vertical ones, as far as the computing market is concerned...
 
In fact, these aren't only Intel's problems. TSMC is starting to admit problems with new process technologies.
Source? I'd love to know specifically what you're referring to.

naturally, high-quality human capital will never appear in totalitarian countries in quantities sufficient to create the broad base needed as a foundation for a new breakthrough and mass production. That requires a consciousness emancipated from childhood, which is what leads to creative breakthroughs.
This might be a comfortable opinion to hold, but it's not supported by the data. China is leading in publication of papers in the most prestigious western scientific journals, such as Nature. In order to get published, you have to get your paper through their reviewers. So, this distinction isn't something you can accomplish without many researchers of real merit.

we are heading into a technological dead end for the next 15-20 years.
Every time "the end of technology" was predicted in the past, it's been wrong. Technology is a virtuous circle, where advancements in one area lead to advancements in others.

That said, progress definitely slows in certain areas, while the pace picks up elsewhere. Back in the 1950's, people seriously predicted that electricity would be so low-cost that it'd be effectively free. That's because they assumed advances in power generation would continue at the same brisk pace, but they didn't. However, virtually nobody in the 1950's foresaw the kind of revolution that would arise from the combined advancement in digital computing and communications technologies. This shows why progress is hard to predict.
 
Source? I'd love to know specifically what you're referring to.
I'll find it later; I have it somewhere in my bookmarks, from some analytics sites, but it will take time. Try to find it yourself; after all, Google is quite smart now, isn't it?

This might be a comfortable opinion to hold, but it's not supported by the data. China is leading in publication of papers in the most prestigious western scientific journals, such as Nature. In order to get published, you have to get your paper through their reviewers. So, this distinction isn't something you can accomplish without many researchers of real merit.
This is an inconvenient opinion, but it is an irrefutable fact throughout human history. The societies that won were those with more freedom, including free education without censorship. The USA rose on this, managing to absorb and then grow the best human capital, which gradually gave it advantages over other countries and blocs. It was the last such country, even if its history looks bloody. But where else is there? Now the most passionate people have nowhere left to run on the planet; there are no more "blank" spots remaining.

Most of China's patents are useless garbage from the point of view of the cutting edge of science. This has been discussed many times in analyses. There are even sites that track the fake scientific work coming out of China in huge quantities, so there are more than enough crooks there. That said, science everywhere is now a mirror of society as a whole: there is too much falsehood and deception, work for the sake of work (living well while powdering the brains of venture investors) rather than real results. China has some achievements, of course, but in the overall picture they are still lagging. The world already lost some technologies and ideas with the collapse of the USSR. But let them try; it's like an adrenaline shot for the US, that is, for its ruling and intellectual class. If that doesn't help, things will get very bad. The collapse of the US as an advanced society would be depressing for the entire planet.

Now the question is how quickly the US can restore its human potential. That is not easy, and it will take a long time. And its rivals are pressing from all sides, with anything but simple intentions.

Every time "the end of technology" was predicted in the past, it's been wrong. Technology is a virtuous circle, where advancements in one area lead to advancements in others.

That said, progress definitely slows in certain areas, while the pace picks up elsewhere. Back in the 1950's, people seriously predicted that electricity would be so low-cost that it'd be effectively free. That's because they assumed advances in power generation would continue at the same brisk pace, but they didn't. However, virtually nobody in the 1950's foresaw the kind of revolution that would arise from the combined advancement in digital computing and communications technologies. This shows why progress is hard to predict.
You see, the thing is that the human brain, as a biochemical machine, is finite in its creative and insightful capabilities (in how much of the scientific horizon it can cover), no matter how much we might wish otherwise. Even larger and well-coordinated teams will no longer bear fruit the way they once did, given the exponential growth in the complexity of scientific knowledge and the cost of maintaining it across generations. That holds even with every means of automating and computerizing such work. New frontiers and questions are being developed here and there, but too slowly. That is why all eyes have turned to "AI", though for now it is more of a marketing gimmick for carving up the huge budgets of those who believed in the new "silver bullet".

And the real problems of population growth (another +80-82 million in 2024 alone, ten new cities the size of New York at once, though so far without its consumption level, if you believe the figures in the press) keep growing. At the same time, the overall efficiency of each individual gradually falls as scientific and technological progress grows more complex: more and more people are engaged in "labor" that is unproductive from the standpoint of civilization's effective survival. And they cannot simply be "fired", as in companies... This is a stalemate that cannot be resolved well within the next 50 years.

Humanity has nearly tripled in number since the first flight into space. But we are still on Earth, and there is not even a colony on the Moon, which is literally next door (and remember the fantasies of the 1960s about the near future). Nature itself is hurrying us, and it is merciless in punishing those who forget the levels of risk...

I hope that at least the silicon problem will be solved, because humanity needs to greatly increase its computational (and energy) "power-to-weight ratio" per individual, so that this finally begins to bring qualitative changes to civilization with a new frontal push...

Of course, few people back then suspected there would be such an Internet, or smartphones with cellular networks, but by and large these have enslaved us rather than liberated us. And technology is increasingly used by the ruling strata to control ordinary citizens, not to improve their lives or trim the costs of bloated governments.
 
Most of China's patents are useless garbage from the point of view of the cutting edge of science. This has been discussed many times in analyses. There are even sites that track the fake scientific work coming out of China in huge quantities, so there are more than enough crooks there.
This is false; they are highly competitive in AI, which is why the US government is so eager to ban it. Talk to any AI researcher about how competitive China is at AI research. The whole "patents are garbage" line is massive cope.
 
Talk to any AI researcher about how competitive China is at AI research.
Well, all that remains is for them to raise their production base to the level of the USA and its allies. Time will soon show what chances the Chinese leadership and their society have of surviving the coming global cataclysms, and whose real intellectual layer (in the broad sense) is more powerful.

Patents are a very shaky basis for any real assessment of the intellectual differences and capabilities of different societies. I have repeatedly encountered such insinuations in the press and on forums.
 
Source? I'd love to know specifically what you're referring to.
N2 vs. N3E:
You have a choice (with huge and growing costs at each new step):
1. Reduce power consumption by 25-30%, but with no increase in performance.
2. Keep consumption the same and add a shameful +10-15% performance.
3. Add neither performance nor power savings, but increase circuit density (more transistors) by ~15%.

What do you choose when you normally need everything at once?
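As a rough illustration, here is a tiny sketch enumerating those three options against an N3E baseline, using midpoints of the ranges quoted above; the numbers are illustrative, not official foundry figures:

```python
# A toy enumeration of the three options above, relative to an N3E
# baseline of 1.0 in every dimension, using midpoints of the quoted
# ranges (-27.5% power, +12.5% perf, +15% density). Illustrative only.

n2_options = {
    "1. same perf, less power": {"perf": 1.000, "power": 0.725, "density": 1.00},
    "2. same power, more perf": {"perf": 1.125, "power": 1.000, "density": 1.00},
    "3. density increase only": {"perf": 1.000, "power": 1.000, "density": 1.15},
}

for name, opt in n2_options.items():
    print(f"{name}: perf x{opt['perf']:.3f}, power x{opt['power']:.3f}, "
          f"density x{opt['density']:.2f} (vs. N3E = 1.00)")
```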

People will not buy if there is no improvement in performance. People will not buy if there is no improvement in battery life (for laptops and smartphones, together with at least some visible improvement in performance). And the increase in circuit density has to be justified to the buyer by some practical benefit. If they don't see the benefit, they won't buy again.

In your view, given the growing costs of new process nodes, is this not a complete dead end for the performance-per-watt growth we have enjoyed since the 1960s?

And here is the second reason, according to an expert from Intel:
“The nanosheet architecture actually is the final frontier of transistor architecture,” Ashish Agrawal, a silicon technologist in Intel’s components research group, told engineers. Even future complementary FET (CFET) devices, possibly arriving in the mid-2030s, are constructed of nanosheets. So it’s important that researchers understand their limits, said Agrawal.
---
This is a stalemate at both the production level and the demand level, and it is demand that provides the resources for further R&D and new production. This is where we are rapidly heading...

p.s.
Taking Nvidia's 5xxx series as an example, we can see what they chose: rather than a real +5-10% over the 4xxx at the same consumption, they raised consumption by +27-28% at the same time. But this is practically a dead end. Buyers won't buy new hardware offering a 15% difference at the same consumption and rising prices; it makes no practical sense. And raising consumption above 600 W would be complete madness for a 6xxx series on N2-class nodes...

To sum it up at the everyday level: consumers have less and less reason to upgrade their hardware frequently. Their preference for keeping the old gear, which is almost no different from the new and is sometimes even better, only grows every year.

A simple example is the new dominance of flickering (AM)OLED panels in laptops, instead of flicker-free semi-matte IPS, with no real choice in most new 2025 models. Such screens are extremely hard on the eyes and the nervous system during long work sessions, and on top of that they're glossy and wildly reflective. It is one thing when 90% of models ship with eye-safe IPS, and quite another when fewer than 10% do, as with smartphones today. If you want a smartphone that shoots 4K@60 and 8K@30 with OIS on the fastest hardware, your only option is a flickering (AM)OLED screen.

In such a situation, a consumer who understands (or simply feels) the harm of these technologies will prefer not to upgrade at all until the problems are solved. And when will they be solved (i.e., when will PWM frequencies exceed the minimum safe 1.25 kHz, as on IPS, where they have long started at 2 kHz and higher)? In 5 years, 10? Maybe microLED will save us, who knows...
 
Taking Nvidia's 5xxx series as an example, we can see what they chose: rather than a real +5-10% over the 4xxx at the same consumption, they raised consumption by +27-28% at the same time.
They didn't change process nodes with Blackwell; rather, it uses an optimized version of the same custom node as Ada.
N2 vs. N3E:
You have a choice (with huge and growing costs at each new step):
1. Reduce power consumption by 25-30%, but with no increase in performance.
2. Keep consumption the same and add a shameful +10-15% performance.
3. Add neither performance nor power savings, but increase circuit density (more transistors) by ~15%.

What do you choose when you normally need everything at once?
The density part is just an improvement; it isn't conditional.
 
You should have understood my irony on a larger scale. The point is that each next step will be smaller and less valuable to consumers in these terms, while costing chipmakers more.

They will either want to charge 2-3x the usual prices right away in order to survive, or they will start building elements of artificial aging into chips (and into software) with increasing persistence. They do this now, but not so zealously. Otherwise, their entire market (especially if the population stops growing) will shrink to just those whose old hardware fails, and that layer of consumers, which is all that would remain through a long period of stagnation, will not be able to recoup the costs of R&D and new process nodes.
 
N2 vs. N3E:
You have a choice (with huge and growing costs at each new step):
1. Reduce power consumption by 25-30%, but with no increase in performance.
2. Keep consumption the same and add a shameful +10-15% performance.
3. Add neither performance nor power savings, but increase circuit density (more transistors) by ~15%.

What do you choose when you normally need everything at once?
There's a misunderstanding here, in the way they characterize node improvements. Each process node has a curve showing the tradeoff between performance and power. What they're citing (and this is standard practice for them, going back a long way) is how the curve for their new process node compares to an older one at just two points: the point of equal power and the point of equal performance.

As @thestryker said, the density improvement is independent of the other two. Density means the same design takes up less wafer area, and thus could potentially lead to cost reductions, depending on what happens with wafer pricing.

People will not buy if there is no improvement in performance.
Again, you're missing the point of these comparisons. They're not saying you would normally take an existing design, port it to their new node, and call it a day. What people usually do is use some of the additional density & power budget to make a more complex design that achieves performance gains through higher IPC. Then, they use the remaining perf/W budget to add a little more performance via higher frequencies.
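As a minimal sketch of that idea, here is a toy model under the crude assumption that dynamic power scales roughly as f^3 along a node's voltage/frequency curve (P ~ C·V²·f, with V tracking f in the normal operating range); the 0.70x node shift and the +10% IPC figure are illustrative placeholders, not real foundry or design data:

```python
# Toy model of a node's perf/power curve. Crude assumption: P ~ f^3
# along the voltage/frequency curve (P ~ C*V^2*f, with V ~ f in the
# normal operating range). The 0.70x node shift and +10% IPC below
# are illustrative placeholders only.

def power(freq: float, node_scale: float) -> float:
    """Relative power at relative frequency `freq` on a given node."""
    return node_scale * freq ** 3

OLD, NEW = 1.00, 0.70        # new node: 0.70x power at equal frequency

# Point 1: iso-performance (same frequency, compare power).
print(f"iso-perf : power x{power(1.0, NEW) / power(1.0, OLD):.2f}")

# Point 2: iso-power (raise frequency until power matches the old node).
f_iso = (OLD / NEW) ** (1 / 3)
print(f"iso-power: freq  x{f_iso:.2f}")        # ~1.13x

# What design teams actually do: spend some of the budget on a wider
# core (+10% IPC, assumed to cost ~10% more power at a given frequency),
# then spend the rest on frequency. perf = IPC * freq.
ipc = 1.10
f_mix = (OLD / (NEW * ipc)) ** (1 / 3)
print(f"mixed    : perf  x{ipc * f_mix:.2f} at equal power")   # ~1.20x
```

The point of the "mixed" line is that, under these assumptions, a modest IPC gain plus a smaller frequency bump delivers more performance at equal power (~1.20x) than spending the entire budget on frequency alone (~1.13x).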

A counter-example might be how someone like Sony does several refreshes of the PS5 over its lifespan. Other than the "Pro" model, which uses a different underlying design, all PS5 revisions have the same performance. The point of their refreshes is mostly cost reduction. If you can achieve some power savings at the SoC level, then you can save money elsewhere in the console by using a smaller power supply, a smaller thermal solution, etc. This also enables size and weight reductions elsewhere in the system, all of which also yields savings on shipping and warehousing costs. So there are cost savings to be had, even beyond the component parts.

TSMC issues these announcements for the benefit of chip designers, who understand all of the nuances. The announcement isn't intended for public consumption, so they really don't care whether the general population understands the implications of these data points.
 
A counter-example might be how someone like Sony does several refreshes of the PS5 over its lifespan. Other than the "Pro" model, which uses a different underlying design, all PS5 revisions have the same performance. The point of their refreshes is mostly cost reduction. If you can achieve some power savings at the SoC level, then you can save money elsewhere in the console by using a smaller power supply, a smaller thermal solution, etc. This also enables size and weight reductions elsewhere in the system, all of which also yields savings on shipping and warehousing costs. So there are cost savings to be had, even beyond the component parts.
In the case of the PS5 Pro, they leveraged a newer process node, which allowed the chip to be about the same size and power consumption as the PS5's. This gives Sony the same kind of manufacturing benefits that refreshes do, and anyone buying one gets the benefit of it not using a lot more power than the base model.
 
This is an inconvenient opinion, but it is an irrefutable fact throughout human history. The societies that won were those with more freedom, including free education
If we look at WW2, Germany and Japan were both very strong, technically. The reasons they lost weren't primarily due to things like freedom or technological sophistication, but a lot more to do with the scale of their ambitions.

Also, definitely agree about free education (not that it has anything to do with freedom). In the USA, the cost of an undergraduate degree at a decent state university used to be well within reach of most people. It wasn't free, but it was subsidized to the point of being affordable without taking on a massive debt burden.

Most of China's patents are useless garbage from the point of view of the cutting edge of science.
The patent system is broken. There are vast numbers of junk patents filed in the US, also. I wouldn't judge technological advancement on the basis of patents. A much better metric is to focus on research and scientific publications.

This result should be pretty unsurprising. China just spends a lot of money on research, much as the US has done in the past. The private sector doesn't do science. The only way you fund science is through the university system and government grants. And without advances in basic science, the rate of technological improvement will slow to a crawl. That pipeline begins with research in the basic and applied sciences.

You see, the thing is that the human brain, as a biochemical machine, is finite in its creative and insightful capabilities (in how much of the scientific horizon it can cover), no matter how much we might wish otherwise. Even larger and well-coordinated teams will no longer bear fruit the way they once did, given the exponential growth in the complexity of scientific knowledge and the cost of maintaining it across generations.
Science works to improve the volume and accuracy of knowledge (i.e. at a societal level), where there are indeed some scaling problems - especially in more applied sciences.

Engineering has a secret weapon, which is called abstraction. It's only by way of abstraction - interfaces and compartmentalization - that designs can continue growing in sophistication and complexity. This process does have its weaknesses, but it's quite robust and seems able to grow without bound.

And the real problems of population growth (another +80-82 million in 2024 alone, ten new cities the size of New York at once, though so far without its consumption level, if you believe the figures in the press) keep growing. At the same time, the overall efficiency of each individual gradually falls as scientific and technological progress grows more complex
Societies do face scaling challenges, but cities are a lot more efficient than the same number of people living in a suburban or rural setting. You can easily see this in New York City's relatively low per-capita energy usage, compared with elsewhere in the US.

As for efficiency, economists like to use the term "worker productivity", which is definitely not plummeting. Specialization means that not everyone needs to know everything.

Humanity has nearly tripled in number since the first flight into space.
Population growth is leveling off. One of the biggest problems is actually how the population is distributed, with a lot of wealthy countries facing the prospect of population declines, while the highest growth is in the poorest countries.

But we are still on Earth, and there is not even a colony on the Moon, which is literally next door
Colonizing space will never solve the big problems we have on Earth. There might be real value in mining the Moon or asteroids, but there aren't many metallic asteroids and they're all pretty far away, while mining the Moon is quite an undertaking.

I hope that at least the silicon problem will be solved, because humanity needs to greatly increase its computational (and energy) "power-to-weight ratio" per individual, so that this finally begins to bring qualitative changes to civilization with a new frontal push...
Advancements in science and technology have the potential to reduce human resource consumption, relative to quality of life. That's not so much where the focus is, but it's possible.
 
The point is that each next step will be smaller and less valuable to consumers in these terms, while costing chipmakers more.
Part of the problem we currently face is the massive demand for AI hardware. That's distorting prices and limiting availability of cutting-edge nodes beyond what would normally happen. It's an outlier, not a trend (hopefully).

or they will start building elements of artificial aging into chips (and into software) with increasing persistence.
Nope. As long as people have a choice, they won't accept artificial aging in silicon. Unfortunately, chips do wear out, and newer chips face this problem much more than older ones did. NAND is a prime example, but similar things happen with DRAM and even logic.

The bigger issue with forced obsolescence is just companies dropping software support for older devices. This only bothers me to the degree that those older devices are still viable.
 
No, it's not CPUs TSMC was making for them. If you recall, Intel had a major fab crunch back then, because they had expected their 10 nm production lines to be handling a fair bit of product volume. Due to the delays in getting a working 10 nm node, all of their products had backed up onto their 14 nm production, which was further stressed by the need for them to add more cores (hence, larger dies) in order to try and keep pace with AMD.
My post was 4 sentences long. You couldn't even make it to the 2nd one?

As recently as 2019, before TSMC was fabbing CPUs for Intel, Intel was still a bigger client than AMD was.
 