News TSMC Will Reportedly Charge $20,000 Per 3nm Wafer


bit_user

Champion
Ambassador
Taiwan's monopoly on chips, motherboards and many other electronics is bad for everyone.
A technological lead isn't the same as a monopoly. Samsung and Intel aren't so far behind TSMC, in the grand scheme of things.

As for motherboards and such, most of the actual manufacturing happens in mainland China, but also some in places like Vietnam and Thailand. There are also mainland Chinese brands, but they're not sold in the US.

I found it interesting that Raja Koduri mentioned working with Indian graphics card makers in the future.

It's bad for human development worldwide.
LOL.
 

bit_user

In the early 1960's the company pulled it off the shelf and sold it for over 10 years. We are headed in the same direction with IC's. The Intel I-19 and the Ryzen 17000 may have 20 year lifetimes. There will still be a place for exotic high end machines, but their use will be limited. Intel's tic-toc might still happen, with a new rev every 5 years.
Technology is a feedback loop. Faster, more powerful computers enable us to improve design and manufacturing, which has produced the miraculous pace of technological development we've witnessed.

For instance, AI is now being used to speed & improve chip layout. Quantum computing will help push the envelope in semiconductor materials and manufacturing. And we haven't even begun to talk about higher-level chip design or new ISAs and programming models.

The smaller, cheaper, and more efficient technology becomes, the more aspects of society it permeates, which creates even more incentives to optimize it. So, no. I don't see chip design or manufacturing stalling out for the foreseeable future. About the only thing that can stop it is a global cataclysm, of some sort.
 

bit_user

I read once, people did things with Amiga that it wasn't designed for. Simply because it was as it is. No option to upgrade. Now they just slam it with more compute power and don't care about the rest.
There's continual optimization work that goes into operating systems, device drivers, compilers, etc. Big players like Google, Facebook, and Amazon spend many millions of dollars per year on that, because it saves them even more in operating costs.

Application developers can indeed be lazy, especially when someone else is paying the hardware costs. You do have a point that if the hardware stalled out, they'd be under more pressure to look for optimizations and hacks, in order to continue innovating and being able to add features.
 

bit_user

I suspect that gaming as a major technology driver is over. The next big thing is probably self driving vehicles.
If you include robotics, then maybe. However, don't underestimate gaming: it's still the largest share of Nvidia's revenue, and the gaming industry now dwarfs the movie industry in overall size.

On the other side, repurposing the powerful vehicle computers as gaming machines could be quite interesting.
Actually, there might be something to that. Image Synthesis is a growing application of deep learning, and perhaps isn't too far off from having direct applications in gaming. In some ways, DLSS is already doing this, but you could imagine something leaps and bounds beyond...

 

bit_user

That doesn't apply to GPU's as Nvidia and AMD stick to roughly the same die sizes from gen to gen.
Not really.

Let's look at Nvidia's top-end consumer dies:

Year-Month   Node          GPU     Size (mm^2)
2013-02      TSMC 28HP     GK110   561
2015-03      TSMC 28HP     GM200   601
2016-08      TSMC 16FF     GP102   471
2018-09      TSMC 12FN     TU102   754
2020-09      Samsung 8LPP  GA102   628
2022-10      TSMC 4N       AD102   609

...there's a decent amount of variation in there. Especially going from 471 to 754 mm^2, between two consecutive generations!

I won't go through the same full exercise with AMD, but you have Vega 64 at 486 mm^2, followed by Radeon VII at 331 mm^2, then Navi 21 comes along at 520 mm^2. So, they also do jump around.
 

bit_user

Intel's largest MAX GPUs have 16 compute tiles using TSM N5. The successor, Rialto, is on the roadmap with 20 compute tiles.
Yeah, but talking about HPC accelerators is a whole different market that most of us here aren't terribly concerned about.

More to the point, Cerebras has shown the sky is basically the limit on how big & expensive you can go, in those high-end specialty markets.
 

bit_user

Sandy Bridge 2700k - released Jan 2011 - $332
Kaby Lake 7700k - released Jan 2017 - $339

6 years, 6 releases, price increased by a total of $7. 2% price increase while inflation was 9% over the same time period.
There's only 5 generations' difference, and the reason for the price stability is that Sandy Bridge was made on 32 nm, while Kaby Lake used 14 nm. That enabled Intel to make a Sandy Bridge client die of 216 mm^2, while Kaby Lake's die was a mere 126 mm^2, just 58.3% as big.

Intel simply kept the savings and used it to line the pockets of the execs and shareholders.
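As a rough illustration of how much that shrink was worth at the fab, here's a back-of-the-envelope sketch. It assumes 300 mm wafers and uses a standard first-order edge-loss approximation; the die areas are the ones quoted above.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Common first-order approximation: candidate dies per wafer,
    minus a correction term for partial dies lost at the wafer edge."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

sandy_bridge = dies_per_wafer(300, 216)  # 32 nm client die
kaby_lake    = dies_per_wafer(300, 126)  # 14 nm client die

print(sandy_bridge, kaby_lake)
print(f"{kaby_lake / sandy_bridge:.2f}x more dies per wafer")
```

With those inputs, the smaller die yields roughly 1.8x as many chips per wafer. That understates nothing about wafer prices (14 nm wafers cost more than 32 nm ones), but it shows why the bill of materials could fall even as the retail price held steady.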
 

elforeign

Distinguished
Oct 11, 2009
87
105
18,720
That's only true if the chips stay the same size. However, while the new wafers are getting more expensive, they're also denser. This enables existing chips to be fabbed on the new nodes at the same or lower prices. The main way that chips are getting more expensive is that they're increasing in complexity enough to nullify the density-adjusted cost savings, and then some. Plus, there are various non-recurring costs, which are also going up.

What I'd have really liked to see is a table of the per-transistor cost, for all the process nodes. It was awesome to see the wafer prices all listed, but that doesn't tell us the whole story.
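As a sketch of what such a comparison could look like, here's a toy per-transistor cost calculation. Only the $20,000 N3 wafer price comes from the article; the N5 wafer price, both density figures, and the 100 mm^2 reference die are illustrative assumptions, not reported data.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # First-order approximation, including wafer-edge loss
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_billion_transistors(wafer_price_usd, density_mtr_mm2,
                                 die_area_mm2=100, wafer_mm=300):
    dies = dies_per_wafer(wafer_mm, die_area_mm2)
    transistors_b = density_mtr_mm2 * die_area_mm2 / 1000  # billions per die
    return wafer_price_usd / dies / transistors_b

n5 = cost_per_billion_transistors(16_000, 138)  # assumed N5 price & density
n3 = cost_per_billion_transistors(20_000, 215)  # article's N3 price; assumed density
print(f"N5: ${n5:.2f}/Btr   N3: ${n3:.2f}/Btr")
```

With these made-up inputs, the dearer wafer still wins on a per-transistor basis; the table being wished for here would show exactly where (or whether) that stops holding.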


Chiplets help counter the issue of defects.

Thank you. I agree with you both on the density argument and the shift towards chiplets. But that just kicks the can down the road for a few more years before we're back at the same issue we face today: tomorrow's cutting edge at the angstrom scale, with industry-wide manufacturing at 7nm or below.

Costs will continue to increase, and the consumer will inevitably get the pass-through. Unless the industry figures out larger wafers (I won't hold my breath) and/or makes advances on the manufacturing front (the mirrors, lasers, and exotic materials) that bring down defect rates and complexity, I still don't really see how we get cost under control.
 

bit_user

I agree with you both on the density argument and the shift towards chiplets. But that just kicks the can down the road for a few more years before we're back at the same issue we face today.
Isn't it all about kicking the can, though? There could yet be materials and process breakthroughs that occur by then. Faster computers could help us unlock those breakthroughs, so it's not a pointless exercise to eke out a few more generations of improvements.

Costs will continue to increase, and the consumer will inevitably get the pass-through.
As long as per-transistor costs don't increase, we're mostly okay. When the per-transistor costs of new process nodes (once they've matured) start going up, that's when we get in trouble.

Unless the industry figures out either larger wafers (won't hold my breath)
From what I recall, that's what High NA (Numerical Aperture) technology is about.

and/or has advances on the manufacturing front with respect to the mirrors, lasers and exotic materials to bring down defect rates
I think TSMC's defect rates are actually pretty good.



This article claims EUV will improve Intel's yields:



and complexity then I still don't really see how we get cost under control.
I think the increase in costs has more to do with the number of steps involved, as well as the increased cost of the equipment. When you tie up a more expensive assembly line for a longer span of time, the effect on cost is multiplicative.

Of course, then you have to divide that by your yield. So, if your yield is poor, that can definitely be a deal-breaker.
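A minimal sketch of that divide-by-yield effect, using the classic Poisson yield model. The wafer price, defect density, and die sizes here are made-up round numbers for illustration only.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # First-order approximation, including wafer-edge loss
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_price, die_area_mm2, defects_per_cm2):
    # Poisson yield model: fraction of dies with zero defects
    yield_frac = math.exp(-defects_per_cm2 * die_area_mm2 / 100)
    return wafer_price / (dies_per_wafer(300, die_area_mm2) * yield_frac)

small = cost_per_good_die(20_000, 100, 0.1)
big   = cost_per_good_die(20_000, 600, 0.1)
print(f"100 mm^2: ${small:.0f}/good die   600 mm^2: ${big:.0f}/good die")
```

With these numbers, the 600 mm^2 die costs well over 6x as much per good die as the 100 mm^2 one, even though it's only 6x the silicon. That superlinearity is exactly what chiplets attack.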
 

elforeign

Isn't it all about kicking the can, though? There could yet be materials and process breakthroughs that occur by then. [...]

Thanks for taking the time with the reply. All good points regarding those concerns.

High NA:
A high-NA EUV lithography scanner is projected to print the most critical features of 2nm (and beyond) logic chips in a smaller number of patterning steps.

The transition towards high-NA lithography is again justified by the Rayleigh equation. This provides a second knob for improving the resolution: increasing the numerical aperture (NA) of the projection lens. The NA controls the amount of light (more precisely, the number of diffraction orders) that is used to form the image. Which means it also determines the quality of the image.

Transitioning to higher NA imaging equipment has been applied before. Remember the move from 193nm dry to 193nm immersion lithography. At that time, the optical trick of replacing the air between lens and wafer with water allowed a 45% increase in NA.

In essence, higher NA will enable the scanners to print finer lines, which will reduce the need for multiple patterning steps and thus the number of defects in the process, leading to reduced manufacturing costs.

This video at 3:49 explains it well:
View: https://www.youtube.com/watch?v=en7hhFJBrAI&t=721s


In the case of EUV, ASML will move from the current 0.33 to 0.55 NA (i.e., a 67% increase in NA). This will be achieved by redesigning the optics within the lithography system. 0.55 NA EUV lithography promises to ultimately enable 8nm resolution. This corresponds to printing lines/spaces of 16nm pitch in one single exposure.
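For reference, the Rayleigh criterion mentioned above is just R = k1 * λ / NA. Plugging in EUV's 13.5 nm wavelength with an assumed process factor k1 of 0.33 reproduces the quoted resolutions:

```python
def rayleigh_resolution(wavelength_nm, na, k1=0.33):
    """Minimum printable half-pitch: R = k1 * lambda / NA.
    k1 is a process-dependent factor; 0.33 is an assumed value here."""
    return k1 * wavelength_nm / na

low_na  = rayleigh_resolution(13.5, 0.33)  # current EUV scanners
high_na = rayleigh_resolution(13.5, 0.55)  # high-NA EUV
print(f"0.33 NA: {low_na:.1f} nm   0.55 NA: {high_na:.1f} nm half-pitch")
```

The 0.55 NA case works out to about 8 nm half-pitch (16 nm line/space pitch in a single exposure), matching the figure quoted above.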
 

bit_user

Thanks for taking the time with the reply. All good points towards the concerns.
Sure, and thanks in kind!

FWIW, I'm just a forum nobody. The "Ambassador" title used to be called "Herald", and it's just a designation that some of us old-timers got after posting a lot on here. Think of it as a step below Moderator.

High NA:
A high-NA EUV lithography scanner is projected to print the most critical features of 2nm (and beyond) logic chips in a smaller number of patterning steps.
...
In essence, higher NA will enable the scanners to print more fine etched lines which will reduce the need of multiple steps in patterning and thus reduce the number of defects in the process leading to reduced manufacturing costs.
Yes, you're right. I remembered it was supposed to improve costs, just forgot exactly how.



From what I'm reading, the high-NA production equipment will be even more expensive. I hope the improvements in production speed and yield outweigh the increase in equipment cost...
: /
 

Deleted member 431422

Guest
High NA:
A high-NA EUV lithography scanner is projected to print the most critical features of 2nm (and beyond) logic chips in a smaller number of patterning steps. [...]
That's what I remember reading all this time: EUV, reduced number of manufacturing steps, lower costs of production. Now it's EUV = increased costs of production. I don't understand which it is anymore.
 

elforeign

That's what I remember reading all this time. EUV, reduced number of manufacturing steps, lower costs of production. Now it's EUV = increased costs of production. I don't understand which is it anymore.

It seems the reduction in complexity on the manufacturing side is being outpaced by the increased complexity in the tolerances and design restrictions imposed by our current materials and technologies.

The cost of developing the manufacturing improvements and building out the machines is, in some respects, greater than the cost reduction they bring to printing the circuits.

I think a different way of seeing it is that without the current improvements in manufacturing technology, costs would be ballooning exponentially due to catastrophically low yields.
 

bit_user

That's what I remember reading all this time. EUV, reduced number of manufacturing steps, lower costs of production.
Hmm... I've been reading that EUV increases the number of steps, which is one reason why Intel avoided it for 10 nm (to their detriment).

It's specifically High NA which regains some of that lost ground.
 

umeng2002_2

Prominent
Jan 10, 2022
128
75
670
All I hear about is TSMC and Jensen complaining about the expense of making chips. Meanwhile, AMD, Intel, and Apple are keeping CPU and phone prices consistent with basic inflation, or giving discounts.

Frankly, I think TSMC and Jensen are having a negotiation through the news media. Maybe the problem is TSMC and Nvidia.
 

bit_user

All I hear about is TSMC and Jensen complaining about the expense of making chips. Meanwhile, AMD, Intel, and Apple are keeping CPU and phone prices consistent with basic inflation, or giving discounts.
Nvidia used a 4nm node for their 4000-series cards, while AMD used 5nm and 6nm for their 7000-series. When you also consider Nvidia's larger die (609 mm^2 vs 522 mm^2), I think that probably accounts for a lot of the price difference between the RTX 4090 and the 7900 XTX.

As for phone SoCs, they're typically a lot smaller than GPUs. Apple's latest A16 phone SoC has only 16 B transistors, whereas Nvidia's AD102 (powering the RTX 4090) has 76.3 B. So, the manufacturing cost of the SoC won't be as dominant.

Then, consider that a graphics card has other expensive components (the latest GDDR memory, VRMs, thermal solution) and the partner + channel markup. Compare that with Apple basically making the whole device in house, and selling many of them through its own stores and website.

As a point of reference, Raptor Lake i9 is only 257 mm^2 and is made on a mature Intel 7 node. So, much smaller than the 609 mm^2 AD102, before you even account for the node difference. And Zen 4 chiplets are only 70 mm^2 on the fairly mature TSMC N5 node, with their IO Die being 125 mm^2 on TSMC N6. And again, when you compare CPU and GPU prices, remember that the GPU includes more than just the chip, plus there's another intermediary.
 

Deleted member 431422

Guest
Nvidia used a 4nm node for their 4000-series cards, while AMD used 5nm and 6nm for their 7000-series. I think that probably accounts for a lot of the price difference between the XTX and the 4090.
....

I did not know that. It's actually impressive for the RX 7900 XTX to hold its ground with the RTX 4080, it being on a less dense node and all. In the end no one cares about that, but still, it restored some hope in AMD GPUs.
 

bit_user

It's actually impressive for the RX 7900 XTX to hold its ground with the RTX 4080, it being on a less dense node and all.
Well, Navi 31 has 58 billion transistors (522 mm^2 combined area), while the 4080's AD103 has only 46 billion transistors in 379 mm^2.

The node difference accounts for something, but don't forget that Nvidia was on an inferior node than AMD for both of the previous generations. In fact, you could stretch it to 3, since the GTX 1000-series was on 16 nm, when Vega was on 14 nm.

Perhaps a more significant factor is that RX 7000-series GPUs apparently shipped with some significant hardware bugs that resulted in certain functional units being disabled. AMD's best hope is to fix them in a "refresh" iteration.
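Plugging in those figures gives the implied transistor densities. It's a rough comparison, since Navi 31's combined 522 mm^2 mixes the N5 compute die with lower-density N6 cache dies:

```python
chips = {
    # name: (transistor count, die area in mm^2), figures from the posts above
    "Navi 31 (N5 GCD + N6 MCDs)": (58e9, 522),
    "AD103 (TSMC 4N)":            (46e9, 379),
}

# million transistors per mm^2
densities = {name: t / area / 1e6 for name, (t, area) in chips.items()}
for name, d in densities.items():
    print(f"{name}: {d:.0f} MTr/mm^2")
```

That works out to roughly 111 vs 121 MTr/mm^2: closer than the node names might suggest, which supports the point that the node gap alone doesn't explain the difference.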
 
