TSMC's wafer pricing now $18,000 for a 3nm wafer, increased over 3X in 10 years: Analyst

Big tech got greedy during COVID and is looking to maintain that bottom line at all costs. In fact, some (AMD/Nvidia) would rather maintain a high margin / low volume business model. The consumer is being given short shrift.

Nevertheless, I feel we are in a bubble of sorts, and big tech is getting high on its own supply. As they say: vote with your wallets.
 
Big tech got greedy during COVID and is looking to maintain that bottom line at all costs. In fact, some (AMD/Nvidia) would rather maintain a high margin / low volume business model. The consumer is being given short shrift.

Nevertheless, I feel we are in a bubble of sorts, and big tech is getting high on its own supply. As they say: vote with your wallets.
The consumer can't really be given short shrift. Someone has to want to buy something, or the money doesn't flow. It must be that what the consumer is buying is a product further downstream of AMD/Nvidia, if they are no longer selling B2C.
 
So if the Apple A18 Pro die area is 105 mm^2 and the cost is $0.25/mm^2, then the chip only costs ~$26, yet the phone costs $1,000.
You seem to be assuming every single mm^2 of the wafer is usable and they get perfect yield. Once you account for the overhead and yield issues, I'd guess it's more than $30. However, this doesn't include masks or packaging. Then, $Billions worth of NRE (Non-Recurring Engineering) costs need to be factored in.
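To put rough numbers on that, here's a back-of-the-envelope sketch. The die area comes from the post above; the defect density and the dies-per-wafer/yield formulas are generic textbook approximations I picked for illustration, not Apple's or TSMC's actual figures:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Common approximation: usable area / die area, minus an edge-loss term."""
    d = wafer_diameter_mm
    return math.floor(math.pi * (d / 2) ** 2 / die_area_mm2
                      - math.pi * d / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Simple Poisson yield model: Y = exp(-area * defect_density)."""
    return math.exp(-(die_area_mm2 / 100) * defects_per_cm2)

wafer_price = 18_000   # $ per 3nm wafer, from the article
die_area    = 105      # mm^2, A18 Pro estimate from the post above
defect_rate = 0.1      # defects/cm^2 -- assumed, purely illustrative

gross = dies_per_wafer(die_area)
good  = gross * poisson_yield(die_area, defect_rate)
print(f"gross dies: {gross}, good dies: {good:.0f}, "
      f"cost per good die: ${wafer_price / good:.2f}")
```

Even with an optimistic defect density, that lands in the low $30s per good die, before masks, packaging, test, and NRE.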

Once you add in all the other parts and assembly costs, I'm sure you're looking at a couple hundred $, easily. That's before Apple adds their profit margins, and then the retailer or carrier adds theirs.

Is it overpriced? Yes, how else do you think Apple got to be one of the most valuable companies? However, it's hardly as if their markup is 3846%!
 
Big tech got greedy during COVID and is looking to maintain that bottom line at all costs. In fact, some (AMD/Nvidia) would rather maintain a high margin / low volume business model. The consumer is being given short shrift.
I think it's funny to hear people say this, when AMD has cut prices on CPUs in every generation since Zen 3, in spite of inflation that's hitting them just like everyone else.

I also don't see their GPUs as being so overpriced. Before the pandemic, their flagship was the $700 Radeon VII. Now, you can get a Radeon RX 7900XTX for $870. Okay, that's 24% more expensive, but that's exactly what it should cost, if you adjust for inflation! According to the US Inflation Calculator, the inflation from 2019 to 2024 was 23.4%.
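A quick sanity check of that inflation adjustment, using only the figures cited above:

```python
# Radeon VII (2019) MSRP adjusted by the quoted 23.4% 2019-2024 inflation figure.
launch_price_2019 = 700
inflation_2019_2024 = 0.234

adjusted = launch_price_2019 * (1 + inflation_2019_2024)
print(f"${launch_price_2019} in 2019 is about ${adjusted:.0f} in 2024 dollars")  # ~$864
# vs. the ~$870 RX 7900 XTX street price quoted above -- roughly flat in real terms.
```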
 
People try to make excuses about the COVID era. Nobody works for free: if demand goes up, prices rise.
PCB output at the time was very low.
Electronics were in shortage.
Transport was a complete mess back then.
It's not a premium price when there are struggles everywhere. Nvidia keeps prices sky-high because they have no competition.
AMD still comes up short in every sector, and don't worry, they're basically an enterprise-only business now.
The only hope is Intel, even in the middle of their own "Faildozer" era.
 
Once you add in all the other parts and assembly costs, I'm sure you're looking at a couple hundred $, easily.
There's no BoM analysis for the 16 that I could find, but the couple of iPhone 15 teardowns I did find indicated ~$550 for the 15 Pro Max, which started at $1,200. Prices have definitely been going up on the material side while phone prices have somewhat stagnated. They're still overpriced, but at least the gap between material cost and end price has been getting smaller.
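For what it's worth, the ratio implied by those two numbers (both are rough teardown estimates, so take this as ballpark only):

```python
# BoM estimate vs. launch price for the iPhone 15 Pro Max, figures from the post above.
bom_estimate = 550
launch_price = 1200

print(f"BoM share of retail: {bom_estimate / launch_price:.0%}")      # ~46%
print(f"retail price over BoM: {launch_price / bom_estimate:.2f}x")   # ~2.2x
```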
 
Source? In post #6 @spongiemaster directly contradicts this claim.
Sure, here is my napkin math.

The advertised 300 mm wafer price of TSMC 5nm is $16,988, and it yields a transistor density of 138.2 Mtr/mm^2 with 70,695 mm^2 per wafer.

Thus $16,988 / (138.2 Mtr/mm^2 * 70,695 mm^2) = $1.74 per billion transistors.

TSMC N3? (The article doesn't state which N3 process, so I'll assume the least dense N3B process, which has been replaced by N3E.) The 300 mm wafer price is $18,000, and it yields a transistor density of 197 Mtr/mm^2.

Thus $18,000 / (197 Mtr/mm^2 * 70,695 mm^2) = $1.29 per billion transistors.

And let's do the worst-case speculated price while utilizing the lowest density N3 process currently available: $25,000 per N3E 300 mm wafer at 216 Mtr/mm^2 = $1.64 per billion transistors.

So if a chip design made on TSMC 5nm was directly ported to 3nm, the cost per chip should theoretically go down since the chip will occupy less mm^2.

Now my stipulation only concerns 5nm to 3nm. 7nm to 5nm goes from $1.44 per Btr to $1.74 per Btr. The only reason 3nm is a better transistor value is that 5nm had a disproportionate price increase from N7 ($9,346), while N3 had a comparatively modest price increase.
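For anyone who wants to rerun it, here is the same napkin math in one place. The wafer prices and densities are the figures claimed in this post; the N7 density is my own assumption of ~92 MTr/mm^2, chosen only to reproduce the $1.44 figure, and yield, die-size overhead, masks, and NRE are all ignored, as above:

```python
WAFER_AREA_MM2 = 70_695  # area of a 300 mm wafer, as used in the figures above

def dollars_per_billion_transistors(wafer_price, density_mtr_per_mm2):
    # transistors per wafer = density (Mtr/mm^2) * 1e6 * wafer area (mm^2)
    billions = density_mtr_per_mm2 * 1e6 * WAFER_AREA_MM2 / 1e9
    return wafer_price / billions

nodes = {                       # ($ per wafer, MTr/mm^2) -- figures claimed above
    "N7":           (9_346, 92),     # density assumed, picked to reproduce the $1.44 figure
    "N5":           (16_988, 138.2),
    "N3 (197)":     (18_000, 197),
    "N3E at $25k":  (25_000, 216),
}
for name, (price, density) in nodes.items():
    print(f"{name:12s} ${dollars_per_billion_transistors(price, density):.2f} per Btr")
```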

 
Why is anyone surprised? Chips on a 3nm node pack in way more transistors than on 28nm, so you have a choice: make the chips smaller with the same processing power, or keep the same size with a lot more processing power. Given the propensity of apps to add features and grow increasingly complex, the latter seems to be happening, hence costs increase. But with some new smartphones hitting $1,500, there will be a point where people stop paying.
I don't think it is surprising that costs and prices go up as it gets harder to shrink transistors. However, a large part of the high price is due to TSMC's monopoly. Essentially, they can charge whatever they want and raise prices whenever they like, and they still get overwhelming orders. If you consider the price increases we've read about online from 2021/2022 till now, the compounding effect is substantial.

At this point, I believe most people no longer upgrade their devices on an annual cadence. It is too expensive for too little benefit. The demand for chips is driven by companies pushing very hard to make AI attractive and worth paying for. So far, I don't see that happening.
 
Sure, here is my napkin math.

The advertised 300 mm wafer price of TSMC 5nm is $16,988, and it yields a transistor density of 138.2 Mtr/mm^2 with 70,695 mm^2 per wafer.

...

TSMC N3? (The article doesn't state which N3 process, so I'll assume the least dense N3B process, which has been replaced by N3E.) The 300 mm wafer price is $18,000, and it yields a transistor density of 197 Mtr/mm^2.
Where did you get these two density figures? According to this, N3E only has an advertised density increase of 1.3x over N5:

Also, there seems to be an assumption that N3B is the cheaper node, but N3E sounds like it's the cheaper of the two, and maybe what the $18k figure is citing. Anton characterized N3E as:

"a relaxed version of N3B, eliminating some EUV layers and completely avoiding the usage of EUV double patterning. This makes it a bit cheaper to produce, and in some cases it widens the process window and yields, though it comes at the cost of some transistor density."

Source: https://www.anandtech.com/show/2139...nology-on-track-for-mass-production-this-year

Another possible issue is whether you're comparing N5 pricing from the same level of maturity as N3E is currently. I think N5 was almost certainly newer in Q1 of 2020 than N3E is now. That's making N5 look more expensive than it should be, if what we want is an apples-to-apples comparison.
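To make the discrepancy concrete, here is what the advertised 1.3x scaling would imply if you start from the 138.2 MTr/mm^2 figure claimed above (both inputs could be off, so treat this only as a rough cross-check):

```python
n5_density = 138.2   # MTr/mm^2, the N5 figure claimed in the quoted post
n3e_scaling = 1.3    # advertised density gain of N3E over N5, per the AnandTech article

implied_n3e = n5_density * n3e_scaling
print(f"implied N3E density: ~{implied_n3e:.0f} MTr/mm^2")  # ~180, not 197 or 216
```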
 
Sure, here is my napkin math.
You're not taking into account die size or yields, which dictate wasted silicon. Transistor density is also a moving target that depends entirely on the design and what could be shrunk. TSMC's revamped N3 loosened SRAM density to match N5 for gains in other areas. Unfortunately, there's no straightforward way to say one node is definitively better than another with regard to cost, except on a case-by-case basis.

Some real world density examples based on third party sourced measurements and reported transistor counts:

Apple:
A14 (N5) ~134.1MTr/mm^2
A15 (N5) ~139.3MTr/mm^2
A16 (N4P) ~141.6MTr/mm^2
A17 Pro (N3) ~183MTr/mm^2

AMD:
Zen 4 CCD (N5) ~91.5MTr/mm^2
Zen 5 CCD (N5 or N4, haven't seen definitive confirmation) ~109.2MTr/mm^2
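For reference, the densities above are just reported transistor counts divided by measured die areas. A minimal sketch with two of the Apple parts; the transistor counts are vendor-reported and the die areas are third-party measurements, so treat the inputs as approximate:

```python
# density (MTr/mm^2) = transistor count / die area / 1e6
chips = {                       # (reported transistors, measured die area in mm^2)
    "A14 (N5)":     (11.8e9, 88.0),
    "A17 Pro (N3)": (19.0e9, 103.8),
}
for name, (transistors, area_mm2) in chips.items():
    print(f"{name}: ~{transistors / area_mm2 / 1e6:.1f} MTr/mm^2")
# -> ~134.1 and ~183.0, matching the list above
```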
 
I hear this argument, but one problem we face is that app developers tend to use and target newer devices. So, they pack in features and only tune performance well enough to run on those devices. This leads to older phones feeling slower and more sluggish over time.

It's a similar story with web developers. So, if you use your phone to browse the web, it'll be getting increasingly bogged down by sites using more videos and other heavy-weight web features.

I'd say it's probably easier to get along with a low cost phone, if you upgrade more frequently. I had a 6 year old phone that I finally upgraded, last year, but it was a flagship model when it launched (or at least had a top-tier SoC). The other thing that was starting to be a concern is that it got only one Android update and some apps don't support older Android versions.
I also bought a flagship phone back in 2017 and the device served me well over the years but the camera quality was not up to par anymore and daily driving of apps and browsing experience got so bad that I had to buy a new one last year. I’m hoping this 15 Pro will last as long.
 
Sure, here is my napkin math.

The advertised 300 mm wafer price of TSMC 5nm is $16,988, and it yields a transistor density of 138.2 Mtr/mm^2 with 70,695 mm^2 per wafer.

Thus $16,988 / (138.2 Mtr/mm^2 * 70,695 mm^2) = $1.74 per billion transistors.

TSMC N3? (The article doesn't state which N3 process, so I'll assume the least dense N3B process, which has been replaced by N3E.) The 300 mm wafer price is $18,000, and it yields a transistor density of 197 Mtr/mm^2.

Thus $18,000 / (197 Mtr/mm^2 * 70,695 mm^2) = $1.29 per billion transistors.

And let's do the worst-case speculated price while utilizing the lowest density N3 process currently available: $25,000 per N3E 300 mm wafer at 216 Mtr/mm^2 = $1.64 per billion transistors.

So if a chip design made on TSMC 5nm was directly ported to 3nm, the cost per chip should theoretically go down since the chip will occupy less mm^2.

Now my stipulation only concerns 5nm to 3nm. 7nm to 5nm goes from $1.44 per Btr to $1.74 per Btr. The only reason 3nm is a better transistor value is that 5nm had a disproportionate price increase from N7 ($9,346), while N3 had a comparatively modest price increase.

Someone else already did the math for you.


The density improvements are, at best, slightly better than the wafer cost increases. With the FinFlex 2-1 implementation, density improvements are ~56%, with a ~35% cost increase. This results in an ~15% cost per transistor improvement, the weakest ever scaling for a major process technology in 50+ years.

The other implementations are either flat on cost per transistor or even negative, but come with greater per-transistor speed improvements. Note that the above improvements, gen-on-gen, are measured with the Arm Cortex A72. The density improvement will vary based on what IP is being implemented.

Most chip designs will not achieve the 56% density improvement, but instead a much lower ~30%. This would imply a cost per transistor increase, but companies are adjusting designs to ensure that does not happen. This will be explained in the process technology section.
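The arithmetic behind those quoted percentages, as I read them (a sketch using the 56%/35% best case and the ~30% typical density gain from the excerpt):

```python
def transistors_per_dollar_gain(density_gain, wafer_cost_increase):
    """How many more transistors you get per dollar on the new node vs. the old one."""
    return (1 + density_gain) / (1 + wafer_cost_increase) - 1

# Best case from the excerpt: ~56% density gain against a ~35% wafer cost increase.
print(f"best case:   {transistors_per_dollar_gain(0.56, 0.35):+.0%}")  # about +16%, the '~15%' above
# Typical design: only ~30% density gain at the same cost increase.
print(f"typical die: {transistors_per_dollar_gain(0.30, 0.35):+.0%}")  # about -4%, i.e. cost per transistor rises
```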

The other problem with your math is that it only considers the actual production cost. The cost of implementing more advanced node designs has also significantly increased which needs to be factored in when calculating the final cost of a product.

The decision to move to 3nm or stay within the N5 family becomes even more tricky when the cost of implementing a chip on the most advanced process technology becomes even higher.

We explained this issue in detail above, but the fixed costs of implementing a product in the newest process technology are getting so large that it represents a massive risk for companies. Delays get more tricky, respins get more costly, and worst of all, the volume required to achieve a cost-per-transistor improvement becomes higher and higher.

Without even doing any math, why did everyone except Apple bail on TSMC's first 3nm (N3B) node, electing to stay with 5nm variations for another year or two and wait for cheaper 3nm nodes, if there were cost improvements to be had with N3B?

Nvidia is rumored to be moving back to Samsung and their 2nm process after this generation to cut costs because TSMC is getting too expensive. We wouldn't be hearing these things if TSMC prices were going down with new nodes.
 
why did everyone except Apple bail on TSMC's first 3nm (N3B) node, electing to stay with 5nm variations for another year or two and wait for cheaper 3nm nodes, if there were cost improvements to be had with N3B?
Intel is thought to have used N3B in the compute tile of Lunar Lake and now Arrow Lake. I'm not sure it was ever confirmed, however.
 
Intel is thought to have used N3B in the compute tile of Meteor Lake and now Arrow Lake. I'm not sure it was ever confirmed, however.
Meteor Lake was a combination of Intel 4 and TSMC 5 and 6. Intel is using N3B for Arrow Lake, but we know that wasn't the original plan. The plan was to use Intel 18A, but they ended up switching to TSMC at the last minute. The only reason Intel would do that is because something wasn't right with Intel 18A. Once that was the case, what were Intel's options? They were forced into using N3B. Arrow Lake on N3B is largely considered a major flop. Can you imagine how bad it would have been if they fell back to Intel 4 instead?
 
Meteor Lake was a combination of Intel 4 and TSMC 5 and 6.
Sorry, I meant to write Lunar Lake, not Meteor Lake. I know the difference, I just wrote the wrong one.

Intel is using N3B for Arrow Lake, but we know that wasn't the original plan.
But it was the original plan to use it for Lunar Lake. I think Intel used it for Arrow Lake partly because they had already done the work of porting the cores to N3B for Lunar Lake, so it wasn't much additional work to port the rest of the compute tile for Arrow Lake.

The plan was to use Intel 18A, but they ended up switching to TSMC at the last minute.
It couldn't have been last-minute, if they hadn't already done most of the work for Lunar Lake, well in advance of that point.

The only reason Intel would do that is because something wasn't right with Intel 18A.
Arrow Lake was meant to use 20A. One might believe they cancelled it because it was to be used pretty much exclusively for Arrow Lake's compute tile. When they did something similar for Meteor Lake on Intel 4, it ended up being uneconomical. Perhaps that's because they couldn't justify the work needed to refine Intel 4 and get the yields up, given that it was for just that one product.

Arrow Lake on N3B is largely considered a major flop.
Lunar Lake shows that the same cores on the same node can be successful, although that's in a different clock & power envelope.

IMO, most of Arrow Lake's problems have nothing to do with the compute tile being on N3B.
 
Meteor Lake was a combination of Intel 4 and TSMC 5 and 6.
I'm certain this was just a mistype and it was supposed to say Lunar Lake not Meteor Lake.
The plan was to use Intel 18A, but they ended up switching to TSMC at the last minute.
ARL was supposed to be the debut of 20A, but the node was canceled. We won't know whether or not Intel was lying about the reason until 18A launches.
They were forced into using N3B.
Nobody knows for sure whether it was N3B or N3E because Intel never specified though there was a singular LNL leak that indicated N3B. N3B makes a lot more sense for LNL than it does for ARL, but who knows contractually what the situation is as it hasn't been made public.
Can you imagine how bad it would have been if they fell back to Intel 4 instead?
If they were to use their own node it would have been Intel 3, and the fact that they didn't indicates there wasn't enough available volume or it would have taken too long to hit volume (LNL already being on a TSMC node undoubtedly made this process faster).
 
Misleading article. Per-transistor cost actually got cheaper! Also, leading-edge technology has its cost (and what they do is like something out of a sci-fi movie!). Performance and energy efficiency have also made a huge jump in the past 10 years. The article should be rewritten...
 
I hear this argument, but one problem we face is that app developers tend to use and target newer devices. So, they pack in features and only tune performance well enough to run on those devices. This leads to older phones feeling slower and more sluggish over time.

It's a similar story with web developers. So, if you use your phone to browse the web, it'll be getting increasingly bogged down by sites using more videos and other heavy-weight web features.

I'd say it's probably easier to get along with a low cost phone, if you upgrade more frequently. I had a 6 year old phone that I finally upgraded, last year, but it was a flagship model when it launched (or at least had a top-tier SoC). The other thing that was starting to be a concern is that it got only one Android update and some apps don't support older Android versions.
Yeah, the thing with apps not supporting older versions of Android really surprised me at first. I guess I was just used to the PC ecosystem, where some companies consider backwards and forwards compatibility to be almost sacrosanct.
 
Sorry, I meant to write Lunar Lake, not Meteor Lake. I know the difference, I just wrote the wrong one.


But it was the original plan to use it for Lunar Lake. I think Intel used it for Arrow Lake partly because they had already done the work of porting the cores to N3B for Lunar Lake, so it wasn't much additional work to port the rest of the compute tile for Arrow Lake.


It couldn't have been last-minute, if they hadn't already done most of the work for Lunar Lake, well in advance of that point.


Arrow Lake was meant to use 20A. One might believe they cancelled it because it was to be used pretty much exclusively for Arrow Lake's compute tile. When they did something similar for Meteor Lake on Intel 4, it ended up being uneconomical. Perhaps that's because they couldn't justify the work needed to refine Intel 4 and get the yields up, given that it was for just that one product.


Lunar Lake shows that the same cores on the same node can be successful, although that's in a different clock & power envelope.

IMO, most of Arrow Lake's problems have nothing to do with the compute tile being on N3B.

Yes, I meant 20A for Arrow Lake, not 18A. Intel has too many nodes in flight at once. In September of 2023, Intel showed a 20A wafer of Arrow Lake test dies, claiming they were on schedule.

https://www.tomshardware.com/news/i...er-with-20a-process-node-chips-arrive-in-2024

Unless that was complete BS at the time, Intel managed to go from "we're all good and on schedule" to "this isn't working, let's cancel 20A and switch to TSMC" within a year, and still released on schedule in October of 2024. By CPU development standards, that qualifies as last minute.

Based on the cost of N3B and how far along it looks like Intel got with 20A, it's hard to fathom that it was more economical to can their in-house node and switch to TSMC, unless, again, Intel was lying about their progress on 20A. Lying seems like the most likely scenario.

I didn't say anything about ARL being bad because it was on TSMC. I said it would have been even worse had they used what they had available from their own node portfolio after 20A was cancelled.
 
Planned obsolescence and security compliance are best friends. At some places as soon as a device hits end of life and stops getting new updates, it fails the device posture check and is no longer allowed to connect to resources.
But the only reason to drop the device is because it stopped getting updates. If it kept being supported at the same level, then it wouldn't need to get locked out. So, planned obsolescence isn't helping security. What's really happening is that these types of corporate network policies are serving to enforce planned obsolescence.

The only real security argument for planned obsolescence is if you imagine the manufacturer has a fixed amount of resources for supporting old devices. If the number of supported models is reduced, the quality of that support might improve. More likely, they'll just devote fewer resources to the endeavor.

The open source community has shown it's viable to support modern software on quite old hardware. Platforms more than 10 years old are quite well-supported, and this could easily apply to phones if manufacturers or lawmakers wanted it to.
 