News TSMC's 1.6nm node to be production ready in late 2026 — roadmap remains on track

This is the final confirmation that those names N5, N3, N2 have nothing to do with transistor/gate size. Why does Tom's use 1.6nm in the title when it isn't even 3nm? Why is the Apple SoC 3nm (N3) in every article, while Intel Arrow Lake is 7nm (Intel 4)? 😀
 
This is the final confirmation that those names N5, N3, N2 have nothing to do with transistor/gate size. Why does Tom's use 1.6nm in the title when it isn't even 3nm? Why is the Apple SoC 3nm (N3) in every article, while Intel Arrow Lake is 7nm (Intel 4)? 😀
It hasn't meant anything for a long time. No foundry names nodes that way anymore; nobody calls it "nanometers" or "angstroms", it's just Intel 18A, A16, SF1.4.
 
This is the final confirmation that those names N5, N3, N2 have nothing to do with transistor/gate size. Why does Tom's use 1.6nm in the title when it isn't even 3nm? Why is the Apple SoC 3nm (N3) in every article, while Intel Arrow Lake is 7nm (Intel 4)? 😀
Arrow Lake is TSMC N3, not Intel 4.
Intel no longer uses its own nodes for these; TSMC now makes all of Intel's chips (desktop and mobile).

Also, Intel 4 was less dense than TSMC's and Samsung's 5nm-class nodes. For relatively fair naming, it should be called Intel '5.5', not Intel 4.
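For what it's worth, here's a rough sketch of the arithmetic behind that kind of '5.5' figure. The density numbers are often-quoted public estimates for high-density logic cells, so treat this as illustrative only, not official data, and the scaling rule (node number tracking the square root of density) is just the assumption the comparison rests on:

```python
import math

# Hypothetical comparison: if node names tracked a linear feature size,
# transistor density would scale roughly as 1/size^2, so we can back out
# an "equivalent" node number from density alone.
# Densities are rough public estimates (MTr/mm^2, high-density logic cells).
densities_mtr_per_mm2 = {
    "TSMC N5": 138.0,   # used as the reference for the "5" in the name
    "Intel 4": 123.0,   # often-quoted estimate
}

ref_number = 5.0
ref_density = densities_mtr_per_mm2["TSMC N5"]

for name, density in densities_mtr_per_mm2.items():
    equivalent = ref_number * math.sqrt(ref_density / density)
    print(f"{name}: ~{density:.0f} MTr/mm^2 -> roughly an 'N{equivalent:.1f}'")
```

With those figures Intel 4 comes out around 'N5.3', in the same ballpark as the '5.5' above.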
 
While they might scale a little smaller, the limit is coming; they will run out of atoms!
They look at density increases from an areal point of view. So, if you start adding more logic layers, it looks like higher density even though it isn't, in three dimensions. However, doing things like that will justify continuing to use smaller numbers in the node names.
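As a toy illustration of that bookkeeping (all numbers hypothetical):

```python
# Stacking logic layers raises *areal* density (transistors per mm^2 of
# die footprint) while the true volumetric packing stays exactly the same.
transistors_per_layer_mm2 = 100e6   # hypothetical: 100 MTr/mm^2 per layer
layer_thickness_mm = 1e-3           # hypothetical layer thickness

for layers in (1, 2, 4):
    areal = transistors_per_layer_mm2 * layers          # per mm^2 of footprint
    volumetric = areal / (layers * layer_thickness_mm)  # per mm^3 of silicon
    print(f"{layers} layer(s): {areal:.1e}/mm^2 areal, {volumetric:.1e}/mm^3 volumetric")
```

The areal number doubles with every doubling of layers, while the volumetric one never moves.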

Then, there's the potential for new materials with better electrical performance to enable higher clock speeds, like the graphene research I think I recall seeing a while back. So, I'd be surprised if the treadmill doesn't keep running for at least another decade.

However, what definitely is happening is the amount of improvement from each new node is decreasing, while new node costs are simultaneously going up. So, large dies that can't be sold for exorbitant amounts of money might stay on older nodes, as we're seeing Nvidia do with Blackwell.

I wonder if Nvidia might even drop to a 3 year cadence, for consumer GPUs. If they can't offer compelling improvements after just two...
 
Meanwhile, TSMC took billions of taxpayer dollars for the Arizona foundry that won't even see N3 until 2028. What a bargain!
The way I look at it, you should consider what we did get. If Taiwan goes offline before then, at least AMD can keep making Zen 5 CPUs, because they use the N4P node being produced in that Arizona fab. There's a big difference between getting something vs. nothing, when it comes to vital infrastructure.

Of course, I'm applying CHIPS Act logic to a fab that was planned even earlier. Prior to CHIPS, I think the idea was just that making these products in the USA would be a financial win, though I'm not sure exactly for whom. A lot of times, when tax breaks are involved, they're at least as big as the additional taxes to be collected from the increase in economic activity. So, from the government's perspective, it's probably a wash.
 
Meanwhile, TSMC took billions of taxpayer dollars for the Arizona foundry that won't even see N3 until 2028. What a bargain!
They have to abide by the laws of the country they're headquartered in. The fab is also getting all of the important, expensive equipment required to make everything through A16. So, while not getting the leading-edge node isn't great, there's nothing hurting its long-term viability, which is the important part of the investment.
 
Scary to think what will happen when our ability to improve processing power finally hits an impassable wall.
I can think of a lot of things to worry about, well before that happens. And I'm pretty sure it won't be a wall, but just a point where anything beyond a meager pace of gains becomes impractically expensive.

Also, this:

[embedded link about the far-future limits of computation not shown]
At some point, I'd like to understand how they figure there's a 10^30 multiplier of achievable computation between the current ambient temperature of interstellar space and what it might settle out to, some tens of billions of years from now. Also, what does that say about computation at room temperature vs. what's achievable using more modest forms of refrigeration? Or is there some nonlinear increase that happens once it's cold enough for quantum computation to become practical?
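One hedged guess, purely my own back-of-envelope and not necessarily how they derived it: if the bound in question is Landauer's principle (erasing a bit costs at least k_B · T · ln 2 of energy), then achievable irreversible computation per joule scales as 1/T, and a 10^30 multiplier only appears if the ambient temperature eventually drops by about that many orders of magnitude:

```python
import math

# Landauer limit: erasing one bit costs at least k_B * T * ln(2) joules,
# so the maximum irreversible bit operations per joule scale as 1/T.
# The far-future temperature is a placeholder chosen only to show how a
# ~1e30 multiplier could fall out of this scaling.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def bit_erasures_per_joule(temp_k: float) -> float:
    return 1.0 / (K_B * temp_k * math.log(2))

for label, temp in [("room temperature", 300.0),
                    ("liquid nitrogen", 77.0),
                    ("deep space today (~CMB)", 2.7),
                    ("hypothetical far future", 2.7e-30)]:
    print(f"{label:>25} ({temp:g} K): {bit_erasures_per_joule(temp):.2e} erasures/J")
```

On that reading, going from 300 K to 77 K only buys about a factor of four, so the interesting gains would all live at the truly exotic cold end, assuming the 10^30 figure really is just this temperature scaling, which I can't confirm.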

Perhaps it comes down to a question of energy and what happens as an apparently endless demand for more computation grinds against the slowing of efficiency gains of silicon manufacturing technology. Perhaps we (or our AI overlords) cook the planet, which seems all the more ironic, in light of how detrimental heat apparently is to computational performance.

I wonder if AI will eventually colonize Neptune. Certain icy Jovian moons also come to mind, but I think the amount of radiation emanating from Jupiter might be a bit high.
 
2nm is only about 9 silicon atoms across... if the node size were actually 2nm and not marketing speak.

Atomic radius (empirical): 111 pm
Covalent radius: 111 pm
Van der Waals radius: 210 pm

While they might scale a little smaller, the limit is coming; they will run out of atoms!
+1, and theoretically switching to other atomic materials like boron or carbon would only yield another 25 to 40 pm reduction in atomic radius. As I understand it, right now boron has to be covalently bonded to nitrogen to make a usable silicon replacement, so its lattice structure is bigger. However, carbon nanotube transistors could in theory enable vertically integrated transistors with a planar area of 1 nm^2 (a carbon nanotube has a ~1 nm diameter, with the source and drain stacked above and below it, respectively, along the z-axis).
It'll be interesting to see what comes of things as the silicon limit continues to be pushed.
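A quick sanity check on those figures, using standard tabulated covalent radii (different references vary by a few pm):

```python
# Back-of-envelope: how many silicon atom diameters span 2 nm, and how much
# smaller boron and carbon atoms are than silicon. Radii in picometers.
covalent_radius_pm = {"Si": 111, "B": 84, "C": 76}

feature_nm = 2.0
si_diameter_nm = 2 * covalent_radius_pm["Si"] / 1000.0  # ~0.222 nm
print(f"2 nm is about {feature_nm / si_diameter_nm:.0f} silicon atoms across")

for element in ("B", "C"):
    delta_pm = covalent_radius_pm["Si"] - covalent_radius_pm[element]
    print(f"{element}: covalent radius ~{delta_pm} pm smaller than Si")
```

That lands at roughly 9 atoms across 2 nm and a 27 to 35 pm reduction for boron or carbon, consistent with the ranges above.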
 