News Intel Process Roadmap Shows 1.4nm in 2029, Two-Year Cadence

Assuming they can ever get to, or past, 10nm while actually providing supply. Oh, who are we kidding. They can't even meet demand on the much easier 14nm process, so how do we expect them to reach 1.4nm in the next 10 years when they've been trying to get past 14nm for 5 years already?
They can.
They've perfected the 10nm process, but literally had no competition.
Then they had that ~$1B settlement to pay Nvidia over licensing stuff...
Anyway, 10nm was about the smallest regular lithography could go.
Now they're using the EUV process, which lets them easily halve the feature size, as AMD has already shown with their 7nm (basically 14nm halved on a better laser).
All Intel has to do is swap out lasers. They have no real research to do, since they can now get ~3.5x more chips per wafer (with about 0.5x bad or lower-binned chips).
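For what it's worth, here is a rough sketch of the die-per-wafer arithmetic behind claims like that ~3.5x figure. It uses the common gross-die approximation; the 300 mm wafer is standard, but the die area and the two shrink factors are made-up illustrations, and yield is ignored entirely.

import math

def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Common gross-die approximation: wafer area / die area minus an edge-loss term.
    # Ignores yield, scribe lines, and the reticle limit.
    area_term = math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
    edge_term = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(area_term - edge_term)

old_die_mm2 = 150.0  # made-up die size on the older node
for label, area_scale in [("ideal full shrink (area / 4)", 0.25),
                          ("partial shrink (I/O and analog barely scale)", 0.50)]:
    old = gross_dies_per_wafer(300, old_die_mm2)
    new = gross_dies_per_wafer(300, old_die_mm2 * area_scale)
    print(f"{label}: {old} -> {new} dies per 300 mm wafer, ~{new / old:.1f}x")
# Prints about 4.3x for the ideal case and about 2.1x for the partial shrink, so how
# close you get to any headline multiplier depends on how much of the die actually shrinks.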
 
They have no real research to do, since they can now get ~3.5x more chips per wafer.
Um, yeah. Ok.
 
All Intel has to do is swap out lasers.
Go watch ASML's video on their 7nm-class EUV equipment... getting to 7nm requires pretty much new everything. The light source itself is ridiculously complex: a microscopic molten tin droplet gets ejected into a vacuum vessel, one laser pulse pre-heats it, then a much bigger laser pulse blows it up to produce the actual light flash used for lithography. That light flash then has to bounce off a series of mirrors that shape the beam and filter out unwanted wavelengths (EUV gets absorbed by ordinary lenses) before it reaches the reflective mask and gets shrunk down onto the wafer. All of the optics are wavelength-dependent and the beam shaping is source-dependent, so all of that has to be swapped out for optics with compatible materials, the right shapes, and whatever other properties matter. On top of that, all of it also has to happen with much tighter positioning and timing tolerances.
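To put rough numbers on how wavelength-dependent the whole optical chain is, here is a quick back-of-the-envelope using the Rayleigh resolution criterion (half-pitch ≈ k1 × wavelength / NA). The 193 nm / NA 1.35 and 13.5 nm / NA 0.33 figures are typical published values for ArF immersion and current EUV scanners; the k1 factor is an assumed process constant, so treat the outputs as ballpark only.

def min_half_pitch_nm(k1, wavelength_nm, numerical_aperture):
    # Rayleigh criterion: smallest printable half-pitch ~ k1 * wavelength / NA
    return k1 * wavelength_nm / numerical_aperture

k1 = 0.30  # assumed process factor; real values run roughly 0.25-0.40
print(f"193 nm ArF immersion (NA 1.35): ~{min_half_pitch_nm(k1, 193.0, 1.35):.0f} nm half-pitch")
print(f"13.5 nm EUV (NA 0.33):          ~{min_half_pitch_nm(k1, 13.5, 0.33):.0f} nm half-pitch")
# Single-exposure DUV bottoms out around ~40 nm half-pitch (hence multi-patterning),
# while the 13.5 nm EUV wavelength needs a completely different source and reflective optics.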
 
Do you have a source for this? Has Intel ever set out to develop a node exclusively for mobile/low power before? As far as I'm aware, they've used every one of their previous nodes across more or less their full product stack.
The sources were from more than 5 years ago, when 10nm was still a concept. At the time Ivy Bridge was on the market, AMD was floundering with Piledriver, people were waiting for the next Sandy Bridge, and there were a couple of quotes out of Intel indicating they were focused on making 10nm the node where they could bring desktop CPUs to mobile devices like tablets and phones and directly compete with ARM. People at the time were upset by those quotes because they made it clear Intel wasn't aiming at better desktop performance but at better efficiency and thermals. There were a lot of quotes from Intel around this time about basically making their desktop lineup draw the same power as their Atom lineup (I think that was their goal for 7nm).

This was what Intel was aiming for when 10nm was in development. They didn't consider that they might need to compete with AMD by the time 10nm hit the market. If you look at their mobile 10nm parts you'll see what I'm talking about: the clock speeds are down, the power efficiency is up, and while there is a nice jump in IPC, you see no appreciable improvement over the older-gen parts because of the lower clock speeds. From what my friends at Intel are saying (there is a fab not far from where I live, I've been there a few times as a subcontractor, and I have friends who know about some of the things going on in the company there), 10nm doesn't clock up because of its design; it just creates a ton of heat.

As for why 10nm was never released on desktop (nor will it be): the people in charge don't want to take a step backwards in desktop performance, and frankly the clock speeds they got out of 14nm were way higher than expected. Interestingly, there is a story behind that one as well. 14nm was actually delayed and Broadwell basically aborted because the clock speeds wouldn't come up (remember the Devil's Canyon refresh of Haswell?), and management didn't want to take a step back in performance. Anyway, they were able to solve the clock issues with 14nm, and that success caused Intel to stagnate on 14nm because they believed they could fix 10nm the same way. Eventually they gave up on it and redesigned 7nm from the ground up for higher clock speeds rather than better efficiency.

Anyway, half of my sources are articles from years ago, and the other half is workplace gossip from people working at an Intel fab near where I live. Of course it could be wrong, but those same people were right when 14nm was struggling, so I'll trust them about 10nm.