Intel Has 5 nm Processors in Sight

Indeed, scientific "laws" are really just observations of phenomena that have not been refuted or disproven yet. It's scientific law that the universal speed limit is the speed of light: nothing can go faster. Yet scientists are actively trying to find a means of meeting and surpassing it. That doesn't mean it's impossible, just that no one has found a way of reaching that threshold. Intel or someone else may very well find a way to get down to a fraction of 1nm. They just haven't gone there yet, so they haven't seen exactly what problems there are or whether they can work around them. Science doesn't work on certainty; it works on 99.9% surety. No scientist will ever tell you something absolutely will not happen, only that it is extremely unlikely, and that is only after it has been observed and experimented with for a long time. Even that 0.1% chance can turn into a breakthrough. That's what science is about.
 
[citation][nom]math1337[/nom]Remember when we were supposed to have 10 GHz processors?[/citation]

Technically, Intel might have hit 10GHz if it had stuck with NetBurst, or at least NetBurst-derived architectures, up until today, but it probably wouldn't be as practical as Sandy Bridge and Ivy Bridge.
 
There were reports about "Moore's Law" ending in 5-10 years back in 1988, when I got my Everex 386/20MHz computer (according to PC Magazine, it was the fastest personal computer for three months). The techs and scientists keep finding solutions to the problems that were claimed to be sure to kill Moore's observation.

Actually, Moore's "law" did fail in its original form. It stated that the number of transistors that can be inexpensively placed on a chip (IIRC) should double roughly every twelve months. We're down to something like every twenty-four to thirty months right now.
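To put the slowdown in perspective, here is a quick back-of-envelope sketch of how much the doubling period matters over a decade. The starting transistor count is an arbitrary placeholder; only the ratio matters.

```python
# How much does the doubling period matter over ten years?
# The starting count is arbitrary; only the growth ratio matters.

def transistors(start, months, doubling_period_months):
    """Project a transistor count assuming a fixed doubling period."""
    return start * 2 ** (months / doubling_period_months)

start = 1_000_000  # hypothetical starting count
ten_years = 120    # months

fast = transistors(start, ten_years, 12)  # the classic "every 12 months" reading
slow = transistors(start, ten_years, 24)  # the ~24-month pace observed lately

print(f"12-month doubling after 10 years: {fast / start:,.0f}x")  # 1,024x
print(f"24-month doubling after 10 years: {slow / start:,.0f}x")  # 32x
```

A 12-month cadence compounds to 1024x in a decade; a 24-month cadence to only 32x, which is why the distinction matters so much.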
 
Smaller chips, but the same poor graphics driver software, and OEMs will never utilize all of the chips' power. Once everyone can game at realistic levels, there will not be much demand for future processors. I suppose the server farms will always need more power-efficient processors, but on the consumer level, why bother?
 
[citation][nom]captaincharisma[/nom]in 2019 intel will be coming out with 5nm processors while AMD will be bankrupt[/citation]

AMD is picking up again, so it wouldn't make sense for them to go bankrupt. Besides, if AMD fails, then Intel will get hit with anti-trust lawsuits and we might end up with no major x86 CPU competition. Intel wouldn't allow that and would quite literally bail AMD out if it had to in order to keep AMD running.
 
[citation][nom]gigabyter64[/nom]If I remember correctly, wasn't Terminators based on 12nm chips, we are just about there!![/citation]

Where the heck did you hear that!?
 
I can't imagine how they're going to make a proper chip at 5nm. Isn't that the size where silicon reaches its limit and the structures start to fall apart?

I assume they are trying new tricks to move smaller. A new technique isn't good until it's proven in the market. Being conservative myself, I'll buy a rig with 14nm and use it for more than five years, and see how they apply those new tricks over that time. I won't get anything below 10nm unless it has gone through the market without problems for 3-5 years.
 
[citation][nom]blazorthon[/nom]Six to eight years from now is not really "in sight" in this industry. Anything and everything can change in less than half of that time as far as this information is concerned. Saying that this will happen by 2019 doesn't mean that it won't be delayed until around 2025 or something like that. I'm also not convinced that 14nm will even hit the market for Intel until 2014 given that it would mean that Intel has about a year to get both Haswell and the 14nm die shrink of it (Broadwell if I remember correctly) and the chances of Haswell having a full line-up by the end of 2013 aren't even looking very good given that Ivy only got CPUs from all families Celeron and up a few days ago.[/citation]
I'd only partially agree with you here. Intel plans its stuff very far out into the future. The 22nm tri-gate technology was in development since the '90s; IIRC lithography was in the hundreds of nanometers then.

So yeah...they may run into problems, but "in sight" is a believable statement.

And 14nm Broadwell-based chips ARE planned for 2014; 2013 will see Haswell, which is still 22nm.

I mean, yes, according to the pure tick-tock cycle we should have an official Haswell release by the end of this year, but I think it has lagged behind enough to make it just one iteration of the cycle per year.
In that sense, 2014 is "in time" for Broadwell. They already started building out the fabs a few months ago, so I don't anticipate much delay. It will probably launch in around the same time frame as Ivy did.

Assuming that the one-cycle-per-year pace holds, we'll have Skymont/Skylake in 2015/16 (can't remember the order), so 2016 = 10nm. Then 2018 should see 7nm, and 2020 should be the year for 5nm. So, as I see it, off by at least a year (from 2019).
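The timeline above can be sketched numerically. This assumes one full node shrink every two years, each scaling the feature size by ~0.7 (so area roughly halves); the years and the 22nm/2012 starting point follow the poster's guesses, not a confirmed roadmap, and marketing node names round differently than the raw math.

```python
# Rough sketch of the cadence argument: one full shrink (~0.7x linear)
# every two years, starting from 22nm in 2012. Illustrative only.

node = 22.0   # nm, Ivy Bridge era
year = 2012
roadmap = []
while node > 4:
    roadmap.append((year, round(node)))
    node *= 0.7   # a full node shrink scales linear dimensions by ~0.7
    year += 2     # one shrink per two-year cycle

for y, n in roadmap:
    print(f"{y}: ~{n} nm")
```

The projection lands on ~15, ~11, and ~8 nm where Intel's marketing says 14, 10, and 7, but it reproduces the punchline: at this cadence, 5nm arrives around 2020, not 2019.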

BTW, I wonder what would happen if you shrank the manufacturing process but kept the same die size...
 
[citation][nom]ojas[/nom]I'd only partially agree with you here. Intel plans their stuff very far out in the future. 22nm tri-gate stuff was in development since the 90's. iirc lithography was in the hundreds of nanometers then.So yeah...they may run into problems, but "in sight" is a believable statement.And 14nm Broadwell based chips ARE planned for 2014, 2013 will see Haswell, which is still 22nm.I mean, yes, according to the pure tick-tock cycle we should have an official haswell release by the end of this year, but i think it's lagged behind enough to make it just one iteration of the cycle per year.And in that sense 2014 is "in time" for Broadwell. They had already started manufacturing their fabs a few months ago, so i don't anticipate much delay. Probably launch around the same time frame as ivy did.Assuming that the one cycle per year holds, we'll have Skymont/Skylake in 2015/16 (can't remember the order) so 2016=10nm. Then 2018 should see 7nm, and 2020 should be the year for 5nm. So yeah off by a year at least as i see it (from 2019).BTW i wonder what would happen if you shrink the manufacturing process but keep the same die size...[/citation]

[citation][nom]article[/nom]Intel's manufacturing cadence suggests that the first 14 nm products will arrive in late 2013[/citation]

That's why I responded saying that I doubt 14nm will hit in 2013. The article says it should, and I just don't see it.

I see your point in that I may not have thought to differentiate between "in sight" and "set in stone"; my bad on that.

As for shrinking the process while keeping the same die size: well, unless clocks are dropped, that would mean increased power consumption. A die shrink usually reduces the area used by the CPU by very roughly 50% (increased IGP size might counteract this, but that is irrelevant for CPU performance comparisons with the IGP disabled), but power consumption with the same or roughly the same architecture has been dropping by more like 30% at the same clock frequency. So unless Intel is setting the voltage too high, keeping the same die size with the same (or roughly the same) architecture and clock frequencies would likely mean a roughly 20-40% increase in power consumption.
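The arithmetic behind that estimate can be made explicit. The 50% area and 30% power figures are the rough assumptions stated above, not measured data:

```python
# Back-of-envelope check of the die-size argument. Assumed numbers:
# a full shrink halves the area per transistor and cuts per-transistor
# power by ~30% at the same clock frequency.

area_scale = 0.5    # same logic occupies ~50% of the old area
power_scale = 0.7   # ~30% less power at the same frequency

# Option 1: shrink the die, keep the same transistor count.
shrunk_die_power = power_scale
print(f"Shrunk die, same logic:  {shrunk_die_power:.0%} of old power")

# Option 2: keep the die size, fill it with ~2x the transistors.
transistor_multiplier = 1 / area_scale  # 2x the logic fits
same_die_power = transistor_multiplier * power_scale
print(f"Same die size, 2x logic: {same_die_power:.0%} of old power")
```

Under these assumptions, filling the old die area gives 2 x 0.7 = 1.4x the power, i.e. a 40% increase, the top of the 20-40% range quoted above.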

Dropping clocks or some other sort of trick would be necessary to counteract the power consumption increase, and then architectural improvements would be necessary to counteract the performance impact of lower frequencies. Maybe instead, something such as AMD's High-Density Library could be used to counteract the power consumption hit without lowering frequencies, but architectural improvements would still be important for making a performance difference.

Shrinking the die with smaller processes is easier than doing any of this.
 
@blazorthon

I think he means shrinking the transistor size but not the physical size of the die itself: basically not increasing the transistor count but still using smaller transistor structures. Instead, you just leave more empty space between the transistors to counteract quantum-mechanical effects while still lowering overall power usage.
 
[citation][nom]palladin9479[/nom]@blazorthon I think he means shrinking the transistor size yet not the physical size of the die itself, basically not increasing transistor count but still using smaller transistor structures. Instead you just leave more empty space between the transistors to counteract QM effects while still lowering overall power usage.[/citation]

Ahh, my mistake. The nm of a process refers to the spacing between transistors, not the size of the transistors themselves (which varies between different types of transistors even on the same process technology), so that would be something different from using a smaller process. It would lower power consumption, but probably not as much as shrinking the distance between transistors does, although it might be more stable, as you mentioned. It could be something done on *tocks* and such to improve overclocking stability.
 
[citation][nom]Waiting for the promised death[/nom]There were reports about "Moore's Law" going to end in 5 - 10 years when I got my Everex 386/20Mhz computer (according to pc magazine, it was the fastest personal computer for 3 months) back in 1988. The techs and scientists keep finding solutions to the problems that were claimed to be sure to kill Moore's observation.[/citation]
We naysayers are not saying that Moore's law is in any trouble, but that they will need to find a different paradigm in order to continue it. Back with the Pentium 4s it was "Moore's law will be preserved by higher clock speeds, with chips hitting 10+ GHz"... and we all know how that worked out for them. There are simply too many complications running highly parallel devices above the ~5-7GHz range, so it became easier to go back to the drawing board and come up with the current killer combination of die shrinks and changing extensions. But they cannot shrink forever. My bet is that they will hit some form of thermal wall before they hit 5nm for high-end chips (mobile, and specifically ultra-mobile, CPUs may be an entirely different story). And even if they do hit 5nm, they can only go down to 3nm or so before running into some really major physics issues, so obviously the plan will end sooner or later.

The real question is: what comes next, after die shrinks offer no real returns (each die shrink is less and less effective)? Will they make some fundamental (and long-overdue) changes to x86 to bolster efficiency? Will they replace and consolidate extensions to make them more efficient? Will they crack the programming problem of highly parallel instructions for day-to-day computing, enabling the effective use of more than 4-8 cores? Will they move into the true 3D realm and start making stacked CPUs? Will they find new materials and ditch silicon? Ditch the entire electromagnetic computing scheme and move on to photonic computing? Quantum computing?

Die shrinks are not the only answer, and many of us are really curious to see what comes next.
 
[citation][nom]ben850[/nom]LOL I'm still rocking a Phenom II x6 on my AM2+ setup.. Looks like my next upgrade could be an extremely huge leap in CPU technology[/citation]
P2 333MHz
P4 3.0GHz HT
Phenom X4 955 Black Edition
~future unknown~

God, I love how I have always taken huge, game-changing leaps.
I honestly wonder what the next thing to tax my system will be...
Graphics, possibly, but with physics being moved more and more to the GPU...
...
What will come along that requires a better CPU and can't be offloaded to the GPU?
 
[citation][nom]alidan[/nom]p2 333hhzp4 3.0 ghz htphenom x4 955 black~future unknown~god i love how i have always taken huge, game changeing leaps.i honestly wonder what the next thing that will tax my system will be...graphics... possibly, but with physics being moved more and more to the gpu... ...what will come allong that would require a better cpu, that cant be offloaded to the gpu?[/citation]

How about significantly more intelligent AIs?
 
[citation][nom]captaincharisma[/nom]in 2019 intel will be coming out with 5nm processors while AMD will be bankrupt[/citation]

AMD and ARM processors/technologies are what keep Intel this hungry about pushing its processor lines forward... Without a competitor we would have crappy, expensive Intel processors... :)
 
[citation][nom]blazorthon[/nom]How about significantly more intelligent AIs?[/citation]
Nah, AI that is an actual challenge, and not just enemies who do more damage and take less, is a long, LONG way off. I don't see AI really coming into its own even in the next console cycle; the PC may get a few examples, but we know that consoles affect PC gaming.

Multimedia was the last thing that forced me to upgrade:
my P2 couldn't handle MKV files,
my P4 couldn't deal with 720p Blu-ray quality,
but my Phenom II has no problem with 1080p or 3D, to my knowledge.
And as much as some people hope, 4K and higher will not become a standard for years, as the tech just isn't cheap enough. You will not see a quality jump from 1080p to 4K like you did from 480i (I believe the old broadcast image size) to 1080p, at least not on sub-80-inch TVs at a comfortable viewing distance.
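The raw pixel counts behind that comparison are easy to tabulate. This only compares resolutions; viewing distance and screen size, which drive perceived sharpness, aren't modeled here:

```python
# Pixel counts for the resolutions mentioned above.
resolutions = {
    "480i (SD broadcast)": (720, 480),
    "720p": (1280, 720),
    "1080p": (1920, 1080),
    "4K UHD": (3840, 2160),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}

sd_to_hd = pixels["1080p"] / pixels["480i (SD broadcast)"]
hd_to_4k = pixels["4K UHD"] / pixels["1080p"]

print(f"480i  -> 1080p: {sd_to_hd:.1f}x the pixels")  # 6.0x
print(f"1080p -> 4K:    {hd_to_4k:.1f}x the pixels")  # 4.0x
```

Even on raw pixels, the SD-to-1080p jump (6x) was bigger than 1080p-to-4K (4x), before accounting for how little of that extra detail the eye resolves at living-room distances.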

Honestly, I don't think my next CPU jump will come with a major capability increase (what I mean is, before I wasn't able to watch 720p, and then I was); it will just be because my motherboard died.
 
[citation][nom]balister[/nom]Unfortuneately, Intel is about the hit the wall in how far they can go. 1 nm is pretty much the wall as that's about 5 atoms in width. Quantum effects start to take over once you get to that level and it is not as easily dealt with due to things like Heisenberg's Uncertainty Principle and how the Strong and Weak forces start being a much bigger factor.[/citation]

yes, maybe Intel will have to innovate rather than just shrinking the same old tech down a few wavelengths

 
5nm is about the physical limit of what we can do; however, instead of building on a flat plane, building in 3D would probably give you FAR more power...

The only concerns are the time it takes to manufacture, and failure rates.
 
If it were me, I would build up from, say, 5 or 10nm, and instead of having one traditional processor, stack them on top of each other with interconnects built in. You would have the north and south bridge functions on the CPU to get rid of as much latency as possible. And beyond just shrinking the size, I can see an actual implementation of Intel's Light Peak in that form: instead of copper/gold-plated connections to the motherboard, an optical relay, so that things like RAM or your storage would see a quantifiable performance gain with each revision of the tech. AMD could, just thinking for kicks, instead of having the GPU share a die layer with the CPU, put the GPU on a secondary layer along with possible on-die video RAM, something along those lines. I just think that current systems, while pretty high-tech, have a lot of room for improvement, not to mention the implementation of exotic materials and other factors.
 
[citation][nom]math1337[/nom]Remember when we were supposed to have 10 GHz processors?[/citation]

We have much better than 10 GHz processors: we have 8-core 4 GHz processors today, and if my math (pardon the pun) is correct, that is roughly the throughput equivalent of a 32 GHz CPU for well-parallelized work. In fact, it was Intel's slamming into the 4 GHz "wall" with the P4 that made Intel start to seriously look at multicore CPUs in the first place.
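The "8 x 4 GHz = 32 GHz" equivalence only holds when the workload is perfectly parallel. A quick Amdahl's-law sketch shows how fast the equivalence breaks down as the serial fraction grows:

```python
# Amdahl's law: speedup over one core when a fraction p of the work
# parallelizes perfectly across n cores (the rest stays serial).

def amdahl_speedup(p, n):
    """Overall speedup for parallel fraction p on n cores."""
    return 1 / ((1 - p) + p / n)

cores = 8
for p in (1.0, 0.95, 0.75, 0.5):
    s = amdahl_speedup(p, cores)
    print(f"parallel fraction {p:.0%}: {s:.1f}x speedup on {cores} cores")
```

Only at 100% parallel work do 8 cores behave like a 32 GHz chip; at a 75% parallel fraction the 8 cores deliver under 3x one core, far short of the naive multiplication.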
 
[citation][nom]math1337[/nom]Remember when we were supposed to have 10 GHz processors?[/citation]


Actually, IBM is almost there with its new POWER chips.
 