Intel Has 5 nm Processors in Sight

[citation][nom]chewy1963[/nom]We have much better than 10 GHz processors, we have 8 core 4 GHz processors now today and if my math (pardon the pun) is correct that is the equivalent of a 32GHz CPU. In fact it was Intel's slamming against the 4 GHz "wall" with the P4 that Intel started to seriously look at multicore CPU's to begin with.[/citation]

It doesn't work like that. An octal-core 4GHz CPU is not *equal* to a 32GHz CPU, and regardless of that, GHz is not a measure of performance; it's just the frequency of the CPU cores and some other parts of the CPU.

Also, unless a program scales almost perfectly across eight threads, it won't necessarily run as well on eight cores that each have one-eighth of the performance of a single-core CPU as it would on a single core that is eight times faster than one of those eight cores, all else being equal.
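
As a rough illustration of why that is - a hedged Python back-of-envelope using Amdahl's law with made-up numbers, not a claim about any specific CPU - eight slower cores only match one fast core when the workload parallelizes perfectly:

[code]
# Rough sketch (assumed numbers): relative throughput of one fast core vs.
# eight cores that are each 1/8 as fast, using Amdahl's law. 'p' is the
# fraction of the work that can be spread across threads.
def speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law speedup over a single one of the slow cores."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cores)

for p in (0.5, 0.9, 0.99, 1.0):
    eight_slow = speedup(p, 8)   # throughput in units of one slow core
    one_fast = 8.0               # a single core that is eight times faster
    print(f"p={p:.2f}: eight slow cores = {eight_slow:.2f}x, one 8x core = {one_fast:.0f}x")
# Only at p = 1.0 (perfect scaling) do the eight cores match the single fast
# core; at p = 0.9 they deliver only about 4.7x.
[/code]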
 

deksman

Distinguished
Aug 29, 2011
5nm the lowest they can go?
Guys... Intel is still using silicon for making consumer-grade electronics.
It's a cheap material that's woefully ineffective.

Semiconductors made from synthetic diamond were patented in 1996, for one thing, and we had the ability to integrate them into computers by 1997.

By the year 2000, CPUs could mostly have been made from diamond as-is, and computers would be about 40x more powerful, wouldn't require active cooling because of diamond's inherent properties, and would draw one-tenth of the power that computers draw now.

Graphene could have been used alongside diamond wherever possible since 2006, and graphene's band-gap problem was solved in 2009.
IBM even made/demonstrated a full graphene CPU in 2012.

Intel is toying in technological obscurity because it uses 'cheap' materials and means of production.

Cost efficiency (which has NOTHING to do with the amount of resources at our disposal or our technological ability to do something - in abundance, no less) simply means 'technical inefficiency' (it's good for capitalism because of the profits, but it doesn't do a thing for 'innovation').

 

deksman

Distinguished
Aug 29, 2011
Correction: IBM demonstrated a full graphene CPU in 2010.

The commercial market is full of technologies that are outdated by several decades at the very least.

We aren't creating the best we can from a technological/efficiency perspective - we are creating products that are profitable (and those have little to do with technological efficiency or advanced materials/technology in the first place).

 

rantoc

Distinguished
Dec 17, 2009
[citation][nom]blazorthon[/nom]Saying that this will happen by 2019 doesn't mean that it won't be delayed until around 2025 or something like that.[/citation]

Considering Intel's track record with its manufacturing processes has been pretty much spot-on over the last decade, I really doubt that statement will be true in this case.
 

assasin32

Distinguished
Apr 23, 2008
[citation][nom]GabZDK[/nom]Yeah well, we better start loonking for a John Connor, we have like 5 years from now[/citation]

We're good - I hear that in this timeline Terminators run on Windows 8. I'm sure Terminator Metro will be all about fashion, and the only thing he'll terminate is The Gap.
 
[citation][nom]rantoc[/nom]Considering Intel's track record with their manufacturing processes have been pretty much spot on during the last decade i really doubt that statement will be true in this case.[/citation]

Ivy Bridge was delayed. That's not spot-on. Besides, as many others have said, getting to very small process nodes not only has diminishing returns on how much it impacts power consumption, but is also increasingly difficult. Even if Intel is ready for it, other issues could come up, such as building fab equipment that can work at such small scales. Whether or not Intel manages to be ready might not matter if something else comes up. Six to eight years is a very long time to plan ahead in the tech industry; things tend not to be on time. They are usually either early or, more often, late. Intel is no stranger to being late with something.

[citation][nom]deksman[/nom]5nm the lowest they can go?Guys... Intel is still using Silicon for making consumer grade electronics.Its a cheap material that's woefully ineffective.Semiconductors from synthetic diamonds were patented in 1996 for one thing and we had the ability to integrate them into computers by 1997.By the year 2000, cpu's could have mostly been made from diamonds as is, and computers would be about 40x more powerful, wouldn't require active cooling because of their inherent properties, and would draw 1 tenth of power that computers draw now.Graphene could have been used wherever possible since 2006 in diamond, while a band-gap problem on Graphene was solved in 2009.IBM even made/demonstrated a full graphene cpu in 2012.Intel is toying in technological obscurity because they use 'cheap' materials and means of production.Cost efficiency (which has NOTHING to do with the amount of resources at our disposal or technological ability to do something - in abundance no less) simply means 'technical inefficiency' (its good for capitalism because of profits, but it doesn't do a thing for 'innovation').[/citation]

You make some excellent points that are undeniably true, but do keep in mind that most people here have already acknowledged that moving away from silicon would probably at least help get past 5nm, although how far past remains to be seen. As others have said, we might have ten, maybe twenty years before silicon is no longer feasible in its current use (depending on just how long it takes us to reach silicon's practical limits), and at that point we'll either have to switch to some sort of 3D chips (different from 3D transistors such as Intel's 22nm process, as I'm sure you understand), move on to another material, or do something else.

As for graphene, well, graphene is still a little new for that kind of thing relative to when the necessary breakthroughs came about (although by now, it wouldn't be unreasonable to have used it if we wanted to). Diamond, as you said, has been a practical option for a long time, and as you said, we're still on silicon strictly to slow down innovation and maximize profits. Diamond has been dirt cheap to manufacture and use for something like this for years; Intel and the others simply don't want to use it when they can still milk maybe another decade of profit out of silicon.
 
[citation][nom]assasin32[/nom]Were good I hear in this timeline terminators run on Windows 8. I am sure Terminator Metro will be all about fashion and the only thing he will terminate is The Gap.[/citation]

Windows 8 is superior to Windows 7 as an OS in pretty much every reasonable way except that it doesn't have a Start menu by default. It's faster, lighter, and snappier; it has UI improvements such as the improved Task Manager and copy/paste manager, better WiFi connection times, and much more. Even if you're too lazy to spend two minutes installing Classic Shell or one of the several other such programs, you still don't need to use Metro if you don't like it. If you don't see Windows 8 as important enough to upgrade to, fine - that's your choice and you should be able to make it how you want to (within reason) - but needlessly bashing the OS just because it happens to have another application platform, one you don't even need to use if you don't want to, is just trolling.
 

Regor245

Distinguished
Jan 19, 2012
[citation][nom]alidan[/nom]p2 333hhzp4 3.0 ghz htphenom x4 955 black~future unknown~god i love how i have always taken huge, game changeing leaps.i honestly wonder what the next thing that will tax my system will be...graphics... possibly, but with physics being moved more and more to the gpu... ...what will come allong that would require a better cpu, that cant be offloaded to the gpu?[/citation]

1996 - AMD K5/Am5x86 (???) - I still have this CPU.
2004 - Intel Celeron 333MHz (250nm)
2008 - Intel Pentium 3 1GHz (180nm)
2009 - Intel Pentium 4 1.8GHz (130nm)
2010 - Intel Pentium 4 3.0GHz HT (90nm)
2011 - AMD Athlon II X2 245 2.9GHz (45nm)
201x - Future Unknown

Yeah, I'm very far behind :sol:
 

master_chen

Honorable
Jun 20, 2012
Intel... even though I love you very much, please don't make assumptions like that until you actually release Skymont (which would be 10nm), and that won't be anytime soon (end of 2015, approximately).
 

ojas

Distinguished
Feb 25, 2011
Um, no, blazorthon got it correct - I meant shrink the transistors and the process (sorry, I wasn't clear about that) and increase the number of transistors on the die.

So a power consumption increase compared to what? The previous gen, or this same gen with a smaller die?

Wouldn't you end up with the same power consumption as the gen you're shrinking, but way more performance?
 
[citation][nom]ojas[/nom]Um no, blazorthon got it correct, i meant shrink the transistors and process (sorry wasnt clear about it) and increasing the number of transistors on the die.so a power consumption increase compared to what? the previous gen or this same gen with a smaller die?wouldn't you end up getting the same power consumption as the gen you're shrinking but way more performance?[/citation]

Think of it this way: it takes roughly a 50% process and die shrink to get roughly a 30% reduction in power, all else being equal. If you do a process shrink without shrinking the die, instead of dropping power consumption you'd increase it, because if a 50% smaller die gets you 30% lower power, doubling that shrunk die back to the original size means roughly a 40% increase in power consumption. Not only did you increase power consumption, you increased it at the same die size, so thermal density is also much greater.
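
A quick back-of-envelope sketch of that arithmetic in Python (the ~30%-per-full-shrink figure from the paragraph above is treated as an assumed rule of thumb, not a measured constant):

[code]
# Assumed rule of thumb: a full (50% area) process shrink leaves roughly
# 70% of the original power at the same performance.
SHRINK_POWER_FACTOR = 0.70

def relative_power(area_ratio: float) -> float:
    """Rough relative power versus the old process for a chip whose die is
    `area_ratio` times the area of a fully shrunk die (power assumed to
    scale linearly with area): 1.0 = normal shrink, 2.0 = keep the old die
    size and double the transistor count."""
    return SHRINK_POWER_FACTOR * area_ratio

print(relative_power(1.0))  # ~0.70 -> about 30% less power (normal shrink)
print(relative_power(2.0))  # ~1.40 -> about 40% MORE power at the old die size
[/code]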

Sure, performance could probably be increased by more than 40% if you used a new, higher-performance architecture to make use of all of those extra transistors, but the power consumption increase would still be dangerous, so you'd need to drop voltage and clock frequency to bring power back down. You'd still have a faster CPU than the previous generation, but the difference probably wouldn't be much better than if you had simply shrunk the die along with the process, unless the new architecture could make very effective use of all those transistors without needing high clock frequencies.

Something that could be tried to counteract the high power consumption, instead of dropping clock frequency, is a trick similar to AMD's high-density library that it has planned for Excavator to optimize for power consumption. However, while that would reduce power consumption, it would also reduce the maximum clock frequencies that can be hit stably, so such a CPU would probably not overclock very well.

Shrinking a process without shrinking the chip size, along with the high-density library trick that can help counteract the power consumption increase at the same frequency, is better suited to low-power systems such as phones/tablets and other very low-end systems.

Also, a die shrink does not by itself increase performance. Beyond changing the maximum clock frequencies that can be hit, only changing the architecture in some way affects performance; a die shrink simply decreases power consumption and chip size. Performance improvements come (again, excluding the maximum frequencies that can be hit) from changing the architecture, the core configuration, and so on.
 

curiosul

Honorable
Apr 18, 2012
[citation][nom]math1337[/nom]Remember when we were supposed to have 10 GHz processors?[/citation]

I do, but at the same time, look at how much processing power has increased today with "only" 2-4GHz CPUs!
 
Guest
All those goodies, but how many people want those CPUs now? I want them before I die or go gray, rather than waiting 20 years for a 10GHz-per-thread CPU or something.
 

deksman

Distinguished
Aug 29, 2011
[citation][nom]blazorthon[/nom]You make some excellent points that are undeniably true, but do keep in mind that most people here already acknowledged that moving away from silicon would at the least probably help get past 5nm, although how far past would remain to be seen. As others have said, we might have ten, maybe twenty years before silicon is no longer feasible (depending on just how long it takes us to reach silicon's practical limits) in its current use and at that point we'll either have to switch to some sort of 3D chips (different from 3D transistors such as Intel's 22nm process as I'm sure you understand), move on to another material, or something else.As for graphene, well, graphene is still a little new for that kind of thing relative to when the necessary breakthroughs came about (although by now, it shouldn't be unreasonable to have used it if we wanted too). Diamond, as you said, has been a practical option for a long time. As you said, we're still on silicon strictly to slow down innovation and maximize profits. Diamond has been dirt cheap to manufacture and use for something like this for years, Intel and such simply don't want to use it when they can still milk maybe another decade of profit out of silicon.[/citation]

Exactly my point.
:)
Capitalism and competition are good for profits... but they do nothing for technological innovation or evolution (that comes down to cooperation, as history shows us on a continuous basis).

Graphene, new or not, could have been used wherever possible since at least 2006... and practical experience with synthetic diamonds, had they been used in computers since 1997, would probably have accelerated the integration of diamond and graphene.

We have had the technological ability to produce abundance ever since we perfected recycling technology in the late 19th century (which was when we became able to recycle heavy metals) - though recycling other materials was doable with high efficiency before that as well.
Money at that point became fundamentally useless, and the crash of 1929 (which was prompted by automation/mechanization) merely confirmed that.

For about a century now, we have had the ability to mass-produce a synthetic material or technology within 12 to 24 months of its initial discovery, or basically immediately after a prototype was made.

I'm sad to see how we continuously limit ourselves through obscure notions of 'money', 'value' and 'cost', which mean nothing in terms of our ability to synthesize superior materials in abundance or to create technology that is light-years more advanced.

What you see in circulation today is roughly six decades (in some cases up to a century) out of date (or, effectively, nowhere near what we can actually create today).

We have already lived in a world of abundance (of both needs and wants, not to mention renewable energy) for over a century... and yet the system we live in creates artificially induced scarcity, because that's how a monetary system 'thrives'.

We need a huge shift in mentality to change things, because the route we are taking now is fundamentally unsustainable in the long run - and it's effectively keeping people in the dark about where our technology actually is.
 

lamorpa

Distinguished
Apr 30, 2008
[citation][nom]balister[/nom]Unfortuneately, Intel is about the hit the wall in how far they can go. 1 nm is pretty much the wall as that's about 5 atoms in width...[/citation]
"about" as in 15 years from now? Your "about" is pretty long.
 

lamorpa

Distinguished
Apr 30, 2008
[citation][nom]deksman[/nom]...We need a huge shift in mentality to change things because the route we are taking now is fundamentally unsustainable in the long run - and its effectively keeping people in the dark of where our technology actually is.[/citation]
Everything is unsustainable in the long run. What's that got to do with it? That's why there is this thing called 'change'.
 

chewy1963

Honorable
May 9, 2012
[citation][nom]blazorthon[/nom]It doesn't work like that. an octal-core 4GHz CPU is not *equal* to a 32GHz CPU and regardless of that, GHz is not a measure of performance, just the frequency of the CPU cores and some other parts of the CPU.Also, unless a program has at least nearly perfect multi-threading efficiency across eight threads, it won't necessarily run as well on eight cores that are each one-eighth of the performance of a single core CPU as it would on a CPU that has a single core that is eight times faster than one of your eight cores, assuming that all else is equal.[/citation]

Oh Blaze, if you are trying to say that an octo-core 4 GHz CPU wouldn't outperform the 10 GHz single-core equivalent of that CPU, then you are not nearly as astute as I had thought you to be. The point here is that we're far beyond the performance that Intel was referring to when they announced plans to eventually scale the P4's clock to 10 GHz.
 
[citation][nom]chewy1963[/nom]Oh Blaze, if you are trying to say that an octocore 4 GHz CPU wouldn't outperform the 10 GHz singlecore equivalent of that CPU, then you are not nearly as astute as I had thought you to be. The point here is that we've far beyond the performance that Intel was referring to when they announced plans to eventually scale the clock to 10 GHz on the P4.[/citation]

I didn't say anything about an octal-core CPU at 4GHz being able to beat a 10GHz single-core equivalent CPU. I said that it wouldn't really be equal to a 32GHz single-core equivalent CPU.

Also, for any software that can't efficiently support three or more threads, the single core CPU would outperform an octal-core equivalent CPU that has a 60% lower clock frequency.
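
To put rough numbers on that - a hedged Python sketch assuming idealized linear scaling with clock speed and busy cores, which real software rarely achieves - the octal-core chip only pulls ahead once three or more threads are kept busy:

[code]
# Assumption: performance scales linearly with clock frequency and with the
# number of cores that actually have work to do.
SINGLE_CORE_CLOCK = 1.0   # normalized clock of the big single core
OCTA_CORE_CLOCK = 0.4     # each core clocked 60% lower on the octal-core chip

for threads in range(1, 9):
    octa = OCTA_CORE_CLOCK * min(threads, 8)   # total work from the busy cores
    single = SINGLE_CORE_CLOCK                 # one core handles one thread at a time
    winner = "octal-core" if octa > single else "single-core"
    print(f"{threads} thread(s): octal = {octa:.1f} vs single = {single:.1f} -> {winner}")
# 1-2 threads: the single core wins (0.4 and 0.8 < 1.0); 3+ threads: the
# octal-core wins (1.2 and up).
[/code]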

Yes, we're beyond the performance of a 10GHz Netburst CPU. Sandy Bridge is probably around three or four times as fast as Netburst per core, per clock, with the same memory configuration.

Whether or not this applies directly to your intended point, and whether or not your point was correct, wasn't what I was getting at; the reasoning behind your point was flawed, and that's what I addressed.
 

chewy1963

Honorable
May 9, 2012
Adding to that, I think that had Intel not hit the clock speed wall with the P4, they might still be basing their CPUs on some derivative of NetBurst, with its woeful IPC. Luckily they did hit that wall and decided to get more efficient with their designs, which ultimately led to performance increases far beyond what a 10 GHz NetBurst could ever have achieved.
 

chewy1963

Honorable
May 9, 2012
[citation][nom]blazorthon[/nom]I didn't say anything about an octal-core CPU at 4GHz being able to beat a 10GHz singe-core equivalent CPU. I said that it wouldn't be really equal to a 32GHz single-core equivalent CPU.Also, for any software that can't efficiently support three or more threads, the single core CPU would outperform an octal-core equivalent CPU that has a 60% lower clock frequency.Yes, we're beyond the performance of a 10GHz Netburst CPU. Sandy Bridge is probably around three or four times as fast as Netburst per core per Hz at a given CPU clock frequency with the same memory configuration.Whether or not this applies directly to your intended point and whether or not your point was correct wasn't what I was getting at; your reasoning for your point was a flawed and I addressed it because of that.[/citation]

I guess I wasn't clear about my original point, which was that we are far beyond the performance that Intel was promising/planning when they wanted to take the P4 to 10 GHz.
 

aragis

Honorable
Apr 21, 2012
[citation][nom]balister[/nom]Unfortuneately, Intel is about the hit the wall in how far they can go. 1 nm is pretty much the wall as that's about 5 atoms in width. Quantum effects start to take over once you get to that level and it is not as easily dealt with due to things like Heisenberg's Uncertainty Principle and how the Strong and Weak forces start being a much bigger factor.[/citation]

Actually, one-atom-transistor CPUs are already feasible - no, not 1nm, one atom!

http://www.tomshardware.com/news/science-research-transistor-moores-law,14746.html
 
[citation][nom]aragis[/nom]Actually 1-atom transistor cpu's are already feasible, no not 1nm, 1 atom!http://www.tomshardware.com/news/s [...] 14746.html[/citation]

Making a single one-atom transistor and being able to make complex processors out of such transistors are two different things. Given how difficult it was just to make that one single-atom transistor (which they could only estimate was there; they didn't have concrete proof), I doubt that single-atom-transistor CPUs with marketable performance are feasible yet.
 

deksman

Distinguished
Aug 29, 2011
[citation][nom]lamorpa[/nom]Everything is unsustainable in the long run. What's that got to do with it? That's why there is this thing called 'change'.[/citation]

Actually, there is a big difference between the current system of 'consume and discard' and 'sustainability'.
We are not using our technology to better our lives at all... we use it for 'profit'.
And in case you hadn't noticed... we are doing quite a lot of damage to the environment (the very same environment we depend on), while recycling next to nothing and doing little that would minimize our footprint.
 