Intel Stops The Tick-Tock Clock

You guys are right that Intel aren't feeling much pressure from AMD in the high-end desktop, but they still have to compete with themselves. Everybody already has a PC that's "good enough"; if they want to entice an upgrade out of consumers, they need to keep up a steady rate of performance improvements. If they don't, they may see their profits drop despite their monopoly. They're slowing down due to the difficulties of the physics they're working with.
 
Intel hasn't innovated anything; they've just been messing around with Core 2 ever since it was released, just like Microsoft has been messing around with the same NT technology since the Vista release.
 


So IBM, with the help of some of the biggest silicon manufacturers in the world (Samsung, GlobalFoundries, etc.) managed to create a single prototype chip that is, in the words of the article, "nowhere near production-ready". I don't see what this proves in relation to Intel and this article.

There's a huge difference between being able to create a sample in a lab and being able to bulk-produce chips with high yields at a viable cost. I'm sure Intel has 10 nm and probably 7 nm chips somewhere in a lab. But they need to be economical to mass-produce before they can be released as a product.
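To put rough numbers behind the lab-sample vs. mass-production point, here is a minimal sketch of the textbook Poisson yield model, Y = exp(-D x A). Every figure in it (defect density, die area, dies per wafer, wafer cost) is an assumed illustrative value, not anything published by IBM or Intel:

import math

def poisson_yield(defect_density_per_cm2, die_area_cm2):
    # Fraction of dies expected to be defect-free: Y = exp(-D * A)
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

def cost_per_good_die(wafer_cost, dies_per_wafer, y):
    # Wafer cost amortized over only the good dies
    return wafer_cost / (dies_per_wafer * y)

die_area = 1.5        # cm^2, roughly a large desktop CPU (assumed)
dies_per_wafer = 300  # assumed
wafer_cost = 8000.0   # USD per wafer (assumed)

# Compare an immature process (high defect density) with a mature one
for d in (2.0, 0.5, 0.05):  # defects per cm^2 (assumed)
    y = poisson_yield(d, die_area)
    cost = cost_per_good_die(wafer_cost, dies_per_wafer, y)
    print(f"D={d:4.2f}/cm^2  yield={y:6.1%}  cost per good die=${cost:8.2f}")

At a high defect density a fab can still turn out the occasional working prototype, but the cost per good die is absurd; only once yields climb does mass production become viable, which is exactly the gap between a lab sample and a shipping product.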
 
Skylake+? Do you mean Kaby Lake? Or is there a refresh planned before Kaby Lake comes to market?

Everything I've read about Kaby has been suggesting that it will relate to Skylake the same way Devil's Canyon related to Haswell.
 


I think Moore's law relates more to actual performance than to the manufacturing process.
Reducing transistor size has been an easy way to increase count and decrease heat, but is starting to provide diminishing returns.
Architecture updates and clock speeds are another way to boost speed.
For sustained long term increases we'll have to think outside the box.
Is it time to ditch the whole x86 architecture? It will happen eventually, and we'll also have to move away from transistors.
 
Proof from the horse's mouth that Moore's law is dead. No one can go beyond 7nm on silicon and produce a functional chip. Scary times for Silicon Valley & the whole tech economy.
This is a very sensationalist comment. Hitting a limit at 7nm has no negative effects on any part of the tech economy. It will prevent us from going smaller for quite some time (doubtfully forever), but lateral movements in advancement are just as important as steps up. All that hitting a size limit will mean for technology is that the people focusing on smaller nodes will start focusing on improving other things.
 


I'm still rockin' an i7-950 and haven't found the need for an upgrade. Haven't even overclocked it yet either. I tend to upgrade when the hardware no longer suits my needs, and so far I haven't run into a program that my CPU doesn't take in stride. I built the computer in December of 2010, and the only thing I've done to it was a GPU upgrade from a Radeon 6870 to an R9 390, and that was Dec. 2015.
 
Is Intel struggling, or just slowing down and maximising profits? Because, let's face it, they are miles ahead of AMD.

There is a genuine issue arising. We are nearing the end of the road for current technology without a breakthrough. Going to each successive node is also costing more and more.
 
Shame AMD can't compete at the moment; this is what happens: the competition gets lazy and tries to milk as much as they can. I hope AMD's Zen will be good, but then again AMD have promised a lot in the past that didn't really pan out!
 
Seems to me I was reading somewhere a while back about attempts to use materials other than silicon to make transistors.
I'm not sure I see transistors going away. It seems to me a switch needs three pins; whether you call them gate, drain, source, or base, emitter, collector, or something else entirely, isn't likely to change that.

Edit: I suppose we could call them "valves," but that would either boggle the minds of our British friends, or make them think we've completely lost our minds.
 
Germanium is an alternate semiconductor, but my guess would be it has too much leakage for digital circuits. It is also subject to thermal runaway. Oops!
I believe Boron is a common doping element, and also Arsenic; seems to me there are a couple of others...
 

Hitting physically practical chip fabrication limits means a lot more than having to focus on other stuff: transistors need to get smaller to reduce transistor gate and channel capacitance to reduce the amount of energy wasted on charging/discharging the gate to make them switch. If you cannot make the channels any shorter without causing an unacceptable increase in leakage current, you also hit limits on the current smaller FETs can drive. Without smaller capacitors with less parasitic capacitance and stronger FETs to drive them, it means you cannot push higher clock speeds nor higher power efficiency. Making wires any smaller is also not an option at that scale since the trace cross-section is only a few hundred atoms. Having the ability to build chips vertically does you no good when you are already bottlenecked by TDP: the equivalent of two 100W chips stacked together as a single 3D design will still dissipate the better part of 200W and be over twice as difficult to cool.
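As a back-of-the-envelope illustration of the capacitance argument above, the standard CMOS dynamic-power relation is P ≈ alpha x C x V^2 x f. The sketch below just plugs in assumed numbers (activity factor, switched capacitance, voltage) to show how, with C stuck at its current value, higher clocks translate almost directly into more watts, and why stacking dies doesn't help a TDP-limited part:

def dynamic_power_watts(alpha, c_total_farads, v_volts, f_hertz):
    # Classic CMOS dynamic (switching) power estimate: P = alpha * C * V^2 * f
    return alpha * c_total_farads * v_volts ** 2 * f_hertz

alpha = 0.2        # activity factor (assumed)
c_total = 1.25e-7  # total switched capacitance in farads (assumed)
v = 1.0            # core voltage in volts (assumed)

for f_ghz in (3.0, 4.0, 5.0):
    p = dynamic_power_watts(alpha, c_total, v, f_ghz * 1e9)
    print(f"{f_ghz:.1f} GHz -> ~{p:.0f} W of dynamic power")

# Stacking two such dies in one 3D package roughly doubles the heat that has
# to escape through the same footprint, which is the TDP wall described above
print(f"Two stacked ~100 W dies -> ~{2 * 100} W in one package")

(In practice pushing clocks also requires raising V, so real power grows faster than linearly, which only strengthens the point.)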
 
I'm still waiting for games that use MMX
https://upload.wikimedia.org/wikipedia/en/thumb/d/d5/PentiumMMX-presslogo.jpg/220px-PentiumMMX-presslogo.jpg

That was supposed to be the big thing; people even talked about not needing a graphics card :) ... then 3DNow!, then SSE and so on.
Basically we have been buying hardware because it was supposedly going to bring our PCs to the next level; still waiting for that, and for the software too. Hopefully with VR and mobile phones, we might actually see the next level.
 


The problem here is that the process node is not the only factor. Even at the same process node (32nm), Intel is more efficient than Bulldozer because of the uArch. So even if Intel and AMD have the same process node, if the uArch sucks it won't save it.

For example, Intel's Pentium D had 65nm parts, but its performance was much worse and its power draw much higher than the Core 2 Duo's due to the uArch being bad.

Plus, as you put it, Intel's 14nm is vastly more mature than GlobalFoundries' or even Samsung's right now.
 
The problem here is that the process node is not the only factor. Even at the same process node (32nm), Intel is more efficient than Bulldozer because of the uArch. So even if Intel and AMD have the same process node, if the uArch sucks it won't save it.

While I'm unable to say for certain, all things considered, including the man who likely contributed the most to Zen's architecture, I wouldn't be so quick to dismiss Zen's potential to bring back some genuine competition. From the leaked info one can glean, at the very least, that Zen might just end up being a significant success for AMD, and heaven knows the competition from a successful Zen launch would be more than welcome and healthy for the consumer. I have faith that Zen will be a success, and if AMD is running on time, it should be released in time for Christmas 2016.

Other than that, good article: short, sweet, and well written. While the info may not be a new revelation, it warrants revisiting. Thanks, Michael, for your efforts; good job.
 
To interpret: Intel feels they are so far out ahead of AMD there's no point in rushing innovation any longer.

Also, the gains are getting weaker and weaker. Going from my Sandy Bridge to Skylake increased my gaming frame rate by 1-3 FPS.
 
Not to be a downer, but if Intel is struggling at this point, what makes you think AMD will be able to close the gap, considering they will run into the same problems? After the Athlon 64 days, AMD has always been behind. I want them to succeed and put pressure on Intel, but I don't think they will be able to close a gap that big.

Magical Fairy Dust.
 