Intel's 10nm Is Broken, Delayed Until 2019



Well that just depends. The 1950X and the 8700K are very different CPUs. If you are using applications that can take advantage of the 16 cores offered by the 1950X, then it is a great value. But not all applications can utilize all of those cores. And if you are using applications that scale well with fast single threads, then the 8700K is top notch.
 
AMD can easily add more single-core performance to Zen: make the core bigger, add more floating-point and integer units. It'll be on 7nm; they have the room for it.
 

Bigger, more complex cores are less power-efficient, clock lower, require a far more complex instruction scheduler and related resources, are more likely to contain design flaws, have a higher chance of manufacturing defects, etc.

Throwing more execution units at a core does you no good if you cannot make efficient and effective use of them. Where do you think HyperThreading/Simultaneous Multi-Threading came from? CPU designers realized that, on average, a single thread only has enough instructions eligible for execution to keep 60-70% of execution units busy, so they added multi-threading to let the scheduler pick more efficient combinations from two or more instruction streams. That's why CPUs with SMT perform 30-40% better in heavily threaded tasks.

CPUs already have 30-40% more resources than they know what to do with in lightly threaded tasks; adding more won't help single-thread IPC much.
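A quick back-of-the-envelope sketch in Python of why a second thread helps; all numbers are the rough figures from the post above, not measurements:

```python
# Rough model of SMT using the utilization figures quoted above;
# all numbers are illustrative, not measured.

EXEC_UNITS = 1.0        # normalize the core's execution resources to 1.0
ONE_THREAD_UTIL = 0.65  # a single thread keeps ~60-70% of units busy

# One thread: throughput is capped by how many eligible instructions
# it can supply per cycle.
one_thread = EXEC_UNITS * ONE_THREAD_UTIL

# Two threads: the scheduler fills idle units from a second instruction
# stream; assume it pushes utilization close to the ceiling.
TWO_THREAD_UTIL = 0.90
two_threads = EXEC_UNITS * TWO_THREAD_UTIL

gain = (two_threads - one_thread) / one_thread
print(f"SMT throughput gain: {gain:.0%}")  # ~38%, within the 30-40% cited
```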
 
I suspect that for the past decade Intel has been underinvesting in research and development. Their miserly approach may cost them a lot more money than they expect.
 


In addition to what others quoting you have said... keep in mind this relation; it's what makes processors so hard to scale and why Moore's Law is failing:

# transistors * GHz * voltage^2 = TDP (heat output)

Make the core bigger? That means more transistors, and heat rises rapidly, just as it does with overclocking; that's why it doesn't take long before extreme overclockers have to resort to liquid nitrogen. CPUs are becoming heavily limited by the amount of heat that can practically be wicked away, and without good cooling, a CPU cooks itself.

The saving grace over the last ~10 years has been that nice ^2 attached to voltage. Unlike the other terms in this equation, if you can reduce your voltage, you get a SQUARED reduction in heat, not a linear one. The trouble is that we're at a point where we can no longer reliably distinguish the voltage level that indicates a 1 from the one that indicates a 0 if we lower it any further. There isn't a sufficient delta to cope with issues like noise causing false values.

So we've lost that nice big headroom afforded by squared reductions in heat output, headroom we could then fill with GHz or transistors. This is one of the reasons for the switch to multicore architectures: single cores are reaching the limit of how much heat can be wicked away versus how much they put out, so the solution is more cores to divvy that work up.
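To make the ^2 point concrete, here is a tiny Python sketch of the relation above (a simplification of the dynamic-power formula P ≈ C·V²·f; all inputs are made-up illustrative values):

```python
# Toy version of the relation above: heat ~ transistors * GHz * voltage^2.
# Inputs are illustrative, not real chip specs.

def heat(transistors, freq_ghz, voltage):
    return transistors * freq_ghz * voltage ** 2

base = heat(1.0, 4.0, 1.2)

# Doubling transistor count or frequency scales heat linearly...
print(heat(2.0, 4.0, 1.2) / base)  # 2.0x
print(heat(1.0, 8.0, 1.2) / base)  # 2.0x

# ...but even a modest voltage drop pays off quadratically.
print(heat(1.0, 4.0, 1.0) / base)  # ~0.69x going from 1.2 V to 1.0 V
```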
 


Lower power.

A big part of what makes this process hard is the formula below, which defines the amount of heat a processor puts off. Many, many choices and design options are informed by how heat interacts with the processor:

# transistors * GHz * voltage^2 = heat

A mobile chip, such as a Snapdragon 845, runs at a MUCH lower frequency (typically around 1.5-2 GHz, versus a floor of 3-4 GHz for an x86 desktop/laptop processor) and is also far more power-efficient, running at a lower voltage. That last part is especially important: as you'll notice in our formula, voltage is squared, so any increase is a squared factor being multiplied in, and any reduction is a squared factor being divided out.

Mobile processors like Snapdragon and MediaTek parts use far less power and run at a much lower frequency, so you can see how heat is much more manageable in them. As heat is reduced, room opens up for other options, such as more transistors, which is why they can increase the count without the usual tradeoff. They also use other tricks like ARM's big.LITTLE, pairing a few really strong cores with smaller, weaker ones that handle everyday tasks at lower clocks.
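Plugging rough, hypothetical numbers into the same formula shows why mobile parts have so much more thermal room to play with (the clocks and voltages here are guesses for illustration, not measured specs):

```python
# Same relation, comparing a desktop part to a mobile SoC.
# Clocks and voltages are rough guesses; transistor counts are
# normalized to the same value to isolate frequency and voltage.

def heat(transistors, freq_ghz, voltage):
    return transistors * freq_ghz * voltage ** 2

desktop = heat(1.0, 4.0, 1.2)  # ~4 GHz at ~1.2 V
mobile = heat(1.0, 1.8, 0.8)   # ~1.8 GHz at ~0.8 V

print(f"mobile heat is ~{mobile / desktop:.0%} of desktop")  # ~20%
```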
 
You know, all of the upper-end processors from both Intel and AMD will do everything most people will ever need and more; software development is always years behind hardware, and this is the problem.

Jim Keller left AMD as soon as he had done what he was employed to do, then moved to Samsung, who are moving into the processor market. He has now moved to Intel and will already be working on a new architecture for them. I believe he works on more than one architecture at a time; it will be interesting to see the similarities that emerge across all the main processor manufacturers.

I predict there will be buyouts and takeovers in the future. I feel Samsung are already buying into AMD shares; whether Intel will start to make moves on a takeover of AMD, or a partial takeover, will be interesting. And then Nvidia: I have read Nvidia are making moves on CPUs, although I have no idea how they could do that unless they sell out to someone else, but I don't see that at all.
 
In the 20 years I've been building computers, not once did I own an AMD system myself. AMD was never "bad"; I just made the personal decision to pay the premium and stick with Team Blue.

Given this latest news, combined with the whole Meltdown/Spectre fiasco and the way Intel handled it... honestly, my next build will almost assuredly be Zen 2 in early 2019.

Team Red?? So be it.
 
14nm, my a**. It's marketing hype. All that matters is power consumption, and they can't get it down. They must be stumped if they're hiring Jim Keller.
 


It's worse: they didn't do the R&D on new materials, and transistor sizes are still at 50nm to 70nm after 20 years. The "Moore's Law" marketing is full of it.
 


 