80-watt Hamster :
Sheesh, why all the negativity? As someone who put together a Skylake platform with an i3, I'm looking forward to more capable -K processors being released for the same socket.
Certainly, for someone who got a Skylake Pentium or Skylake Core i3, a Kaby Lake K (unlocked) Core i7 will, when the time comes, give a real kick in CPU power while (Lord willing) allowing reuse of current components such as the mobo, DDR4 RAM, etc.
What is concerning, though, is the somewhat minimal improvement. Go back to the graph they showed of the 8x to 10x performance-per-watt improvement. That's good and all, but that curve is not even close to a straight line anymore; it shows a clear limiting function/asymptote. If Cannon Lake is 10nm and also not an architectural improvement per se (it's mainly a die shrink)...
TL;DR: Intel is far from dead, but the glory days of high profit margins and huge leaps in performance and performance/watt are for the most part behind us. I personally think silicon at sub-10nm is a dead end; the silicon atom itself is said to be about 0.2nm across.
Thus we are at the end of the current age/epoch and at the cusp of a new post-silicon one. Don't get me wrong, a Core i7 Kaby Lake running at 5GHz on all 4C/8T will be absolutely beastly compared to a 2.5GHz Sandy Bridge dual core. That's exciting. However, Sandy Bridge quads overclocked to 4GHz are holding their own in most practical situations to this day. And anything very CPU-bound (3D rendering, lightmapping, physics, simulation, etc.) is orders of magnitude faster on the latest GPUs:
https://www.youtube.com/watch?v=gVe6FNd4yQc ...and that's with an Nvidia Maxwell Titan X. I believe the Nvidia Pascal Titan X has been released, and 2017's Nvidia Volta architecture will probably blow away any chance of CPUs being competitive at most CPU-intensive work in performance, performance-per-watt, performance-per-dollar, etc.
In the gaming world we are already seeing the attitude of "what's the least amount I can spend on my CPU so I can spend the most on my GPU?" Video encode/decode is an area where, if I am not mistaken, GPU acceleration is also key. Consumer video encode/decode on the latest Intel CPUs is alright and useful, but again, as per above, where the CPU is really needed, GPGPU is only gaining further distance.
Xeon and Servers are of course a different kettle of fish and that is apparently 50% of Intel's business - but in terms of enthusiast and mainstream non-ARM PCs, things are drying up.
bit_user :
they bumped the max clock speed 12.9% and got a 12% improvement ?
Ahahahahahahahaha !
Why is that funny? Were you expecting 13%?
Given that they didn't change the architecture, the most speedup they could get from a clock-speed increase would be 12.9%. Performance scaling at 93% of the clock-speed delta is actually pretty good.
And that beats the incremental performance increases we've seen in all of Intel's desktop CPUs since Sandy Bridge.
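That 93% figure is just the measured gain divided by the clock bump. A quick sketch (the 12.9% and 12% numbers are from this thread; the function name is just illustrative):

```python
# Back-of-the-envelope check: how much of a pure clock-speed bump
# actually shows up as measured performance, assuming no
# architectural changes between the two chips.

def scaling_efficiency(clock_gain_pct: float, perf_gain_pct: float) -> float:
    """Fraction of the clock-speed increase realized as performance."""
    return perf_gain_pct / clock_gain_pct

# Kaby Lake vs. Skylake figures quoted above: +12.9% clock, +12% performance.
eff = scaling_efficiency(12.9, 12.0)
print(f"{eff:.0%} of the clock delta realized as performance")  # ~93%
```

Anything near 100% would mean the workload is purely core-bound; losses below that typically come from memory and I/O not speeding up with the core clock.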
While I personally wouldn't deride Intel, the thing is that their communication and their choice of benchmarks and comparisons, while not wrong per se, overall show that transitioning off sub-10nm silicon to something else will now be their biggest challenge in the foreseeable future, all while fending off the huge momentum on the ARM side of things.
InvalidError :
Realist9 :
From the article: "limited performance improvements available from 10nm products". Isn't Cannon Lake (2018), which is supposed to be an actual performance increase, unlike the past 5 years of desktop CPUs, supposed to be 10nm??? Is this bad news for Cannon Lake?
That comment wasn't about Intel, it was about some of the other foundries deciding to skip re-tooling their fabs for 10nm and focus on the jump to 7nm instead.
Cannon Lake is primarily supposed to be a Kaby Lake shrink. Based on Ivy Bridge and Broadwell, the last two process shrinks, there is no reason to expect more than 5-7% gains from it. Even Skylake's updated architecture only produced sub-10% improvements over Broadwell/Haswell.
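To put those single-digit figures in perspective, here's a minimal sketch of how slowly such gains compound (the 5-7% per-generation range is from the post above; the four-generation count is just an illustrative assumption):

```python
# Compound a fixed per-generation gain to see how slowly
# single-digit improvements add up across several shrinks/refreshes.

def compound_gain(per_gen_pct: float, generations: int) -> float:
    """Total speedup factor after `generations` steps of per_gen_pct each."""
    return (1 + per_gen_pct / 100) ** generations

# Illustrative: four generations of shrink-sized gains.
for pct in (5, 7):
    total = compound_gain(pct, 4)
    print(f"{pct}% per generation x 4 generations -> {total:.2f}x total")
```

Even the optimistic end of that range yields well under a 1.5x cumulative speedup, which is why overclocked Sandy Bridge quads still hold up.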
In all likelihood, you can forget about ever seeing major leaps in architectural performance gains per clock per single-threaded core.
That would be my current understanding. Since Cannon Lake is ostensibly not an architectural revamp per se but a die shrink, and since people are saying 10nm should be skipped(!) in favor of jumping from 14nm to 7nm, Intel will altogether still produce great chips, but the glory days do appear to be behind us, at least in terms of silicon.
That said it is exciting yet very scary to see what comes out post-sub-10nm-silicon.
On the business side of things, perhaps one thing not being noted is that Intel's business and marketing strategy is very much geared toward Tick-Tock. It is what got them out of their funk when AMD's Athlon 64 dual cores reigned supreme; as I understand it, "Core" was born out of the Pentium 3 architecture/philosophy(?), and Tick-Tock has brought us to where we are now. The fly in the ointment is how Intel has used this strategy to push its chips through OEMs: the refresh rate and the marketing/hype cycles are all tied to the physical manufacturing cycles.
The challenge Intel faces now is that if they don't skip 10nm, they face quite a bit of cost, since within a very short time they also have to hit 7nm. But if they ~do~ skip 10nm, they might face delays and get "out of sync" with the business/sales/marketing/hype cycles, and Intel and OEMs may suffer in sales. Think about Windows 10: it does not appear to have driven the desired PC purchases, and in any case Microsoft started giving it away; 8GB of RAM, an SSD, and Windows 10 are more than enough to rejuvenate a 1-5 year old PC. "Yoga" form factors are interesting, but the Surface form factor is where the sales are at, and that is low-margin for Intel, AFAIK.
So, all in all, I shall end my very long post with: "the end of the silicon age is upon us". It's been a fun ride in nanometer-land (from apparently 800nm in 1989) through these nearly 30 years.