Intel Stops The Tick-Tock Clock

Which means the next Intel CPU will be on 14nm, just like AMD's Zen. Maaaybe there's a chance to catch up? I can hope, right? 😀

I know, Intel will be on a mature node, with experience and refinement, and its chips will be faster even without architectural improvements. But the gap will close faster if Intel can't keep up its rate of improvement.
 
Not to be a downer, but if Intel is struggling at this point, what makes you think AMD will be able to close the gap, considering they will run into the same problems? Since the Athlon 64 days, AMD has always been behind. I want them to succeed and put pressure on Intel, but I don't think they will be able to close a gap that big.
 
Transistor technology is going to get more and more difficult to improve as time goes on. The technology just can't support getting much smaller. At what point are we really going to start focusing closely on the emerging alternatives?
 
Intel is allowing itself to run into difficulties, as it makes no sense to invest so much in the performance race when its competitor is now pretty much ineffective in the high-end class. So to all the Intel fanboys - this is the result of having world domination. I was never a fanboy, but I had my preferences. Now that I've matured, I understand that it's in my best interest to have a strong AMD too.
The result of this lack of real improvement is that there's no real justification for me to upgrade my 3.5-year-old i7-3770K (with its motherboard and RAM) to Skylake. The 20% (max) gain in gaming performance is not enough to justify an upgrade.
 
Well, following the ridiculous news that Kaby Lake will only support Windows 10, I'd be quite happy to see a further generation of CPUs that can still work on Windows 7.
 
There is no competition (AMD can't compete), the x86 market has been slowing down, and adding more transistors hasn't brought any notable benefit. People won't see their computers become obsolete as often. Why bother?
Cheers
 
Intel is allowing itself to run into difficulties, as it makes no sense to invest so much in the performance race when its competitor is now pretty much ineffective in the high-end class. So to all the Intel fanboys - this is the result of having world domination. I was never a fanboy, but I had my preferences. Now that I've matured, I understand that it's in my best interest to have a strong AMD too.
The result of this lack of real improvement is that there's no real justification for me to upgrade my 3.5-year-old i7-3770K (with its motherboard and RAM) to Skylake. The 20% (max) gain in gaming performance is not enough to justify an upgrade.
I wouldn't say they're allowing themselves to run into difficulties, but they certainly are becoming lazy, or are doing this on purpose. They have no reason to keep releasing faster chips because, year after year, their chips are the fastest. All they need are marginal optimizations.

AMD is needed more than ever. That being said, if Zen performs as well as even Haswell, I will jump ship, even if it is at a small premium. The last time I built a computer, it was between Ivy Bridge and Bulldozer. It wasn't much of a choice.
 

Intel is not immune to the laws of physics and the increasing difficulties that come along with going smaller.

If going forward were so easy, the other chip designers and makers would have no trouble catching up with Intel. But they are all struggling just as much. Like it or not, that's what happens when you get close to physical limits.

The only practical way to increase performance much beyond the current state of things is to add more CPU cores, but there is too little mainstream software capable of leveraging multi-core CPUs in a meaningful and efficient way for heavily multi-core/multi-threaded mainstream CPUs to make sense.
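To put rough numbers on why extra cores only help as much as the software allows, here is a small sketch of Amdahl's law in Python; the parallel fractions and core counts are illustrative assumptions, not measurements of any real program.

```python
# Amdahl's law: speedup from N cores when only a fraction p of the
# work can run in parallel. The p values are assumed for illustration.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for p in (0.5, 0.8, 0.95):      # how parallel the software is
    for n in (2, 4, 8, 16):     # number of CPU cores
        print(f"p={p:.2f}, {n:2d} cores -> {amdahl_speedup(p, n):.2f}x")
# Even 95%-parallel code gets only ~9x from 16 cores; 50%-parallel
# code tops out below 2x no matter how many cores you add.
```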
 

AMD is fabless. They spun off and sold their fabrication factories years ago. They now just design the CPU (or GPU) and contract with a fab to manufacture it for them. The fab factories are the ones closing in on Intel. Samsung and GlobalFoundries already have 14nm working. Toshiba has 15nm. TSMC is the big one which is still stuck at 20nm.
 

Why do people keep repeating and believing this misconception? Microsoft is not locking down Windows to only run on certain CPUs, they're just not going to add support to older OSes for newer technologies and ISAs. Since Kaby Lake is still an x86 CPU, you can install and run older Windows on it just fine; it simply may not take advantage of the newer features Kaby offers. You're also going to be at the mercy of getting drivers for the new hardware on the older OS. Take a look at MSI's and ASRock's sites. Notice that at Z87 they stopped offering WinXP drivers? You can't even get Win8 drivers for current motherboards (Win7 and 8.1, but not 8, and yes, it makes a difference). So why all the outrage at MS doing something that hardware manufacturers have done for years?

As for the tick-tock change, I think this also reflects how powerful CPUs have become lately. Years ago, having a 5+ year-old CPU meant a computer that was completely inadequate. Now we still have people rocking away with Nehalem, Sandy Bridge, and AMD Stars without a worry. If consumers don't have a need to upgrade as often, why not extend the R&D cycle to try and wring as much optimization as possible out of what we have now?
 

Your reasoning is right, but you have cause and effect reversed. It does in fact reflect how CPU speeds have stalled. From 1980-2005, software makers could write sloppy code, confident that by the time it was up for sale in a year or two, processors would have become fast enough that it wouldn't be slow anymore. But if you were using a computer from around when the software was being developed, it would be slow at running the new software.

1985 - 2 MHz
1990 - 33 MHz
1995 - 300 MHz
2000 - 1.2 GHz
2005 - 3.5 GHz
2010 - 3.7 GHz
2015 - 4.0 GHz
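A quick sketch of the yearly growth rate implied by those rough, rounded figures, just to make the stall visible; the output is only as good as the numbers above.

```python
# Compound annual growth rate between the clock speeds listed above (MHz).
clocks = {1985: 2, 1990: 33, 1995: 300, 2000: 1200,
          2005: 3500, 2010: 3700, 2015: 4000}

years = sorted(clocks)
for a, b in zip(years, years[1:]):
    growth = (clocks[b] / clocks[a]) ** (1 / (b - a)) - 1
    print(f"{a}-{b}: {growth * 100:5.1f}% per year")
# Roughly 75%/yr in the late 80s, 24-55%/yr through 2005,
# then low single digits per year after that.
```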

Around 2000-2005, processors (notably the Pentium 4) ran smack dab into the power leakage wall, which prevented clock speeds from being cranked up any higher. Any faster and the power requirements and heat generation skyrocketed. Robbed of the primary means of speeding up processors for the previous 30 years, CPU manufacturers had to resort to other means to eke out performance: multi-core, out-of-order execution, optimized pipelines, hyperthreading (using otherwise unused parts of the core to execute other instructions in the queue), and so on.

These helped a little, but nowhere near as much as cranking up clock speed. That's why a Sandy Bridge processor from 5 years ago is still about 80% the speed of Skylake clock for clock. A couple decades ago, you would've expected it to be just 10% the speed. Most of Intel's R&D has instead been focused on reducing power consumption.
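Turning those two figures into an implied per-year rate makes the contrast stark; this sketch uses only the numbers in the paragraph above.

```python
# Implied yearly per-clock (IPC) improvement: a 5-year-old Sandy Bridge
# is ~80% of Skylake clock for clock, vs. the older era where a
# 5-year-old chip was roughly 10% of the speed of a current one.
def yearly_gain(old_share_of_new: float, years: int) -> float:
    return (1 / old_share_of_new) ** (1 / years) - 1

print(f"now:  ~{yearly_gain(0.80, 5) * 100:.1f}% per year")  # about 4.6%
print(f"then: ~{yearly_gain(0.10, 5) * 100:.1f}% per year")  # about 58%
```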

Faced with this new reality, software makers have been doing something they didn't really have to do before - optimize their code. That's the reason Windows 8 and 10 actually run faster on older computers than Windows 7. Which is why older computers can still run modern software. Which is why we don't need to upgrade computers as often. In a way, we were victims of our own success in the past - forced to upgrade computer hardware frequently because it was improving so quickly. No more.
 

I seriously doubt Intel is doing this by choice: the lack of meaningful upgrades for people who already own an Intel CPU from 3-4 years ago means that most of these customers are going to wait several years longer before bothering with an upgrade.

Until about 10 years ago, people were itching for a new PC every 2-3 years because there was a huge immediate performance gain from upgrading that was obvious in nearly all software. Today, upgrading after five years only gets you ~50% more performance which may go unnoticed in most software.

PC sales are dropping by 6-10%/year for a reason: lack of reasons to upgrade. Intel and AMD have trouble delivering faster CPUs, most software is already more than happy with existing CPUs anyway and an increasing number of people are forgoing PCs and laptops, preferring to do most of their personal computing on their phone or tablet.
 
I've had no strictly performance reasons to upgrade for years. In general, I have done so to get newer or faster interfaces, other new features, or because "it's my job to be current with computer technology."
 
Proof from the horse's mouth that Moore's law is dead. No one can go beyond 7nm on silicon and produce a functional chip. Scary times for Silicon Valley & the whole tech economy.
 

The Pentium Pro already had superscalar pipelined out-of-order execution and the DEC Alpha family had it many years earlier.

Those pipeline optimizations yield about 50X more work getting done per core per clock tick (the 8080 could take 10+ cycles to execute individual instructions from instruction fetch to finish while a Skylake CPU can issue as many as 50 instructions to its execution units in the same 10 ticks) and they were done because finding ways of keeping existing execution resources busy is a much cheaper way of increasing performance than adding more of them and cranking clock frequencies up.
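Spelling out the arithmetic behind that 50X figure, using only the rough cycle counts quoted above:

```python
# Per-clock throughput comparison from the rough figures in the post:
# an 8080 spends 10+ cycles finishing one instruction, while a modern
# core can issue ~50 instructions to its execution units in 10 cycles.
cycles = 10
i8080_instructions = 1        # ~1 instruction completed in 10+ cycles
modern_instructions = 50      # ~50 instructions issued in 10 cycles

i8080_ipc = i8080_instructions / cycles     # ~0.1 instructions/cycle
modern_ipc = modern_instructions / cycles   # ~5 instructions/cycle
print(f"~{modern_ipc / i8080_ipc:.0f}x more work per core per clock tick")
```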

HT is one more example of finding cost-effective ways to keep execution resources busy: when Intel introduced it, they said it adds less than 5% to chip complexity but yields 20-30% more performance depending on instruction mix. That figure is likely better on modern Intel chips with more issue ports.
 
Yes, pure clock speed matters, and yes, some programmers use that as a crutch for poor code. I think it was understandable back in the 80s and 90s since so few people knew how to code, let alone code well. But you might want to rethink the timing here of your claimed cause/effect.

We're nearly 15 years past Netburst and the power leak wall, as you called it. We're ten years past the last time we saw significant clock speed increase. Obviously Intel saw this back then because they abandoned Netburst for Core. You really want to argue that it took Intel this long to finally adapt to what they saw clearly back then with this new tick-tock-tuck approach?

As you yourself state, those "other means" were pursued immediately after the speed wall, so don't you think it's more apt to say they were the reaction to the clock stall, and not this new design paradigm?

If you really think that these ancillary technologies like multi-core, Hyper-Threading, or anything else that speeds up IPC are vastly inferior to pure clock speed, perhaps it would be helpful to state what kind of scale you're thinking of. Otherwise, compare a C2D against a Sandy Bridge and a Skylake Pentium at the same clock and see how much difference they make (which wouldn't even account for HT or the additional cores now available).
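If you actually wanted to run that comparison, one crude approach is to time an identical fixed workload on each machine and divide the score by the clock speed the CPU sustained during the run. A minimal sketch; the workload and the 3.4 GHz placeholder are assumptions, and interpreter overhead dominates, so only the ratios between machines mean anything.

```python
# Very rough per-clock throughput probe, not a rigorous benchmark.
# Run the same script on each machine, note the sustained clock speed
# during the run, and compare the normalized scores between machines.
import time

def score(iterations: int = 10_000_000) -> float:
    acc = 1
    start = time.perf_counter()
    for i in range(iterations):
        acc = (acc * 31 + i) & 0xFFFFFFFF   # simple fixed integer work
    elapsed = time.perf_counter() - start
    return iterations / elapsed             # loop iterations per second

ops_per_sec = score()
clock_hz = 3.4e9   # placeholder: substitute the tested CPU's actual clock
print(f"~{ops_per_sec / clock_hz:.4f} loop iterations per CPU cycle")
```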

In short, your claim might make sense if Intel was in a bind and was searching for any way to dramatically increase performance due to heavy software demands. But we have such an abundance of CPU power right now, despite the clock stall, that CPUs aren't being pushed by the majority of software out there. Not only that, but these new architectures are planned out years in advance. We won't see the fruits of this new three-stage design for another couple of years at the earliest. By that time, you're saying either it took Intel 20 years to get something to market in reaction to Netburst's failings, or that it's been paying for Netburst's problems for 20 years.
 
Which means the next Intel CPU will be on 14nm, just like AMD's Zen. Maaaybe there's a chance to catch up? I can hope, right? 😀

I know, Intel will be on a mature node, with experience and refinement, and its chips will be faster even without architectural improvements. But the gap will close faster if Intel can't keep up its rate of improvement.

Not to be a downer, but if Intel is struggling at this point, what makes you think AMD will be able to close the gap, considering they will run into the same problems? Since the Athlon 64 days, AMD has always been behind. I want them to succeed and put pressure on Intel, but I don't think they will be able to close a gap that big.

Mainly the guy behind AMD's new architecture. The co-author of x86-64 and the lead designer from when AMD was ahead of Intel went over to Apple, and (I can't say for sure because looking it up would be a pain, and I have no idea how it was calculated, so I could easily be wrong) I believe AMD was beating Intel's IPC back then. They had him back for Zen, and the hope is that it brings AMD close to current performance, though I'd be happy with just Sandy Bridge-level performance, which should be the minimum.


AMD is fabless. They spun off and sold their fabrication factories years ago. They now just design the CPU (or GPU) and contract with a fab to manufacture it for them. The fab factories are the ones closing in on Intel. Samsung and GlobalFoundries already have 14nm working. Toshiba has 15nm. TSMC is the big one which is still stuck at 20nm.

Not to mention that, at the rate they are going, Samsung is set to beat Intel to the theoretical limits of silicon.
 

AMD was ahead of Intel only because Intel was still desperately trying to make Netburst work at any cost back then. Netburst itself was a huge regression on IPC compared to the P3: the 1.3GHz P3 Tualatin was winning benchmarks against the P4 until the P4 passed the 2GHz mark, that's a 50% IPC handicap for the P4.

AMD's domination crumbled as soon as Intel admitted that Netburst was a dead-end and started over, using the P3/Pentium-M as the new foundation and enhancing it with lessons learned from Netburst to create the Core series.
 
The last really big improvement was pretty much Sandy Bridge. Everything after that is optimization to some degree. No huge performance improvements since then.

The big step before that was basically Conroe/Penryn (Core 2), which made a huge jump in performance and was able to compete with Nehalem on many levels.
They could have just kept pushing that architecture to faster clock rates (note what clock speeds those 45nm chips reach when overclocked: around 3.8-4 GHz is easy to get, and with further optimization that would have been easy to manufacture).

Even with all the improvements since 2008, we can still run any modern OS on these chips without any real need for more, except when doing things like video rendering or gaming. It's basically the first time we can use modern software on 10-year-old systems without any problems.

Also, around 5-7 nm there are other physical laws to keep in mind. At that scale we're talking about gates that are less than 13 atoms across. At those distances electrons can simply tunnel through, and silicon no longer behaves as a semiconductor, so the laws of physics will set a limit to shrinking beyond that point.
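For what it's worth, that figure of roughly 13 lines up with a back-of-envelope count of how many silicon unit cells fit across such a gate. The 0.543 nm lattice constant and 0.235 nm Si-Si bond length are standard silicon values; this is only an order-of-magnitude check.

```python
# How much silicon crystal actually fits across a 7 nm or 5 nm gate length.
SI_LATTICE_NM = 0.543   # silicon lattice constant (unit-cell edge)
SI_BOND_NM = 0.235      # nearest-neighbour Si-Si bond length

for gate_nm in (7, 5):
    cells = gate_nm / SI_LATTICE_NM
    bonds = gate_nm / SI_BOND_NM
    print(f"{gate_nm} nm gate: ~{cells:.0f} unit cells (~{bonds:.0f} bond lengths)")
# ~13 unit cells at 7 nm and ~9 at 5 nm: thin enough that electron
# tunnelling becomes a serious problem, as the post says.
```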
 
Is anyone actually surprised about this? The writing has been on the wall for a long time; just look at how long it took Intel to get Broadwell onto the market, and they have consistently said that each of the die shrinks they've done recently has been more difficult than the last.
 
Lots of good discussion going. Just wanted to say, personally, I approve of this altered design scheme. Companies that produce microprocessors have traditionally relied on ever-shrinking transistor nodes to help improve the performance and efficiency of their chips. Sometimes that was necessary, and using smaller transistors does provide considerable benefits, but it isn't strictly required for improvement either.

A perfect example of this is Nvidia's last few architectures, particularly Kepler and Maxwell. Both use the same transistor technology, but Maxwell is undeniably faster and more efficient. The companies working to produce better transistor technology weren't keeping pace, so Nvidia was forced to work around it. It is also worth noting that Maxwell is essentially a heavily modified Kepler; refinement can produce powerful results. Similarly, all of the Core processors are essentially refined versions of the previous architecture. If you look at them in a die diagram, they look very similar. And the Core architecture itself is a heavily refined multicore adaptation of the Pentium M, which was itself a refinement of the Pentium III. Not having new transistors to carry part of the weight will likely push Intel to be more innovative with its improvements.
 
Man, these articles! Just a huge disappointment. OK, I just got a gaming notebook 4 months ago, and guess what: it's two generations behind. It's a Haswell at 22nm.
Where are these new CPUs? They never make it to mass production.
 