Intel Stops The Tick-Tock Clock


InvalidError

Titan
Moderator

Can you name even a single chip designer or manufacturer with a CPU architecture anywhere near as mature as Intel's that fares any better at improving its chips further? You might be tempted to blurt out "ARM", but ARM is still catching up on well-known CPU performance optimizations that have been in desktop and server CPUs for 10+ years, so I would not count the ARM A72 as mature yet.

Performance stagnation due to current chip fabrication process and material limits is not unique to Intel. It is affecting the whole semiconductor industry, and CPUs, regardless of ISA, are getting hit worse due to the generally sequential nature of software. While there may be alternate fabrication processes and materials available right now, most are far more expensive than silicon and would not be economically viable until more research is done to either bring costs down or find entirely new processes and materials.
 
Not to be a downer, but if Intel is struggling at this point, what makes you think AMD will be able to close the gap, considering they will run into the same problems? Since the Athlon 64 days, AMD has always been behind. I want them to succeed and put pressure on Intel, but I don't think they will be able to close a gap that big.

Getting down to the same node as Intel, though not as mature a node, will go a long way toward increasing efficiency. But even more, it is an entirely new uArch, and that can be huge, as others have noted in this thread. This next generation has the potential to change the game on both the CPU and GPU fronts. I'm trying to stay optimistic and take all of AMD's marketing with a grain of salt, but I really look forward to seeing what AMD has come up with and hope it is at least comparable to Skylake, considering that Kaby Lake probably won't be far off when Zen drops.
 

lodders

Admirable
There are plenty of people on Tom's Hardware looking to upgrade a computer, and if they have an i5 or better built after 2010, I usually advise them not to bother.
My son is happily gaming on his overclocked Phenom II X4 (which is now 7 years old).
Given the limitations on clock speed and transistor size eloquently discussed above, anyone building a new PC with a decent i5 in it should expect it to remain nearly as fast as a brand-new PC even when it is 5-10 years old.
Once nobody really needs to upgrade, is this the death of the PC industry?
 

I really hope this is not prophetic. After all its hype, when Bulldozer "dropped," it did so just like a turd. I really want AMD to do better.


Software needs to catch up. Maybe it will be a new class of AI-based applications that will drive a completely new programming paradigm (think parallel processing, neural networks, etc.), but once that happens, hardware will need to catch up again, or at the least be significantly reconfigured. What would be truly remarkable is to discover that, whether by chance or by design, the APU model of processing is already there.
 


A new uArch and a new process node together can also bring problems. That was the entire idea behind tick-tock originally: launch a new uArch on a proven process node, then, after that uArch has been proven and matured, move it to a new process node, and repeat. That way they never risk having one bad side ruin the other.

Again, the Pentium D and Core 2 Duo shared the 65nm process node, yet if you look at efficiency, Core 2 pounded it into the ground.
 

InvalidError

Titan
Moderator

The Pentium 4/D is a poor comparison point, since the Netburst architecture was a huge IPC regression compared to the Pentium III: the Pentium 4 did not manage clean sweeps in benchmark suites against the 1.3GHz Pentium III Tualatin until the P4 hit 2GHz. If the Tualatin had been shrunk from 130nm to 65nm, it would have run circles around Prescott too.

There is not much merit in beating an architecture that never worked as intended and was a clear regression over the previous generation architecture.
 


Unfortunately, as you know, AMD doesn't have that luxury or time. However, as far as I can remember, the only issue AMD has had (outside of IPC) was with the Phenom. Intel has had a couple I can think of; however, some were with the chipset as opposed to the CPU itself, e.g. the Sandy Bridge recall and the recent Skylake issue.

But all we can do at this point is wait and see... here's hoping AMD can make good on Zen, as I don't think the GPU side can support the CPU side much longer.
 

lodders

Admirable
The problem is not only one of size limits.
The bigger problem is that the recent size reduction to 14nm did not reduce the power consumption of each transistor as much as previous process shrinks did.
In the glory days of Moore's law, reducing the transistor size allowed you to double the number of transistors without increasing the TDP significantly.
Apparently those days are over: even if they go to a 10nm process, the power per transistor won't change much.
The new situation is that more transistors = worse battery life in a laptop or phone.
On a desktop, not only do you need a bigger heat sink, you also have problems getting enough electricity to the CPU (most of the CPU pins are power pins!).
This is why Intel has been concentrating on reducing power consumption for the past few years.
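As a rough back-of-envelope sketch of why that matters (my own toy numbers, using textbook "Dennard scaling" assumptions rather than anything Intel has published): dynamic power per transistor goes roughly as C·V²·f, and the old scaling wins came from voltage and capacitance shrinking along with the transistors.

```python
# Toy comparison of classic "Dennard scaling" vs. recent nodes.
# Dynamic power per transistor ~ C * V^2 * f. The scale factors below are
# textbook idealizations, not measured Intel/AMD figures.

def power_per_transistor(c_scale, v_scale, f_scale=1.0):
    return c_scale * v_scale ** 2 * f_scale

shrink = 0.7                    # one full node: linear dimensions ~0.7x, density ~2x
density_gain = 1 / shrink ** 2  # ~2x transistors in the same die area

# Classic Dennard era: capacitance AND voltage shrink with the node.
dennard = power_per_transistor(c_scale=shrink, v_scale=shrink)

# Recent nodes: voltage barely moves, so power per transistor barely moves.
post_dennard = power_per_transistor(c_scale=shrink, v_scale=0.95)

print(f"Dennard era:  {dennard:.2f}x power/transistor, "
      f"{dennard * density_gain:.2f}x power with 2x the transistors")
print(f"Recent nodes: {post_dennard:.2f}x power/transistor, "
      f"{post_dennard * density_gain:.2f}x power with 2x the transistors")
```

In the first case you can double the transistor count and still land at or below the old TDP; in the second, doubling the transistors pushes total power up, which is exactly the "more transistors = worse battery life" situation described above.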
 

InvalidError

Titan
Moderator

The main reason why CPUs have so many ground and power pins is inductance: each pin in the LGA socket has at least 4nH of parasitic inductance, and with over 5GHz of frequency content on the supply rail, that translates to somewhere in the neighborhood of 100 ohms of AC impedance per pin and the need for hundreds of pins in parallel to bring the overall AC impedance back down to something manageable.

Were it not for parasitic inductance, high-speed chips would need less than half as many power and ground pins.
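As a quick sanity check on those figures (my own arithmetic, using the ~4nH per pin and ~5GHz numbers above, and ignoring the decoupling capacitors and package planes a real power delivery network also relies on):

```python
import math

L_pin = 4e-9   # parasitic inductance per LGA pin, from the figure quoted above
f = 5e9        # frequency content on the supply rail

# Inductive reactance of a single pin: Z = 2*pi*f*L
z_pin = 2 * math.pi * f * L_pin
print(f"AC impedance per pin: ~{z_pin:.0f} ohms")   # ~126 ohms

# Identical pins in parallel divide that impedance, which is why it takes
# hundreds of power/ground pins to get the rail impedance down to something
# manageable.
for n_pins in (100, 300, 600):
    print(f"{n_pins} pins in parallel: ~{z_pin / n_pins * 1000:.0f} milliohms")
```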
 

megiv

Distinguished
I think that SSDs actually caused the slowdown in the CPU market.

The real bottleneck for most people in most daily use cases was lack of I/O performance, not CPU power, and the SSD solved it, making 5+ year old computers feel like new and removing the need to upgrade to a whole new PC (and, consequently, to buy a new CPU).
 


That was my exact point, that the Netburst uArch was horribly inefficient compared to Conroe on the same process node.

The end point being that a new process node is not going to be all AMD needs to catch up to Intel; their new uArch will also have to be decent enough.



Sandy Bridge did not have a recall; the chipset did. The current Skylake issue is not much of an issue either, much like AMD's TLB bug, as it will almost never be encountered by 99% of users. The ones who were running into it were the people who just run Prime95 all day long.

Bulldozer had a whole host of issues from IPC to power draw to thread handling (Windows needed a patch to properly handle threads on the cores).

Still, I am just saying that the process node is no guarantee that AMD will "catch up", because it is only one aspect of what will make or break a CPU.
 

InvalidError

Titan
Moderator

Before bothering with an SSD, you need enough RAM to comfortably accommodate all your open software and frequently accessed data to avoid running the SSD down with swapping. Once you have that, the SSD's advantages become much less significant for most everyday uses. Swapping ruins your day regardless of how fast your other components are.

The industry's main issues are that a growing number of people already own PCs more powerful than they can be bothered with and many are moving to phone/tablet computing. In the case of my nephew and nieces born after 2010, I'm wondering if they will ever bother with a PC of their own since they got introduced to phones and tablets at less than two years old and many schools are moving towards standardizing on iPads as a mandatory item from grade 1.
 
We're going to reach another time period like the 80s and 90s where programmers actually have to program efficiently. In the 21st Century, they have been abusing the performance increase of hardware and getting lazy with coding, but when that hardware can no longer get any better, it's going to be up to programmers to start doing crap efficiently again.
 

InvalidError

Titan
Moderator

Or it can go the other way around: piling on more abstraction layers in an attempt to extract more parallelism without putting in significant additional programming effort to make that parallelism actually efficient and meaningful. We'll end up with software that uses several times more processing power through poorly optimized or even counter-productive automated parallelism to get the same job done.
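As a toy illustration of that overhead problem (my own example, not anything from a real codebase): parallelism applied too mechanically, at too fine a grain, can easily burn more time on coordination than it saves on work.

```python
import time
from multiprocessing import Pool

def tiny_task(x):
    return x * x          # almost no real work per item

if __name__ == "__main__":
    data = list(range(100_000))

    start = time.perf_counter()
    serial = [tiny_task(x) for x in data]
    print(f"serial:   {time.perf_counter() - start:.3f} s")

    start = time.perf_counter()
    with Pool(4) as pool:                     # 4 worker processes
        parallel = pool.map(tiny_task, data)  # pays pickling/IPC cost per chunk
    print(f"parallel: {time.perf_counter() - start:.3f} s")

    # On most machines the "parallel" run is slower here: the per-item work is
    # far too small to amortize the process startup and IPC overhead.
```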

My prediction is that it will continue getting generally worse before getting better.
 

jasonf2

Distinguished
One other thing to realize here is that AMD is outsourcing its fabs. The competitors' process node transistor density, even if advertised as 14nm, isn't even close to what Intel has been putting out; 10nm Intel chips will probably be the equivalent of a 7.5nm process node from the outside foundries AMD uses. AMD isn't Intel's biggest problem; it is really ARM, which has been able to produce more efficient architectures on older process nodes. I am pretty sure that their refocus on two cycles of arch development per node isn't really a bad thing at all. Intel will have to compete with ARM going forward. This means they will have to figure out how to get price competitive with ARM while maintaining some margin. Huge investments in process node replacements every two years are keeping Intel chips too expensive to even come close to ARM pricing. Letting the nodes mature by 33% more should help maintain their margin while allowing them to become more competitive.
 

InvalidError

Titan
Moderator

Intel already has chips that compete with ARM's for power (2-3W), price ($20-40) and performance: the Atom product family. Unfortunately, Intel is still struggling to score many significant design wins for their newest z3500-3800 (22nm) and x5000 (14nm) series chips. I bet most people aren't even aware that Android phones and tablets running on Intel's Atom chips actually exist.

It costs Intel about $100 per chip to design and manufacture an i5/i7. Everything above that goes straight to Intel's gross profit margin. If ARM or AMD became an immediate threat to Intel's desktop and laptop chip sales, there is plenty of price movement headroom for Intel to work with.

Intel's biggest immediate threat is consumers' longer replacement cycles due to new chips providing marginal benefits over what their PCs from 4-5 years ago already had. Just look at the number of people holding on to their Sandy Bridge or even Nehalem rigs here simply because there is nothing really worth upgrading to yet.
 


ARM is more efficient in its more basic form, but once you add in the same level of complexity as any of Intel's x86 chips, you start to see more power draw.

ARM started off as a very simplistic chip and didn't even have OoOE (out-of-order execution) in the original design. But it has since been added, as have a lot of features that x86 has had for years but ARM has only just implemented, and ARM chips are slowly creeping up in power draw versus what Intel has.

ARM just has a very good grip on the smart phone market and tablet market because of the cheap licensing and that when it did come out neither Intel nor AMD had a competitive product. Hell even Atom wasn't out until well after ARM had grabbed control of the market.
 

jasonf2

Distinguished
Invalid, your price points are off. Intel has been artificially underpricing its mobile platform for some time to try to gain market penetration, along with special vendor arrangements. And this isn't a fanboy discussion arguing which is better, x86 or ARM. x86 didn't make the first generation of the mobile space, historically, because of power consumption issues. If Intel is going to be viable long term, they have to make the transition into mobile with profit margins they can work with. Giving away chips to try to gain market penetration isn't going to go on much longer.

Intel's Core lineup (i5 and i7 LV variants) has only been tablet-viable for about the last two generations and certainly won't work in a phone form factor yet. BTW, I love my Surface Pro. All I am saying is that it is unrealistic to expect Intel to compete with ARM if they adjust their margins up to desktop chip levels in the mobile space. The whole tick-tock thing was dreamed up because AMD came out swinging with the Athlon 64 and hit them where it hurt (the high-margin desktop processor business). So Intel has to figure out how to adjust itself to ARM's price point now, as desktops morph into cellphones with docks over the next decade. To do that, they are going to have to make the chips for less.

ARM is doing nothing but design R&D. TSMC or Samsung fronts the process node bill for whatever company is licensing the tech and outsourcing the fab. Under that model Intel has what can be both an advantage and a liability. If they can fab chips cheaper than their competition, they have better control and margins. If they spend too much money too quickly on next-gen node development, however, they price themselves out of the market because they lack the flexibility to use cheaper sources. Exponential increases in node R&D costs ultimately have to mean longer process node cycles if there isn't direct competition on the fab side of the business. So what you are actually getting now is a tick-tick-tock cycle, because the old desktop tock cadence keeps Intel too expensive to compete in the mobile space and the desktop business continues to shrink.
 

InvalidError

Titan
Moderator

Every other mobile SoC costs from $10 for low-end Mediatek chips to $45 for Nvidia's X1 and other high-end chips. If Intel cannot produce competitive mobile chips priced within that range, then they are doomed as far as mobile is concerned.

If you think my price points are "way off", consider this: you can get 8-10" tablets based on Mediatek SoCs for under $100 and 7" ones for as little as $50. There is very little room in the mobile SoC space for Intel to indulge in the same massive price gouging they have been doing in laptops and desktops.
 
I don't think this has anything to do with performance on the high end. There is no demand for more performance on the high end; there is demand for performance in an extremely small package. They could easily make 5" square CPUs, pack 50 cores onto them, and eat up 300W, but there is no demand for that kind of horsepower; anyone who needs it just uses an array of machines to get the job done. There is demand for a basic-i3-performance SoC that runs at <1W and weighs <1g.
 

jasonf2

Distinguished


We just said the same thing. Only what you are terming price gouging is what I am calling profit margin. Those profit margins in the desktop space have also funded the process node R&D. Another fabless ARM-licensed company, Mediatek, is exactly what I am talking about here.
 

InvalidError

Titan
Moderator

Fabless companies do not get fab services for free. The fees they pay to have those wafers made by someone else cover that someone else's process R&D costs, and they span everything from SRAM chips to SoCs and GPUs, not just CPUs. Everything being made by a foundry pays for some fraction of the R&D. In the case of Samsung and nearly everyone else other than Intel, they don't even make desktop or server chips for themselves; the only profit they see "from desktop and server" is the margin they make on someone else's wafer starts.
 