Intel Broadwell CPUs to Arrive Later This Year


Intel has only just begun pushing the mobile SoC front in a remotely serious manner: they did not have anything worth talking about under 10W until this year, and CPU-wise, Bay Trail was beating ARM in most benchmarks by a fair margin at the time it was announced.

Intel is not making losses because they are "falling behind;" they are making losses because the high-margin PC/laptop segments are collapsing in favor of lower-cost, lower-margin systems (or in some cases, migration to smartphones and tablets) and longer replacement cycles.

Even if Intel went ARM, they would still have the problem of gaining market share in the mobile space, and ARM-based sales generate only ~1/10th the gross profit per unit of higher-end desktop CPUs. The same goes for Atom should Intel manage to gain market share there: even if Atom scored heaps of design wins, Intel's profits on Atom sales would still be only a small fraction of what they get on i3/i5/i7. Also, SoCs have integrated IO controllers, so there are no chipset sales to complement CPU sales either, which is another $5-10 per sale Intel loses compared to their traditional desktop/laptop model.

Going from selling a $200-600 mobile CPU plus a $30-50 chipset per laptop to selling a $30-50 SoC is a pretty steep drop to work with if they want to play in that arena, regardless of whether they choose to do so with ARM or Atom.
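To put rough numbers on that drop (midpoints of the ranges above are assumed for illustration, not actual ASPs):

```python
# Rough per-device silicon revenue, from the price ranges quoted above.
# Midpoints are assumed for illustration only.

old_cpu = (200 + 600) / 2      # mobile CPU: $200-600
old_chipset = (30 + 50) / 2    # companion chipset: $30-50
soc = (30 + 50) / 2            # whole SoC: $30-50

old_revenue = old_cpu + old_chipset
print(f"Per-laptop silicon revenue:   ${old_revenue:.0f}")        # ~$440
print(f"Per-device SoC revenue:       ${soc:.0f}")                # ~$40
print(f"SoCs needed to match revenue: ~{old_revenue / soc:.0f}x") # ~11x
```

That ~11x gap is roughly where the "~1/10th the gross profit per unit" figure above comes from.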
 


Intel would be making ~$13B+ if it wasn't for the $3.1B they toss away on mobile (and another $1B this last quarter). Did you read the JP Morgan note asking Intel to STOP doing mobile, which would free up $0.50 EPS (not that I agree, just saying)? If you're the top dog you get a premium for your stuff (i.e., NV's GPUs vs. AMD's, Intel CPUs vs. AMD CPUs). They are practically giving them away because they are NOT the top dog in mobile in any way. Margins would be fine if they had a chip they could charge REAL pricing for. You're also forgetting we are talking 1.2B units vs. 340M. You don't have to charge a 62% margin when selling ~4x the units.
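Quick toy math on that volume point (unit counts from above; the $250 PC ASP is just an assumed figure for illustration):

```python
# Volume vs. margin: what per-unit gross profit matches PC-class profit?
# Unit counts come from the post; the PC ASP is assumed for illustration.

pc_units, mobile_units = 340e6, 1.2e9
pc_asp, pc_margin = 250.0, 0.62           # assumed ASP; margin from the post

pc_gross = pc_units * pc_asp * pc_margin
per_unit_needed = pc_gross / mobile_units

print(f"Unit ratio: {mobile_units / pc_units:.1f}x")               # ~3.5x
print(f"Gross profit per mobile unit to match: ${per_unit_needed:.0f}")
# ~$44/unit of gross profit -- which is why the chip has to command
# "REAL pricing" rather than subsidized ASPs for the volume math to work.
```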
 
When I started at Intel, they were on the tail end of their P852 process at 600nm. The P854 process at 350nm was just ramping up. This process saw Pentium Pros and Pentium MMX along with regular desktop chips. Hard to believe they will be at 14nm by the end of the year.
 


Yeah, it's nice they got some competition from AMD to ramp up their R&D department. Let's hope AMD can rebound.

Despite ARM's 'threat', they're still quite a ways off on a performance level. Apple even thought about switching their laptop/desktop CPU architecture to ARM to unify their platforms, but realized they'd be way too slow vs. the competition. Intel, on the other hand, is closing the gap on performance per watt.

As the famous Japanese quote goes: "I fear all we have done is to awaken a sleeping giant and fill him with a terrible resolve."
 

Pretty much.

3-4 years ago, few people (myself included) would have believed it possible to bring relatively high-performance x86 cores down to a sub-3W power budget, but this year Intel is shipping 2-3W x86-64 SoCs that outperform 32-bit ARM CPUs in the same power budget.

I'm just amazed at how the old x86 kludge managed to survive so long, and it looks like it might finally break into the handheld and other traditionally non-x86 markets. ~15 years ago, I thought I would own an IA64-based PC by now.
 


Outperform ARM in what?
http://www.slashgear.com/nvidia-tegra-k1-out-performs-intel-haswell-in-early-benchmarks-13312939/
I don't see Intel doing much of anything in mobile other than losing money.

More fantasy on your part. But this year... blah... blah (I've been hearing that for a few years now)... Let me know when they stop losing money giving away chips. You act as though the enemy sits still waiting for Intel to finally pass them. They are NOT. On the other hand, ARM already took 21% of Intel's entire notebook market, and Intel still can't get into mobile without paying the difference between ARM's chips and theirs (I guess you'd call that price matching). More damage is heading Intel's way, as ARM can far more easily adapt to MORE watts than Intel can to LESS. Samsung/TSMC will be on their new processes before Intel moves to 14nm. You gain nothing and actually seem to be losing ground in more ways than one. 3D FinFET was supposed to take ARM down... How did that work out? Now all Intel has is a shrink, and the enemy gets one first. Next time, the enemy has FinFET too... What then? They gain more, that's what.

Intel had better buy NV before China/Korea run them over in fabs (TSMC, Samsung). ARM SoCs with the best GPUs, produced on Intel's process, would be a game changer. Other than that, we'll see ARM erode Intel's finances for the foreseeable future.
 

I already answered that half a dozen times: CPU performance, both raw and per watt. If you look at CPU-only benchmarks, Bay Trail is 20-40% ahead of current ARM chips in most benches - particularly parsing- and branch-heavy ones like browser and JavaScript benches. Intel has proved they are capable of bringing x86 all the way down to ARM-level power budgets; AMD is still struggling to get there without sacrificing too much performance.

The K1 is mostly marketed for gaming-oriented devices, and devices using it are not shipping yet. By the time they do, Intel's Moorefield will be about to hit the market too, likely giving Atom another significant boost in CPU performance by reinstating out-of-order execution in Intel's phone chips.

The competition might not be standing still, but Intel is incrementally rolling their architectural big guns back in, with each new Atom generation bringing much greater leaps in performance per watt than any ARM competitor manages, and 2014 marks the year Intel's SoC CPU performance leapfrogged most of the competition under 3W.

Yes, Intel still has some catching up to do on the IGP side of things, but that is not as much of a problem on more productivity/business-oriented mobile devices.
 


You might want to read that article again. The Tegra K1 outperformed Haswell in graphics benchmarks only, not CPU performance. And even then, it was against an HD 4400, not an HD 4600 or Iris Pro 5200. Nvidia had better be able to win in the graphics department, considering graphics is what their company was built on.

"Samsung/TSMC will be on their new processes before Intel moves to 14nm."- And they will still be slower than the slowest chip in Intel's x86 lineup (outside possibly Atom).

"3Dfinfet was supposed to take ARM down." News to me that that was ever posted anywhere. The point of making 3D transistors was the ability to be able to cram more transistors into a smaller space; as mentioned by wikipedia.

"Multigate transistors are one of several strategies being developed by CMOS semiconductor manufacturers to create ever-smaller microprocessors and memory cells, colloquially referred to as extending Moore's Law."

"More damage is heading Intel's way as ARM can far more easily adapt to MORE watts than Intel can to LESS." ... You may want to take a course in computer design before making a statement like that, then realize it's a false assumption. ARM by nature is designed to run at low power vs. high performance. That is what RISC (which ARM is based on) is designed for. Throwing more watts at it will not make it perform at the same level due to it's fundamental design. Oh and with Intel doing more with less wattage? What do you think these instruction sets Intel keeps adding to their CPU are? (MMX, SIMD, etc). They are RISC instructions.

Again, where is Intel losing money on their CPU chips? I know you may believe all the hype Nvidia creates, but when it comes to market size, they're just like a Chihuahua: a small dog with a constant bark, surrounded by a bunch of Great Danes (Intel, Apple, Qualcomm). Intel isn't going anywhere anytime soon.
 

RISC is not specifically designed for low power or any power budget in particular: look at IBM's POWER-series chips, with TDPs going all the way up to 400W per package. The original design goal behind RISC was simplicity and homogeneity - keep instruction sets simple and the instruction/data formats as uniform as possible, to waste as little logic as possible on decoding instructions and accommodating different instruction/data packing formats.

Also, if you look at the internal architecture of Intel and AMD chips from the past ~15 years, their internal design is fundamentally RISC: messy x86 code goes into the instruction decoders, and the decoders issue one or more very RISC-like micro-ops for internal scheduling, re-ordering and execution. You could almost say modern x86 CPUs are silicon-based emulators.
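A toy sketch of that decode step (syntax and micro-op names invented for illustration; real x86 decoders are vastly more complex):

```python
# Toy model: crack a CISC-style memory-operand instruction into RISC-like
# micro-ops, the way the post describes x86 decoders working internally.

def decode(instruction: str) -> list[str]:
    op, dst, src = instruction.replace(",", "").split()
    if dst.startswith("["):            # read-modify-write on memory
        addr = dst.strip("[]")
        return [
            f"load  tmp0, [{addr}]",   # read the memory operand
            f"{op}   tmp0, {src}",     # ALU work happens register-to-register
            f"store [{addr}], tmp0",   # write the result back
        ]
    return [f"{op} {dst}, {src}"]      # register-only ops map ~1:1

# One messy x86-style instruction becomes three uniform micro-ops that an
# out-of-order core can schedule independently:
for uop in decode("add [rbx], rax"):
    print(uop)
```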
 


Come Christmas, Moorefield will be facing Denver, which is also out-of-order, and we've already seen the dual-core perform like a quad in CPU tests; it will likely be facing others at 20nm as well (NV is also rumored to be attempting to pull 20nm into Q4, but I doubt that, as Apple/Qcom will probably get it first). A57 is a huge leap over A15.

You're claiming K1 isn't in devices yet, but at the same time telling me Moorefield is coming (and btw not until late H2, which coincides with the Denver K1 and the A57 parts)... So not in devices for a while either, right? What device has leapfrogged ARM at 3W?
http://blog.gsmarena.com/nvidia-tegra-k1-benchmarked-aces-tests/
http://www.evolife.cn/html/2014/77075.html
(2nd link was source of them I think)
Those are most of the K1 benchmarks. Point me to some Intel scores toppling those. They are not just GPU benchmarks, either. So far I've only seen press materials on Merrifield, and running WebXPRT is dubious at best at this point.
http://vr-zone.com/articles/nothing-redeeming-intel-mwc-2014/72692.html?utm_source=rss&utm_medium=rss&utm_campaign=nothing-redeeming-intel-mwc-2014
As discussed above, it's probably a benchmark Intel optimized for (reminds me of the BAPCo SYSmark fiasco). Feel free to google Z3480 and give me some links to victories. As gaming takes up 70% of our time on mobile, I fail to see a small CPU difference doing anything. It's all about the games going forward. The SlashGear benchmark I linked shows the K1 isn't bad against a 15W laptop chip on the GPU side (and I'm sure Iris uses more watts than the HD 4400).

http://anandtech.com/show/7314/intel-baytrail-preview-intel-atom-z3770-tested/3
Bay Trail Z3770 SunSpider: 566; the K1 above scored 501 (K1 wins; JavaScript, lower is better).
Kraken: Z3770 4686, K1 3958 (K1 wins, lower is better).
Google Octane: Z3770 6219 vs. K1 6450, but you can't really compare since it's v1.0 vs. v2.0 on the NV K1. Intel beats Shield by 50% here, and the K1 shows ~20% faster than T4 in 2.0, so I'd expect Intel to win here, but nothing earth-shattering, and this is the A15-based K1, not the Denver one coming for Christmas.
K1 scores 44225 in AnTuTu, which blows away the 801/T4 as shown. The new AnTuTu brings Intel down from the previous bogus numbers, so I'm not sure where they stand now:
http://www.eetimes.com/author.asp?section_id=36&doc_id=1318894&
AnTuTu nonsense fixed. Again, CPU.
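Quick sanity check of the margins those score pairs imply (numbers from this thread only; the Octane pair mixes v1.0 and v2.0, so treat it as rough):

```python
# Relative margins from the scores quoted above. "lower" = lower-is-better
# (SunSpider/Kraken are times); "higher" = higher-is-better (Octane).

scores = {                      # (direction, Z3770, Tegra K1)
    "SunSpider": ("lower", 566, 501),
    "Kraken":    ("lower", 4686, 3958),
    "Octane":    ("higher", 6219, 6450),   # v1.0 vs. v2.0, rough only
}

for bench, (direction, z3770, k1) in scores.items():
    if direction == "lower":
        margin = (z3770 - k1) / k1 * 100   # how far behind the Z3770 is
    else:
        margin = (k1 - z3770) / z3770 * 100
    print(f"{bench}: K1 ahead by {margin:.0f}%")
# SunSpider ~13%, Kraken ~18%, Octane ~4% -- nowhere near a 30-50% Intel
# lead against this particular chip, which is the point here.
```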

It loses to Qcom in Browsermark at the AnandTech link, so yet another browser benchmark NOT showing Intel in a good light.
AndEBench Java again shows Qcom kicking Intel's butt big time (699 for Qcom to Intel's 428... OUCH). I'm not seeing what you're saying here.

It wins the native AndEBench, but the 801 is faster than the 8974 (S800), so it still won't be a blowout, and the 805 is upon us in the next month or two, so that's its real competition on the Qcom side now. We can see K1 tripling T4 in Basemark X and more than doubling the S801. In Basemark OS II, K1 again doubles T4 and blows away the 801 as well. We can't compare Intel here yet AFAIK, but we know how it does in a number of benches vs. the S800, and we see the damage K1 does to the S801, which is faster.

Point me to some benchmarks we can compare to the K1's where Intel is winning on Android. I'm not really interested in hearing Windows stuff, as that really isn't comparable. I see Qcom/K1 doing very well against Intel. You're acting like only Intel can improve massively, but as shown, K1 can triple T4, and we are NOT even talking about the A57s which will be hitting from Qcom & NV (Apple custom also) soon. Though Qcom's custom core is a long way off, Apple's/NV's are this year. We have already seen Denver benchmarks.

What the heck are you looking at for benchmarks? Links please; statements are useless to me without the links. I see you claim a lot, but I prove otherwise. Proof please, not your opinion. IF you have some, I'll be happy to look, as I have money at risk 😉 IF you have the benchmarks, post the links. Saying something a dozen times doesn't make it real. PROVE it. I know what you CLAIM, but I'm more interested in what can be PROVED. I don't see any Intel dominance. A bright spot here and there showing potential, but nothing concrete showing the dominance you're claiming.
 


Intel is losing $3 billion a year on mobile.
http://www.eetimes.com/document.asp?doc_id=1322263
JP Morgan is telling them to give it up... LOL. You don't read financial reports, do you? :)
"The mobile and communications group saw a $3.1 billion operating loss in 2013, with 1Q 2014 losses hitting $929 million and revenues at $156 million. While Intel officials acknowledged the loss, several were quick to call recent financial numbers an “investment” in the mobile ecosystem. "

You really want to argue with Intel about their OWN loss statements? It's on pace for $4 billion this year if you extrapolate Q1's $929M loss (4 × $929M ≈ $3.7B, before the bleeding accelerates)! I don't believe hype, I believe BALANCE SHEETS and EARNINGS reports. Pile on top of that comments from major financial institutions telling you the pain will not end, and you should get my reasoning here. You think JP Morgan is full of idiots who can't do math?

"JP Morgan statement read:
We continue to believe Intel will lose money and not gain material EPS from tablets or smartphones due to the disadvantages of x86 versus ARM"

Let me know when they're wrong. Until then I'm right 😉 Even Intel's comments say they merely expect to HOPEFULLY bring cost structure down some as they work through 2015. Sounds like a full year+ of complete losses in mobile even from their own mouths.
http://www.fudzilla.com/home/item/33701-yes-intel-is-subsidizing-bay-trail-tablets
https://webcache.googleusercontent.com/search?q=cache:wzDSBCkidrEJ:www.pcworld.com/article/2089421/how-intel-is-buying-a-piece-of-the-tablet-market.html+&cd=1&hl=en&ct=clnk&gl=us&client=firefox-a
That's the Google cache of the PCWorld article; now that the subsidies are kicking in, you see the Q1 ~$930M loss, and it will continue as shown.

No need to take a course; as InvalidError just showed, there is nothing stopping ARM from putting out an 85W chip at some point and coming full bore after Intel's desktops, and that is exactly what I expect with the M1, P1, V1, etc. revisions after Denver. Boulder is already on the roadmap for servers as well. Intel is so far losing $3.1B a year on mobile trying to tackle ARM on THEIR turf. We will see if ARM loses any money trying to take INTEL turf; they already took 21% of the ENTIRE notebook market last year. I'd say the writing is already on the wall unless Intel buys NV to get a great foothold just as NV takes off with desktop-class GPUs in SoCs. NV already has a custom ARMv8 core on the books and benchmarked; it would be FAR better on Intel's 14nm, and so would NV video cards. Wins all around. You're confused if you think an ARM A57 at 4GHz won't be a problem for Intel's desktop chips. The power is there; they just need apps/games to bring it all home, which is already happening for games, and apps are next. A dual-core Denver performs just like a quad-core A15 (r3). Now double that dual-core Denver to a quad and run it at 4GHz. You think that chip sucks? HECK NO.

What do you think you get when you can shove more transistors into a chip? MORE PERF. 3D FinFET was supposed to vault Intel ahead of everyone, and it got them nowhere vs. ARM. What do you think it means to extend Gordon Moore's law? You can keep adding more inside, which allows you to keep up perf increases instead of hitting the wall. You are building my case, not yours.

Let me know when Iris Pro 5200 gets into an Android tablet at ARM watts 😉 IF Intel was beating Qcom/ARM, they wouldn't have to subsidize their chips as they are now, pushing harder with subsidies. Having to BUY your way into a design shows you're losing. The problem for Intel is that Nvidia is no longer just producing GPUs. Denver is an IN-HOUSE CPU, and you should google the team behind it: major CPU guys from all the top designs of the past. Here's a quick one:
https://webcache.googleusercontent.com/search?q=cache:sp_-VGamB5EJ:www.forbes.com/sites/patrickmoorhead/2013/10/10/nvidias-mobile-custom-64-bit-arm-cpu-its-sooner-than-you-think/+&cd=3&hl=en&ct=clnk&gl=us&client=firefox-a

Patrick Moorhead is no dummy (11 years at AMD, Paul was there too, has ~a dozen AMD patents). He's the #1 ranked analyst last I checked. He helped come up with the AMD64 logo... LOL. Among other more notable things:
http://www.moorinsightsstrategy.com/about/
Chock full of tech brains, but what does Moorhead say about NV's coming chips?
"Let’s look at Nvidia’s processor team. They have been in existence since 2006 and have hardened multiple, “off-the-shelf” ARM cores. Unknown to most, Nvidia’s engineers have been working on their ARM 64-bit Denver for at least three years, since before the CES 2011 announcement. Hailing out of Portland, Oregon, the team consists of former CPU jockeys from Intel, AMD, HP, Sun, and Transmeta with experience in superscalar, OoO (out of order) execution design, micro-code, VLIW, hyper-threading, and multi-core. Does this experience and background guarantee success? No, but it provides the opportunity to succeed, and succeed big if you look at what others have accomplished."

These guys don't suck. They have covered all the bases to take on Intel without WINTEL at all. From CPU and GPU to the bus (NVLink; no need for HyperTransport, InfiniBand, etc.), they're covered. This isn't NV Kool-Aid. Google Patrick M. if you don't know who he is. I used to sit in Intel/AMD/NV conferences, so despite not having an engineering background, I know a thing or two about these people, since I was a reseller of all of them for 8 years. I'm by no means ignorant about what is going on here.

You might want to READ more articles yourself. You're wasting my time. 🙁 More claims with no proof or anything to back your statements. Intel is losing ground; the data doesn't lie. Intel gains nothing from 14nm, as everyone will move to 20nm before their SoCs go 14nm. A step after that, everyone gets FinFET AND a die shrink while Intel gets another shrink. They won't fab their way out against Samsung making $32B a year to Intel's $10B, and Samsung is NOT alone (IBM laid the groundwork, and Samsung/GF are running with the results now).

Without something massively changing, WINTEL is in trouble vs. the ARM/Android armada. I could go off on MS the same way (I've laid that case out many times in here already), and I think they will be the bigger loser unless they successfully run to greener pastures (maybe cloud crap offsets OS losses, god forbid we end up with Common Core crap, etc.), or maybe they try to buy NV. They might have more luck, since they don't have the hate that goes with Intel over the chipset business they killed (and the lawsuit, etc.). But this would do them no good if they didn't start pumping out Android Nokias with NV SoCs. They already make $5 for every Android device sold... LOL. Buying a SoC/GPU company is definitely a move toward vertical integration now that they own a phone business, and they earn far more than Intel, so it's easy for them to lay out $25B (they made $22B TTM).

http://money.msn.com/business-news/article.aspx?feed=MW&date=20140601&id=17664808
They're piling up Computex awards already for K1/GRID. Golden Award for K1! Kool-Aid? NOPE. Apparently a LOT of people are drinking it, correct? GRID testing went from 100 companies, to 200, now 600. Growing massively by the quarter.
 

ARM is already trailing 30-50% behind Intel's Bay Trail in most CPU-based benchmarks, and Bay Trail has about half the IPC of Intel's conventional CPUs, so ARM is going to need more than the A57 to become a credible threat there. There is no point in pushing a "low-power" architecture to 100W if it ends up performing barely on par with a 35W i3.

Integrating all the tricks of a mature CPU architecture will take ARM vendors with in-house core-design ambitions several years of trial and error to adapt to their specific implementations. ARM is not going to magically catch up with mature CPUs overnight, since ARM chip designers will have to go through multiple large-scale redesigns, just as AMD and Intel did with x86, before settling on a final general form.
 


"ARM is already trailing 30-50% behind Intel's BayTrail in most CPU-based benchmarks"
Links to these benchmarks, please. I gave many of mine, with benchmarks. They do not show what you say; in fact, quite the opposite in JavaScript, Java, etc.

You're not paying attention to the teams at play here. Apple has a team (PA Semi etc., again loads of people from everywhere), Qcom has one, and both have been doing in-house designs for ages; NV's team was highly skilled, with experience across many cores and buses BEFORE they came to NV. What do you think ARM has been doing for the last 10 generations? Final form? There is no difference between what AMD/Intel are doing and what the ARM gang is doing: different architectures, same tactics. You are vastly overestimating the skillset of Intel vs. everyone else. Many of these people are ex-employees of every big house (some worked at multiple top dogs: DEC, Sun, Intel, AMD, Fairchild, Motorola, etc.). These teams are NOT kids who just graduated from college, for any of the names mentioned. Patrick Moorhead was explaining this precisely in NV's case (same with all the others). Intel isn't dealing with newbs; they are dealing with 10-25 year CPU veterans, many of whom used to build Intel/AMD/DEC Alpha/Sun/Transmeta CPUs. They've been doing your "experiments" for 20 years at all of the top semi companies. You are vastly underestimating the skillset of the teams working the ARM side.

More importantly, you seem to think the CPU is still king. I would have agreed 5-10 years ago. Now it's the GPU's turn, with a decent CPU to help out. There is no need to be tops in CPU to take down Intel's margins and put a real crimp in their earnings. Are there more CPU-limited workloads, or GPU-limited ones? Do we want FOUR Intel i7-4770s in our PCs, or quad-SLI video cards? Do supercomputers run on tons of Intel CPUs, or is it the Teslas in massive quantities that pump out the power in the Top 500 supercomputers? IT IS THE GPUS. There is no need for more than a Tegra 3 to feed NV's next-gen supercomputers (which was partially the point of the CPU they made: to feed Tesla and stop letting Intel/AMD do it).

I never said Intel would be out of business next year. I'm guessing we'll be watching the damage for 5-10 years, and even then I don't think Intel or MS goes bankrupt. I just think we are witnessing the next changing of the guard.

Again, can't you give some benchmark links with Intel on Android showing what you say, please? All I see is YET ANOTHER opinion post with ZERO data. You keep making counterpoints (if that), but with no supporting evidence. PROVE SOMETHING or quit wasting my time. ARM has already "magically" captured 21% of Intel's market share in notebooks, and they are not even 64-bit yet. They have already caused Intel to "magically" lose $3.1 billion chasing them, on pace to hit $4 billion based on Q1 Intel mobile losses. They just started this price-matching crap in November, and we can see it accelerating Intel's losses in Q1.

I don't want to hear your 15th opinion. I want YOUR DATA. I don't care if it's S800/801/805/K1, but show me Intel beating them handily in the stuff you mentioned. I've already countered your JavaScript, Java, and other CPU claims with DATA. So where is YOUR data?
 

You should re-read the articles you linked and pay attention to "lower is better" vs. "higher is better", because my 30-50% figure is taken directly from your links.

In the AnandTech link, Atom wins all CPU disciplines by a wide margin except Java and Browsermark. In the Browsermark case, Atom basically ties for second place with three others, with a ton more devices not far behind, which seems to indicate Browsermark does not scale particularly well as a CPU benchmark.
 


Try again... the K1 isn't in there; that is why I gave the K1 links to compare against AnandTech's Atom scores. It isn't winning. It also lost AndEBench Java and only won the native test there. You're exaggerating by saying ALL disciplines when they lose half.
http://blog.gsmarena.com/nvidia-tegra-k1-benchmarked-aces-tests/
Pull the K1 numbers from there or the Evolife link I gave before, among others. Compare those to the AnandTech scores, and Intel is losing. Try again, please. Intel is beating some of the other chips, but against the K1 it's not the same story. Nice try though 😉 Yeah, Intel won against some old chips, NOT the K1. NV wins Kraken and SunSpider, and as noted, Qcom wins narrowly in Browsermark, and that's just the S800 (surely it loses by more to the 801 and now the 805, right?). So you're using old data to make a point that no longer holds. The S805 has been benchmarked also, and it is faster than what is in there too (those were 800s; they have the 801 now, and the 805 is just hitting). The competition has already moved, twice in Qcom's case, and the 810 hits shortly after Denver at Christmas (probably a new Samsung in there at some point too).

Intel will also have problems getting into devices, as their camera support is lacking (13MP, vs. up to 100MP for NV, ~55MP for Qcom, etc.), among other issues. Samsung is about to ship 16MP, with 20MP shortly after, so no wins coming for Intel there even with Moorefield. Mobile is about more than just the CPU, and in everything else, the GPU is king these days. Other things matter too, but when all is said and done, without a good GPU you're not having as much fun or speed (games, or pro apps with CUDA).

http://www.fool.com/investing/general/2014/03/09/intel-where-were-all-of-the-bay-trail-android-tabl.aspx
Where are all the Android Bay Trail devices? It's still difficult to find benchmarks that aren't from FFRDs, and that info is old today, as everyone has moved on.

http://www.fool.com/investing/general/2014/05/25/will-intels-moorefield-be-competitive.aspx
That compares the Qcom 805, K1, and Moorefield. Not impressed: bandwidth and camera are lacking. No high end for Intel. We'll have to see how the tablets work out; I expect K1 to do well there for sure. Intel still has to show us some Android tablets from the LAST chip, as they sure missed the Christmas promises, right? Then again, I predicted that all along on here when people pointed to the slides.

http://www.anandtech.com/show/8035/qualcomm-snapdragon-805-performance-preview/2
Bay Trail is already losing SunSpider even to T4. Again, the Bay Trail T100 loses Kraken vs. T4 and others (it loses to both the TF701T and Shield). TWO CPU losses, and Cyclone also shows well vs. Intel in Kraken (again an Intel loss).

Octane 2.0: again loses to both T4 devices, with the TF701T scoring 5681, Intel already behind, and K1 scoring 6450. Intel doesn't win anything here. Try again :) So we have Octane 2.0, Kraken 1.1, and SunSpider 1.0.2 all with INTEL LOSING. T4 also shows VERY well in the Basemark stuff, and K1 blows it away (3x faster in Basemark X and almost 2x faster than T4 in Basemark OS II, as shown in the K1 benches). Intel won't be winning those vs. K1 either once we get the benchmarks: K1 scores 1448 with everyone else at AnandTech at 1158 or less (iPad Air, S805, etc.). The GPU scores for the T100 in the 805 preview at AnandTech are just terrible.

I'm really not seeing your evidence. I digress.
 


I think you are confused. I am not talking about today's performance, I am talking about 10-20 years from now. x86 has seen great leaps in performance almost entirely due to die shrinks and better manufacturing tech that far outclasses the competition. Once we hit the limit of die shrinks then we move to different materials, and once those hit their limit... then what?

ARM was never meant to be, or sold as, an efficient instruction set. It is a low-power instruction set, meaning it can run under certain power envelopes... albeit extremely slowly. x86 has always been more efficient, while requiring a higher minimum amount of power to function. The recent improvements in Intel's ability to run at lower voltages have a lot more to do with the materials and manufacturing processes in use than with instruction-set improvements. As ARM manufacturers catch up to Intel on manufacturing, they will continue to be able to offer even lower absolute-power options, but ones that take more energy for a given workload. Over the next 5 years, I think we will once again see Intel and AMD running the devices we interact with, and ARM being relegated to controlling appliances like cars, TVs, and clocks, just as they were before ARM-based smartphones hit the market.

And Itanium failed for a whole host of reasons, the first of which is that Intel's engineers figured out how to apply the branch prediction they learned from Itanium development to the Pentium 4 (and related Xeon CPUs), which made continued Itanium development redundant and unnecessary (granted, by 'figured out' I mean 'hired a bunch of AMD staff'). Beyond that, there were a whole host of software and driver development issues which scared off deployment of such systems compared to Xeon and Opteron offerings that 'just worked'. However, if we hit a sort of end point of development with x86, then there will need to be other avenues explored and taken, just as they thought they needed to do with Itanium.

But I am talking about something more than a mere instruction set: a whole new architecture. While complicated, it is possible with modern electronics to move away from binary processors. Maybe we move to a base-8 or base-16 processor, which through denser encoding would be able to store and process several times more data per clock cycle. The latency of such a processor might be a bit of a bear to get around, but it would offer so much more raw compute and storage capacity that it may be worth the latency hit. That would be extremely interesting to see going forward. But we are talking about a development nightmare and a complete resetting of our understanding of computer electronics, so there is not a chance of seeing this in the real world until x86 has exhausted its options... and we have several more years until that happens.
 

Totally incorrect: x86's IPC has increased by leaps and bounds too.

The 486DX33 is about 70X faster than the 8086 at 8MHz... so that's a 4X clock increase and a 16X architectural efficiency increase. The 100MHz Pentium was around 10X as fast as that, so that's about 3X from clock and 3X from architecture. The Pentium Pro / P2 roughly doubled IPC again thanks to out-of-order execution and on-package L2. There were not many game-changing architectural changes left after that, and we are already up to the P3-S, which topped out at 1.4GHz stock. There is the ~30% gain both AMD and Intel got from their integrated memory controllers, another ~20% gain for Intel from transplanting the good stuff from Netburst into the P3 to create Core2, and that's about it for the past 10 years if we omit incremental sub-10% gains.

Without all those architectural improvements, today's chips would be ~100X slower on IPC, before including multi-core and hyperthreading, which raise the bar to ~600X. That is very close to the ~800X clock increase since the original 4.77MHz 8088/8086 that the whole desktop PC industry as we know it started from.

So, clock rates are only HALF the reason today's x86 CPUs are as fast as they are. Architecture deserves credit for the other half.
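The arithmetic is easy to check (all factors are the rough figures above; the ~6x multiplier for quad core + HT is assumed to reproduce the ~600X):

```python
# Reproducing the post's cumulative speed-up estimates. All factors are
# rough figures from the post, not measurements.

big_steps = 16 * 3 * 2        # 486, Pentium, and PPro/P2 IPC steps -> ~96x
extras = 1.3 * 1.2            # integrated memory controller + Core2 gains
cores_and_ht = 6              # assumed multiplier for quad core + HT

print(f"IPC from the big steps:   ~{big_steps}x (the post's ~100X)")
print(f"With incremental gains:   ~{big_steps * extras:.0f}x")
print(f"With multi-core + HT:     ~{big_steps * cores_and_ht}x (the ~600X)")
print(f"Clock, 4.77MHz -> 3.8GHz: ~{3800 / 4.77:.0f}x (the ~800X)")
```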
 