Apple Dumping Intel For ARM Chips In Laptops?

Intel has the fastest AND smallest processors in the world. By next year Intel will also have moved to the 22-nanometer process, erasing whatever performance or energy-efficiency edge ARM might hold. If necessary, Intel can also cut customers the best deals to keep them. It's unlikely Apple would move its PC-based devices off Intel.
 
How about having both architectures exist in the same machine? Build a full-fledged x86-64 laptop with an additional chipset that runs an ARM processor just like an iPad, with access to shared resources like the keyboard, trackpad and display.

You could then flip between iOS and OS X much like you do now when using a KVM between two machines. iOS and OS X would synchronize continuously, so any change made in one OS is also reflected in the other. This could be done either through shared preference libraries or through shared user data on a flash volume that both can access.

The benefit? If you expect light use, you fire up the laptop in iOS and use apps or watch movies much as you would on an iPad, gaining hours of extra battery time. But if you need more power than iOS can offer, you boot into OS X and use the more robust x86 architecture, at the cost of battery life.

I could see this as a future option, especially during a hybrid period when ARM processors still aren't ready to replace x86 cores for general computing and x86 still can't compete with ARM on low power. And in reality, it would add only a small amount to the overall cost of the machine, on the order of $150-$200.
 
... the CPU functions as a router for workloads now... it still does some legacy stuff, but more and more of the heavy lifting is done by the GPU... physics and rendering for games, encoding/decoding for video... I still wonder why benchmarks do video compression with just the CPU... come on... I have a two-year-old laptop with an HD 4570 in it... and then Windows 7 comes along, with DX11 and DirectCompute... even Movie Maker uses the GPU to render video... so you have to see the new ways... you can do stuff faster now with help from the GPU...
 
[citation][nom]Houndsteeth[/nom]How about have both architectures exist in the same machine? Have a full-fledged x86-64 laptop that has an additional chipset that allows it to run an ARM processor just like an iPad, with access to shared resources like the keyboard, trackpad and display.You could then flip between iOS and OS X much like you do now when using a KVM between 2 machines. iOS and OS X would synchronize themselves continuously so that any changes made in one OS is also translated to the other. This can be done either by shared preference libraries or by shared user information on a flash volume that both can access.The benefit? If you are expecting light use, you can fire up the laptop in iOS and make use of apps or watch movies much like you would on an iPad. This would give you hours more battery time. But if you find you need more power than iOS is able to offer, you can boot up OS X and make use of the more robust x86 architecture, but at the cost of less battery time available.I could see this as being a future option, especially in a hybrid situation we may see in the near future when ARM processors still aren't ready to replace x86 cores for general computing, and x86 is still not able to compete with ARM in the low power arena. And in reality, it would only add a small amount to the overall cost of the machine, on the order of maybe $150-$200.[/citation]
HP already does something similar to this without wasting money on another processor. My HP laptop has a special mini OS that boots up instantly. It lets you browse the web, check your email, and do basic office productivity tasks like word processing and spreadsheets, all without the need for an inferior ARM processor. Newer laptops with Intel Sandy Bridge CPUs and no discrete graphics have insanely long battery life, which further proves just how much of a waste ARM is in anything but a phone.
 
History may repeat again.

Once upon a time (when most people here were not yet born) there were mainframes. In the sixties an "inferior & cheaper" kind of computer emerged, the minicomputer. Despite being inferior, its price gave it a much broader market, and economies of scale allowed minicomputers to practically kill the mainframe market (though a mainframe niche still exists).

In the late 70s and early 80s another "inferior & cheaper" technology emerged, the microprocessor. The companies that had risen to near-dominance with the minicomputer didn't see it coming and were late to react. Microprocessor-based computers (namely workstations and servers) killed the minicomputer market.

Meanwhile another market emerged, the home PC (x86). That market grew so big that the companies producing "superior" RISC processors tried to break into it. However, as usual, the bigger x86 market allowed Intel (and to a much lesser extent AMD) to out-compete the "superior" RISC processors, so x86 ended up replacing RISC in workstations, servers and even supercomputers. As far as I know, only IBM and Sun (now Oracle) still produce their own RISC processors.

However, some RISC designs (notably ARM) found a niche in the embedded domain, and later in mobile devices. That niche has outgrown the PC market by orders of magnitude. And today we may be watching history repeat: an "inferior" computer type with a much bigger market may replace the "superior" old product or relegate it to a niche market.

Today about 100 million x86 chips are sold every year; that is a tiny market compared with the billion cellphones sold a year (90% with ARM inside). The ARM market isn't restricted to cellphones either: other products like HDTVs, Blu-ray players and even cars have big computing needs and carry a plethora of microprocessors. I've been told that one of the biggest manufacturers of SoCs for the audiovisual market is phasing out several proprietary core designs and replacing them with ARM cores in newer products.

Not only that, the PC market will soon be shrinking. Need internet access? Don't buy a PC; soon your TV (ARM inside) will do that, or your cellphone (also ARM inside). Want to game? You don't need a PC; you can buy a console, and probably soon you'll be able to use all the computing power in your TV for gaming too. x86 has dominated longer than any other technology because code compatibility mattered in the PC market, but who cares about code compatibility in a cellphone, a TV set or a car? Can anyone reading this tell me what processor is inside his HDTV or his car?

History may repeat, or things may be different this time, but if I had to bet (and I may be wrong), I wouldn't bet on the PC-x86 combo.
 
As it stands right now, I would agree with you. But the borders between desktop, mobile, and gaming electronics are QUICKLY closing.

Eventually I think the market will reach a point where we see the decline of the x86 platform as RISC-based processors slowly take over. All of the current-generation gaming consoles run Power-based RISC processors, and nearly every mobile electronic device runs a RISC-based processor. So really, the only market left is the desktop. A couple more years of development on the ARM platform and I think it's perfectly reasonable to start seeing ARM chips in desktop PCs.

So no, not like it will happen tomorrow, but I can easily see it happening in the future.

No, probably not. The hundreds of millions of x86 PCs are just going to flip to a woefully weak ARM processor to save a few bucks in power. Not going to happen.
 
One could argue this could be a huge mistake for Apple. Had Apple stuck with PPC they could have helped foster a more viable alternative to the x86 platforms that dominate the desktop/laptop environment today. PPC had HUGE potential that was never fully tapped.

Now, the rumor is that Apple is looking at ARM to replace x86. This is going in the wrong direction in terms of CPU computational abilities. I just can't see the video/graphics apps that Apple uses (like Final Cut Studio or Photoshop) working very well with ARM. Apple has business relationships with key industry partners that need powerful platforms to work from and ARM isn't there yet.

If Apple wants to change horses again, they should look at what AMD has in the stable. Fusion and Bulldozer have lots to offer and since Apple has already made the x86 changeover, the changes for developers would be minimal.

 
[citation][nom]ottozro[/nom]I am amazed that this speculation is considered to be fact by some ha ha ...[/citation]

I'm going with this.

It's interesting that this recent run of propaganda and puffery comes just before Nvidia's release of its Q1 earnings this Thursday ...

most likely to show a 20% or so reduction against Q1-10 revenues and EPS.
 
The factor that has held back RISC-based desktop PCs until now is software base compatibility, and the poor performance of object code generated by instruction set emulators, translators and recompilers. The PPC chips that were in Macbooks two decades ago could stomp all over the x86 PCs at the time. But those PPC chips weren't quite fast enough to emulate the x86 architecture and still match the real x86 chips when running x86 code. Given the choice, consumers chose the architecture that ran their software the fastest, which unfortunately happened to be the x86-based chips.

Fast-forward 20 years. We now have PCs with so much processing power beyond what the typical user needs that most CPU cores stay idle most of the time (F@H users notwithstanding). These CPUs do have enough processing power to transcode a program from one instruction set to another (or emulate the other architecture), and then run it. We're even starting to see some of this today, when we see game consoles that run games from the prior generation - even though the newer console's architecture is completely different.
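The emulation being described boils down to a fetch-decode-execute loop run in software. Here is a minimal sketch of that idea, assuming a completely invented toy guest ISA (the opcodes, register count, and instruction format are all made up for illustration; a real x86 emulator is vastly more involved):

```python
# Toy fetch-decode-execute loop illustrating interpreter-style emulation.
# The "guest" ISA is invented: each instruction is an (opcode, a, b) tuple.
def emulate(program, regs=None):
    regs = regs or [0] * 4          # four guest registers, all zeroed
    pc = 0                          # guest program counter
    while pc < len(program):
        op, a, b = program[pc]      # fetch + decode
        if op == "MOVI":            # regs[a] = immediate b
            regs[a] = b
        elif op == "ADD":           # regs[a] += regs[b]
            regs[a] += regs[b]
        elif op == "JNZ":           # jump to index b if regs[a] != 0
            if regs[a] != 0:
                pc = b
                continue
        else:
            raise ValueError(f"illegal opcode {op}")
        pc += 1
    return regs

# Sum 3 + 2 + 1 by looping: r0 = counter, r1 = accumulator, r2 = -1 step
prog = [
    ("MOVI", 0, 3),
    ("MOVI", 1, 0),
    ("MOVI", 2, -1),
    ("ADD", 1, 0),      # acc += counter
    ("ADD", 0, 2),      # counter -= 1
    ("JNZ", 0, 3),      # loop back while counter != 0
]
regs = emulate(prog)    # r1 ends up holding 6
```

Every guest instruction costs many host instructions here, which is exactly why the surplus of modern CPU power (or ahead-of-time transcoding to native code) is what makes this approach practical.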

RISC is a far more efficient architecture than x86 for doing just about anything other than running x86 code. And even for that, if the CPU doesn't need to squeeze every last ounce of performance out of the x86 code in order to perform acceptably, then a RISC CPU can do that as well via instruction set transcoding or emulation. We are starting to see this situation now. Non-x86 processors are fast enough to run most x86 desktop software, in emulation or transcoded, with acceptable performance for most business users. Business users, in turn, define the hardware and software requirements of the PC industry, because that's where Dell, HP, Lenovo and the other OEMs make their money.

So now we have:
1) Windows 8 able to run on ARM,
2) an advancing CPU technology base capable of producing non-x86-CPUs that can run office software and all of a company's legacy software as fast as the leading edge PCs of 5-10 years ago,
3) an emerging mobile market that has already begun the recompilation process of many programs for a RISC-based architecture,
4) a transition from desktop computing technology to mobile computing technology for many office (and remote office, and telecommuting) workers, with a commensurate necessity for battery power endurance,

...and most of the requirements are in place to begin to migrate the world off of the x86 instruction set, and onto a RISC architecture.

I don't know what performance ARM processors can muster. I don't know whether the architecture has been pushed as hard as the x86-x64 architecture, or what would happen if it were. But I have to believe that if the same number of transistors were dedicated to an ARM-based CPU as are used for the top SB and Gulftown CPUs, as well as the application of advanced power-saving features such as core stopping, etc., then we'd see at least comparable performance - and possibly even better performance.
 
The article says:

"The new 3D transistor structure will enable Intel to increase performance while decreasing the overall chip size, power consumption and leakage."

The article SHOULD say:

"INTEL SAYS THAT the new 3D transistor structure will enable Intel to increase performance while decreasing the overall chip size, power consumption and leakage."

If you don't do that, it makes you look like NeuralSystem, who ALWAYS believes WHATEVER Intel says and probably believed the Prescott P4 was going to run cooler and faster than the Athlon 64. If Intel said it, it MUST be true for NeuralSystem.
 
Jesus Christ, it's like every article I have to step in and correct a bunch of misconceptions / misinformation.

It is not x86 vs RISC; that makes absolutely no sense. x86 is an ISA, RISC is a design philosophy. x86 is an instruction set made a long a$$ time ago by Intel using a CISC design philosophy. Back then systems had limited memory, and CPU internal storage (today called cache) was extremely limited; we're talking a few bytes. CISC seeks to create the most compact code possible, describing to the CPU what operations to perform while using the fewest instructions. It is then up to the CPU to interpret those instructions and produce the correct result, so software can be simpler while the hardware is more complex. RISC is the exact opposite: it seeks to reduce hardware complexity while giving software maximum control over the hardware, at the expense of more complicated software. The base requirement for a RISC architecture is that the CPU must be able to process a single instruction in a single cycle, whereas a CISC architecture is more flexible. This requirement isn't always adhered to.

Anyhow, none of that matters, because a true "CISC" CPU hasn't been made in a very long time. The closest thing you'll get is Intel's original Pentium; everything from the Pentium Pro onward is a hybrid of the two philosophies. The C2D is a mix of the two: it uses a front-end x86_64 instruction decoder that converts the CISC-style x86 instructions into RISC micro-instructions, which are then sent to the various execution engines for processing. When they're finished, the results are returned; instructions can typically be reordered dynamically during processing to produce the most efficient execution order. Thus even poorly written software can have decent to high performance, because the CPU is smart enough to try to make it work.
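The front-end decoding described above can be sketched in a few lines. This is a toy model, not real x86 semantics: the instruction names (`ADD_MEM`, `ADD_TO_MEM`) and the micro-op tuple format are invented, but the split of one compound register-memory operation into simple load/add/store micro-ops mirrors what a modern decoder does:

```python
# Sketch: splitting a compound CISC-style instruction into RISC-like
# micro-ops, as a modern x86 front end does. Instruction names and the
# micro-op format are invented for illustration.
def decode(instr):
    op, dst, src = instr
    if op == "ADD_MEM":     # CISC-style: reg += memory[addr], one instruction
        return [
            ("LOAD", "tmp", src),        # uop 1: read memory into a temp
            ("ADD",  dst,  "tmp"),       # uop 2: register-to-register add
        ]
    if op == "ADD_TO_MEM":  # memory[addr] += reg: load, add, store
        return [
            ("LOAD",  "tmp", dst),
            ("ADD",   "tmp", src),
            ("STORE", dst,   "tmp"),
        ]
    return [instr]          # simple instructions pass through unchanged

# One "CISC" instruction fans out into three simple micro-ops:
uops = decode(("ADD_TO_MEM", 0x1000, "eax"))
```

Once everything is in this uniform micro-op form, the scheduler can reorder independent micro-ops freely, which is where the dynamic reordering mentioned above happens.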

AMD is another good example: their last "CISC" CPU was the AMD 486; everything from the K5 onward has been a RISC CPU with an x86 front-end decoder. Even their amazing (for its time) Athlon CPU had its roots in the DEC Alpha. AMD hired almost the entire Alpha design team and let them work mostly unmolested to create what we call the Athlon. AMD had licensed the EV6 bus protocol along with large portions of the design spec. The Athlon was so compatible that the AMD Irongate northbridge could technically be used with an Alpha CPU without modification, and this was actually done in a few systems. So as one can see, AMD CPUs haven't been "x86" nor "CISC" internally for over a decade.

What it boils down to is that the ISA is just the hardware language used to express software instructions; it's the way a program communicates its desired actions to the underlying CPU. And while some ISAs are better at certain things, none of them is the "best". x86 has survived this long because it's proven to be the best suited for general-purpose processing. With SIMD-style extensions there is no need for a strictly RISC-based ISA.

As for CPU vs CPU performance, aka ARM vs Intel vs AMD, ARM will never win that battle. ARM CPUs are designed with low power usage as their priority; to accomplish this, they lack many of the complex features that are standard in modern CPUs. These features are not "x86" things; they also exist in Sun SPARC and IBM POWER designs. The biggest is the ability to decode and reorder instructions on the fly, which in and of itself is like a coprocessor next to the main processor. For smaller, lightweight, low-power devices that don't need high general-purpose performance, ARM is usually the way to go for obvious reasons.

Basically ARM is at least one order of magnitude slower than current Intel / AMD / Sun / IBM CPU designs. In a few years, when ARM reaches the performance of today's CPUs, Sun / IBM / AMD / Intel will have designed faster CPUs that are still an order of magnitude ahead. You cannot have both low power usage and high performance at the same time; you must sacrifice one for the other.
 
[citation][nom]Teramedia[/nom]The PPC chips that were in Macbooks two decades ago could stomp all over the x86 PCs at the time.[/citation]
You are 100% completely wrong. Macs with Intel CPUs run circles around the old PPC Macs. Intel's latest Sandy Bridge Core i7 processor is leaps and bounds faster than the PowerPC G5 was, and it runs cooler and uses less power too.
 
Techguy,

I think he's referring to the G5 PPC CPU versus the Intel CPUs of that time, not the Intel CPUs of today. This is difficult to compare, as the two CPUs work differently and software optimized for one will not port easily to the other.
 
[citation][nom]palladin9479[/nom]
The biggest being the ability to decode and reorder instructions on the fly,
[/citation]
A clarification. Both Cortex A9 and Cortex A15 are out-of-order superscalars.
 


Cortex A15 I don't know about, as I've yet to work with those. Cortex A9 is horrible at this, though: branch prediction and instruction rescheduling require a very complex and power-hungry piece of hardware, like its own miniature processor in and of itself. To keep power requirements down, the A9 uses an incredibly underpowered branch prediction / rescheduling unit; it might as well not be there. On top of that, the ARM ISA isn't very conducive to out-of-order operation, and conditionally executed instructions were used as a way to compensate. It really is a perfect ISA for small, lightweight processing, as it gets the job done without excess hardware; the downside is that it can't scale to meet large, complicated workloads.
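For a sense of what even the simplest branch-prediction hardware does, here is a sketch of the classic two-bit saturating counter, a basic building block of dynamic predictors (this is a textbook scheme, not a claim about the A9's actual predictor design):

```python
# Classic two-bit saturating-counter branch predictor, per branch.
# States 0-1 predict not-taken, states 2-3 predict taken.
class TwoBitPredictor:
    def __init__(self):
        self.state = 2                   # start in "weakly taken"

    def predict(self):
        return self.state >= 2           # True means "predict taken"

    def update(self, taken):
        # Saturate at 0 and 3 so one stray outcome can't flip a strong state.
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

p = TwoBitPredictor()
outcomes = [True, True, False, True]     # e.g. a loop branch with one exit test
hits = 0
for actual in outcomes:
    hits += (p.predict() == actual)      # count correct predictions
    p.update(actual)
```

The two-bit hysteresis is why a loop branch that is almost always taken only mispredicts once per stray outcome instead of twice; real predictors keep tables of thousands of such counters, which is where the area and power cost comes from.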
 
Well, IMO Intel has the lead in PC and notebook CPUs... I personally don't think ARM is going to make a huge impact on the PC/notebook market.
 