ARM: Intel Has An Uphill Climb Ahead

"All converting to ARM is not possible."

The above statement is only true if Chromebooks (and alike) fail miserably.
 
ARM has an even steeper uphill climb getting into the desktop CPU market than Intel has getting into the mobile sector with x86. The main factors currently holding Intel back are price and power consumption. Future Atom CPUs will be able to compete with ARM, and Intel is working on addressing power consumption. People assume x86 is less efficient than ARM, while in certain workloads it is actually the more efficient of the two.

On the other hand, I believe ARM has a big challenge in front of it in the desktop segment, and that is matching x86 in pure performance. A quad-core ARM at 2.5 GHz will not be faster than a current dual-core x86 CPU.
 
[citation][nom]herp_derp_derp[/nom]1. Intel cannot magically overcome the laws of physics with a bloated and inefficient architecture[/citation]

Intel knows how to work the physics, and the pace at which they put it into real factory processes is fast; the 3D gate is their latest advance, along with a new process shrink. Where do you think Intel's manufacturing process will be in 2-3 years? The competition is made up of several licensing companies; do you think they can spend as much money on R&D for their manufacturing processes as Intel does when they have to share that income?

Sure, the Atom requires a lot of transistors, but with Intel's manufacturing progress, who is to say they won't be able to move ahead even with a more advanced Atom?

Why do you think ARM made this announcement? They are running scared. None of their licensees has made as much progress in manufacturing as Intel has in the past year, and if Intel keeps up this pace, who is to say the Atom won't be able to compete with ARM even in the ultra-low-power market in, say, two years?
 
"Intel know how to use the physics and at the pace their putting that into real factory processes is fast, 3d gate is their latest along with new shrinkage"

They hyped up the 3D gate the same way they hyped up high-k metal gates. While it was a nice advancement, it didn't quite revolutionise things the way Intel said it would.

...and Intel catching up would require ARM's advancement to slow down, and lately ARM has been advancing faster than Intel. An ARM Cortex-A15 quad-core at 2.5 GHz is going to eat Atom for breakfast, before you even take into account that the A15 gets a process shrink, improved efficiency, and improved IPC.

Intel's process technology isn't superior to anybody else's at the same node; they're just one to two years ahead of everyone else.
 
Intel is trying to be more power efficient, but maybe ARM wants to be more powerful. ARM's argument is: Look Ma, no fan (but what if I OC the thing!?). Intel is still up to its old: Look Ma, no software (and I hope they don't start to use emulators!).
 
[citation][nom]jimmysmitty[/nom]I don't. Sorry, but switching to ARM for everything would be near impossible. It's the same reason Intel's Itanium IA-64 arch didn't take over and instead we have x86-64. The change is too hard for the majority of consumers.[/citation]

Agreed. Too many times have we heard that x86 is going downhill, and it has yet to happen after over a decade's worth of hearsay. Intel themselves tried to kill it and couldn't. The market is invested in it, and it's here to stay for what is likely going to be a very long time.
 
Intel's management are idiots. What are they waiting for to buy ARM?

x86 is old technology, which means it's not optimized. It's like Windows, but applied to hardware: a patched-up CPU.
 
I have to praise Microsoft on this one. Over the past few years, Microsoft has been on the good side of things. An effort to market ARM-based Windows will give buyers a power-efficient computing platform that is much cheaper with almost the same performance. What's important is the responsiveness of everything you do in the operating system, especially the GUI. It will be a race between x86 and ARM over who can give the market the first 16-core CPU, which benchmarks have shown to improve performance on Windows 7.

The switch is not impossible once the crowd is convinced by lower power envelopes, and we all know that ARM is much cheaper than whatever Intel can make. Itanium didn't take over because it didn't have much developer support. ARM, on the other hand, is now supported on Linux and the BSDs, and now Windows is moving in because it knows what's coming.

We all know that a lot of people think RISC lost the great CISC vs. RISC battle because CISC had more developers maintaining legacy code. But we're forgetting that RISC had a different use case back then, and now we're seeing ARM move into the CISC market. Proof of this is Android: all the Android platforms we're seeing now are ARM/RISC based, so I think we'll be seeing more RISC/ARM-based Linux and Windows desktops. x86 was accepted as a consumer PC before it was accepted as a server processor, and the same thing is happening now with the tablets and smartphones on the market.
 
I have more faith in Intel defending its turf than in ARM defending its own. If ARM enters the PC sector, it will have to get past VIA and AMD, and then it MIGHT be able to take on Intel. As for supplying most of the chips, or a complete solution on a single chip, Intel used to do that for the PC, and I suspect they are up to the challenge of entering the mobile sector.
 
[citation][nom]psiboy[/nom]Why would you bother with Atom when AMD has Brazos?[/citation]
Such a repeated comment!
Atom uses much less power, and is meant for those who are happy just browsing and want 8+ hours of battery life!

[citation][nom]dirkrster1[/nom]I have more faith in Intel defending its turf than in ARM defending its own. If ARM enters the PC sector, it will have to get past VIA and AMD, and then it MIGHT be able to take on Intel.[/citation]
ARM is not taking on the desktop or server segments where Intel reigns.
It's merely expanding its processor line from cellphones and tablets to netbooks and perhaps basic laptops: things that require long battery life and little CPU horsepower.
 
[citation][nom]sinfulpotato[/nom]Isn't x86 only emulated in current desktop CPUs? I didn't know they still made x86 CPUs...[/citation]

The x86 ISA isn't emulated so much as it's abstracted. Modern CPUs use an instruction decoder and prefetch unit at their front end. This unit is like a small co-processor: it reads incoming instructions, decodes them from big CISC instructions into many smaller RISC-like instructions, then dispatches them to the various processor resources to be executed. This unit also reorders instructions and performs branch prediction in an attempt to optimize execution at run time.

So basically you have RISC internal components coupled with a CISC instruction decoder (AMD / VIA), or hybrid CISC/RISC internal components with a CISC instruction decoder (Intel). Either way, modern CPUs haven't been "CISC" for over a decade. It's not about RISC vs. CISC; that war has already been fought. The industry is set on x86-64, and no single company, or even group of companies, is going to move it. Instead they can just implement different hardware and use decoders to translate.

From a pure micro-architecture point of view, the current designs represent the best middle ground between software coding optimization and hardware performance optimization. With a pure RISC CPU (SPARC / POWER), the software and the compiler have to be very good or you end up with poor performance. By definition, RISC requires more instructions to be generated and compiled to accomplish any set of tasks than CISC. Each operation is a single pass on the CPU and has a very predictable execution time. Due to the sheer quantity of instructions created, a badly programmed piece of software will chug along and there isn't much the RISC CPU can do to make it go faster. With a CISC CPU you can use complicated instructions in the coding / compiling; this not only makes your code smaller but also standardizes various complicated hardware interactions. It simplifies the software coding / compiling but puts more strain on the hardware to know how to optimize code on the fly. A pure CISC design would have the hardware executing large, complicated instructions that require multiple passes and do not have a predictable execution time. It's very hard to make a pure CISC design go fast; the execution units just get too bulky.

The current method of using small RISC execution units inside, but running software compiled against a CISC ISA, seems to be the best of both worlds. The linchpin is that the hardware decoder / instruction prediction unit is put under a lot of strain to keep the CPU's resources busy and to interpret the incoming CISC instructions to find the optimal instruction execution. It requires more power and die space and a crap ton more engineering.
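
As a rough illustration of what that front end does (a toy sketch in C with invented names and encodings, not real x86 and not any vendor's actual decoder), here is how a single memory-operand instruction might get cracked into simpler micro-ops:

/*
 * Toy sketch (not real x86): how a front-end decoder might crack one
 * "CISC-style" instruction with a memory operand into simpler micro-ops.
 * All names and the encoding are invented purely for illustration.
 */
#include <stdio.h>

typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } UopKind;

typedef struct {
    UopKind kind;
    const char *dst;
    const char *src;
} Uop;

/* Decode "ADD reg, [mem]" into a load micro-op followed by an ALU micro-op. */
static int decode_add_reg_mem(const char *reg, const char *mem, Uop out[], int max)
{
    if (max < 2) return 0;
    out[0] = (Uop){ UOP_LOAD, "tmp", mem };   /* tmp <- memory operand */
    out[1] = (Uop){ UOP_ADD,  reg,   "tmp" }; /* reg <- reg + tmp      */
    return 2;
}

int main(void)
{
    Uop uops[4];
    int n = decode_add_reg_mem("eax", "[rbx+8]", uops, 4);
    const char *names[] = { "LOAD", "ADD", "STORE" };

    printf("ADD eax, [rbx+8] decodes into %d micro-ops:\n", n);
    for (int i = 0; i < n; i++)
        printf("  %-5s %s, %s\n", names[uops[i].kind], uops[i].dst, uops[i].src);
    return 0;
}

Compile it with any C compiler and it prints the load plus ALU micro-ops that the one "complex" instruction breaks into; a real decoder does the same kind of cracking in hardware, every cycle, across multiple instructions at once.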
 
[citation][nom]palladin9479[/nom]The x86 ISA isn't emulated so much as it's abstracted. Modern CPUs use an instruction decoder and prefetch unit at their front end. [...][/citation]

You don't know what you're talking about, really. But good job trying to sound like you do.

The only reason CISC is around is not because it works best, but because it is compatible with software. Implementation and economy of scale have allowed x86 to continue, even though it's a poor instruction set. All the nonsense about getting into trouble with RISC instructions is pure fallacy. x86 does not protect against this, and in reality would be far worse if you actually chose instructions that needed to be broken down by the microcode in the processor, instead of simpler instructions. Luckily, the compiler takes care of this and knows not to issue the very difficult instructions.

Also, AMD uses more complicated internal instructions than Intel, not the reverse as you implied.

Branching and branch prediction are not done by the decoders.

 
[citation][nom]captaincharisma[/nom]That's funny, because MS helped make OS/2 Warp, at least the business versions of it.[/citation]

Bzzt, wrong answer! The last version Microsoft helped make was 1.2. Version 1.3 (arguably the most stable OS ever made for a PC) was an IBM version, but was sold by Microsoft. After 1.3, Microsoft and IBM broke up. Microsoft went on to sell the crap you see today (as well as Windows 95/98/ME), while IBM went on to OS/2 2.0, 3.0 (Warp), and 4.0 (Merlin).

Incidentally, Windows NT was originally called OS/2 NT. It was supposed to be a platform-agnostic 32-bit version.
 
You don't know what you're talking about, really. But good job trying to sound like you do. [...] Also, AMD uses more complicated internal instructions than Intel, not the reverse as you implied. Branching and branch prediction are not done by the decoders.

Ahh a youngin.

CISC isn't an architecture; x86 =/= CISC. CISC is a design philosophy where the designers opt for large, multipurpose instructions that produce smaller executable code. It originated at a time when memory for code was extremely scarce and you were counting bytes of instruction space. The CPU would receive an instruction and execute it in bulk. RISC is the exact opposite philosophy and arrived once caches and larger memory spaces became available. RISC opts for small micro-instructions that can each be processed in a single cycle. It lacks the complex memory-addressing modes of a CISC CPU and requires the programmer / compiler to be smarter than your average bear. RISC CPUs were also known as load-store CPUs, because that is exactly how you access memory.

Ex:

RISC (Load / Store) Design ASM

MOVE [A, 2:3]
MOVE [B, 5:2]
MUL [A, B]
MOVE [2:3, A]

CISC Design ASM

MUL [2:3, 5:2]

Now both have to do the exact same thing: read the contents of two memory locations into registers, multiply them, then write the result back to memory. The load/store design requires the programmer / compiler to spell out the instructions and specify exactly how they want it done. The CISC design lets the CPU itself figure that part out; the programmer / compiler just passes the multiply along and calls it a day. This is the age-old debate between better hardware and more efficient programmers. A complex-instruction design allows the hardware engineers to determine an optimal method for accomplishing a task; the reduced-instruction design pushes this onto the programmers / compilers but allows for smaller CPUs.

Modern CPUs use a hybrid method. And no, you're extremely wrong about AMD: AMD hasn't produced a non-load/store CPU since the 486. I highly suggest you research the AMD 29K, made in 1987. The K5 (1995) and K6 (1997) are when AMD started their habit of sticking a front-end x86 decoder onto a RISC CPU. This was made possible by superscalar processing and speculative execution: instructions could now be reordered (in and of itself a very CISC activity) to achieve optimum performance, along with using additional CPU resources to predict the execution of instructions. The K7 Athlon was basically an Alpha design. Seriously, AMD hired the Alpha design team and let them build a CPU; this is the Athlon that all current AMD CPUs descend from. The CPU was so "Alpha-y" that you could connect an AMD 751 Irongate chipset to a real DEC Alpha CPU and they would work together wonderfully. There are a few OEM systems that were actually built this way: AMD 751 + Alpha with Windows NT 4.0 for Alpha, or eventually HPUX.

AMD produces a RISC CPU; don't ever doubt that. Intel, on the other hand, has their own philosophy entirely. I can't call it RISC due to its native implementation of complex addressing modes, but it's definitely creating micro-ops and dispatching them to execution units, a decidedly RISC activity. It's a true hybrid of the two. I honestly admire their engineers for the level of ingenuity demonstrated; their marketing / business leadership is what sours my mouth.
 
Oh, BTW: a "RISC" processor doesn't have microcode, out-of-order processing, or branch prediction. At least not a "pure" RISC, as a philosophy. The concept was to throw out any unnecessary components to save transistor space and make the CPU smaller and cheaper. RISC instructions are supposed to be executed directly in hardware; no translation, prediction, or other massaging of the code is supposed to be done.
 
Actually, x86 does not stand on such strong ground anymore. On the Unix side, Java, and on the Windows side, the .NET Framework have paved the way for a transition away from x86.

The performance penalty of JIT-compiled applications is negligible. Consider a single-core processor capable of doing 3 billion operations per second: any user-mode app will be fine across different processors.

As for the crowd that needs raw power at the extreme end, well, they do their own thing anyway, and none of those guys I know use Windows as their OS of choice.
 
No, no, and even more no. Java is a horrible ~HORRIBLE~ language to write anything performance-critical in. It was designed for cross-platform compatibility, not performance, and no amount of sales marketing can change that. I work almost entirely in Solaris 10 on SPARCs, and for some god-awful reason Sun chose to use and push Java as its language of choice for everyone. It was horrible and slow as all get out. Our software developers are still in the process (2+ years now) of porting all the Java code over and compiling it to native sun4v binaries. Of the core processes we have left in Java, every one of them is slow, unresponsive at times, and resource hungry. We saw a 50~200% increase in performance from porting away from it.

Java isn't a binary language; it is a platform. You compile your program for a target JavaVM, and it's the responsibility of that JavaVM to recompile your program to run on the native system. It's emulation, pure and simple, and there is a severe performance penalty involved, especially when dealing with memory and inter-process communication.
 
I can't quite agree with you on this. The performance hit from the Java environment is a one-time startup "JIT" hit. But I do know that, having a garbage collector, Java programmers tend to be pretty lousy about their memory access patterns, causing a lot of cache misses and disk paging.

MS did a better job of educating programmers on the importance of memory allocation and access patterns.
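
Purely to illustrate why access patterns matter (a quick sketch in C, not a rigorous benchmark; the sizes and names here are arbitrary), compare summing the same values from a contiguous array versus chasing pointers through nodes shuffled around the heap:

/*
 * Rough illustration of access patterns: summing the same values
 * sequentially from an array vs. chasing pointers through shuffled nodes.
 * A sketch only; timings vary by machine and compiler settings.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 22)   /* ~4 million elements */

typedef struct Node { int value; struct Node *next; } Node;

int main(void)
{
    int *arr = malloc(N * sizeof *arr);
    Node *pool = malloc(N * sizeof *pool);
    int *order = malloc(N * sizeof *order);
    if (!arr || !pool || !order) return 1;

    for (int i = 0; i < N; i++) { arr[i] = i & 0xFF; order[i] = i; }

    /* Shuffle the link order so pointer chasing jumps around in memory. */
    srand(42);
    for (int i = N - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (int i = 0; i < N; i++) {
        pool[order[i]].value = arr[order[i]];
        pool[order[i]].next  = (i + 1 < N) ? &pool[order[i + 1]] : NULL;
    }

    clock_t t0 = clock();
    long sum1 = 0;
    for (int i = 0; i < N; i++) sum1 += arr[i];                       /* cache-friendly */
    clock_t t1 = clock();

    long sum2 = 0;
    for (Node *p = &pool[order[0]]; p; p = p->next) sum2 += p->value; /* cache-hostile */
    clock_t t2 = clock();

    printf("array sum   %ld in %.3fs\n", sum1, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("pointer sum %ld in %.3fs\n", sum2, (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(arr); free(pool); free(order);
    return 0;
}

Both loops add up exactly the same numbers, but on most machines the pointer-chasing pass runs several times slower; the exact ratio depends on the CPU and its caches.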

Anyway, quite a bit of the UI in Win7 is already written against the CLR. It won't be a big deal for them to release an ARM version of Windows: native apps won't work, but CLR apps will work fine.
 