ARM Says 2015 Will Be The Year Of 64-Bit Chips

Status
Not open for further replies.
I still don't understand the point of 64-bit smartphones and tablets. When PCs went to 64-bit, the biggest advantage was the option to use more than 4GB of RAM. Today, the only real change that gives 64-bit an advantage is the ability to run 64-bit applications. Other than that, it's not an important feature.

I fail to see why someone needs more than 4GB of RAM in their mobile devices.
 

jimmysmitty

Champion
Moderator
I still don't understand the point of 64-bit smartphones and tablets. When PCs went to 64-bit, the biggest advantage was the option to use more than 4GB of RAM. Today, the only real change that gives 64-bit an advantage is the ability to run 64-bit applications. Other than that, it's not an important feature.

I fail to see why someone needs more than 4GB of RAM in their mobile devices.
The way you fail to see the need for 4GB in mobile is the same way people failed to see the need for more than 4GB of RAM on desktops back when 64-bit first started.

It will eventually be utilized to keep programs loaded so they launch faster, as RAM is faster than storage.

As well, 64-bit has been shown to be faster than 32-bit in certain uses.
 
This year is more like "Qualcomm et al. swallow their own words and play catch-up to Apple's 64-bit chips," or "the year of MediaTek further kicking butt," or "the year of Nvidia hiding more of their SoC design problems."
 

zerassar

Reputable
Jan 23, 2015
1
0
4,510
0
64-bit isn't just about RAM. Yes, it increases the addressable RAM by a massive amount, but it also allows 64-bit instruction sets to be processed.

 
64-bit instructions really don't add much of anything. A lot of applications, including benchmarking applications, have both 32-bit and 64-bit variants and give broadly similar results. I just ran Cinebench in both modes, one after the other, and got 8.8 with the 64-bit version and 8.27 with the 32-bit one. Granted, that's about a 6% improvement from the better instructions, but it's unlikely the majority of app creators will bother optimizing their applications for 64-bit.

As for the RAM part, it made sense that more RAM would help on the desktop. Even that has its limits, though: most users will never use more than 8GB of RAM on a PC, regardless of how much multi-tasking they do, unless they run extremely RAM-heavy applications such as virtual machines. Given the much smaller size of mobile applications, which have to respect tight storage limits, it seems unlikely an ARM-based system would need more than 4GB unless ARM pushes much deeper into the notebook market and gets used as heavily as a desktop.
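The 6% figure works out like this; a quick sketch using the two scores above (my own runs, not a general result):

```python
# Relative speedup of the 64-bit Cinebench run over the 32-bit one,
# using the two scores quoted above.
score_64 = 8.80
score_32 = 8.27

speedup_pct = (score_64 - score_32) / score_32 * 100
print(f"{speedup_pct:.1f}%")  # roughly 6.4%
```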
 

bit_user

Splendid
Ambassador
As the article states, ARMv8 has advantages besides 64-bit addressing.

Both ARMv8 and x86-64 doubled the size of the general purpose register file over their 32-bit counterparts. The additional registers are only accessible in 64-bit mode. This was a big win for x86-64, given the comparatively small size of x86's register file, by modern standards.

ARMv8 has other enhancements, as well.

BTW, a downside of 64-bit chips is that the size of pointers and other data types is also doubled. This slightly reduces cache efficiency and taxes memory bandwidth, which might explain why your Cinebench results didn't show more improvement.
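To put rough numbers on the cache-efficiency point, here's a minimal sketch; the 64-byte cache line is an assumption (a common size on both ARM and x86):

```python
# With a fixed cache line size, doubling pointer width halves the
# number of pointers that fit in one line, so pointer-heavy data
# structures touch more lines on a 64-bit ABI.
CACHE_LINE = 64  # bytes, assumed

ptrs_32 = CACHE_LINE // 4   # 4-byte pointers on a 32-bit ABI
ptrs_64 = CACHE_LINE // 8   # 8-byte pointers on a 64-bit ABI
print(ptrs_32, ptrs_64)     # 16 8
```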

As evidence of the efficiency improvements, consider that the most efficient in-order ARMv8 core (Cortex-A53) achieves 2.3 DMIPS/MHz, while the most efficient in-order ARMv7 core (Cortex-A8) achieves only 2.0.
 

emjayy

Distinguished
Apr 22, 2009
33
0
18,530
0
FTA - "After Apple launched its ARMv8-based 64-bit A7 chip in 2013, some companies tried at first to deny that 64-bit support is an important feature for mobile CPUs"

That's because 64-bit support wasn't an important feature for mobile CPUs in general... but when Apple moved to 64-bit, it wasn't actually a gimmick either. Apple actually moved to 64-bit because they absolutely NEEDED to play catch-up with the capabilities of the top-end Androids on the market at the time.

Here's what was really going on. Android phones were using 4 application cores while Apple's iPhone was using just 2. While Android phones were getting processors from industry leaders who had the knowledge and resources to master quad-core ARM chips in a timely manner, Apple designs its own processors internally, and their design team had only ever cranked out dual-core application processors. Designing 4-core processors was another level Apple's internal team had not reached yet.

So, if you're Apple and you're stuck with dual cores at the moment and you're falling behind the curve because the competition is using 4 cores and blowing you away on the benchmarks, how do you modify a dual-core to match a quad-core processor? You can crank up the clock rate (which then creates major heat and power consumption issues)... or you can double the bits processed per core. Apple simply did the latter, because ARM had reference designs for a 64-bit ARM chip intended for servers.

An Android quad-core processes (4 cores x 32) bits each clock cycle. That's equal to 128 bits processed per clock cycle.

Apple's 32-bit processor used in older iPhones could only process (2 cores x 32 = 64) bits each clock cycle. The move to 64-bit processors yielded (2 cores x 64 = 128) bits each clock cycle, thereby matching Android smartphones that use quad cores.

So Android isn't actually behind Apple when it comes to processing power, because Apple's use of 64-bit technology is just a means to play catch-up. They needed 64-bit because of dual cores; Android phones didn't, because of quad cores. However, once high-end Android devices move to 64-bit, it will be done with quad cores, not dual cores like Apple. So the equation suddenly becomes:

Android 64-bit quad-core: 4 x 64 bits = 256 bits per clock cycle.
iPhone 64-bit dual-core: 2 x 64 bits = 128 bits per clock cycle.

And we're right back at square one all over again.

Now Apple will have a real problem, because they can't use 64-bit dual cores to match that processing power, and it's the same problem of playing catch-up all over again. Their team must produce a quad-core processor for the iPhone 7 or they'll be embarrassed in the benchmarks and the media will absolutely jump all over it.
 


General enhancements to the execution units inside the core, pipeline work, improvements to the decoder, etc. don't count as 64-bit improvements. Any new architecture would be expected to bring general enhancements, and these could just as easily be applied to a 32-bit architecture such as ARMv7.

This is the only review I can find doing any real comparison. Most benchmarks are almost even, and only a few that would have made use of additional RAM, such as media encoding programs, really benefit from running on a 64-bit OS. There is no significant impact on performance in 64-bit vs. 32-bit systems, even with optimized programs.
http://www.phoronix.com/scan.php?page=article&item=ubuntu_1404_x64&num=2

Yes, there are an increased number of registers, but overall that hasn't shown an increase in performance beyond the increased RAM usage.

As for the increase in data size, running an application in 32-bit mode uses the same amount of data as it would on a 32-bit chip with a 32-bit OS, so that isn't playing into the results.
 


One 64-bit core does not equal two 32-bit cores. You can see this by looking at the early Intel and AMD 64-bit chips: they didn't see a doubling of performance, or anything close to it. There were dual-socket server boards at the time with two 32-bit CPUs, running the same clock speed as the new single-core 64-bit chips, and the server boards had better performance. That wasn't because of a better platform; it's because one 64-bit core won't beat two 32-bit cores of relatively similar architecture and clock speed.
 

SirKnobsworth

Reputable
Mar 31, 2014
43
0
4,530
0
Apple's early move to 64-bit was probably more about software than hardware. By the time more than 4GB of memory is needed in a mobile device, their entire app ecosystem will already be 64-bit enabled. They did the same with personal computers: some of their earliest 64-bit models can't even hold more than 4GB of memory.
 
4GB memory:

The 32-bit addressable limit of 4GB is quickly becoming an issue now that phones and tablets have high-res screens and are capable of playing video games.

Even if only 5% of new products required this, it's a good idea to switch, since you can unify the code and phase out 32-bit support.

As said, the chips are significantly redesigned anyway, so I doubt making them 64-bit adds much cost.
 

yhikum

Honorable
Apr 1, 2013
96
0
10,660
7
64-bit instructions really don't add much of anything. A lot of applications, including benchmarking applications, have both 32-bit and 64-bit variants and give broadly similar results. I just ran Cinebench in both modes, one after the other, and got 8.8 with the 64-bit version and 8.27 with the 32-bit one. Granted, that's about a 6% improvement from the better instructions, but it's unlikely the majority of app creators will bother optimizing their applications for 64-bit.

As for the RAM part, it made sense that more RAM would help on the desktop. Even that has its limits, though: most users will never use more than 8GB of RAM on a PC, regardless of how much multi-tasking they do, unless they run extremely RAM-heavy applications such as virtual machines. Given the much smaller size of mobile applications, which have to respect tight storage limits, it seems unlikely an ARM-based system would need more than 4GB unless ARM pushes much deeper into the notebook market and gets used as heavily as a desktop.
You're assuming that processor architectures behave similarly for ARM and x86 instruction sets. Generally this is true, but not always.

I would rather see actual results from testing applications (Linux or Android) that would be compiled specifically to utilize 64-bit instruction sets.

In short you might be comparing apples (x86) and oranges (ARM).
 

Vlad Rose

Reputable
Apr 7, 2014
732
0
5,160
61
One bad thing people forget to mention about 64-bit vs. 32-bit is that programs compiled for 64-bit are also larger. It isn't too much of an issue with the price of SD cards coming down, but Apple will have a hard time peddling their 8GB-only iPhones, as you'll nearly be out of space from the get-go; or they'll finally have to offer SD card expansion.
 

Textfield

Honorable
Jun 23, 2013
70
0
10,660
11
FTA - "After Apple launched its ARMv8-based 64-bit A7 chip in 2013, some companies tried at first to deny that 64-bit support is an important feature for mobile CPUs"

That's because 64-bit support wasn't an important feature for mobile CPUs in general... but when Apple moved to 64-bit, it wasn't actually a gimmick either. Apple actually moved to 64-bit because they absolutely NEEDED to play catch-up with the capabilities of the top-end Androids on the market at the time.

Here's what was really going on. Android phones were using 4 application cores while Apple's iPhone was using just 2. While Android phones were getting processors from industry leaders who had the knowledge and resources to master quad-core ARM chips in a timely manner, Apple designs its own processors internally, and their design team had only ever cranked out dual-core application processors. Designing 4-core processors was another level Apple's internal team had not reached yet.

So, if you're Apple and you're stuck with dual cores at the moment and you're falling behind the curve because the competition is using 4 cores and blowing you away on the benchmarks, how do you modify a dual-core to match a quad-core processor? You can crank up the clock rate (which then creates major heat and power consumption issues)... or you can double the bits processed per core. Apple simply did the latter, because ARM had reference designs for a 64-bit ARM chip intended for servers.

An Android quad-core processes (4 cores x 32) bits each clock cycle. That's equal to 128 bits processed per clock cycle.

Apple's 32-bit processor used in older iPhones could only process (2 cores x 32 = 64) bits each clock cycle. The move to 64-bit processors yielded (2 cores x 64 = 128) bits each clock cycle, thereby matching Android smartphones that use quad cores.

So Android isn't actually behind Apple when it comes to processing power, because Apple's use of 64-bit technology is just a means to play catch-up. They needed 64-bit because of dual cores; Android phones didn't, because of quad cores. However, once high-end Android devices move to 64-bit, it will be done with quad cores, not dual cores like Apple. So the equation suddenly becomes:

Android 64-bit quad-core: 4 x 64 bits = 256 bits per clock cycle.
iPhone 64-bit dual-core: 2 x 64 bits = 128 bits per clock cycle.

And we're right back at square one all over again.

Now Apple will have a real problem, because they can't use 64-bit dual cores to match that processing power, and it's the same problem of playing catch-up all over again. Their team must produce a quad-core processor for the iPhone 7 or they'll be embarrassed in the benchmarks and the media will absolutely jump all over it.
No offense, but you actually don't know much about how computers work. I mean no offense because I was of course at one point at the same level of knowledge. So for your own knowledge, I'll clarify.

Apple is not in any way bad at designing chips; in fact, they're taking the smarter route by going dual-core. Quad-core will always look better on paper, and in benchmarks that fully use those cores, it will pull out ahead a bit. The problem is that quad-core processors rely upon parallelism to be fully utilized. Some tasks can only be done sequentially by their nature, while others can be parallelized, but not efficiently. And Android as an OS isn't very good at filling those cores efficiently either. In general, while four cores has a higher peak processing speed, it's much harder to use them fully or at all efficiently.

Another misunderstanding here is what 64-bit actually means. It's not true that a 64-bit processor "processes" 64 bits per cycle whereas a 32-bit processor only does 32. The reality is far more complicated. Processors are designed with complex pipelines, and with parts of those pipelines operating in parallel. A processor core will fetch instructions and decode them into microinstructions, which are smaller pieces, and can execute several non-blocking microinstructions simultaneously. The measure of how many instructions a single core can process in a single cycle is sometimes referred to as IPC (instructions per cycle).

This means that you can have one 32-bit core that's vastly more powerful than another, or a big 32-bit core that's faster than a 64-bit core, or a 64-bit core that's as fast as 2 32-bit cores. The label of "64-bit" does mean some things, such as double register size, double memory bus bandwidth and larger address space, higher performance with integers, larger instructions, and so on, but it's not a determining factor in processor performance.
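To make that concrete, here's a toy model of sustained throughput as cores × IPC × clock; all figures below are made-up illustrative values, not measurements of any real chip:

```python
def throughput(cores, ipc, clock_ghz):
    """Rough sustained throughput in instructions per second."""
    return cores * ipc * clock_ghz * 1e9

# A wide dual-core and a narrow quad-core can have identical peak
# throughput, while the dual-core is twice as fast on serial work
# that can't be parallelized across cores.
wide_dual   = throughput(cores=2, ipc=3.0, clock_ghz=1.4)
narrow_quad = throughput(cores=4, ipc=1.5, clock_ghz=1.4)
print(wide_dual == narrow_quad)  # True
```

Note that word width ("64-bit") appears nowhere in this model; it affects addressing and register count, not instructions retired per cycle.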

What Apple focuses on, and has historically focused on, is fewer cores with an aim for high IPC in each core. This means the peak performance of their CPUs will be lower than the quad-core ones ending up in Android flagships, and they will consequently suffer a bit in benchmarks that take full advantage of all cores. Run any of these benchmarks in single-core mode, and Apple smokes the competition. And while benchmarks may take advantage of more cores efficiently, those results are deceptive, because the real world doesn't. So take a chip with two beefy cores into the real world of apps and everyday usage, and there's a good chance it will be a bit snappier than a quad-core competitor. Apple's move to 64-bit was likely in part to keep its cores big and powerful while sticking with fewer of them, though preparing the software ecosystem for the future was equally important.

And the last important piece of the puzzle I'll mention here is compiler optimizations. The Android world is full of vastly different architectures running the same binaries. Even across the same instruction set, it's very possible and often advantageous to optimize for individual architectures. Since Apple maintains their own, finely tuned compilers for their devices, they can ensure this kind of heavy optimization.

The coming of 64-bit chips from Android competitors is interesting, though it's likely we'll continue to see Apple holding their own by continuing to improve their own cores. It's also likely we'll see an increase in core count sometime soon, seeing as how the iPad Air 2 is already a triple-core design.

And one final thing: clearly, you are concerned with specs. That happens to all of us, and much of the Android market plays to this heavily by marketing the specs, sometimes more than the devices themselves (as we know from Samsung cheating on the benchmarks). But specs and benchmarks can be deceptive when it comes to real-world performance and behavior, and device optimizations on both the programmer's side (don't forget Apple released Metal with iOS 8) and the compiler's side play a critical role as well. Spend some time to actually look at and use a recent iOS device rather than the benchmark numbers it spews out. And I think you'll find that either the benchmarks and specs are deceptive, or that they really don't matter as much as you think.
 

rwinches

Distinguished
Jun 29, 2006
888
0
19,060
30
The future is with devices that add a full laptop screen and keyboard to your smartphone, like the Clambook, the Droid Bionic Lapdock, and the Casetop. So having an extra-powerful CPU and more memory will be warranted. These types of add-ons can have large batteries, extra storage, and extra ports.

I think Toshiba tried this years ago, just a bit ahead of its time and too expensive.
 

none12345

Distinguished
Apr 27, 2013
431
2
18,785
0
Useless unless/until they start putting in 4+ GB of RAM.

There are benefits to doing it early. It allows all the software to mature before it's needed. That's definitely a very strong reason to do it early.

The downside, though, is larger binaries. It's not huge, about 15% or so. But it means that if you don't increase RAM, you make things worse, and if you don't increase storage space, you make things worse. Going from 32-bit to 64-bit, all things being equal, will slow things down on a RAM-limited platform.

Another downside is a more expensive chip that is slower. If you instead allocated the same transistors to making a 32-bit chip wider, you would end up with a faster chip (through IPC and/or clock rate), again assuming you are NOT RAM-limited. That also assumes an average workload; there are certainly some applications that need the wider data types, but most do not.

There is no doubt it will be needed at some point, but with the current ecosystem, I don't think the time is right. I'd say maybe another 4 years.

From a marketing standpoint, they have no choice but to do 64-bit now, because most people are technical idiots. Their line of thinking is: 64 is double 32, so it must be faster!
 

bit_user

Splendid
Ambassador
I think you should take a closer look at those results. To me, it seems like a pretty clear victory for x86-64. The only ones that are worse are the Apache benchmark and kernel compilation (and I wonder whether that was a 32-bit kernel or a 64-bit one...). The games don't show many differences, but it's not clear if they're even CPU-bound.

According to whom? Then why does the x86-64 version of anything run faster than 32-bit? Are you seriously implying that ARM and AMD didn't know what they were doing, when they added more registers? Have you ever written x86 assembly language or looked at compiler output? There are register spills all over the place. And since ARM is a register-to-register ISA, it has greater register needs than x86.
 

walkthetalk

Distinguished
Aug 12, 2009
11
0
18,510
0
4GB limit with 32 bits? NO. That was a Microsoft issue. If you ran desktop Windows, there was a 4GB limit. If you ran Windows Server, Linux, or OS X, you could address more than 4GB of RAM via PAE. Just calculate: 2^32 = 4,294,967,296 bytes, which is exactly 4GB, and PAE extends physical addressing beyond that. Microsoft was the problem... again!
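The arithmetic, sketched out (PAE's 36-bit physical addressing is the standard x86 figure):

```python
# A flat 32-bit address space tops out at 4 GiB; PAE extends physical
# addressing to 36 bits, i.e. 64 GiB, which is how 32-bit server OSes
# could use more than 4GB of RAM.
GiB = 2 ** 30

flat_32 = 2 ** 32   # bytes addressable with 32-bit pointers
pae_36  = 2 ** 36   # bytes addressable with 36-bit physical addresses

print(flat_32 // GiB, pae_36 // GiB)  # 4 64
```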
 