News: Qualcomm Claims the Transition of PCs to Arm Is "Inevitable"

The whole concept that separates ARM from x86 is that it runs its supported instructions in one cycle at low clocks, making it extremely efficient at the instructions it supports.
There is nothing intrinsic to ARM about having fewer multi-cycle instructions; that is just a side effect of having fewer instructions that pack two or three operations into one. Pipelined execution to crank up clock frequencies can be done on any ISA, and the hit to power efficiency isn't much as long as you don't get stupid about it like Intel did with Netburst. Even Intel's CPUs don't have that high TDPs until you start pushing clocks beyond 4 GHz.

Hence why Qualcomm is pushing notebooks: those users could very well be completely fine with running fewer things if that gives them a better experience overall.
I doubt low power has much to do with why Qualcomm is going netbook first. It far more likely has to do with most people buying netbooks, Chromebooks, tablets, and equivalents not caring about legacy software, which makes netbooks one of the lowest-friction markets to test the waters in. If Qualcomm launched something more PC-like, it could set itself up for the same kind of failure as Microsoft's Windows on ARM, where too many people got confused about what could or couldn't run on it, which effectively killed it.

You don't grow a market by making early adopters feel like they got screwed over. It makes sense that Qualcomm would target a lower-risk market more in line with ARM's traditional strongholds first.
 
The whole concept that separates ARM from x86 is that it runs its supported instructions in one cycle at low clocks, making it extremely efficient at the instructions it supports.
Trying to do the same at high clocks to compete with x86 will use a lot of power. Basically, it's what we see with AVX on Intel: it's a special circuit that does those calculations in far fewer cycles than plain x86 instructions do, so at low clocks it's vastly more efficient than using plain x86 would be, but pushing it to high clocks uses huge amounts of power.

It's the same for any instruction: to get to the speed that x86 reaches through clocks alone, ARM would have to do exactly what x86 does; it doesn't work any other way.
The ISA itself has nothing to do with how power efficient or performant the CPU is. It may help, but at the end of the day, how you implement the processor ultimately determines how performant and power-hungry it is.

The only alternative is for every single piece of software to run only power-efficient code, which is what the scaled-down mobile versions of software do, for the most part. If every user is OK with running compromised software, then ARM replacing x86 in PCs could become feasible.
Hence why Qualcomm is pushing notebooks: those users could very well be completely fine with running fewer things if that gives them a better experience overall.
There's no such thing as "power-efficient code," if I'm understanding this as running certain instructions over others. There's only power-efficient design: the sooner you can idle, the better.
 
There's only power-efficient design: the sooner you can idle, the better.
Racing to idle does not intrinsically produce better total task energy figures, especially when burst power is astronomically higher than baseline.

The main reason "race to idle" works is simply because it is far more convenient for general-purpose computing than having one system optimized to complete one specific task in the most power-efficient manner that still meets time constraints.
 
The ISA itself has nothing to do with how power efficient or performant the CPU is. It may help, but at the end of the day, how you implement the processor ultimately determines how performant and power-hungry it is.


There's no such thing as "power-efficient code," if I'm understanding this as running certain instructions over others. There's only power-efficient design: the sooner you can idle, the better.
You have to think of ARM as a collection of special-purpose ICs (integrated circuits): each instruction is its own little computer, specialized to do the one operation that IC is made for.

If your software can run with only those instructions, you get a very fast, very power-efficient result.

If your software needs to run something there is no IC for, you have to string multiple ICs together until you get the result you need. Now you have multiple times the power draw, because each IC involved still draws the same amount of power but you are using more of them, and you also need multiples of the time, because the output of one IC has to become the input of the next. This ends up being very power inefficient as well as slower.
 
Racing to idle does not intrinsically produce better total task energy figures, especially when burst power is astronomically higher than baseline.

The main reason "race to idle" works is simply because it is far more convenient for general-purpose computing than having one system optimized to complete one specific task in the most power-efficient manner that still meets time constraints.

You have to think of ARM as a collection of special-purpose ICs (integrated circuits): each instruction is its own little computer, specialized to do the one operation that IC is made for.

If your software can run with only those instructions, you get a very fast, very power-efficient result.

If your software needs to run something there is no IC for, you have to string multiple ICs together until you get the result you need. Now you have multiple times the power draw, because each IC involved still draws the same amount of power but you are using more of them, and you also need multiples of the time, because the output of one IC has to become the input of the next. This ends up being very power inefficient as well as slower.
I'm clumping these two together because the same point addresses both.

In the general case, if your energy consumption over the whole task is still lower, then it's the more power-efficient solution. Sure, using AVX might cause my processor to spike up to, say, 100 W, but if it's only doing that for five seconds, that's better than doing the same problem without AVX at 60 W if it takes more than 10 seconds to run. I've taken issue with reviews that say Intel's Alder Lake is a power hog and only show the instantaneous or worst-case power consumption when running a test, but don't show how long the test ran.

We have to stop looking at "instantaneous" power draw.
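
Working out the hypothetical numbers above (100 W for five seconds with AVX versus 60 W for ten-plus seconds without), energy to completion, not the instantaneous peak, decides which run is more efficient:

```c
#include <stdio.h>

int main(void)
{
    /* Numbers from the example above; purely illustrative. */
    double avx_watts    = 100.0, avx_seconds    =  5.0;  /* AVX: higher peak, finishes fast */
    double no_avx_watts =  60.0, no_avx_seconds = 10.0;  /* no AVX: lower draw, runs longer */

    /* Energy to completion (joules) = power x time. */
    printf("with AVX:    %.0f J\n", avx_watts * avx_seconds);        /* 500 J */
    printf("without AVX: %.0f J\n", no_avx_watts * no_avx_seconds);  /* 600 J, despite the lower peak */
    return 0;
}
```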
 
I've taken issue with reviews that say Intel's Alder Lake is a power hog and only show the instantaneous or worst-case power consumption when running a test, but don't show how long the test ran.
Going by THG's Handbrake frames-per-watt-day numbers, Ryzen has 2-3X the energy-to-completion efficiency of Alder Lake.

Racing to idle is meaningless for long jobs. It is most relevant for user-experience stuff like responding to keyboard and mouse input.
 
Going by THG's Handbrake frames-per-watt-day numbers, Ryzen has 2-3X the energy-to-completion efficiency of Alder Lake.
Yes, because for the 12900K they use the 241 W all-the-time setting...
If you compare the 12600K, which is limited to 150 W, it's basically within margin of error of the 5800X; that's still Alder Lake versus Ryzen, and a much higher price tier of Ryzen at that.
That's hotaru's point: only using max power skews power draw results immensely.
 
On the other hand, I wonder - is it difficult to compile an extra version of any and all software? I mean, as it is, we've had 32- and 64-bit versions of a lot of things for Windows, even today with drivers, though it seems 32-bit overall is finally falling by the wayside.
I think the biggest hurdle is simply that it is more work (= more cost) for a software development house to have to support two hardware platforms. Some may be waiting for Windows-on-ARM to reach critical mass before they want to spend money on it.

There would be more work, especially if the code base contains old C and/or C++ code that may have taken advantage of platform-specific features (SSE libraries, etc.) or depends on multithreading in complex ways. Certain types of threading-related software bugs manifest themselves differently on ARM than on x86.
Languages like C#, Java, Rust, Python don't have those issues (AFAIK), but you'd still have to test and package the code for the other platform.
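
As a rough sketch of what that platform-specific work looks like (the routine and its name are hypothetical, though the SSE and NEON intrinsics are real): a hot loop hand-tuned with SSE needs an equivalent NEON path plus a scalar fallback, and each path has to be built and tested separately.

```c
#include <stddef.h>
#include <stdio.h>

#if defined(__SSE__)
#include <xmmintrin.h>                  /* x86 SSE intrinsics */
#elif defined(__ARM_NEON) || defined(__ARM_NEON__)
#include <arm_neon.h>                   /* ARM NEON intrinsics */
#endif

/* Hypothetical hot loop: add two float arrays element-wise. */
void add_arrays(float *dst, const float *a, const float *b, size_t n)
{
    size_t i = 0;
#if defined(__SSE__)
    for (; i + 4 <= n; i += 4)          /* x86 path: 4 floats per operation */
        _mm_storeu_ps(dst + i,
                      _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
#elif defined(__ARM_NEON) || defined(__ARM_NEON__)
    for (; i + 4 <= n; i += 4)          /* ARM path: same idea, different intrinsics */
        vst1q_f32(dst + i,
                  vaddq_f32(vld1q_f32(a + i), vld1q_f32(b + i)));
#endif
    for (; i < n; i++)                  /* portable scalar fallback and loop tail */
        dst[i] = a[i] + b[i];
}

int main(void)
{
    float a[5] = {1, 2, 3, 4, 5}, b[5] = {10, 20, 30, 40, 50}, out[5];
    add_arrays(out, a, b, 5);
    printf("%g %g %g %g %g\n", out[0], out[1], out[2], out[3], out[4]);
    return 0;
}
```

Threading is the nastier case: ARM's looser memory ordering can expose race-condition bugs that happen to stay hidden on x86, so "it works on x86" isn't proof that the same binary logic is correct on ARM.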
 
Languages like C#, Java, Rust, Python don't have those issues (AFAIK), but you'd still have to test and package the code for the other platform.
If the code isn't performance-critical, you can ship a single portable binary image for all platforms and let each platform's runtime environment do the native recompile, like Android does after an OS update or Microsoft does after a .NET Framework update for its portable stuff.
 
The fact is, the industry really has no choice. Apple is absolutely crushing it on performance per watt, and their M1 / M1 Pro / M1 Max chips make for far more compelling laptops than ANYTHING Intel-based. Intel doesn't have an answer for this. They can match performance, but at huge energy costs. Nobody wants a laptop that has to be plugged in to run properly and that has fans like a turbojet.
That's because Apple bought out TSMC's entire 5nm capacity. The M1 series is being produced at about 173 million transistors per mm^2 (MT/mm^2). AMD is still using TSMC's 7nm, which is about 114 MT/mm^2. Intel's 10nm process is about 101 MT/mm^2.

Intel's fate lies with how quickly they can get their 7nm process (since renamed Intel 4) up and running. It's supposed to be around 200 MT/mm^2. But AMD is supposed to switch over to TSMC's 5nm next year. And TSMC is sampling 3nm (numbers are vague, but the percentage improvements they're citing suggest it will be somewhere around 225 MT/mm^2), which Apple is sure to buy priority access to. (Intel's other option is to swallow its pride and contract to have its CPUs manufactured by TSMC. Intel is one of the few companies with a more obscene profit margin than Apple, and so could potentially outbid Apple.)

There hasn't been a major breakthrough in processor architecture in decades. All the low-hanging fruit has been picked. The biggest recent change was nearly 20 years ago when, unable to push clock speeds higher or execute instructions faster, chipmakers resorted to adding more cores to improve multithreaded performance.

So it's likely that all these different architectures - Intel, AMD, Apple, ARM - perform similarly. The primary difference is the manufacturing process. That determines performance per Watt, and consequently top clock speed and raw performance.

If you remember Nvidia's botched Maxwell release, it ran into the same problem. Kepler was manufactured on TSMC 28nm. Nvidia was expecting to be able to manufacture Maxwell on 20nm, but Apple bought TSMC's entire 20nm capacity. That forced Nvidia to manufacture Maxwell on 28nm, where it ran too hot. The entire desktop 800-series was canceled. They had to redesign their Maxwell GPUs for 28nm, which were released as the 900-series. The only 800-series Maxwell GPUs that made it to market were a few lower-power mobile versions.

The problem with Linux isn't its customization. It's the way apps are primarily distributed. Every distribution's package manager runs on the principle that no app is stand-alone. If it depends on something, you have to get its dependency, which is a problem for two reasons:
  1. Does that dependency even exist?
  2. Is the version of that dependency compatible with the app you want to run?
Linux's problem (which also causes the dependency issue you point out) is pretty simple: There is no effective feedback mechanism for users to impress their wants and needs onto developers. So you have developers in charge of projects like GNOME going nuts doing whatever they think is cool or following their preconceived notion of how a UI should work. Oblivious to how much their users hate it, ignoring features users want, removing features users like. Linux thus ends up being an OS by developers for developers. And it never makes a dent in the desktop market.

With commercial software, that feedback mechanism is provided by money. Users are willing to pay more for software they like. That helps guide developers to implement features users want, and to waste less time doing things they may find cool but users find useless or counter-productive.

Linux on the desktop won't happen until open source figures out a way to implement a user-to-developer feedback mechanism, which doesn't rely on money (since they're trying to remain free). Right now if you're a user trying to get a much-needed feature added, your choices are:
  • Grovel before the project managers/developers and shower them with praise. Stoke their ego and maybe they'll implement the feature you want.
  • Pay to hire a programmer to implement the feature and add it to the project. But doing that usually costs more than buying commercial software with the feature. In the open source case, you're footing the entire bill for the feature's implementation. In commercial software, the cost gets amortized over all users.
  • Learn to code and implement it yourself. But the whole point of a modern economy is that people specialize in different fields, allowing them to become more efficient in their field than a generalist. They then trade their specialty goods or services for goods and services in fields where they aren't specialized. Telling people to learn to code and implement the feature themselves is tantamount to telling us to roll the economy back to the stone age.
Seriously, the developer-user relationship in open source right now is like noble-peasant. The nobles do whatever they want, and simply don't care about the peasants' needs or desires. The example I like to cite is VLC. It's an excellent project and my preferred video player, but the lead developer believed that the mouse wheel should control volume. Users wanted to use the mouse wheel for seeking (FF/RW) through the video. No amount of begging, pleading, constructive criticism, would get him to change his mind. He was so set in his opinion that he refused to even allow the option to change what the mouse wheel did. If you used VLC, the mouse wheel would control volume, and only volume. For 7 years he held out, before finally giving in and allowing the mouse wheel to be remapped to other functions (it still defaults to volume to protect his ego).
 
Linux's problem (which also causes the dependency issue you point out) is pretty simple: There is no effective feedback mechanism for users to impress their wants and needs onto developers. So you have developers in charge of projects like GNOME going nuts doing whatever they think is cool or following their preconceived notion of how a UI should work. Oblivious to how much their users hate it, ignoring features users want, removing features users like. Linux thus ends up being an OS by developers for developers. And it never makes a dent in the desktop market

With commercial software, that feedback mechanism is provided by money. Users are willing to pay more for software they like. That helps guide developers to implement features users want, and to waste less time doing things they may find cool but users find useless or counter-productive.
I would argue this is a problem with software development in general. While money can speak loudly, there are still commercially available products that, despite people moving in droves to other software because it suits their needs better one way or another, continue to do things in asinine ways. The easiest target to point at is Microsoft.

However, I do agree that most Linux distributions are designed "for developers, by developers", even with more "user friendly" distributions like Ubuntu and everyone who forks from them. The only distribution I'd say is actually designed with normies in mind is Android. Or at least Google's version of it.
 
However, I do agree that most Linux distributions are designed "for developers, by developers", even with more "user friendly" distributions like Ubuntu and everyone who forks from them.
One of my major gripes with Linux is that every time I've tried using it, regardless of which distro, I've run into at least one obscure problem almost right away. For example, with Ubuntu 18 LTS and 20 LTS, I have to reload ALSA because audio randomly quits working. At first, I thought it might have been the audio chip on the motherboard having issues, but then I ran into what appears to be the exact same issue on a different PC I installed 20 LTS on too.