News AMD Hybrid CPUs: No Point in Big.Little for PCs Unless OS Can Use It

spongiemaster

Admirable
Dec 12, 2019
2,273
1,277
7,560
And OS developers won't see a point in hybrid CPUs until they come out.

Which is why I'm a believer that hardware manufacturers need a strong software team behind them to overcome the "chicken and egg" problem.
That's why Intel will be using it first. Even if they have to fund all the development, you can bet Windows will be ready for big.LITTLE when Intel releases Alder Lake.
 
Last edited:
  • Like
Reactions: hotaru.hino
I get it from AMD's perspective: high-performance architectures like Zen are very power efficient when idling, and AMD's impressive Precision Boost algorithm and super-fast frequency changes allow the CPUs to enter idle states very quickly. So making a big.LITTLE architecture doesn't really make a lot of sense, even in laptops.
 
  • Like
Reactions: Makaveli

Makaveli

Splendid
I get it from AMD's perspective: high-performance architectures like Zen are very power efficient when idling, and AMD's impressive Precision Boost algorithm and super-fast frequency changes allow the CPUs to enter idle states very quickly. So making a big.LITTLE architecture doesn't really make a lot of sense, even in laptops.

Agreed with your post and with what AMD is saying.

We're not there yet.
 

spongiemaster

Admirable
Dec 12, 2019
2,273
1,277
7,560
I get it from AMD's perspective: high-performance architectures like Zen are very power efficient when idling, and AMD's impressive Precision Boost algorithm and super-fast frequency changes allow the CPUs to enter idle states very quickly. So making a big.LITTLE architecture doesn't really make a lot of sense, even in laptops.
What performance-oriented laptops have you used where the battery life was so great it couldn't be improved? With replaceable batteries becoming extinct, battery life is even more critical as the battery wears down over its lifetime.

This is classic marketing 101 that every company does: anything we don't have that the competition does isn't important or is too early to be useful. Fortunately, it's just that, marketing speak. If companies actually operated that way, there would never have been any innovation in the industry.
 
No point unless you are severely TDP-constrained anyhow. AMD will be on 5nm soon enough, so I absolutely agree with them. There is plenty of space to shrink dies and add cores. I mean, it's not like you can't get a 64-core beast on the HEDT desktop already. That likely satisfies 99% of use cases. Yes, pointless.
 

Gomez Addams

Prominent
Mar 4, 2020
53
27
560
Count me as another who doesn't see the point. I can't see myself ever buying a machine with a mixed-mode CPU. I want something to either last as long as possible on a battery OR give as much performance as possible, depending on my use for it. I do not want something that compromises so heavily on both, unless I could only have one, and that's not likely any time soon.
 
Count me as another who doesn't see the point. I can't see myself ever buying a machine with a mixed-mode CPU. I want something to either last as long as possible on a battery OR give as much performance as possible, depending on my use for it. I do not want something that compromises so heavily on both, unless I could only have one, and that's not likely any time soon.
So you don't have a modern smartphone? They all use big.LITTLE.

Also "last as long as possible on the battery" is a little vague in some sense. You could put a Z80 on 7nm and it'll probably last forever on modern batteries, but all that time is wasted waiting for the thing to do something.
 
So you don't have a modern smartphone? They all use big.LITTLE.

Also "last as long as possible on the battery" is a little vague in some sense. You could put a Z80 on 7nm and it'll probably last forever on modern batteries, but all that time is wasted waiting for the thing to do something.

Yes, but we are all talking about desktops. So all those reasons you mentioned are valid and explain why it's there on phones/tablets, but it's pointless in desktops.
 

everettfsargent

Honorable
Oct 13, 2017
130
35
10,610
Am I the only one who sees an oxymoron in the making? PC Gamer is about the last place one ought to be talking about highly efficient CPUs. Shouldn't keeping total power consumption down over a fixed period of time while completing the same task be the primary goal? Is there some sort of (last place) prize for the most efficient gaming system?
 

Nightseer

Prominent
Jun 12, 2020
17
7
515
The way I see it, AMD is under no pressure; they already have more cores than the vast majority will need any time soon. Even 6 cores and 12 threads is plenty for most, so AMD sees no rush, especially with plans to go even lower than 7nm, and especially when Intel is backporting 10nm architectures to 14nm. So the way I believe they see it is to let Intel deal with the early problems and come in whenever OSes can distinguish and smartly use big and little cores. Which mainly means Windows.

I mean, AMD already had to trick Windows into not swapping threads between CCXes, because the task scheduler wasn't smart enough to figure out that this comes with a latency penalty. Imagine giving it big and little cores and having it just randomly move stuff around, or putting background processes on fast cores and the game you want to run fast on slow cores.

But other than that, there definitely is a market for low-TDP parts, so I could see that working on desktop, especially if those CPUs were cheaper due to having slower cores. Plus the software itself might need to become aware too: for gaming, the main thread and crucial stuff would go on fast cores, and stuff that can be processed slower would go on slow cores (a rough sketch of that kind of pinning is below). So I am not surprised AMD is ready to leave this to Intel. Plus, as good as AMD is, Intel has more money, they are bigger and have their business in more areas, and they get listened to more too, so why not let them deal with it.
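
To make that concrete, here is a minimal sketch of the kind of manual pinning software can already do today on Linux, with purely hypothetical core IDs (which IDs are "fast" depends entirely on the specific CPU and platform). On a hybrid CPU you'd want the OS scheduler to make this decision automatically instead.

```python
# Minimal sketch: restrict where a process runs on Linux.
# Core IDs below are illustrative only; the mapping of IDs to "fast" or
# "slow" cores depends on the specific CPU and platform.
import os

FAST_CORES = {0, 1, 2, 3}   # hypothetical "big"/fast core IDs
SLOW_CORES = {4, 5, 6, 7}   # hypothetical "little"/slow core IDs

def pin_to(cores):
    """Restrict the calling process to the given CPU set (new threads inherit it)."""
    os.sched_setaffinity(0, cores)  # 0 = the calling process

# A game's main/render thread would want the fast cores...
pin_to(FAST_CORES)

# ...while a background worker process could be confined to the slow cores:
# pin_to(SLOW_CORES)

print("Now allowed to run on CPUs:", sorted(os.sched_getaffinity(0)))
```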
 

epobirs

Distinguished
Jul 18, 2011
197
13
18,695
So you don't have a modern smartphone? They all use big.LITTLE.

Also "last as long as possible on the battery" is a little vague in some sense. You could put a Z80 on 7nm and it'll probably last forever on modern batteries, but all that time is wasted waiting for the thing to do something.

That is a very specific use case that drove the development of hybrid CPUs and the needed OS support. Intel tried to get x86 into phones and it never caught on. It isn't clear what they're expecting the market to be for their hybrid line, unless they're achieving battery life far in excess of what everyone else expects. It appears AMD is asking this question and finding a lack of compelling reasons to sink capital into it. Gating and variable clocks appear to be getting the job done in the markets where x86 still dominates. If Apple implements big.LITTLE in the SoC for their ARM-based Macs, that will come under heavy scrutiny from everyone looking to see whether it was worth the trouble.
 
That is a very specific use case that drove the development of hybrid CPUs and the needed OS support. Intel tried to get x86 into phones and it never caught on. It isn't clear what they're expecting the market to be for their hybrid line, unless they're achieving battery life far in excess of what everyone else expects. It appears AMD is asking this question and finding a lack of compelling reasons to sink capital into it. Gating and variable clocks appear to be getting the job done in the markets where x86 still dominates. If Apple implements big.LITTLE in the SoC for their ARM-based Macs, that will come under heavy scrutiny from everyone looking to see whether it was worth the trouble.
Most of the x86 consumer market still uses laptops. And there may be a compelling enough reason to look into this for server clusters, if heterogeneous CPUs can prove to be more efficient than homogeneous ones.

There are compelling arguments for why it doesn't really matter, but not bothering to take the risk and try is also how innovation dies.
 
  • Like
Reactions: TJ Hooker
The problem is that we're putting WAY too much emphasis on the big.LITTLE architecture saving the day for everyone. It's not that cut and dried; big.LITTLE is just one of a large number of ways to optimize battery life and squeeze more efficiency out of CPU performance. There are plenty of other ways to do it.

It's funny how nobody talks about the extra die space big.LITTLE CPU cores consume; that's also a huge consideration, and it has to pay off tremendously. The only reason it works so well in phones is because the architecture is specifically optimized for phones (I believe), from the efficiency of the cores to the software. The fact that phones use the big.LITTLE architecture is just one feature that makes Android and iPhone CPUs efficient.
 

JayNor

Reputable
May 31, 2019
426
85
4,760
Perhaps Intel's Gracemont small cores can be used efficiently for SIMD and for interaction with accelerators ... GPUs, FPGAs, NNPs, comms ASICs. Gracemont cores have been mentioned as being used in Grand Ridge, the 24-core successor to the Tremont-based P5900.

https://www.techpowerup.com/270718/...-24-core-processor-features-pcie-4-0-and-ddr5

Perhaps Intel sees a future in which desktops are connected in a network via multiple high-speed Ethernet connections, similar to the Habana NNP architecture.

I'm looking forward to their story.
 
  • Like
Reactions: hotaru.hino
There's probably one thing to look at to see how heterogeneous CPUs will pan out outside of the mobile market: Apple's M1 in the Mac Mini and their laptops (which I guess still count as mobile).

So considering that Linux should account for it (though I'm not sure if Android uses a different scheduler), that Windows may have something from the Surface Pro X, and that macOS definitely accounts for it in Big Sur, AMD's reasoning of "unless an OS can use it" falls flat.
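
For what it's worth, a quick way to check whether a Linux system exposes asymmetric core capacities to the scheduler is the sysfs cpu_capacity attribute. A rough sketch, assuming a platform that actually provides the file (typically arm64 big.LITTLE systems; most x86 boxes today don't):

```python
# Rough sketch: list per-CPU relative capacities as Linux reports them.
# /sys/.../cpu_capacity is only exposed on platforms with asymmetric cores
# (typically arm64 big.LITTLE); on most x86 systems the file won't exist.
import glob

for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpu_capacity")):
    cpu = path.split("/")[-2]             # e.g. "cpu0"
    with open(path) as f:
        capacity = int(f.read().strip())  # 1024 = the fastest core class
    print(f"{cpu}: capacity {capacity}")
```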
 
  • Like
Reactions: TJ Hooker

TJ Hooker

Titan
Ambassador
It's funny how nobody talks about the extra die space big.LITTLE CPU cores consume; that's also a huge consideration, and it has to pay off tremendously. The only reason it works so well in phones is because the architecture is specifically optimized for phones (I believe), from the efficiency of the cores to the software. The fact that phones use the big.LITTLE architecture is just one feature that makes Android and iPhone CPUs efficient.
Well, the thing about the little cores is that they are in fact small. E.g. in the Apple A14 SoC, the total die area of the 4 small cores is about equal to the size of 1 big core (if you include the associated L2 caches, the area of the 4 small cores + cache is roughly half the size of the 2 big cores + cache). Relative to the total die, the small cores + L2 are only about 5% of the die area.

Edit: I used the areas listed here. Tom's article estimated the areas a bit differently, such that the 4 little cores + L2 are about 2/3 the area of the 2 big cores + L2.
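
Just to put rough numbers on those proportions: the figures below are illustrative placeholders, not measured A14 die sizes, but they show what the ratios above imply about how little of the die the small-core cluster ends up claiming.

```python
# Illustrative arithmetic only -- placeholder areas, not measured A14 figures.
# The point is what the stated proportions imply, not the absolute numbers.
big_core   = 2.0               # mm^2, one hypothetical big core
big_l2     = 4.0               # mm^2, hypothetical L2 shared by the 2 big cores
small_core = big_core / 4      # claim: ~4 small cores fit in one big core

big_cluster   = 2 * big_core + big_l2           # 2 big cores + L2
small_cluster = 0.5 * big_cluster               # claim: small cluster is ~half of that
small_l2      = small_cluster - 4 * small_core  # whatever is left is the small-core L2

die = small_cluster / 0.05     # claim: small cluster is ~5% of the whole die

print(f"one small core              : ~{small_core:.2f} mm^2")
print(f"small cluster (4 cores + L2): ~{small_cluster:.2f} mm^2 (L2 ~{small_l2:.2f} mm^2)")
print(f"implied total die           : ~{die:.0f} mm^2")
```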
 
Last edited:
  • Like
Reactions: hotaru.hino

jasonf2

Distinguished
Big.LITTLE's advantage is in low-priority, low-power states. The magic is in the laptop segment, not the desktop, and it's certainly not about core count. Dedicated background-task cores that take significantly less power have proven effective in phones and will do the same for laptops. The new Apple chips are utilizing it, and x86 will be forced into playing ball sooner rather than later if battery life is going to stay on par.
 
Big.LITTLE's advantage is in low-priority, low-power states. The magic is in the laptop segment, not the desktop, and it's certainly not about core count.
If you look at the 5950X and 5900X, when using all cores they only have 6-7 W and 7-8 W per core available; that IS a low-power state, and if a little core can give you better performance at such low power, it can be useful for desktops, especially at high core counts.
Not much help for the normal user who doesn't even need that many cores, but for special use cases it could possibly be helpful.
https://www.anandtech.com/show/1621...e-review-5950x-5900x-5800x-and-5700x-tested/8
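
Those per-core numbers fall straight out of the shared package power limit. A rough sketch of the ceiling (the uncore/IO-die figure here is an assumption for illustration, not a measured value):

```python
# Rough ceiling on per-core power under a shared package limit.
# PPT for the 105 W-TDP Ryzen 5000 parts is ~142 W; the uncore/SoC share
# below is an illustrative assumption, not a measured number.
ppt_w    = 142.0   # package power tracking limit
uncore_w = 20.0    # assumed IO die / SoC power (illustrative)

for name, cores in [("5950X", 16), ("5900X", 12)]:
    per_core = (ppt_w - uncore_w) / cores
    print(f"{name}: at most ~{per_core:.1f} W per core with all {cores} cores loaded")
```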
 
What's the problem?! AMD released FX before Windows was ready for it, and the same goes for Zen's CCXes, and Windows always patched things up relatively quickly.
FX was likely how they learned this lesson.
(Failure is the best way to learn.)

They are correct, though... the support for this has to be there for them to invest money in making the product.
They can't risk a failure that could cost them massive R&D.

I mean, in the early SSD days Windows wasn't ready for them and there were plenty of issues. It wasn't always a "fun" experience.
 

AnimeMania

Distinguished
Dec 8, 2014
334
18
18,815
I don't know anything about this, but I don't see why AMD wouldn't build a 2-core/4-thread high-speed CPU into their GPU cards. I update my GPUs 3 or 4 times for every CPU. Getting a high-end CPU embedded in the GPU might standardize the speed at which the card performs, regardless of how new or old your computer might be. This would also allow AMD to control how the CPU and GPU communicate with each other. Any PC could become a high-end gaming PC just by adding an expensive high-end graphics card. Your graphics card tends to have more memory and more compute power than your computer, so gaming computers should center the majority of the high-end compute functions on the graphics card and the lower-end functions on the CPU. Other than gaming, I really don't perform any high-end functions.