News AMD Patent Hints At Hybrid CPU To Rival Intel's Raptor Lake CPUs

Intel used to have an AVX offset, and they could have used that again to reduce heat and power as much as they wanted. They could even have hardcoded it into the CPU so that reviewers wouldn't be able to run AVX at full throttle no matter what.
You do realize that the current 11900K die is as big as the 10900K's? Adding two extra cores would have taken quite a bit more die space. Then, in order to do your "AVX offset", they would have needed to clock the CPU much lower, and a 10c/20t Rocket Lake would therefore have had lower performance than the previous generation on ST tasks and gaming.
 
This tech isn't intended for desktops, and certainly not for servers. It's all about portable devices: laptops and tablets. Every year laptops take more market share from desktops, and AMD knows it needs to compete directly with both Intel and ARM in that space.

I've worked with a company that has over 100,000 employees and another with over 15,000, and in both every office worker was issued a Lenovo laptop. The high performance Ryzens are great and everything, but that isn't what the market is after. That's why Intel still outsells AMD 7 to 1.

The big.little configuration will also address the demand for always-on laptops that constantly update your email and notifications just like a mobile device does. So far only ARM devices have been used for that.
Intel outsells AMD due to sheer market size and production capacity, not to mention the industry-wide thinking among older, usually higher-up IT people that no one ever got fired for buying Intel. Ryzen mobile was competitive in performance from the start, albeit behind Intel, and further behind in battery life at first. Starting with the 4000 series, Ryzen has overtaken Intel in both performance and battery life. The new ZenBook 13 offers 13.5 hrs of real battery life as tested by Tom's Hardware; the Apple M1, which is a big.little design, offers 16.5 hrs. In some things the Ryzen is faster than the M1, and vice versa. Basically big.little will help battery life, but the CPU's share of overall power draw is small compared to your monitor's. Getting more efficient monitors would help with power draw FAR more than a big.little CPU ever could.
 
You do realize that the current 11900K die is as big as the 10900K's? Adding two extra cores would have taken quite a bit more die space. Then, in order to do your "AVX offset", they would have needed to clock the CPU much lower, and a 10c/20t Rocket Lake would therefore have had lower performance than the previous generation on ST tasks and gaming.
So it's not a matter of power and heat! I'm glad we finally agree on something.

An AVX offset means that the clocks go down only while AVX is actually being used. In the benchmarks that you or anybody else gets to see, it would not change the performance numbers at all, because none of those use any AVX.
 
This tech isn't intended for desktops, and certainly not for servers. It's all about portable devices: laptops and tablets. Every year laptops take more market share from desktops, and AMD knows it needs to compete directly with both Intel and ARM in that space.
So it's not a matter of power and heat! I'm glad we finally agree on something.

Except Intel themselves said it was power and heat, couched in marketing speak.

“I think one of the questions many of you might have right away is, Why are you going from ten to eight cores?” Guttridge said. “The answer to that question really goes back to... our focus was on maximizing real world performance, which is a combination of frequency and IPC [instructions per clock]. So as we looked at the microarchitecture, we ported the 10nm design for both the CPU and the graphics back to the 14nm manufacturing node.”

A combination of frequency and IPC, and in order to maximize frequency, they had to remove 2 cores. Pretty simple and straightforward.
 
Intel outsells AMD due to sheer market size and production ability.
And having a better product for businesses. AMD didn't really overtake Intel across the board until the 5000 series, which was released a few months ago. As dominant as the 5000 series is, it isn't in any way targeted at volume business desktops, which need an iGPU and no more than about six cores.
 
And having a better product for businesses. AMD didn't really overtake Intel across the board until the 5000 series, which was released a few months ago. As dominant as the 5000 series is, it isn't in any way targeted at volume business desktops, which need an iGPU and no more than about six cores.
The Ryzen G series are very good products for business desktops. The 2000-series G parts were released in Q2 2018 for Ryzen Pro and Q1 for non-Pro. Clock for clock they were a bit slower than Intel, but in business desktops that isn't a huge deal, and the equivalent AMD builds were 10-20% cheaper than the Intel ones. From a business perspective, getting near-equal performance (about a 5% difference clock for clock) while saving 10-20% makes sense. Sure, this has only been a thing for three years now.

That said, back in the late '90s to mid-2000s Intel used shady business practices to limit AMD's market share. Despite having an inferior product, sometimes vastly so, in the P3 vs Athlon and Athlon/64 vs P4 days, Intel still outsold AMD in business due to paying off places like Dell and HP. Intel didn't have a competitive server solution from when the Athlon MP was released all the way until the Nehalem-based Xeons came out, yet they still outsold AMD. Market size matters a lot, since you can put pressure on the system integrators and designers.
 
Except Intel themselves said it was power and heat, couched in marketing speak.

“I think one of the questions many of you might have right away is, Why are you going from ten to eight cores?” Guttridge said. “The answer to that question really goes back to... our focus was on maximizing real world performance, which is a combination of frequency and IPC [instructions per clock]. So as we looked at the microarchitecture, we ported the 10nm design for both the CPU and the graphics back to the 14nm manufacturing node.”
Yeah, because Intel would come out and say "why sell you 10 cores for the price of 8?"
A combination of frequency and IPC, and in order to maximize frequency, they had to remove 2 cores. Pretty simple and straightforward.
Look at what is actually being benchmarked when people say that Intel has high (twice the) power draw. List those benchmarks here and show us performance numbers for them from your most trusted website.
Spoiler alert: you are not going to find any performance results, because they use software that has no performance numbers.
 
Look at what is actually being benchmarked when people say that Intel has high (twice the) power draw. List those benchmarks here and show us performance numbers for them from your most trusted website.
Spoiler alert: you are not going to find any performance results, because they use software that has no performance numbers.
https://www.anandtech.com/show/16495/intel-rocket-lake-14nm-review-11900k-11700k-11600k/5
They go and tell you exactly what they were using for their power draw tests. In the benchmarks you can see what performance each of those programs got.
 
Yeah, because Intel would come out and say "why sell you 10 cores for the price of 8?"
I'm not really sure what that means. Is there an industry standard price per core in either manufacturing costs or at retail?

Look at what is actually being benchmarked when people say that Intel has high (twice the) power draw. List those benchmarks here and show us performance numbers for them from your most trusted website.
Spoiler alert: you are not going to find any performance results, because they use software that has no performance numbers.

I don't know how this is relevant to Intel saying they needed to remove cores to hit performance targets.
 
Basically big.little will help battery life, but the CPU's share of overall power draw is small compared to your monitor's. Getting more efficient monitors would help with power draw FAR more than a big.little CPU ever could.
I'm not sure how you came to the conclusion that the monitor is using most of the power. It is a little tricky to find data on the power usage of various components, but this 2004 article measured the screen as using 3.3 W at full brightness. At that time it would have used a less efficient CCFL backlight, rather than the LED backlights used now.

So it's safe to assume a modern laptop screen uses around 2 W. With standard battery sizes of ~60 Whr, that would be enough for 30 hours of 'screen' time. With laptop CPUs rated at 15 W, plus the overhead of other components, at full power the battery would last around 2-3 hours. So big.little is there to address the usage in between those extremes of 2-3 and 30 hours, which is where actual normal usage sits.
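As a quick sanity check on that arithmetic, here's a back-of-envelope sketch; the battery size, screen draw and CPU power are the rough assumptions above, and the 8 W 'overhead' figure is purely my own guess:

# Back-of-envelope battery-life check; all inputs are rough assumptions.
battery_wh = 60.0   # typical laptop battery capacity (Wh)
screen_w = 2.0      # assumed modern LED-backlit panel draw (W)
cpu_full_w = 15.0   # typical U-series CPU rated power (W)
overhead_w = 8.0    # guess: RAM, SSD, Wi-Fi, VRM losses, etc. (W)

print(f"screen only: {battery_wh / screen_w:.0f} h")               # ~30 h
full_load = battery_wh / (screen_w + cpu_full_w + overhead_w)
print(f"full load:   {full_load:.1f} h")                           # ~2.4 h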

Another paper I looked at measured smartphone power usage when watching a video at around 0.5 W. The same amount of decoding needs to happen on the phone as on the laptop - so it really highlights the potential power savings of super-efficient small cores for certain tasks.

I do agree, though, that more efficient monitors would help. It's a pity that OLED has not reached its potential, given the inherent inefficiency of an LCD system that by design blocks around 80% of the backlight even with a fully white screen.

Intel outsells AMD due to sheer market size and production ability.

You're saying Intel outsells AMD because it is bigger and sells more products. That's some real MBA material there! (sorry, I chuckled a little when I read this).
 
I'm not sure how you came to the conclusion that the monitor is using most of the power. It is a little tricky to find data on the power usage of various components, but this 2004 article measured the screen as using 3.3 W at full brightness.
The value of 3.3 W doesn't show up in the abstract (which is all that's available without requesting the whole document, it seems), but this statement does: "The display is the other main source of power consumption in a laptop; it dominates when the CPU is idle."

If you're doing something like web browsing and/or watching video, your CPU is going to be idle most of the time, because it'll only turbo briefly as needed to load new webpages and then downclock ASAP, and the video will be offloaded to a dedicated (low-power) decoder.
 
The value of 3.3 W doesn't show up in the abstract (which is all that's available without requesting the whole document, it seems), but this statement does: "The display is the other main source of power consumption in a laptop; it dominates when the CPU is idle."

If you're doing something like web browsing and/or watching video, your CPU is going to be idle most of the time, because it'll only turbo briefly as needed to load new webpages and then downclock ASAP, and the video will be offloaded to a dedicated (low-power) decoder.
The full article is here: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.87.5604&rep=rep1&type=pdf

- Looking at the charts more carefully, the monitor power usage is split into 'backlight' and 'LCD', which combined make up 38% of the 11.57 W total power draw at idle. The CPU's share is 4%. The 'other' share is significant at 31% - presumably the motherboard chipset etc. It's hard to compare a 2004 laptop with a modern setup, though, since in those days the Northbridge was a separate component on the motherboard rather than part of the CPU.
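Converting those shares into watts (just percentage arithmetic on the paper's figures cited above):

# Idle power shares from the 2004 paper, converted to watts.
total_w = 11.57
shares = {"backlight + LCD": 0.38, "CPU": 0.04, "other (chipset etc.)": 0.31}
for part, frac in shares.items():
    print(f"{part}: {total_w * frac:.2f} W")
# backlight + LCD: 4.40 W, CPU: 0.46 W, other: 3.59 W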

But of course, idle means completely idle. Moving the mouse or scrolling the page is not idle. A small supplementary core is not there to reduce the idle power draw - that's managed by the power states. The small core is there to reduce power when doing small things.

Other than that, I'm not entirely sure of your point - are you suggesting the engineers at ARM, Apple, Intel and AMD are all wasting their time in adding small cores to their CPUs? I'm happy to give them a little credit :)
 
https://www.anandtech.com/show/16495/intel-rocket-lake-14nm-review-11900k-11700k-11600k/5
They go and tell you exactly what they were using for their power draw tests. In the benchmarks you can see what performance each of those programs got.
Exactly what I'm talking about: the only thing there that has a performance number is Handbrake, and if you are all about efficiency, why wouldn't you use QSV to convert that?! Everything you see online is transcoded by QSV, NVENC or similar.

Also, the power draw in Handbrake is way below the maximum PL2 that Intel allows for turbo, even though these numbers are with power limits lifted, as the other benches on that page show. The 11600K in particular draws about 100 W less than it could and sits pretty much at its rated 125 W TDP; the others are about 50 W below PL2.
The 5800X is supposed to be a 105 W TDP part but is only barely below the 125 W 11600K in power draw.
Core i9-11900K: PL1 125 W, PL2 251 W, Tau 56 s
Core i7-11700K: PL1 125 W, PL2 251 W, Tau 56 s
Core i5-11600K: PL1 125 W, PL2 224 W, Tau 56 s
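For anyone unfamiliar with those three values, here's a minimal sketch (my own simplification, not Intel's exact algorithm) of how they interact: the chip may draw up to PL2 while an exponentially weighted moving average of package power stays below PL1, with Tau as the time constant of that average:

# Rough PL1/PL2/Tau simulation; a sketch, not Intel's real implementation.
import math

PL1, PL2, TAU = 125.0, 251.0, 56.0  # i9-11900K stock limits (W, W, s)
DT = 1.0                             # simulation timestep, seconds

avg = 0.0  # running average of package power
for t in range(121):
    draw = PL2 if avg < PL1 else PL1              # full turbo until budget runs out
    avg += (1.0 - math.exp(-DT / TAU)) * (draw - avg)  # EWMA update
    if t % 20 == 0:
        print(f"t={t:3d}s  draw={draw:5.1f} W  avg={avg:5.1f} W")

With the i9's stock numbers this holds PL2 for roughly 40 seconds before settling at PL1, which is why lifting the limits changes sustained power draw so much.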
 
Exactly what I'm talking about: the only thing there that has a performance number is Handbrake, and if you are all about efficiency, why wouldn't you use QSV to convert that?! Everything you see online is transcoded by QSV, NVENC or similar.

Also, the power draw in Handbrake is way below the maximum PL2 that Intel allows for turbo, even though these numbers are with power limits lifted, as the other benches on that page show. The 11600K in particular draws about 100 W less than it could and sits pretty much at its rated 125 W TDP; the others are about 50 W below PL2.
The 5800X is supposed to be a 105 W TDP part but is only barely below the 125 W 11600K in power draw.
Core i9-11900K: PL1 125 W, PL2 251 W, Tau 56 s
Core i7-11700K: PL1 125 W, PL2 251 W, Tau 56 s
Core i5-11600K: PL1 125 W, PL2 224 W, Tau 56 s
Ba ha ha ha ha ...
From your own link, no less ...
[chart image from the linked AnandTech review]

Also, that page never even remotely mentions the 5800X.
 
Also, that page never even remotely mentions the 5800X.
Don't mix all the posts together; I was responding about the AnandTech page that jeremy posted.
Ba ha ha ha ha ...
From your own link, no less ...
[chart image from the linked AnandTech review]
I posted that link to show the list of PL1, PL2 and Tau values. Show me the performance numbers for AIDA64 FPU AVX and Prime95 smallFFT AVX; show me how much performance you get in these things, because you would only run them if they provided you with anything useful.
Just as my initial post said...power draw "benchmarks" that don't test any actual performance.
Look at what is actually being benchmarked when people say that Intel has high (twice the) power draw. List those benchmarks here and show us performance numbers for them from your most trusted website.
Spoiler alert: you are not going to find any performance results, because they use software that has no performance numbers.
 
Don't mix all the posts together; I was responding about the AnandTech page that jeremy posted.

I posted that link to show the list of PL1, PL2 and Tau values. Show me the performance numbers for AIDA64 FPU AVX and Prime95 smallFFT AVX; show me how much performance you get in these things, because you would only run them if they provided you with anything useful.
Just as my initial post said...power draw "benchmarks" that don't test any actual performance.
You do understand conservation of energy?
https://en.wikipedia.org/wiki/Conservation_of_energy
[Chart: CPU package power over time in the Agisoft benchmark, from the AnandTech review]

  • Intel Core i9-11900K (1912 sec): 164 W dropping to 135 W
  • Intel Core i7-11700K (1989 sec): 149 W dropping to 121 W
  • Intel Core i5-11600K (2292 sec): 109 W dropping to 96 W
  • AMD Ryzen 7 5800X (1890 sec): 121 W dropping to 96 W
No AVX, AVX2 or AVX-512: the i5-11600K uses ~20% more energy than the 5800X, the i7-11700K uses ~40% more energy than the 5800X, and the i9-11900K uses ~50% more energy than the 5800X. Oh, and both the i7/i9 exceed their PL1 and Tau limits to boot, so the numbers quoted (~40%/~50%) are, if anything, slightly conservative. ...
[Images: MSI slides detailing Intel 11th Gen Rocket Lake power limits, temperatures, Adaptive Boost Technology and gear modes]

(rechecked the math and revised the percentages above accordingly, but ymmv)
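To make the conservation-of-energy point concrete: energy is average power times runtime. Here's a rough check using simple midpoints of the "dropping from X to Y" figures above; the exact percentages depend on how you weight the ramp-down, hence the ymmv:

# Energy = average power x time, using rough midpoints of the quoted
# "dropping" power ranges; treat the resulting percentages as approximate.
runs = {
    "Core i9-11900K": (1912, (164 + 135) / 2),
    "Core i7-11700K": (1989, (149 + 121) / 2),
    "Core i5-11600K": (2292, (109 + 96) / 2),
    "Ryzen 7 5800X":  (1890, (121 + 96) / 2),
}
base = runs["Ryzen 7 5800X"][0] * runs["Ryzen 7 5800X"][1]  # joules
for name, (secs, watts) in runs.items():
    joules = secs * watts
    print(f"{name}: {joules / 1000:6.1f} kJ ({joules / base - 1:+.0%} vs 5800X)")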
 
The idea of the homogeneous CPU core is nearing its end. With current silicon lithography, heat is the first enemy; the second is that there is a minimum theoretical size a silicon transistor can be, and we are not too far off it. So big.little gives a big advantage here. First, rather than running a full-power core on a low-priority task, we can do the same work with less heat, which gives headroom to the core running your high-priority tasks and results in higher sustained boosts. Little cores also let "always on" devices operate in a low-power mode in the hybrid space. Longer term, it lays the groundwork for the OS guys to get task schedulers lined out for core specialization. With all the modularization of the cores, I can really see a future where you have programmable-ASIC cores that act as software-specific hardware accelerators in the mainstream space. There is nothing really new here relative to the server space, but the magic will be that it moves into on-die systems running a stock desktop OS.
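To illustrate the scheduling idea in that paragraph, here's a purely hypothetical toy (not any real OS scheduler, and every name in it is made up): background work is routed to efficiency cores, leaving performance cores free to boost:

# Toy big.LITTLE placement policy; purely illustrative, not a real scheduler.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    background: bool  # always-on/low-priority work vs. interactive work

BIG = ["big0", "big1"]           # performance cores
LITTLE = ["little0", "little1"]  # efficiency cores

def assign(task: Task) -> str:
    pool = LITTLE if task.background else BIG
    pool.append(pool.pop(0))  # trivial round-robin within the chosen pool
    return pool[-1]

for t in (Task("mail-sync", True), Task("game", False), Task("indexer", True)):
    print(f"{t.name} -> {assign(t)}")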

Not to get in the middle of a war here, but core count does not scale directly with performance; in fact it is close to the opposite. Given one core at 5 GHz or two at 2.5 GHz, the 5 GHz machine will outperform the 2.5 GHz one because there is no thread-parallelism penalty on a single core. As we have to split and then recombine concurrently running threads back into the real-world timeline, something always has to wait on something else, and that creates a logical inefficiency as certain time-sensitive calculations are discarded. The penalty is great enough that in many cases multiple next-step cases are computed knowing that only one will be used and the others discarded. So the massive parallelism of something like a GPU is used to calculate a user's next anticipated movement in a game and have it ready to reduce latency, but at a significant waste of resources. All that extra core activity translates directly into more heat.

In an on-die setup, the tight proximity means a lot of CPU cores directly raises TDP, which means lower boost states because getting rid of the heat is problematic. Intel shrank the core count because of their process node issues: the +++ monikers on the process node are really about getting clock speeds up to compete with Ryzen, which runs on a more efficient node. Higher clocks mean more heat, and they pushed it far enough that they had to reduce the core count to stay in the envelope, not to rip off the public. AMD doesn't have to run in what we used to consider overclocking territory because their more compact, efficient process node gives them thermal headroom intrinsically.
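The one-fast-core-vs-two-slow-cores point is basically Amdahl's law. A minimal sketch, where the 20% serial fraction is an illustrative assumption rather than a measured value:

# Amdahl's-law toy: one fast core can beat two slower ones when part of
# the workload is inherently serial.
def runtime(work: float, serial_frac: float, cores: int, ghz: float) -> float:
    """Time to finish 'work' (expressed in cycles at 1 GHz)."""
    serial = work * serial_frac
    parallel = work * (1 - serial_frac)
    return (serial + parallel / cores) / ghz

work = 100.0
print(f"1 core  @ 5.0 GHz: {runtime(work, 0.2, 1, 5.0):.1f} units")  # 20.0
print(f"2 cores @ 2.5 GHz: {runtime(work, 0.2, 2, 2.5):.1f} units")  # 24.0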
 
You do understand conservation of energy?
https://en.wikipedia.org/wiki/Conservation_of_energy
  • Intel Core i9-11900K (1912 sec): 164 W dropping to 135 W
  • Intel Core i7-11700K (1989 sec): 149 W dropping to 121 W
  • Intel Core i5-11600K (2292 sec): 109 W dropping to 96 W
  • AMD Ryzen 7 5800X (1890 sec): 121 W dropping to 96 W
No AVX, AVX2 or AVX-512: the i5-11600K uses ~20% more energy than the 5800X, the i7-11700K uses ~40% more energy than the 5800X, and the i9-11900K uses ~50% more energy than the 5800X. Oh, and both the i7/i9 exceed their PL1 and Tau limits to boot, so the numbers quoted (~40%/~50%) are, if anything, slightly conservative. ...
But where are the performance numbers for Agisoft? When am I ever going to run this? What is it even measuring?

Also, as you very correctly noticed, this is with power limits lifted, so it's not what Intel sells you but what reviews show you.
As I have shown in the previous post: 30% more power for 1% more performance.
You wanna do the math again with power limits enforced?
 
If it were that easy, how come someone else hasn't risen up to take on any of those companies?
Because they got crushed by Intel's monopolistic actions.

See how hard it still is for AMD to get support from OEMs - take Dell's business line of computers, for example.

Dell was caught taking payments from Intel to keep AMD out before, and it looks like they are still doing it.

Dell's business lines (Latitude, OptiPlex and Precision) are the money makers for Dell, and guess what: none of them comes with an AMD CPU.
 
But where are the performance numbers for Agisoft? When am I ever going to run this? What is it even measuring?

Also, as you very correctly noticed, this is with power limits lifted, so it's not what Intel sells you but what reviews show you.
As I have shown in the previous post: 30% more power for 1% more performance.
You wanna do the math again with power limits enforced?
The problem there is that we already know which of the two processors is the more efficient watt-for-watt: the 5800X, without question. The only place where the 5800X is vastly outgunned is AVX-512. Period. Full stop.
 
But where are the performance numbers for Agisoft? When am I ever going to run this? What is it even measuring?

Also, as you very correctly noticed, this is with power limits lifted, so it's not what Intel sells you but what reviews show you.
As I have shown in the previous post: 30% more power for 1% more performance.
You wanna do the math again with power limits enforced?
It helps if you actually read the entire review. I put this in the same post where I linked to the power draw page of the article:
In the benchmarks you can see what performance each of those programs got.
However, since you seem incapable of doing anything by yourself, I will save you the difficult task of looking through the benchmarks and link each specific page.
Agisoft & 3D Particle Movement - https://www.anandtech.com/show/16495/intel-rocket-lake-14nm-review-11900k-11700k-11600k/7
Handbrake - https://www.anandtech.com/show/16495/intel-rocket-lake-14nm-review-11900k-11700k-11600k/9

We know that strictly enforcing power limits on Intel CPUs does affect their overall performance. That also IS NOT the default setup on the majority of motherboards; most Intel motherboards are set to unlimited Tau.

Who in their right mind thinks that 1% better performance for a 30% increase in power is a good thing when comparing 2 CPUs?
 