News: Intel's 14-Core Alder Lake-P CPU Falls to 8-Core AMD Ryzen 7 5800H in Early Benchmarks

As far as I see it, it makes perfect sense that a 6-core processor is beaten by an 8-core processor.
The tiny Gracemont cores are mostly going to be a marketing gimmick to trick the "more cores is more better" market into thinking they're getting some mega-ultra Ryzen-killer. But really, it's unreasonable to expect those extra Atom cores to provide much more than the compute power of a phone or netbook, assuming highly parallel software can even use all the threads simultaneously.
 
As far as I see it, it makes perfect sense that a 6-core processor is beaten by an 8-core processor.
The tiny Gracemont cores are mostly going to be a marketing gimmick to trick the "more cores is more better" market into thinking they're getting some mega-ultra Ryzen-killer. But really, it's unreasonable to expect those extra Atom cores to provide much more than the compute power of a phone or netbook, assuming highly parallel software can even use all the threads simultaneously.

Whether Windows and other devs can make good use of them really remains to be seen. The little cores are supposedly as quick as low-power Skylake cores, so they ought to be able to handle important tasks. It's possible that most everyday tasks end up on the little cores until the user hits re-calculate on their spreadsheet. Little cores have become standard on mobile, after all, even on cheap chips.

I also don't think the target market cares so much about core counts. Top-end Ryzens will be faster, so the hardcore crowd will pooh-pooh Alder Lake regardless. These seem more geared to mainstream and business users who care more about battery life than top performance. Those systems often have lots of messy software on them - Office document syncs, anti-malware, authentication helpers, screenshot tools, IT management tools scanning and deploying and updating things under the hood, bloatware applying 3D surround effects to every beep and displaying the forecast on your taskbar, etc. Those seem like a waste of big-core watts.
 
As far as I see it, it makes perfect sense that a 6-core processor is beaten by an 8-core processor.
The tiny Gracemont cores are mostly going to be a marketing gimmick to trick the "more cores is more better" market into thinking they're getting some mega-ultra Ryzen-killer. But really, it's unreasonable to expect those extra Atom cores to provide much more than the compute power of a phone or netbook, assuming highly parallel software can even use all the threads simultaneously.
You must have missed the 12900K benchmarks: 8 cores beating 16 cores. The leaked marketing slides from Intel indicated up to twice the multithreaded performance of an 11900K. And the benchmarks below indicate this might actually end up being true:

Intel Core i9-12900K benchmark leak puts the Alder Lake-S processor way beyond the capabilities of the AMD Ryzen 9 5950X while embarrassing the Core i9-11900K and Core i9-10900K
 
You must have missed the 12900K benchmarks: 8 cores beating 16 cores. The leaked marketing slides from Intel indicated up to twice the multithreaded performance of an 11900K. And the benchmarks below indicate this might actually end up being true:

Intel Core i9-12900K benchmark leak puts the Alder Lake-S processor way beyond the capabilities of the AMD Ryzen 9 5950X while embarrassing the Core i9-11900K and Core i9-10900K

That's a desktop result, and apparently the Alder Lake desktop chip was running on a custom loop and heavily factory overclocked, per the guy who posted the results. Furthermore, the article is about MOBILE chips, not desktop; you posted desktop results.

Finally, if this is all Intel has under the hood, then Alder Lake will be DOA come the Zen 3+ release, which will get a +15% IPC boost from the new cache, to say nothing of the IPC boost coming with Zen 4. It's a nice temporary win for Intel, and it will be a nice chip. But apparently, per that same overclocker, that Intel chip was a factory-overclocked chip with a 200 W TDP.
 
You must have missed the 12900K benchmarks: 8 cores beating 16 cores. The leaked marketing slides from Intel indicated up to twice the multithreaded performance of an 11900K. And the benchmarks below indicate this might actually end up being true:

My starting thesis is that Intel's plan is essentially to lie about core count in order to trick the layman - similar to how they let their hardware partners count those Optane cache drives as extra "memory" (RAM), even though the whole point of those things was to make storage run a bit faster.
So I don't put any faith whatsoever in Intel's marketing.
Will a 10nm processor supporting DDR5 get better IPC than Intel's 14nm? Definitely.
Will having a couple of lower-power cores to pick up background tasks improve battery life at idle? Probably.

Do I care about the battery life in a desktop computer? Well... Somebody at Intel seems to think that matters.

Do I think Atom cores are going to meaningfully improve rendering performance? Well, every Atom processor so far has had beyond-terrible workstation performance, so I'm highly doubtful of that.

Will games even ever see a benefit?
Not many games get any benefit from having over 16 threads, and odds are the extra complexity is going to cause frame-time issues, possibly to the point that either the game or the user will need to disable the low-power cores, at least until software catches up.

If software actually ever does catch up... It doesn't exactly seem like Microsoft has its best and brightest engineers working on modernizing Windows as a functional OS. They're too focused on form over function.

I'm predicting Alder Lake will be a disaster in basically everything but ultrabooks or low-end Chromebooks - where the big.LITTLE concept actually makes sense. I think this next generation is the result of a cascade of management mistakes, miscommunication, and cost-cutting made during Intel's transition to a new CEO. I don't see this concept making it into an eventual 13900K. Trying to force a mobile architecture into the high-performance market is just a bad idea on the face of it. Bad enough that there's still a very real possibility Alder Lake-S will just be a paper launch.

But then again, Alder Lake will probably still sell fine. Intel knows your mom doesn't know what a core or a benchmark is; she just sees a higher number and trusts that it will be better at the emails.
 
I think benchmarking against AMD is a fail of the worst kind.

It seems painfully obvious that Alder Lake was designed to compete with ARM. So the benchmarks that matter would be battery life against ARM battery life, and the other areas where ARM is known to excel.
 
My starting thesis is that Intel's plan is essentially to lie about core count in order to trick the layman - similar to how they let their hardware partners count those Optane cache drives as extra "memory" (RAM), even though the whole point of those things was to make storage run a bit faster.
So I don't put any faith whatsoever in Intel's marketing.

I don't put much faith in anyone's marketing. But Raichu is a reputable leaker, so it is unlikely the results he posted are just made up. Something generated the benchmark results above, and they come quite close to what Intel was claiming behind closed doors. It's not the marketing that should create interest; it's the benchmarks that seem to back it up.

Will games even ever see a benefit?
Not many games get any benefit from having over 16 threads, and odds are the extra complexity is going to cause frame-time issues, possibly to the point that either the game or the user will need to disable the low-power cores, at least until software catches up.

I wouldn't pay too much attention to the multi-threaded results. Most people won't ever see the benefit of those, because they almost never need 24 threads. Look at the single-threaded score: 810 is a crazy result. That's 52% higher than a 10900K and 26% higher than a 5950X. Eight cores with 26% higher single-threaded performance than a 5950X should do OK in gaming, I think. It's really hard to see a scenario where that level of performance is going to be a disaster.
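For what it's worth, the percentages line up with the leaked Cinebench R20 single-thread numbers floating around this thread. A quick sanity check in Python; the ~533-point 10900K baseline is my own assumption from typical published results, not a figure from this thread:

```python
# Rough check of the single-thread claims (Cinebench R20 points).
# 810 is the leaked 12900K QS score; 640 for the 5950X is quoted in this
# thread; ~533 for the 10900K is an assumed typical review figure.
leaked_12900k = 810
baselines = {"10900K": 533, "5950X": 640}

for cpu, score in baselines.items():
    gain = (leaked_12900k / score - 1) * 100
    print(f"12900K vs {cpu}: +{gain:.0f}%")  # ~+52% and ~+27%
```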
 
Will having a couple of lower-power cores to pick up background tasks improve battery life at idle? Probably.

Do I care about the battery life in a desktop computer? Well... Somebody at Intel seems to think that matters.
So you think that Intel added more cores, which draw more power, because they thought that would decrease power draw?
You know very well that 100% of reviews will unlock power limits and run AVX-512 on all available cores at once, and we will get headlines of "OMG, Intel at 400 W," just like we did last time (just with even more watts).
Do I think Atom cores are going to meaningfully improve rendering performance? Well, every Atom processor so far has had beyond-terrible workstation performance, so I'm highly doubtful of that.
Adding terrible performance is still adding performance. AMD's 16-core CPU loses 25% on all cores when all cores are running. If Alder Lake can keep the bigger cores at full clocks, or close to it, then there is that much less for the smaller cores to do.
Detailed testing will have to show which approach gets better results.
[Chart: per-core performance scaling on the Ryzen 9 5950X]
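As a back-of-the-envelope illustration of that argument (all numbers here are assumptions for illustration, not measurements): if 16 homogeneous cores each lose ~25% under all-core load, while a hybrid chip keeps 8 big cores at full clock and adds 8 little cores worth roughly half a big core each, the aggregate throughput comes out about the same:

```python
# Toy throughput model, in units of "one big core at full clock = 1.0".
# The 25% all-core penalty and the 0.5x little-core figure are assumptions.
homogeneous_16 = 16 * (1 - 0.25)      # 16 big cores, all throttled by 25%
hybrid_8_plus_8 = 8 * 1.0 + 8 * 0.5   # 8 big at full clock + 8 little

print(homogeneous_16)    # 12.0
print(hybrid_8_plus_8)   # 12.0 -> roughly a wash under these assumptions
```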

Will games even ever see a benefit?
Not many games get any benefit from having over 16 threads, and odds are the extra complexity is going to cause frame-time issues, possibly to the point that either the game or the user will need to disable the low-power cores, at least until software catches up.
From having all the background tasks completely isolated on the smaller cores, away from the game threads? You bet your behind they will.
Software support for heterogeneous cores in the same system is plenty mature, and Intel is one of the few companies that can actually get it implemented in Windows.

All that said, I still think that adding smaller cores is crap and only a handful of people will actually benefit from it. I hope Intel will release a full lineup of CPUs without the smaller cores.
 
The answer is simple: Alder Lake needs over 200 W to win vs. Zen 3.

Reduce its wattage and it loses.

And then comes Zen 3+ (3D). And that's a very different battle for Intel...
 
Things are starting to get really, really interesting, now that Intel has apparently optimized their 10nm process to a level where it allows for improved efficiency but also for quite some scalability with respect to energy consumption and TDP tolerances.
It will be interesting to see whether and how much AMD's strategy of larger L3 caches will help fight off Alder Lake, since the advantages of L3 cache are partly offset by increased RAM speed - and Alder Lake is supposed to go all in on DDR5. So, Zen 3+'s large, stacked L3 might help increase IPC gen-on-gen compared with the original Zen 3, but Alder Lake may make up for this in part by using DDR5 RAM.

Actually, I consider the rumored Zen 3+ a mere transitional product and Alder Lake a first-gen big.LITTLE-esque concept by Intel. So, while it will be great fun to watch those upcoming CPUs perform and fight each other, the sensible choice would be to wait another generation before buying into an entirely new ecosystem.
 
One thing to bear in mind is that "little" core is a very relative term.
Gracemont cores are roughly on par with the cores we've seen in Skylake.
So it's not the kind of Atom you were used to seeing in Chromebooks and NAS boxes.

As for the "nobody cares about power consumption" argument you may often see echoing in discussions;
you would be surprised how relevant it can be depending on where you live .

Electricity in the US can be dirt cheap and may go as low as 10 US cents per kWh.
In Europe or Japan, by contrast, the average price per kWh before fees and tax is around 25-30 US cents.

Saving as little as 5 kWh per day can mean the difference between sitting at home and spending an extra weekend in Aspen with your family.
 
I think if desktops get little cores, they'll mostly be used for
AV, email, weather, messengers, Windows updates, and whatever else runs in the background, so that it won't impact your gaming performance when you push things to the max.
 
Yep, Windows 11 is needed for big/little support. But how much it helps remains to be seen.

Microsoft has gained some experience in this regard from their Surface Pro product line. It remains to be seen how much of that experience they will be able to carry over to the high-performance desktop ecosystem. Another piece of the puzzle will be Microsoft's willingness to implement these new power schemes into Windows 10.
Windows 11 may be just around the corner, but I don't see any great enthusiasm for the new "last ever" version of Windows. Combined with other changes, some of which were terribly communicated, we may be in for a slow adoption like it was with the Win7 -> Win10 transition.
 
Let’s set a few things straight.

(a) This benchmark is for a mobile/laptop CPU. If you look at the benchmark (actual Geekbench source here) you also see that it is Socket 1744 FCBGA. So, this is a BGA chip on a socket with 1744 contacts instead of the 1700 which we know the desktop socket will have. Besides, we already know that there is no 14-core desktop SKU. On the desktop you have 16-core (8+8) i9s, 12-core (8+4) i7s, and 10-core (6+4) i5s. This 14-core has 6 Golden Cove cores plus 8 Gracemont ones, hence the 20 threads (6 Golden Cove cores with hyperthreading = 12 threads, plus 8 Gracemont cores without hyperthreading = 8 threads; the thread math is sketched after point (e) below). It is either an i5 or i7 H-series CPU.

(b) Geekbench is a benchmark that Intel has dominated with its previous Cove architectures, and I doubt we will see a reversal with the best Cove yet. First Tiger Lake and then Rocket Lake CPUs have been chart-toppers in Geekbench 5: the 11900K scores above 1900 points in single thread and is over 15% ahead of the 5950X. The 11900H (Tiger Lake mobile) does around 1660. A score of 1258 ST would mean a huge regression for Golden Cove over its predecessors, and such a thing won't happen.

(c) This is one of the first (and probably THE first) mobile Alder Lake benchmarks to have leaked. For desktop chips, on the other hand, there have been leaks for several months now. Mobile and desktop ES/QS chips are not at the same level of readiness; desktops are actually way ahead. Desktop chips are already at the Qualification Sample stage, whereas mobile chips are at the Engineering Sample stage. Also, this benchmark is from June 30th, almost a month ago, which is ages in ES time.

(d) There have been leaks of Alder Lake's desktop qualification samples. The stock 12900K (with unlocked power limits but stock frequencies) scores over 11600 points in Cinebench R20, beating the 16-core 5950X (which at stock scores around 9950-10050) by over 15% in multithread. In single thread it does over 810, beating the 5950X (which does around 640) by over 26.5% (both deltas are worked below). Yet earlier, at the engineering-sample stage, the score for the 12900K was only around 9500. This means there was an improvement of over 22% just going from late-stage ES to early/mid-stage QS. And I am not even mentioning the scores of early-stage ES, which were much slower. So don't judge based on ES leaks.

(e) The test was on Windows 10. Windows 11 is what will add hybrid CPU support.
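A minimal sketch of the arithmetic behind points (a) and (d), using only the figures quoted in this post (the SKU configurations are the rumored ones discussed above):

```python
# Thread counts for the rumored hybrid SKUs: Golden Cove P-cores have
# hyperthreading (2 threads each), Gracemont E-cores do not (1 each).
def threads(p_cores: int, e_cores: int) -> int:
    return p_cores * 2 + e_cores

print(threads(6, 8))   # 20 -> the 14-core mobile chip in this leak
print(threads(8, 8))   # 24 -> the rumored 16-core desktop i9 (12900K)

# Point (d): percentage deltas from the leaked Cinebench R20 scores.
qs_mt, zen_mt = 11600, 10000   # 12900K QS vs ~stock 5950X, multithread
qs_st, zen_st = 810, 640       # single thread
es_mt = 9500                   # earlier 12900K engineering-sample score

print(f"MT lead: {(qs_mt / zen_mt - 1) * 100:.0f}%")       # ~16%
print(f"ST lead: {(qs_st / zen_st - 1) * 100:.1f}%")       # ~26.6%
print(f"ES -> QS gain: {(qs_mt / es_mt - 1) * 100:.0f}%")  # ~22%
```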
 
That was a lot to read, but basically we need to wait for final silicon. And by the time this hits the market, Zen 3 will be a year old. If this isn't faster, then Intel is seriously doing something wrong.
 
As for the "nobody cares about power consumption" argument you may often see echoing in discussions;
you would be surprised how relevant it can be depending on where you live .

Electricity in US can be dirt cheap, and may go as low 10 cents US per kWh.
In Europe or Japan by contrast, the average price per kWh before fees and tax is around 25-30 cents US.

Saving as little as 5kWh per day can mean the difference between sitting at home, and spending an extra weekend in Aspen with your family.
At 11.4 cents/kWh, the USD cost of electricity to run a system 24/7 for a year equals its wattage. e.g. A CPU which draws 100 Watts will cost you $100/yr if your electricity is 11.4 cents/kWh. It's a happy coincidence for Americans since 11.4 cents/kWh is pretty close to the average price of electricity in the US (usually 10-12 cents/kWh; currently 10.4 cents/kWh). You can just look at the Wattage of a device, and put a dollar sign in front of it to see roughly how much electricity it would cost you if you ran it 24/7 for a year.

So when electricity is 3x11.4 = 34.2 cents/kWh, a 135W CPU run 24/7 for a year will cost you 3x135 = US$405/yr in electricity.

However, we're not comparing against zero. We're comparing against another CPU which draws (nominally) 105 Watts. So the wattage delta is only 30 Watts. Meaning the difference in electricity cost over a year between your two choices is only $30/yr in the U.S., 3x30 = $90/yr @ 34.2 cents/kWh.

And if you're not running it at max power 24/7 - say it spends only 5 nights per week rendering, dropping average wattage to just 1/3 of peak - then the difference drops to just $10/yr in the U.S., or $30/yr @ 34.2 cents/kWh.

These CPUs cost around $500. So although the lifetime electricity cost can be on the order of the CPU cost, when comparing between two CPUs the electricity consumption difference is usually not enough to change a purchase decision. It only becomes a significant factor when you're building things like a large server or render farm. (And in that case, you're probably looking at $ per performance, rather than $ per component or $ per year.)

It's mostly mobile systems where power consumption (between choice of CPU) is a significant factor. And that's because your energy budget is constrained by your battery capacity, not because of the cost of electricity. (In fact this is the root cause of all our climate change woes. There's little incentive for an individual to save energy because the savings is peanuts on a per person basis. But multiply that by 8 billion people and it becomes a massive amount.)
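The rule of thumb above is easy to verify; here's a quick sketch (the 11.4 cents/kWh break-even figure falls straight out of the 8,760 hours in a year):

```python
# Annual electricity cost for a device drawing `watts` continuously.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost_usd(watts: float, cents_per_kwh: float) -> float:
    kwh_per_year = watts / 1000 * HOURS_PER_YEAR
    return kwh_per_year * cents_per_kwh / 100

# At 11.4 cents/kWh, annual dollars roughly equal watts:
print(annual_cost_usd(100, 11.4))        # ~99.9  -> "a 100 W CPU costs $100/yr"
print(annual_cost_usd(135, 34.2))        # ~404.5 -> the $405/yr example above
print(annual_cost_usd(135 - 105, 11.4))  # ~30    -> the 30 W delta, $30/yr
```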
 
I think if desktops get little cores, they'll mostly be used for
AV, email, weather, messengers, Windows updates, and whatever else runs in the background, so that it won't impact your gaming performance when you push things to the max.
For the desktop, beyond 2 little cores, they aren't being added for power savings; they're there for better peak multithreaded performance. By going from 8+0 to 8+2, you add the ability to shut down the bigger cores and use the lower-power cores for mundane tasks. If you have an 8+4, you're not moving to an 8+8 config to save power, only to add performance. Once you have some lower-power cores, you don't keep adding them to save more power. That's not how it works.
 
Microsoft has gained some experience in this regard from their Surface Pro product line. It remains to be seen how much of that experience they will be able to carry over to the high-performance desktop ecosystem. Another piece of the puzzle will be Microsoft's willingness to implement these new power schemes into Windows 10.
Windows 11 may be just around the corner, but I don't see any great enthusiasm for the new "last ever" version of Windows. Combined with other changes, some of which were terribly communicated, we may be in for a slow adoption like it was with the Win7 -> Win10 transition.
Windows 11 will be out by the time Alder Lake S is released. All of the major OEMs will ship Alder Lake with Windows 11 by default. If the OEM allows it, you'd have to intentionally opt in to Windows 10 over 11, which wouldn't make a lot of sense. On the DIY side, people will switch if given a reason to. If Alder Lake performs much better with Windows 11, then people building AL systems will switch. The upgrade is free, so it's not like there is some financial barrier to entry.
 
At 11.4 cents/kWh, the USD cost of electricity to run a system 24/7 for a year equals its wattage. e.g. A CPU which draws 100 Watts will cost you $100/yr if your electricity is 11.4 cents/kWh. It's a happy coincidence for Americans since 11.4 cents/kWh is pretty close to the average price of electricity in the US (usually 10-12 cents/kWh; currently 10.4 cents/kWh). You can just look at the Wattage of a device, and put a dollar sign in front of it to see roughly how much electricity it would cost you if you ran it 24/7 for a year.

So when electricity is 3x11.4 = 34.2 cents/kWh, a 135W CPU run 24/7 for a year will cost you 3x135 = US$405/yr in electricity.

However, we're not comparing against zero. We're comparing against another CPU which draws (nominally) 105 Watts. So the wattage delta is only 30 Watts. Meaning the difference in electricity cost over a year between your two choices is only $30/yr in the U.S., 3x30 = $90/yr @ 34.2 cents/kWh.

And if you're not running it at max power 24/7 - say it spends only 5 nights per week rendering, dropping average wattage to just 1/3 of peak - then the difference drops to just $10/yr in the U.S., or $30/yr @ 34.2 cents/kWh.

These CPUs cost around $500. So although the lifetime electricity cost can be on the order of the CPU cost, when comparing between two CPUs the electricity consumption difference is usually not enough to change a purchase decision. It only becomes a significant factor when you're building things like a large server or render farm. (And in that case, you're probably looking at $ per performance, rather than $ per component or $ per year.)

It's mostly mobile systems where power consumption (between choice of CPU) is a significant factor. And that's because your energy budget is constrained by your battery capacity, not because of the cost of electricity. (In fact this is the root cause of all our climate change woes. There's little incentive for an individual to save energy because the savings is peanuts on a per person basis. But multiply that by 8 billion people and it becomes a massive amount.)


Even these scenarios are quite unreasonable. Average power consumption between Intel and AMD on multi-hour mixed workloads, which are far more typical, usually differs negligibly and sometimes even favors Intel. If you render a significant amount of the time, sure, maybe, but then you should go for HEDT. For the average desktop consumer this is, or should be, a complete non-issue.
 
The tiny Gracemont cores are mostly going to be a marketing gimmick to trick the "more cores is more better" market into thinking they're getting some mega-ultra Ryzen-killer.

The little cores are there to provide savings to large organizations with huge numbers of business systems.

It also gives business laptop makers a way to make more economical / higher-margin / lighter (pick 1-3) "all day" models, since they don't have to cram in as much battery.

The fact that they're also offered to consumers on the desktop is just a side product. Intel's actual focuses are business and mobile.

Do I care about the battery life in a desktop computer? Well... Somebody at Intel seems to think that matters.

Businesses care (about power, not desktop battery life). The costs - not only of electricity for the systems but of office A/C as well - will stack up across a large fleet of computers.

Who would think that Intel designs new tech primarily for consumers? Gamers are very close to an afterthought, or at best a 2nd or 3rd consideration - excluding marketing, of course.
 
Surely power efficiency is the primary reason Intel is doing hybrid CPUs for mobile, but for desktops, although important, it is rather secondary. The primary benefit of using additional little cores instead of additional big cores is that Atom cores are up to 4x smaller than Core cores while offering about half the performance (even after accounting for lower IPC and lower frequency). So, from Atom cores you get up to double the performance per mm². This area efficiency allows Intel to increase multi-threaded performance without using too much die area: the 12900K will have the area of a 10-core Golden Cove CPU while having the multi-thread performance of a 12-core Golden Cove CPU. This lowers manufacturing cost - and it is not just the area cost, it is also the lower yields you would get by increasing the die size.

This will also allow Intel in the future to transition to their newer nodes much faster, as they can move only the simpler/smaller Atom cores to the new node first and wait a year until the node is more mature to move the big/complex cores there, re-enabling a sort of new tick-tock. Last but not least, by having the same architecture in both desktops and laptops, there won't be software intended or optimized only for mobile or only for desktop. There will be a universal ecosystem, allowing for a faster and more optimized transition of x86 software to hybrid CPUs.
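A rough sketch of the area/performance trade described above, under the post's stated assumptions (an E-core at ~1/4 the area and ~1/2 the throughput of a P-core; these ratios come from the post, not from measured die shots):

```python
# Normalize to one Golden Cove P-core: area = 1.0, throughput = 1.0.
E_AREA, E_PERF = 0.25, 0.5  # assumed E-core ratios quoted in the post

def die_budget(p_cores: int, e_cores: int) -> tuple:
    area = p_cores + e_cores * E_AREA
    perf = p_cores + e_cores * E_PERF
    return area, perf

print(die_budget(8, 8))    # (10.0, 12.0): the area of ~10 P-cores with the
                           # multithread throughput of ~12 P-cores
print(E_PERF / E_AREA)     # 2.0 -> "double the performance per mm^2"
```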