Question In reality, how much better is Alder Lake Golden Cove compared to Zen 3?

Apr 26, 2022
103
10
85
0
I have seen so many mixed messages. Some say Alder Lake blows Zen 3 out of the water even in multi-threaded workloads, and even more so on IPC.

Though many say IPC is only 11% better at the same clock speed, that the advantage is not universal, and that Zen 3 sometimes even pulls ahead in IPC depending on the software.

The strange thing is that many benchmarks show the 12900K ahead of the 5950X in multi-threaded work, sometimes by a lot, which is odd since the e-cores are supposedly so much weaker than both Zen 3 cores and Intel's P cores. And independent tests show the e-cores are meh at best when trying to run certain things on them.


https://www.tomshardware.com/news/intel-core-i9-12900k-vs-ryzen-9-5900x-5950x


A lot of it is so complicated and hard to understand.

It does seem Alder Lake's biggest advantage, especially when it pulls ahead in multi-threading, is that the P cores by default turbo to much higher clock speeds than Zen 3 cores, temps be darned, unless they hit 100°C or higher. Those much higher clocks, combined with the roughly 11% higher IPC, probably compensate for having only 8 strong cores plus 8 much weaker cores with much worse IPC and a max clock of around 4GHz, and help it pull ahead of Zen 3's lower default all-core clocks.

Your thoughts.
 
It does seem Alder Lake's biggest advantage, especially when it pulls ahead in multi-threading, is that the P cores by default turbo to much higher clock speeds than Zen 3 cores, temps be darned, unless they hit 100°C or higher.
It's not "temps by darned" or even "be darned"; that's the temperature threshold of the architecture/node or whatever that dictates max temps.
Intel is at 100°C and Ryzen at 95°C; there's barely a difference at that point.

Also, turbo doesn't go above the rated clocks no matter how much power you pump into the chip or how hot you let it get.
Intel does have much better clocks in general, but not because it lets them run wild.
 
Reactions: KyaraM
Apr 26, 2022
103
10
85
0
It's not "temps by darned" or even "be darned"; that's the temperature threshold of the architecture/node or whatever that dictates max temps.
Intel is at 100°C and Ryzen at 95°C; there's barely a difference at that point.

Also, turbo doesn't go above the rated clocks no matter how much power you pump into the chip or how hot you let it get.
Intel does have much better clocks in general, but not because it lets them run wild.

Well, both AMD and Intel are guilty of "temps be darned." Yes, they have a threshold of 100°C for Intel and 90-95°C for AMD, so it's all fine as long as temps stay within that threshold, even if they run close to it under heavy use all the time. I find that dangerous and unacceptable. I would want below 80°C for long-term usage, or maybe even lower, for both brands of CPU.
 
Well, both AMD and Intel are guilty of "temps be darned." Yes, they have a threshold of 100°C for Intel and 90-95°C for AMD, so it's all fine as long as temps stay within that threshold, even if they run close to it under heavy use all the time. I find that dangerous and unacceptable. I would want below 80°C for long-term usage, or maybe even lower, for both brands of CPU.
And you can have that very easily with either company as long as you stick with their recommendations.
You only get these high temps if you put everything on "auto" (meaning mobo makers put everything on full speed) and let it rip.
 
Reactions: KyaraM
Apr 26, 2022
103
10
85
0
And you can have that very easily with either company as long as you stick with their recommendations.
You only get these high temps if you put everything on "auto" (meaning mobo makers put everything on full speed) and let it rip.

Yes, which is why I hate auto settings and prefer a manual overclock at a static frequency. I hate how everyone says overclocking is dead, that you should just leave everything on auto because CPUs give their best performance out of the box these days, and that you will only hurt things by overclocking. Well, maybe there's a slight single-thread performance regression, but not much, and you get way better performance for anything that touches more than one core.

Plus the ability to get much better temps, since auto seems to overshoot everything these days thanks to motherboard manufacturers, temps be darned as long as they do not exceed the spec. Manual tuning is alive and well because of that.
 

KyaraM

Notable
Mar 11, 2022
802
280
890
33
Yes, which is why I hate auto settings and prefer a manual overclock at a static frequency. I hate how everyone says overclocking is dead, that you should just leave everything on auto because CPUs give their best performance out of the box these days, and that you will only hurt things by overclocking. Well, maybe there's a slight single-thread performance regression, but not much, and you get way better performance for anything that touches more than one core.

Plus the ability to get much better temps, since auto seems to overshoot everything these days thanks to motherboard manufacturers, temps be darned as long as they do not exceed the spec. Manual tuning is alive and well because of that.
I honestly have no idea what you are even on about. Auto-boost works perfectly well, and last time I checked, a manual OC didn't automatically set the CPU to its highest clocks only. Auto-boost also doesn't raise temps to dangerous levels, provided you have a cooler strong enough to handle the CPU. A manual OC does, since it exceeds stock clocks. So what even is your problem?

About the e-cores, they are by no means weak. They are Skylake-era in performance. I know people who still use Skylake chips; I used a Kaby Lake until just this year that was weaker than some Skylakes. They also aren't even meant for the tasks certain configuration tests threw at them. They are primarily there to run everything you wouldn't want to put on the p-cores, like Windows, Firefox, and other low-demand tasks. Yes, they also help when all cores are loaded... but it's not their job to be super powerful.

Yes, part of the greater performance in multithread certainly comes from the clock speed and higher IPC of the p-cores, but again, the e-cores are by no means weak.

Also, mobo makers do not increase clock speeds. My board is currently set to unlimited power and my 12700k still doesn't clock higher than the 4.7GHz it is supposed to run all-core, or 5GHz single. Power Limits do not influence core clocks.
 

Math Geek

Titan
Ambassador
My opinion is that if you didn't run a benchmark and just used either of their (comparable) products without knowing, that you wouldn't know. Both AMD and Intel offerings are extremely powerful and capable.
PREACH ON BROTHER!!! CAN I GET AN AMEN

this question is basically an argument along the same vein as "should i get the Lamborghini or the McLaren?" i mean, one has a .1 sec faster 0-60 time but the other has a 2 mph higher top speed. but with some mods, the lambo was 2 secs faster on the track, and blah blah blah......

in the end, either one is more than you need or ever hoped to experience yourself.

my thoughts run differently. my question is "for 2 mph faster, what mpg does it get?" lol

right now, AMD is the way to go for me. at very similar performance, the AMD does the job at MUCH LESS power. the AM4 socket is capped at 142W (yes, i know you can turn that off and go higher) while the intel offerings hit that below their stock advertised speeds. to get the big boosts you see folks talk about takes 300W+ for the intel offerings. that extra .1 sec 0-60 time, or in the pc world, 2-3 fps more, is not even close to worth the massive extra power used. the extra-strong mobo and cooling you need to run that intel cpu for a few extra fps is not worth the cost upfront or on my monthly electricity bill. take that extra cash and put it into a better gpu now that they are back to somewhat sane prices.

just my opinion though, take it and a couple bucks and you can buy a coke.....
 
Last edited:
Reactions: punkncat
Apr 26, 2022
103
10
85
0
I honestly have no idea what you are even on about. Auto-boost works perfectly well, and last time I checked, a manual OC didn't automatically set the CPU to its highest clocks only. Auto-boost also doesn't raise temps to dangerous levels, provided you have a cooler strong enough to handle the CPU. A manual OC does, since it exceeds stock clocks. So what even is your problem?

About the e-cores, they are by no means weak. They are Skylake-era in performance. I know people who still use Skylake chips; I used a Kaby Lake until just this year that was weaker than some Skylakes. They also aren't even meant for the tasks certain configuration tests threw at them. They are primarily there to run everything you wouldn't want to put on the p-cores, like Windows, Firefox, and other low-demand tasks. Yes, they also help when all cores are loaded... but it's not their job to be super powerful.

Yes, part of the greater performance in multithread certainly comes from the clock speed and higher IPC of the p-cores, but again, the e-cores are by no means weak.

Also, mobo makers do not increase clock speeds. My board is currently set to unlimited power and my 12700k still doesn't clock higher than the 4.7GHz it is supposed to run all-core, or 5GHz single. Power Limits do not influence core clocks.

Well, I had tested a 12900K on auto, and just starting Cinebench, voltages and clock speeds went up to 5.1GHz, temps hit 98°C right away, and it was about to throttle before I stopped it.

I did a manual overclock at 5.1GHz on all P cores with 1.275V on the same chip, and running Cinebench, temps only got into the 80s. Motherboard makers, especially Asus ROG boards, set everything so that performance goes to the max, temps be darned. Same behavior with an AMD system on Asus boards. But manually tuned, they are great.

And the e cores supposedly have Skylake IPC in theory, but the latency is very, very bad. I think it would be like taking a Skylake CPU and stripping off all of its L2 and L3 cache.


If the e cores really had Skylake-like IPC, they should get Skylake-like performance in games. Per the above link, they are not even close.
 
I hate how everyone says overclocking is dead, that you should just leave everything on auto because CPUs give their best performance out of the box these days, and that you will only hurt things by overclocking.
Well, I had tested a 12900K on auto, and just starting Cinebench, voltages and clock speeds went up to 5.1GHz, temps hit 98°C right away, and it was about to throttle before I stopped it.

I did a manual overclock at 5.1GHz on all P cores with 1.275V on the same chip, and running Cinebench, temps only got into the 80s.
And this is the problem. What you found is that CPUs are simply left to their own devices with regard to voltage, and voltage is the worse of the two when it comes to power dissipation (that whole P = C * V^2 * f formula).

For somewhat understandable reasons, AMD and Intel set an overkill V-F curve to make sure the processor at least hits the advertised turbo boost speeds, even though it's very likely the processor would hit them with less voltage. They can't tweak the V-F curve of every individual processor, as that's impractical, so it's a one-size-fits-all approach, and hope for the best.

And this goes beyond CPUs; GPUs are just as bad. My RTX 2070 Super, left to its own devices, will boost to about 2000MHz but also go up to 1.2V. I can run it at 1930MHz at 0.935V and it consumes a lot less power for almost no appreciable performance loss.
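To put rough numbers on that P = C * V^2 * f relationship, here is a minimal sketch; the voltage and clock figures are made up for illustration, not measured values:

```python
# Dynamic power scales as P = C * V^2 * f: voltage affects power
# quadratically, while frequency only affects it linearly.
def dynamic_power(c: float, v: float, f: float) -> float:
    """Relative dynamic power for capacitance c, core voltage v, frequency f."""
    return c * v ** 2 * f

# Hypothetical figures: same 5.0 GHz clock, stock vs. undervolted.
stock = dynamic_power(1.0, 1.35, 5.0)
undervolted = dynamic_power(1.0, 1.20, 5.0)

print(f"power ratio: {undervolted / stock:.2f}")  # ~0.79, i.e. ~21% less power
```

That is why dropping voltage alone, with clocks untouched, cuts heat so dramatically compared to shaving a few hundred MHz.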
 
Reactions: KyaraM
right now, AMD is the way to go for me. at very similar performance, the AMD does the job at MUCH LESS power. the AM4 socket is capped at 142W (yes, i know you can turn that off and go higher) while the intel offerings hit that below their stock advertised speeds. to get the big boosts you see folks talk about takes 300W+ for the intel offerings.
You only believe that because every review you looked at used a single test for power draw: put the CPU on unlimited power and run a power virus. That's not really representative of what the CPU is capable of.
The 12900K can beat the 5950X at 160W, which is 18W more that nobody is going to shed a tear over, and in single-threaded it beats it at much lower power across the board.
https://www.hardwareluxx.de/index.php/artikel/hardware/prozessoren/57430-core-i9-12900k-und-core-i5-12600k-hybrid-desktop-cpus-alder-lake-im-test.html?start=8
 
Reactions: KyaraM

KyaraM

Notable
Mar 11, 2022
802
280
890
33
Well, I had tested a 12900K on auto, and just starting Cinebench, voltages and clock speeds went up to 5.1GHz, temps hit 98°C right away, and it was about to throttle before I stopped it.

I did a manual overclock at 5.1GHz on all P cores with 1.275V on the same chip, and running Cinebench, temps only got into the 80s. Motherboard makers, especially Asus ROG boards, set everything so that performance goes to the max, temps be darned. Same behavior with an AMD system on Asus boards. But manually tuned, they are great.

And the e cores supposedly have Skylake IPC in theory, but the latency is very, very bad. I think it would be like taking a Skylake CPU and stripping off all of its L2 and L3 cache.


If the e cores really had Skylake-like IPC, they should get Skylake-like performance in games. Per the above link, they are not even close.
You literally set a fixed voltage. Get rid of the "manual OC" part, keep the fixed voltage, and see what it does. Hint: you won't miss your "manual OC". Because, as @hotaru.hino said, the voltage is the actual issue, not you setting your CPU to exactly its maximum clock speed.

How do I know, you ask? Because I run my 12700K with adaptive and offset voltage, set to 1.28V and -0.135V respectively, everything else at stock, and get exactly the same or better performance at manageable temperatures. Manageable by a Pure Rock 2, that is, which holds the CPU at around 80°C and 130W power draw. Want to know something even better? Even with an actual all-core overclock, if a small one, to 4.8GHz, it held the CPU below 90°C. Next time, before you complain, you might want to do some research. Intel isn't any different with that overly aggressive voltage, btw. They want to make sure that each CPU that meets the spec performs about the same, so they set those aggressive voltages to ensure it.

Also, gaming isn't even the e-cores' strength or intended use, and they still performed rather well despite that. No idea what the heck your issue even is. And if you want to see how a CPU performs, either look at the benchmarks again or just test it yourself, but don't ask here just to shoot down any answers you get.

You only believe that because every review you looked at used a single test for power draw: put the CPU on unlimited power and run a power virus. That's not really representative of what the CPU is capable of.
The 12900K can beat the 5950X at 160W, which is 18W more that nobody is going to shed a tear over, and in single-threaded it beats it at much lower power across the board.
https://www.hardwareluxx.de/index.php/artikel/hardware/prozessoren/57430-core-i9-12900k-und-core-i5-12600k-hybrid-desktop-cpus-alder-lake-im-test.html?start=8
Yeah, power use is nowhere near as bad as people make it out to be... even limited to 130W by undervolting, my 12700K at stock clocks still beats a 5900X in Cinebench by about 2k or more points, according to this list:
https://nanoreview.net/en/cpu-list/cinebench-scores
The result for the 12700K is a tad high, so I think it's overclocked, but the 12700KF checks out and is around the level of my own 12700K, which scores anywhere between 22500 and 23100 depending on the run and clocks. That is also where most reviews I have seen place the chip.

If I just power-limited it without undervolting, it would still perform better than a stock 5900X; I tested it before. From what I have seen, people report the 5900X drawing anywhere from 130 to 180W in Cinebench, or even higher according to this reddit thread:
https://www.reddit.com/r/Amd/comments/l9tsa8/5900x_peak_power_draw_which_is_it/


Maximum for me was 160W, btw, and 190W without undervolting or power limits... which a 5900X with PBO can do too, according to that thread. So it's definitely wrong to claim that Alder Lake is so much more inefficient.
Last night while gaming, it sat at around 30W. Wooow, such high power consumption... not as if the 230W of my 3070 Ti was almost 10x that. And sure, I can only measure it in HWiNFO and can't say anything about what the system draws at the wall. But the fact is, the chip at stock clocks with an undervolt and 130W power draw performs as well as or better than an equivalent Ryzen at about the same draw.
 
If I just power-limited it without undervolting, it would still perform better than a stock 5900X; I tested it before. From what I have seen, people report the 5900X drawing anywhere from 130 to 180W in Cinebench, or even higher according to this reddit thread:
https://www.reddit.com/r/Amd/comments/l9tsa8/5900x_peak_power_draw_which_is_it/


Maximum for me was 160W, btw, and 190W without undervolting or power limits... which a 5900X with PBO can do too, according to that thread. So it's definitely wrong to claim that Alder Lake is so much more inefficient.
The only difference is that Intel is pretty loose about enforcing its power limits, so almost every mobo sets the power limit at the max (241W) or even unlimited (300W+), while for Ryzen, AMD enforces the power limits, so almost all mobos stick to the PPT. You can still get one that has PPT set to unlimited by default; it's just much rarer.
 

KyaraM

Notable
Mar 11, 2022
802
280
890
33
The only difference is that Intel is pretty loose about enforcing its power limits, so almost every mobo sets the power limit at the max (241W) or even unlimited (300W+), while for Ryzen, AMD enforces the power limits, so almost all mobos stick to the PPT. You can still get one that has PPT set to unlimited by default; it's just much rarer.
My board currently has the limit unlocked. Cinebench R23 still drew only around 130W, so undervolting is insanely effective at reining CPUs in. 130W is pretty good for a 190W PL2 CPU, and most private users wouldn't ever get there in the first place.
 
Reactions: Why_Me
Apr 26, 2022
103
10
85
0
You literally set a fixed voltage. Get rid of the "manual OC" part, keep the fixed voltage, and see what it does. Hint: you won't miss your "manual OC". Because, as @hotaru.hino said, the voltage is the actual issue, not you setting your CPU to exactly its maximum clock speed.

How do I know, you ask? Because I run my 12700K with adaptive and offset voltage, set to 1.28V and -0.135V respectively, everything else at stock, and get exactly the same or better performance at manageable temperatures. Manageable by a Pure Rock 2, that is, which holds the CPU at around 80°C and 130W power draw. Want to know something even better? Even with an actual all-core overclock, if a small one, to 4.8GHz, it held the CPU below 90°C. Next time, before you complain, you might want to do some research. Intel isn't any different with that overly aggressive voltage, btw. They want to make sure that each CPU that meets the spec performs about the same, so they set those aggressive voltages to ensure it.

Also, gaming isn't even the e-cores' strength or intended use, and they still performed rather well despite that. No idea what the heck your issue even is. And if you want to see how a CPU performs, either look at the benchmarks again or just test it yourself, but don't ask here just to shoot down any answers you get.


Yeah, power use is nowhere near as bad as people make it out to be... even limited to 130W by undervolting, my 12700K at stock clocks still beats a 5900X in Cinebench by about 2k or more points, according to this list:
https://nanoreview.net/en/cpu-list/cinebench-scores
The result for the 12700K is a tad high, so I think it's overclocked, but the 12700KF checks out and is around the level of my own 12700K, which scores anywhere between 22500 and 23100 depending on the run and clocks. That is also where most reviews I have seen place the chip.

If I just power-limited it without undervolting, it would still perform better than a stock 5900X; I tested it before. From what I have seen, people report the 5900X drawing anywhere from 130 to 180W in Cinebench, or even higher according to this reddit thread:
https://www.reddit.com/r/Amd/comments/l9tsa8/5900x_peak_power_draw_which_is_it/


Maximum for me was 160W, btw, and 190W without undervolting or power limits... which a 5900X with PBO can do too, according to that thread. So it's definitely wrong to claim that Alder Lake is so much more inefficient.
Last night while gaming, it sat at around 30W. Wooow, such high power consumption... not as if the 230W of my 3070 Ti was almost 10x that. And sure, I can only measure it in HWiNFO and can't say anything about what the system draws at the wall. But the fact is, the chip at stock clocks with an undervolt and 130W power draw performs as well as or better than an equivalent Ryzen at about the same draw.

I know gaming is not the e-cores' strength or intended use, but will the e-cores perform well at more than just assisting the p-cores? Like, could you run a VM or some other task on them and get Skylake-like IPC? Or is it only Skylake IPC for certain tasks, of which gaming is not one? What tasks would the e-cores do well at, with true Skylake-like IPC, if you set affinity to them?
 
I know gaming is not the e-cores' strength or intended use, but will the e-cores perform well at more than just assisting the p-cores? Like, could you run a VM or some other task on them and get Skylake-like IPC? Or is it only Skylake IPC for certain tasks, of which gaming is not one? What tasks would the e-cores do well at, with true Skylake-like IPC, if you set affinity to them?
I'm under the impression that the E-cores aren't really there to assist the P-cores, although that's a plus. It's more that if you want maximum performance out of the P-cores, you need the E-cores to handle the background tasks so the P-cores aren't constantly task-switching.

Probably not the best analogy, but it's like how some vehicles have a primary power plant/engine and an auxiliary one. Examples of this include jet airplanes, tanks, and some semi-trucks. The auxiliary one is just there to provide essential power because running off the primary consumes a crapton of fuel.
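On the affinity question: under Linux you can actually try this yourself by pinning a process to a chosen set of logical CPUs and benchmarking it on the e-cores alone. A rough sketch; the E-core numbering here is an assumption (on a 12900K the 8 e-cores usually enumerate after the 16 P-core threads, but verify with `lscpu`):

```python
import os

def pin_to_cpus(pid: int, wanted) -> set:
    """Restrict pid to the given logical CPUs (only those actually present)."""
    available = os.sched_getaffinity(pid)  # CPUs the process may use right now
    target = set(wanted) & available       # drop indices this machine lacks
    if target:
        os.sched_setaffinity(pid, target)  # pid 0 means "the current process"
    return os.sched_getaffinity(pid)

# Assumed E-core numbering for a 12900K: logical CPUs 16-23. Verify first!
print(pin_to_cpus(0, range(16, 24)))
```

The same effect is available from the shell with `taskset -c 16-23 <command>`, which is often easier for one-off tests of a VM or benchmark on the e-cores.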
 
Reactions: KyaraM

KyaraM

Notable
Mar 11, 2022
802
280
890
33
I know gaming is not the e-cores' strength or intended use, but will the e-cores perform well at more than just assisting the p-cores? Like, could you run a VM or some other task on them and get Skylake-like IPC? Or is it only Skylake IPC for certain tasks, of which gaming is not one? What tasks would the e-cores do well at, with true Skylake-like IPC, if you set affinity to them?
As hotaru.hino said, the e-cores are there to take work off the p-cores so they can run more important threads instead. For example, things that would go onto the e-cores are database background tasks, or the OS. Those are tasks that don't need much processing power, so it doesn't matter whether they have Skylake-era IPC or not. To really know, you will have to test it yourself; I don't think anyone here can tell you.

Edit:
Also, looking at the test you linked above, it was VERY poorly done. The e-cores don't have HT, while all the other 4-core configs tested do. Modern games can run okay on 4c/4t configs, but I can tell you that those chips struggle; my 7600K certainly produces a lag fest. It's not really a fair comparison. Under those circumstances, they should have run 8 e-cores to level the field a bit, or tested without HT, but not this. Here, I found something better for you:

It might even answer your application question.
 
Last edited:
Apr 26, 2022
103
10
85
0
As hotaru.hino said, the e-cores are there to take work off the p-cores so they can run more important threads instead. For example, things that would go onto the e-cores are database background tasks, or the OS. Those are tasks that don't need much processing power, so it doesn't matter whether they have Skylake-era IPC or not. To really know, you will have to test it yourself; I don't think anyone here can tell you.

Edit:
Also, looking at the test you linked above, it was VERY poorly done. The e-cores don't have HT, while all the other 4-core configs tested do. Modern games can run okay on 4c/4t configs, but I can tell you that those chips struggle; my 7600K certainly produces a lag fest. It's not really a fair comparison. Under those circumstances, they should have run 8 e-cores to level the field a bit, or tested without HT, but not this. Here, I found something better for you:

It might even answer your application question.

Well, this test does test the 8 e-cores, and they get beaten at equal core counts in just about all cases and barely win against some 4- and 6-core configs from older generations. They are not all that good by themselves. Now, for pushing the p-cores to the top in Cinebench and some benchmarks, to compete with other chips with high counts of big cores, they do seem to assist well.
 
Well, this test does test the 8 e-cores, and they get beaten at equal core counts in just about all cases and barely win against some 4- and 6-core configs from older generations. They are not all that good by themselves. Now, for pushing the p-cores to the top in Cinebench and some benchmarks, to compete with other chips with high counts of big cores, they do seem to assist well.
If this is commenting on the results of their test, keep in mind that in their testing each core type was capped at 3.9GHz, with Hyper-Threading off in the P-core-only run. The results with both core types enabled also included turbo boosting and Hyper-Threading.
 
Reactions: KyaraM
Well, this test does test the 8 e-cores, and they get beaten at equal core counts in just about all cases and barely win against some 4- and 6-core configs from older generations. They are not all that good by themselves. Now, for pushing the p-cores to the top in Cinebench and some benchmarks, to compete with other chips with high counts of big cores, they do seem to assist well.
Skylake is 6th gen, though, and the 8 e-cores beat the 8-thread i7-6700K by 15% overall.
And since you brought up Cinebench, the e-cores alone are almost twice as fast in Cinebench as the 6700K, and about the same, a little lower, in single-threaded.
Yes, in most other things they're not that much faster, but still.

The question wasn't whether they are good compared to modern CPU cores, but whether they are really Skylake-level or not.
 
Reactions: KyaraM
Skylake is 6th gen, though, and the 8 e-cores beat the 8-thread i7-6700K by 15% overall.
And since you brought up Cinebench, the e-cores alone are almost twice as fast in Cinebench as the 6700K, and about the same, a little lower, in single-threaded.
Yes, in most other things they're not that much faster, but still.

The question wasn't whether they are good compared to modern CPU cores, but whether they are really Skylake-level or not.
If you have to have the fastest for gaming: the 5800X3D.
For multi-threaded or single-threaded workloads: the 12900K.

BUT the 5900X and 5950X can be found for significantly less now, so they do deliver a better bang for the buck. Competition is a good thing.
 

KyaraM

Notable
Mar 11, 2022
802
280
890
33
Well, this test does test the 8 e-cores, and they get beaten at equal core counts in just about all cases and barely win against some 4- and 6-core configs from older generations. They are not all that good by themselves. Now, for pushing the p-cores to the top in Cinebench and some benchmarks, to compete with other chips with high counts of big cores, they do seem to assist well.
Please stop moving the goalposts here. The original discussion was about the e-cores compared to Skylake-era CPUs. As said elsewhere, the e-cores are lower-clocked than those processors and not hyper-threaded, so comparing 4c to 4c/8t as in your article makes no sense, and in quite a few cases they still beat those older CPUs, with their 8 physical cores being only part of the reason. Also, they were never supposed to be awesome by themselves, but I would frankly argue that they still are.

If you have to have the fastest for gaming: the 5800X3D.
For multi-threaded or single-threaded workloads: the 12900K.

BUT the 5900X and 5950X can be found for significantly less now, so they do deliver a better bang for the buck. Competition is a good thing.
That wasn't the question here, though. Also, the 12700K is a great option too, just FYI. And no, the 5800X3D is at best 1% faster than the 12900K overall (it really depends on the games whether it can even reach that at all), which in turn is a handful of percent better than the 12700K, which again is a couple percent better than the 5900X and 5950X (for gaming, in the case of the latter), with the 5950X being kind of a waste for gaming. The 12700K(F) and the 5900X cost about the same, and the 12700 is even cheaper than the 5900X while being almost as good as the 12700K. Then there is the 12600K on a similar level as well, and so on. It hardly matters which of those CPUs anyone picks, and if you value application performance at all, the 5800X3D flies out the window faster than you can look. There are more options than the ones you listed. It's not "all AMD unless you only do applications"...
 
