News AMD updates Zen 5 Ryzen 9000 benchmark comparisons to Intel chips — details Admin mode boosts, chipset driver fix


bit_user

You are not comparing core vs core on efficiency,
In order for 8P + 8E cores to be more efficient than 10P cores or 12P cores, the extra 8E cores must be more efficient than adding 2P or 4P cores!

That's not the same as saying E-cores are more efficient than P-cores, because they are not.
You can say "water isn't wet" or whatever other fantasies you dream up, but the facts I've presented clearly show that E-cores are more efficient for much of their operational range and improve SoC efficiency with a sane frequency-scaling algorithm.

My point is that since 12 P-cores have roughly the same efficiency as 8+8, 16 P-cores would obviously be both faster and more efficient than 8+8.
Only if cost is no object, which is why I cited the $1440 street price for an actual 16 P-core CPU. At that point, it's a silly comparison.

At any reasonable wattage you will be running desktop chips at, a P-core is more efficient than an E-core. Even at, let's say, a very low power limit of 125 W, that's 8 W per core.
Well, 125W PL1 wasn't even a "thing" until Comet Lake. Before that, the standard TDP for K-series was 95W, but K-series are marketed for overclocking. The non-K are 65W, so that's what I consider reasonable. Divided across 16 E-cores, you'd get 16.25W per E-core cluster. According to this data, that's worth about 21 MB/s * 4 = 84 MB/s @ 7zip compression:

[Chart: 7-Zip compression throughput (MB/s) vs. power per 4-core group]


The same 65W across 8 P-cores gets you 32.5 W per 4 P-cores, which provide 28 MB/s of compression performance. So multiply that by 2 and we get a meager 56 MB/s, which is only 66.7% as fast as 16 E-cores.

Yes, the reason I went to 16 E-cores is that you can't burn that much power with 8 E-cores. Let's say you ran 8 E-cores at 3.6 GHz, which is as high as their data goes. That gets you 23 MB/s * 2 = 46 MB/s @ 48W. So, at full tilt, 8 E-cores would deliver 0.958 MB/s/W, while the 8 P-cores would deliver only 0.862 MB/s/W. So, even in that case (which is above the efficiency sweet spot of the E-cores), 8 E-cores are still 11.2% more efficient than 8 P-cores!
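For what it's worth, the arithmetic above can be double-checked in a few lines (the throughput figures are read off the chart, so treat them as approximate):

```python
# Sanity check of the 7-Zip efficiency arithmetic above.
# Throughput numbers (MB/s per 4-core group) are approximate chart readings.

budget = 65.0                     # W, non-K TDP

# 16 E-cores: 65 W / 4 clusters = 16.25 W per cluster -> ~21 MB/s each
e16_mbps = 21.0 * 4               # 84 MB/s

# 8 P-cores: 65 W / 2 groups = 32.5 W per 4 P-cores -> ~28 MB/s each
p8_mbps = 28.0 * 2                # 56 MB/s

print(p8_mbps / e16_mbps)         # ~0.667 -> 8P is only 66.7% as fast as 16E

# Full-tilt 8 E-cores @ 3.6 GHz: ~23 MB/s per cluster, 48 W total
e8_eff = (23.0 * 2) / 48.0        # ~0.958 MB/s/W
p8_eff = p8_mbps / budget         # ~0.862 MB/s/W
print(e8_eff / p8_eff - 1.0)      # ~0.112 -> E-cores ~11.2% more efficient
```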

Again, the guy is saying Intel doesn't add more than 8 P-cores because they run too hot and too inefficiently, which is completely untrue.
It's a "both/and" situation. The E-cores help Intel on both fronts. They increase MT perf per mm^2, and they also increase energy efficiency relative to having all P-cores. The only way E-cores hurt energy efficiency is if you use a braindead frequency-scaling algorithm, which Intel obviously doesn't.

16 P-cores would be way faster and way more efficient than the 8+8 configuration
The reason I quoted the pricing data on the Xeon W5-2465X was to show what a nonsensical proposition that was. I mean, why don't we just throw ThreadRippers into the mix, if we're going to ignore pricing and commercial realities?

It's a fundamental misunderstanding people have: they think adding more cores will increase power draw and temperatures and reduce efficiency
At the same power limits, the 10P option would have lower performance and therefore worse efficiency than 8P + 8E. 12P would have lower performance and therefore worse efficiency than 8P + 16E. The data clearly shows that. There's nothing new or interesting, here.
 

TheHerald

At the same power limits, the 10P option would have lower performance and therefore worse efficiency than 8P + 8E. 12P would have lower performance and therefore worse efficiency than 8P + 16E. The data clearly shows that. There's nothing new or interesting, here.
You are arguing completely different things. In fact, you are proving my point exactly. One P-core is more expensive than one E-core since it's much bigger in die size. So the only reason they are being used is die space, not heat, temperatures, power draw, or efficiency.

It boils down to this: is one E-core more efficient than one P-core? The answer is simply no. In the vast majority of use cases on a desktop chip, the P-cores are faster, more efficient, and a lot easier to cool. Therefore efficiency, power draw, and temperatures obviously cannot be the reason Intel doesn't add more of them. That's what the guy claimed on the previous page. Please, stop arguing against self-evident things.

You are saying I ignore pricing. The original argument ignored pricing. Obviously, I'm with you: pricing is the reason Intel doesn't ship 50 P-cores on the desktop; it has nothing to do with efficiency. The guy said it's due to temps and power draw; it's not, it's just pricing. You are repeating exactly my point while... disagreeing with me.

I mean, it's fairly easy to prove how wrong the guy was. We don't even have to think about E-cores at all. Here: are 8 P-cores more efficient than 10 P-cores? No, they are slower, less efficient, and harder to cool. Are 8 P-cores more efficient than 12 P-cores? No, see above. Are 16 P-cores slower, less efficient, and harder to cool than 12? No, again see above.

Therefore heat and efficiency cannot be the reason Intel doesn't add more. It just can't. If you are still in disagreement, I give up, you win.
 

bit_user

You are arguing completely different things. In fact, you are proving my point exactly. One P-core is more expensive than one E-core since it's much bigger in die size. So the only reason they are being used is die space, not heat, temperatures, power draw, or efficiency.
A 16 P-core CPU is not commercially viable for mainstream desktop. It doesn't exist. Even on the now-mature Intel 7 node, Bartlett Lake is sticking with the same die size and going with 12 P-cores.

So, why you're arguing about products that don't and can't exist is confounding and really just puts this little argument to bed.

It boils down to this: is one E-core more efficient than one P-core? The answer is simply no,
Yes they are! I've shown that time and again! Did you not see where 8 E-cores @ 48 W are more efficient than 8 P-cores @ 65 W? That's pushing the E-cores to the top of their frequency range, where they're the least efficient, and they're still more efficient than 8 P-cores at just 65 W, to say nothing of higher power levels!

I think you've fallen victim to a narrative being spread by Intel's marketing department. They're trying to make people believe that E-cores aren't about energy-efficiency, because that naturally leads one to the conclusion that P-cores are inefficient, which supports one of the most common allegations against recent Intel CPUs as being too power-hungry.

The first time I saw this, it caught me off-guard too. Don't believe it, though. Look at the data. E-cores can be about more than just one thing.

If it's not so, where's your data? I've presented abundant supporting data and you have yet to share one iota.
 

TheHerald

So, why you're arguing about products that don't and can't exist is confounding and really just puts this little argument to bed.
Because the whole argument was about a product that doesn't exist: an Intel CPU with more P-cores. I didn't bring it up, so why are you quoting me about it?
Yes they are! I've shown that time and again! Did you not see where 8 E-cores @ 48 W are more efficient than 8 P-cores @ 65 W? That's pushing the E-cores to the top of their frequency range, where they're the least efficient, and they're still more efficient than 8 P-cores at just 65 W, to say nothing of higher power levels!

If it's not so, where's your data? I've presented abundant supporting data and you have yet to share one iota.
LOL, that's misleading as hell. You are using different power levels on a single workload... Bro, come on now. Your own graph shows otherwise anyway: since 12P is roughly equal in efficiency to 8+8, it's obviously obvious that 16P is more efficient than 8+8. If 16P is more efficient than 8+8, then 1P has to be more efficient than 1E.

I don't need to provide one iota of data because your own data proves my point.
 

bit_user

Because the whole argument was about a product that doesn't exist: an Intel CPU with more P-cores.
It's one thing to talk about 8P + 8E vs. 10P (or 8P + 16E vs. 12P). Those configurations have about the same die area. Intel shipped Comet Lake with a 10P + 0E configuration. It's a very plausible proposition and commercially viable. Therefore, it makes sense to examine the relative merits and interrogate why Intel opted for the hybrid approach rather than the all-P approach.

By contrast, 16P is not something that could've been a commercially viable desktop processor, and is therefore nonsensical in this context. As I said, we might as well include discussion of 32-core Ryzens in the same sentence, because it makes about as much sense to include in this context.

I didn't bring it up, why are you quoting me about it?
You're the one who introduced the false dichotomy of a 16 P-core CPU. The other poster merely said:

"Intel P cores are so inefficient and hot that Intel can't provide CPUs with more than 8 of them."

To pose a nonsensical alternative and characterize it as what the other party is advocating is a classic "strawman" fallacy.

We all know 16P wasn't in the cards. It was 10P for Alder Lake and 12P for Raptor Lake. To pretend 16P was even a viable option is disingenuous.

LOL, that's misleading as hell. You are using different power levels on a single workload...
Okay, the beauty of the data is that you can compare 8E with 8P each at 48W, if you so choose. I was just giving you the advantage of letting the 8P take all 65W, but if you restrict them to 48W, then they'd deliver about 42 MB/s, which works out to 0.875 MB/s/W and is only 91.3% as efficient as the E-cores.
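If it helps, that iso-power comparison is just the following (throughput values are assumed chart readings):

```python
# Iso-power comparison at 48 W (approximate chart readings)
p8 = 42.0 / 48.0    # 8 P-cores at 48 W: ~0.875 MB/s/W
e8 = 46.0 / 48.0    # 8 E-cores at 48 W: ~0.958 MB/s/W
print(p8 / e8)      # ~0.913 -> P-cores only 91.3% as efficient
```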

it's obviously obvious that 16P is more efficient than 8+8.
16P can't exist in this market. If you're going to talk about 16P, then why don't we compare it to 8P + 32E? They both make as much sense.

I've shown that core-for-core, the E-cores are more efficient. In order to overturn that, you have to push the P-cores way down to their peak efficiency range, which is a waste of silicon. Intel has a 16 P-core CPU and it doesn't have a 125W TDP - it has a 200W TDP. Guess why?

However, that wasn't the original problem posed. The original problem was: "why not go with more than 8 P-cores?" The benefits provided by hybrid are both energy-efficiency and more performance per mm^2. It's not just one or the other.
 

TheHerald

I've shown that core-for-core, the E-cores are more efficient.
No you have not. Your own graph shows otherwise. From 8 to 12P your graph shows an increase in efficiency, and the 12P are roughly on par with 8+8. Therefore 16P would be above 8+8. Period. You are just wrong. Let's move on.
 

bit_user

No you have not.
Yes, I did. iso-power, as per your request. You love to crow about how we should measure things iso-power, but now we see how quickly you dispense with it, when it doesn't suit your interests.

Your own graph shows otherwise. From 8 to 12P your graph shows an increase in efficiency, and the 12P are roughly on par with 8+8.
It makes no real sense to compare 8P + 8E with 12P. For that, we should be comparing 8P + 16E.

8P + 8E is more efficient than 10P, which was the alternative at the time.

BTW, you claim equivalence of 12P, but that gap is very distinct and about half as big as the gap between 10P and 12P. It looks to me like 13P would be needed to draw equal to 8P + 8E.

Therefore 16P would be above 8+8. Period. You are just wrong. Let's move on.
It's funny how you decided to put 16P against 8P + 8E. Why only 16P? If you're going to dispense with "small" matters like the economic viability of these products, why not go to 24P vs. 8P + 16E?

Heck, why not go all the way to 56P or 64P?

It makes me wonder why we even compare Raptor Lake to AMD's 16-core CPUs... why aren't we comparing the i9-14900K to an AMD CPU with 24 cores? If you're telling me the most important factor is to compare iso-core count, then that says we should compare AMD and Intel CPUs that have the same number of cores!
 

TheHerald

It's funny how you decided to put 16P against 8P + 8E. Why only 16P? If you're going to dispense with "small" matters like the economic viability of these products, why not go to 24P vs. 8P + 16E?
It's funny that you keep insisting that E-cores are more efficient than P-cores but then complain that I'm putting them against each other at the same core count. If E-cores were more efficient, then 16P wouldn't be beating 8+8, but according to your own freaking graph, man, they would. Just stop it.
 

bit_user

It's funny that you keep insisting that E-cores are more efficient than P-cores but then complain that I'm putting them against each other at the same core count.
What you're doing is concocting a nonsensical product. You're not only increasing the P-core count, but also holding the TDP constant, which Intel explicitly rejected with the 200 W TDP of the Xeon W5-2465X. It's not a realistic product, both because it'd cost too much to be economically viable, and because they wouldn't hamstring it by giving it a meager 125W TDP. The higher TDP would obviously hurt its efficiency, which is why you obviously don't want to go there.

As I pointed out before, you're twisting the discussion from the original conjecture (why Intel doesn't use more than 8 P-cores) to a bizarre proposition of a 16 P-core CPU that you didn't even have the courage or decency to own, instead blaming it on poor @DrDocumentum . The obvious point of comparison should be 10P (Gen 12) or 12P (Gen 13).

If ecores were more efficient then 16p wouldnt be beating 8+8
I know it's hard to get your head around, but 8P + 8E isn't just about the E-cores. What the E-cores are doing is taking a share of the power budget from the P-cores, which forces them to run at lower clock frequencies that are more efficient. This is also why increasing the number of P-cores improves efficiency: those extra P-cores need an even larger share of power than the 8 E-cores are taking, which forces all of the P-cores to run at even lower frequencies. So, it's not a straight comparison of 8 E-cores to 8 P-cores, the way it might seem.

To illustrate this, their power data for x264 has the 8 E-cores topping out at about 52 W. That graph goes up to about 225 W. At that point, the E-cores are maxed, so the 8 P-cores are collectively using about 177 W (22.1 W each; 88.5 W for 4). That yields a performance of 24 fps for the lot of them (0.136 fps/W). Meanwhile, the E-cores are delivering about 11.8 fps at 52 W (0.227 fps/W). So, in the 8P + 8E configuration, those E-cores top out about 66.9% more efficient than the P-cores.

Now, let's consider what happens when you replace the 8 E-cores with another 8 P-cores. In the same power budget (225W), each P-core would now get 14.1 W. Four of them get 56.3 W. That yields a performance of 43 fps, or 0.191 fps/W. What's interesting about this number is it's still 15.8% lower than the 0.227 fps/W the E-cores got.

So, it's not the case that P-cores are more efficient. No, what you're seeing is primarily the effect of additional P-cores on the base 8 P-cores.


Note: that graph I made was for the x264 workload, which is AVX2-heavy and favors the P-cores. If I recreated the same graph with the 7zip data, we would certainly see the margin of the 8+8 configuration grow even further, due to how fast and efficiently the E-cores execute integer workloads.
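To make the arithmetic in this post easy to follow, here it is as a quick script (every input is an approximate reading off the graph):

```python
# x264 power-budget arithmetic from the post above.
# All inputs are approximate graph readings.

# 8P + 8E at full tilt:
p_fps, p_w = 24.0, 177.0          # 8 P-cores
e_fps, e_w = 11.8, 52.0           # 8 E-cores, maxed out
e_eff = e_fps / e_w               # ~0.227 fps/W
p_eff = p_fps / p_w               # ~0.136 fps/W
print(e_eff / p_eff - 1.0)        # roughly 0.67 -> the ~66.9% figure

# Hypothetical 16P in the same ~225 W package budget: ~43 fps total
p16_eff = 43.0 / 225.0            # ~0.191 fps/W
print(1.0 - p16_eff / e_eff)      # ~0.158 -> still 15.8% below the E-cores
```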


Just stop it.
You're free to leave the thread any time you like.
 

TheHerald

So again, as you just admitted (I don't care about the why), increasing the P-core count while decreasing the E-core count by the same amount INCREASES efficiency and performance. Therefore it obviously can't be efficiency or heat or whatever else the guy claimed that's the reason Intel doesn't increase the P-core count. It's just die size.

Increasing core counts can never decrease efficiency, therefore heat, power, temperatures, and power draw aren't the reason Intel doesn't use more P-cores. That is so asinine to even suggest; it basically contradicts physics. But here you are claiming otherwise, lol.


There is a very simple question that shows how wrong you are, but you aren't going to answer, are you? How is it possible that, and I'm quoting, "Intel P cores are so inefficient and hot that Intel can't provide CPUs with more than 8 of them," when in fact they do have CPUs with a lot more than 8? Either those Xeons don't actually have P-cores and Intel is lying to us, or what @DrDocumentum said is absolutely, horrendously wrong. Which one is it?
 

bit_user

So again, as you just admitted (I don't care about the why)
Oh, so now it doesn't matter whether E-cores or P-cores are more efficient? Goal posts: moved.

increasing the P-core count while decreasing the E-core count by the same amount INCREASES efficiency and performance. Therefore it obviously can't be efficiency or heat or whatever else the guy claimed that's the reason Intel doesn't increase the P-core count.
Again, that was your conjecture, not the other poster's. They only said "above 8" P-cores - nothing about trading E-cores for P-cores, 1:1.

Seriously, why stop at 16P cores? You're already well into the realm of impossibility (for mainstream desktop), so why not swing for the fences and go for 56 P-cores?

Increasing core counts can never decrease efficiency therefore heat
Again, you're off-topic. If we constrain ourselves to reality, then Alder Lake could've had 10P + 0E, which is not a net core-count increase.

There is a very simple question that shows how wrong you are, but you aren't going to answer, are you? How is it possible that, and I'm quoting, "Intel P cores are so inefficient and hot that Intel can't provide CPUs with more than 8 of them," when in fact they do have CPUs with a lot more than 8?
I already did answer - you just didn't want to hear it. I said that the E-cores improve perf/area and perf/W, and I meant that relative to the realistic alternative of a similar die size with only P-cores.

You and I both know what would've happened if Intel shipped Alder Lake with a 10P + 0E configuration. It still probably would've had a 125W TDP, in which case P-core base clocks actually would've stayed about the same (each E-core cluster uses about the same max power as a P-core, meaning a 4:1 substitution shouldn't much impact P-cores' power budgets), but all-core MT performance definitely would've suffered. Since die size and (presumably) power limits would've been the same, cooling shouldn't have been more of an issue - at least, for those not overclocking.

Either those Xeons don't actually have P-cores and Intel is lying to us, or what @DrDocumentum said is absolutely, horrendously wrong. Which one is it?
Don't be obtuse. It's clear the poster was referring to mainstream desktop CPUs.
 

KyaraM

Watch out for power consumption too.
If even a few FPS make a difference to you, by all means. But having a "furnace" next to you while playing may not be ideal.
The 14900K reaches 260W vs 68W of the 7800X3D (https://www.tomshardware.com/news/intel-core-i9-14900k-cpu-review); dissipating those extra ~200W could mean noise, heat, or just a lot of money in a watercooling system instead of the Wraith cooler which is whisper quiet.
Yes, when all cores are loaded, not in gaming. For example, my 12700K can, in theory, draw 190W+... but it won't do that when I game. In fact, I don't see it exceed 100W in gaming, and it is overclocked to 4.9GHz. It is air cooled, and will last me quite a while longer; I don't plan to upgrade in the next 4 years, even. By then, the current AMD socket will be end of life, too. The difference to a 7800X3D is about 20-30W, which isn't that bad for an older 12c/20t CPU, and very far from being a furnace. The 14900K won't reach the 260W, either, though it will obviously use more power. At least make an actual comparison, please... and fix that link, it's not working because of the ); at the end. Needs a space in between.
 

TheHerald

Oh, so now it doesn't matter whether E-cores or P-cores are more efficient? Goal posts: moved.
No. It never did, actually. Whether they are or aren't more efficient doesn't change the fact that what he said is objectively wrong. The same with Ryzen: just because AMD might start using Zen 5c cores doesn't mean Zen 5 cores are "hot and inefficient and they can't add more of them".

Don't be obtuse. It's clear the poster was referring to mainstream desktop CPUs.
Ah, so they can't add more P-cores on mainstream due to them being inefficient, uncoolable, and using too much power, but somehow that's not an issue on HEDT. Make this make sense.

It makes more sense to use E-cores when the goal is MT performance, but that doesn't mean heat or power or efficiency is stopping Intel from adding 99 P-cores. That's the whole point of contention: whether they can add more or not. Obviously they can.
 

bit_user

Ah, so they can't add more P-cores on mainstream due to them being inefficient, uncoolable, and using too much power, but somehow that's not an issue on HEDT. Make this make sense.
I never said any of that. Actually, I updated my post after you hit "reply", with a more detailed description of what I think an all P-core Alder Lake would've been like.

that doesn't mean heat or power or efficiency is stopping Intel from adding 99 P-cores. That's the whole point of contention: whether they can add more or not. Obviously they can.
So, let's just look at one aspect: base clocks. The i9-12900K has a PL1 of 125 W and a P-core base clock of 3.2 GHz. The Xeon W5-2465X has a PL1 of 200 W and a P-core base clock of 3.1 GHz.

So, you can theoretically add more cores without increasing power, but it'll cost a lot more and perf/$ will not keep pace. That's the main reason they had to increase the TDP. Otherwise, nobody would pay about 2x as much for it.
 

TheHerald

I never said any of that. Actually, I updated my post after you hit "reply", with a more detailed description of what I think an all P-core Alder Lake would've been like.
It's not complicated. A 10 P-core CPU is better than an 8 P-core CPU on every metric: efficiency, coolability, performance, etc. A 12 P-core is also better than a 10 P-core. A 16 P-core is also better than a 12 P-core.

In order for the guy's point to make sense, adding more P-cores would have to reduce efficiency, which isn't the case; the exact opposite is the case. Why are you still arguing this?
 

bit_user

It's not complicated. A 10 P-core CPU is better than an 8 P-core CPU on every metric: efficiency, coolability, performance, etc.
10P + 0E vs. 8P + 0E? Well, cost will be higher. If you don't commensurately increase power, then perf/$ will be worse. So, no - not every metric.

I actually did a real-world version of this, by looking at i9-12900K vs. Xeon W5-2465X cost, power, and base clocks. For the purposes of this facile analysis, we can regard the i9-12900K as a 10P + 0E, at least as far as P-core clocks go. So, the 16P CPU is using 1.6x as much power for 1.6x as many core tiles. However, its base clocks dropped 100 MHz probably because they had to switch to a mesh interconnect.
 
Last edited:
All I can gather from the exchange so far: AMD was right in calling the "E-cores" Economy and not Efficient. Looks like they're not after efficiency and only economy (of space). Even if Intel calls them "Efficiency cores", it doesn't matter because Intel is wrong.

It's all clear now and we can put that discussion to rest.

Regards.
 

TheHerald

Intel is not wrong; they are efficient in terms of die space. They never claimed power efficiency; their own slides show that P-cores are more efficient.
 
Mar 10, 2020
Intel is not wrong; they are efficient in terms of die space. They never claimed power efficiency; their own slides show that P-cores are more efficient.
From the Intel website: https://www.intel.com/content/www/us/en/gaming/resources/how-hybrid-design-works.html

Efficient-cores are:

  • Physically smaller, with multiple E-cores fitting into the physical space of one P-core.
  • Designed to maximize CPU efficiency, measured as performance-per-watt.
  • Ideal for scalable, multi-threaded performance. They work in concert with P-cores to accelerate core-hungry tasks (like when rendering video, for example).
  • Optimized to run background tasks efficiently. Smaller tasks can be offloaded to E-cores — for example, handling Discord or antivirus software — leaving P-cores free to drive gaming performance.
  • Capable of running a single software thread.
So yes, multiple cores in the area of one P-core. Also designed to maximize CPU efficiency, measured as performance per watt.

There are a few ideas regarding efficiency and effectiveness.
Consider a fixed work task, i.e. it’s finite, for example encoding a video.
Assume a low power core is slower, a high power core is quicker.

The energy taken to complete the task is E = IVt: power (I × V, in watts) multiplied by time, giving joules.

The quicker core is more effective, it gets the task done more quickly. If that is needed/desired then it is the preferred core.

Power and energy
Assume the higher-power core uses 10 W and the lower-power core uses 5 W.
The high-power core must be more than twice as fast to be more efficient. Play with the real numbers to get a realistic comparison.
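That break-even rule falls straight out of energy = power × time for a fixed task; here is a quick sketch with hypothetical numbers:

```python
# Energy to finish a fixed task: E = P * t, with t = work / speed.
def task_energy(power_w, speed, work=1000.0):
    return power_w * (work / speed)

slow = task_energy(5.0, 10.0)    # 5 W core, baseline speed -> 500 J
even = task_energy(10.0, 20.0)   # 10 W core, exactly 2x faster -> 500 J (break-even)
fast = task_energy(10.0, 30.0)   # 10 W core, 3x faster -> ~333 J (now more efficient)
print(slow, even, fast)
```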
 

bit_user

So it's cost that's stopping them, not that they are hot and inefficient. That's been my point....
Not necessarily. If they could increase power, as they did with the Xeon W5-2465X and its 200 W TDP, then they could justify the price increase. So, in a very real way, heat & power constraints are still very much live issues.

That said, cost is going to be a somewhat hard constraint for mainstream desktop. The evidence we have indicates 16P is not viable for that market, so it's weird you keep talking about it. At most, they could've shipped 10P + 0E for Gen 12 and 12P + 0E for Gen 13 & 14 without increasing cost. If they also didn't increase power, then the main deficit would be MT performance. If they wanted to regain some ground, on that front, they'd feel pressure to push up power limits and that would introduce more thermal issues.

So, in a very real way, E-cores tie into the heart of the heat/power, performance, and cost nexus. Like I've been saying, they can be about more than just one thing.

In a variation on the old saying, the challenge faced by modern CPU designers is that they can have (at most) any two of the following:
  • Low power/temps
  • High (MT) performance
  • Low cost

You can trade more die area (i.e. cost) for lower power and/or more MT performance. Regain some ground on die area (cost) by increasing clocks/power/temps or reducing performance, etc. A hybrid architecture represents Intel's attempt to break this Gordian Knot.
 
May 28, 2024
No. You don't think AMD tested and developed this thing first and foremost for Windows? How can it be Windows' fault? It doesn't make any sense.
There you go again with your Intel shilling propaganda. It's funny how the results speak for themselves and you still bash anything that has to do with AMD.

What's your deal? AMD's Zen 5 is still a good set of CPUs, whether you like them or not. We've already seen some improvements from an MS update. It's not AMD's fault that MS has core-parking issues.

Try to see what's going on without biased, hateful eyes... Please. Maybe you should jump on WCCF Tech's comment sections. People on there expect to see your kind of incorrect and biased bullying and bashing.

I get a laugh over there. You humor me also on here. Lol
 
AMD and Nvidia don't have all the features in their Linux drivers that they have on Windows, so some things (like ray tracing, HDR, and such) are just not implemented the same way, or at all.

That's the biggest contributing factor to why some things perform so much better on Linux. If you do an image analysis, you will spot differences between Linux and Windows.

Regards.
True for AMD's Vulkan drivers (the most-used Vulkan driver on AMD hardware is community-supported, and almost no one uses the official AMD Vulkan driver); false for Nvidia, as they use the same code branch for both Linux and Windows driver releases.
The main difference can be found in how DXVK translates DirectX commands into Vulkan ones vs. how the "native" DX drivers do it on Windows.
Note that while DXVK is mostly based on official Microsoft specs, AMD and Nvidia may do whatever they want with the specs in their drivers.