Question: Why did Intel choose 16 efficient cores rather than 8 (or 4) performance cores for the i9-13900K?

kadioguy

Reputable
Dec 24, 2019
3
1
4,515
Hello,

I know we've had 8 performance cores since the 9th generation of Intel CPUs. I was wondering why Intel chose to give the i9-13900K 16 efficient cores rather than 8 (or 4) more performance cores. Could it be overheating issues?

This might not be correct, but I think 10, 12, or even 16 performance cores would look great!
 
I know we've had 8 performance cores since the 9th generation of Intel CPUs. I was wondering why Intel chose to give the i9-13900K 16 efficient cores rather than 8 (or 4) more performance cores. Could it be overheating issues?
So, of course, it has 8 performance cores PLUS 16 efficiency cores, as well as 8 additional hyperthreads from the P-cores.

The biggest problem with giving it additional performance cores is that it already has serious trouble keeping its thermal performance within specifications, and this is a direct result of the architectural design and the extremely high clock speeds. Even with high-end liquid cooling, the 13900K is very much at the edge of tolerance under any kind of demanding sustained load, so if Intel added more performance cores it would simply end up throttling more often than not, which would mean nothing was gained by the additional cores.

Certainly there are other CPUs with more cores, but most of those are server or workstation parts with much lower clock frequencies, which makes their thermal behavior much easier to manage; and since they are not dealing with the same kinds of linear workloads as consumer CPUs, the lower clock speeds don't factor in as much. Until Intel figures out a way to better mitigate heat transfer and core temperature problems at these high clock rates, you can pretty well expect efficiency cores to stick around for some time.
 
  • Like
Reactions: kadioguy
The point of efficient cores is to have something that can handle lower-priority tasks that aren't time-sensitive, so that the higher-performance cores can focus on things that are either performance-hungry or time-sensitive. The goal is to reduce energy consumption, which is power multiplied by time, while performing the same amount of work.

As a point of reference, a cluster of 4 E-cores tends to draw about the same power as 1 P-core. And going by https://www.techpowerup.com/review/intel-core-i9-12900k-e-cores-only-performance/2.html , each E-core delivers only about 70% of a P-core's performance. So 4 E-cores given something to do will get more work done than a single P-core, despite drawing about the same amount of power.
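To make that concrete, here's a quick back-of-the-envelope calculation. The ~70% per-core figure is TechPowerUp's estimate and the equal-power assumption is the rough rule of thumb quoted above, not official Intel numbers:

```python
# Back-of-the-envelope: throughput at equal power, 4 E-cores vs 1 P-core.
# Assumptions (not official Intel figures): a 4 E-core cluster draws roughly
# the same power as 1 P-core, and each E-core delivers ~70% of a P-core's
# single-thread performance.

P_CORE_PERF = 1.0          # normalized throughput of one P-core
E_CORE_PERF = 0.7          # ~70% of a P-core (TechPowerUp estimate)
E_CORES_PER_CLUSTER = 4    # one cluster ~ same power budget as one P-core

cluster_throughput = E_CORES_PER_CLUSTER * E_CORE_PERF  # 2.8x
print(f"4 E-cores vs 1 P-core at equal power: {cluster_throughput:.1f}x the work")
```

So under these assumptions, a fully loaded E-core cluster does nearly three times the work of one P-core in the same power budget.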
 
  • Like
Reactions: kadioguy
Right. And the additional fact is, most processes and tasks don't NEED a performance core. And the processes and tasks that DO generally still don't need as many as most mid- to upper-tier CPUs already have. Few games or applications can make a serious argument that additional performance cores would drastically change performance unless you are running a heck of a lot of resource-intensive threads at the same time, and even then, much of that will be offset by the E-cores.
 
  • Like
Reactions: kadioguy

kadioguy

As a point of reference, a cluster of 4 E-cores tends to draw about the same power as 1 P-core. And going by https://www.techpowerup.com/review/intel-core-i9-12900k-e-cores-only-performance/2.html , each E-core delivers only about 70% of a P-core's performance.
If you don't mind, could you please tell me how you knew that? 🙂
I can't figure it out from the following:
[Attached image: relative-performance-cpu.png]
 
The biggest problem with giving it additional performance cores is that it already has serious trouble keeping its thermal performance within specifications, and this is a direct result of the architectural design and the extremely high clock speeds. Even with high-end liquid cooling, the 13900K is very much at the edge of tolerance under any kind of demanding sustained load, so if Intel added more performance cores it would simply end up throttling more often than not, which would mean nothing was gained by the additional cores.
Yeah sorry, but no, that's completely backwards.
100°C is the spec for thermal performance, and no matter how much you cool it, as long as you have ABT enabled it will try to get to 100°C.
If you don't need that extra small boost "out-of-the-box" you can turn that feature off.
https://www.intel.com/content/www/u...intel-technologies-boost-cpu-performance.html
All-Core TVB: Ultra fast speeds when all cores are active and the CPU is under its temperature threshold.
Intel® Adaptive Boost Technology: Opportunistically increases all-core turbo frequency when current, power, and thermal headroom exists. Works below a temperature limit of 100°C. (Intel® Core™ i9-11900K and KF only)

At stock settings with a 253W max, the 13900K even runs one degree cooler than the 7950X, which "only" runs at 230W max.
https://www.computerbase.de/2022-10...test/3/#abschnitt_temperaturen_unter_volllast
 
  • Like
Reactions: kadioguy
Hello,

I know we've had 8 performance cores since the 9th generation of Intel CPUs. I was wondering why Intel chose to give the i9-13900K 16 efficient cores rather than 8 (or 4) more performance cores. Could it be overheating issues?

This might not be correct, but I think 10, 12, or even 16 performance cores would look great!
I think there are a few reasons:

1) Cost: larger CPU dies are generally more expensive because they reduce the yield per wafer, and the larger size increases the chance of a defect. This could be a defective core or a core that simply doesn't hit the desired clock speed for the CPU in question. The issue is made worse because Intel's newer manufacturing process hasn't matched the yields of previous generations. AMD builds its CPUs from modular blocks for this reason, because it helps make the best use of what they produce and reduces wastage.

It's much easier for Intel to produce a die with 8 P-cores that hit high clock speeds than 16 P-cores that do so. The efficiency cores are a fraction of the size of the P-cores, so they're a cost-effective way of scaling core counts.
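To illustrate the yield argument, here's a hedged sketch using the classic Poisson yield model. The defect density and die areas are made-up illustrative values, not Intel's actual process data:

```python
import math

# Poisson yield model: fraction of defect-free dies = exp(-area * defect_density).
# The numbers below are illustrative, not Intel's actual process data.
DEFECT_DENSITY = 0.1   # defects per cm^2 (assumed)

def die_yield(area_cm2: float) -> float:
    """Fraction of dies with zero defects for a given die area."""
    return math.exp(-area_cm2 * DEFECT_DENSITY)

small_die = die_yield(2.0)   # e.g. 8 P-cores plus small E-core clusters
large_die = die_yield(4.0)   # hypothetical all-P-core die at ~2x the area

print(f"2 cm^2 die yield: {small_die:.1%}")
print(f"4 cm^2 die yield: {large_die:.1%}")
# Doubling die area compounds the defect loss per die,
# on top of fitting fewer dies onto each wafer.
```

The exact model doesn't matter much; the point is that yield falls off exponentially with die area, which is why small E-cores are a cheap way to add cores.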

2) Heat and power consumption are already high given the clock speeds Intel is targeting, and their manufacturing process is less efficient than TSMC's. The efficiency cores are an easier path to scaling core counts, keeping Intel competitive with AMD in multi-threaded performance while keeping power and heat at a manageable level.

3) Hybrid CPUs make sense for maximizing performance, battery life, and cost in mobile devices like laptops, which are Intel's main consumer market.
 
  • Like
Reactions: kadioguy
Hello,

I know we've had 8 performance cores since the 9th generation of Intel CPUs. I was wondering why Intel chose to give the i9-13900K 16 efficient cores rather than 8 (or 4) more performance cores. Could it be overheating issues?

This might not be correct, but I think 10, 12, or even 16 performance cores would look great!

You can get higher synthetic scores by going wider instead of deeper, the same reason GPUs work. Realistically speaking, nobody is going to be using all those E-cores, as any task that would scale well on them would scale even better on a 3,840-"E"-core GPU. They have to release something, and that something has to have bigger numbers than the last product, hopefully bigger numbers than the competitor's product, and some way for paid reviewers to announce some sort of crown or reason for folks to buy the new product.
 
  • Like
Reactions: kadioguy
Yeah sorry, but no, that's completely backwards.
100°C is the spec for thermal performance, and no matter how much you cool it, as long as you have ABT enabled it will try to get to 100°C.
I completely disagree with this, as does most independent research, but you can have your opinion. I already know, based on what I've seen you post elsewhere, that arguing with you, even after presenting the facts, does no good, so I won't bother. It's a waste of time. You are certainly entitled to believe what you wish, and I'll leave it at that, other than to say that there are no architectures that "want" to run at the very edge of throttling/Tjunction. What board manufacturers decide to do with the Intel spec is an entirely different discussion.
 
You can get higher synthetic scores by going wider instead of deeper, the same reason GPUs work. Realistically speaking, nobody is going to be using all those E-cores, as any task that would scale well on them would scale even better on a 3,840-"E"-core GPU. They have to release something, and that something has to have bigger numbers than the last product, hopefully bigger numbers than the competitor's product, and some way for paid reviewers to announce some sort of crown or reason for folks to buy the new product.
Right. And this lines up exactly with what I posted earlier.
 
I completely disagree with this, as does most independent research, but you can have your opinion. I already know, based on what I've seen you post elsewhere, that arguing with you, even after presenting the facts, does no good, so I won't bother. It's a waste of time. You are certainly entitled to believe what you wish, and I'll leave it at that, other than to say that there are no architectures that "want" to run at the very edge of throttling/Tjunction. What board manufacturers decide to do with the Intel spec is an entirely different discussion.
Funny you say that, since I linked directly to Intel saying that 100 degrees is what they use as the limit.

Also...
https://www.anandtech.com/show/17641/lighter-touch-cpu-power-scaling-13900k-7950x/3
[Attached image: 130799.png]
 
As an example of where E-cores would be helpful: keeping the system feeling responsive. https://arstechnica.com/gadgets/202...-cpu-but-m1-macs-feel-even-faster-due-to-qos/

The points of interest are:
It's worth noting that Big Sur certainly could employ the same strategy with an eight-core Intel processor. Although there is no similar big/little split in core performance on x86, nothing is stopping an OS from arbitrarily declaring a certain number of cores to be background only. What makes the Apple M1 feel so fast isn't the fact that four of its cores are slower than the others—it's the operating system's willingness to sacrifice maximum throughput in favor of lower task latency.
 
As an example of where E-cores would be helpful: keeping the system feeling responsive. https://arstechnica.com/gadgets/202...-cpu-but-m1-macs-feel-even-faster-due-to-qos/

The points of interest are:
You don't need e-cores for that.
Consoles have been doing that for years by just keeping one or two cores free from anything other than OS threads, so that they always have enough resources free to run lag-free.

You can just use BES to prevent heavy apps from using 100% of available CPU, or you can use Process Lasso or something similar to always leave one core free via affinity.
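For the curious, reserving a core via affinity doesn't strictly need a third-party tool; a minimal Linux-only sketch of the idea (the choice of core 0 is arbitrary, and tools like Process Lasso do roughly this on Windows):

```python
import os

# Linux-only sketch: build an affinity mask that leaves core 0 free
# for the OS, then apply it to the current process.

all_cores = os.sched_getaffinity(0)   # cores this process may currently use
reserved = {0}                        # leave core 0 "free" (arbitrary choice)
mask = all_cores - reserved

if mask:                              # never apply an empty mask
    os.sched_setaffinity(0, mask)

print(f"Process restricted to cores: {sorted(mask)}")
```

On Windows the equivalent would go through `SetProcessAffinityMask`, which is what those utilities call under the hood.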
 
You don't need e-cores for that.
Consoles have been doing that for years by just keeping one or two cores free from anything other than OS threads, so that they always have enough resources free to run lag-free.
The point isn't to run anything lag free. The point is to prevent game developers from monopolizing the system and give the OS a chance to run.

You can just use BES to prevent heavy apps from using 100% of available CPU, or you can use Process Lasso or something similar to always leave one core free via affinity.
And yet that requires something the user has to set up or do. Why not just have hardware and the OS do it for you?
 
The point isn't to run anything lag free. The point is to prevent game developers from monopolizing the system and give the OS a chance to run.
Same difference; the point is you get core(s) that are always free, so anything you want to run runs immediately and without lag.
And yet that requires something the user has to set up or do. Why not just have hardware and the OS do it for you?
Sure, I'm all for that, but not by "wasting" 16 e-cores...1 or 2 e-cores, fine.
 
Same difference, the point is you get core(s) that are always free so anything you want to run runs immediately and without lag.
No it's not.

With a hybrid system, the application can use all of the cores. But if a high priority task comes in, then the scheduler puts it on a performance core. When that high priority task is done, the application can go back to using the high performance core. By restricting one or more cores from the application, you prevent the application from fully using the CPU.
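As a toy illustration of that placement policy (this is not Intel's actual Thread Director logic, just the idea described above: high-priority work prefers P-cores, background work fills E-cores, with fallback when one pool is empty):

```python
# Toy model of hybrid scheduling: high-priority tasks go to P-cores,
# background tasks fill E-cores first. Core counts match a 13900K.

P_CORES = [f"P{i}" for i in range(8)]
E_CORES = [f"E{i}" for i in range(16)]

def place(task_priority: str, free_p: list, free_e: list) -> str:
    """Assign a task to a core: high priority prefers P-cores, low
    priority prefers E-cores, falling back if the preferred pool is empty."""
    pool = (free_p or free_e) if task_priority == "high" else (free_e or free_p)
    return pool.pop(0)

free_p, free_e = P_CORES.copy(), E_CORES.copy()
print(place("low", free_p, free_e))    # background task -> an E-core
print(place("high", free_p, free_e))   # foreground task -> a P-core
```

Nothing is permanently walled off: once the high-priority task finishes, its P-core goes back into the free pool for anyone to use.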

And after digging around some, I'm more convinced the point of cordoning off cores from developers was really just to make sure they don't monopolize the system so that certain features can be used. For example, Microsoft allowed developers to use "80%" of the 7th core, but the tradeoff is voice commands could no longer be used: http://www.redgamingtech.com/microsoft-frees-seventh-cpu-core-xbox-one-developers/ .

Sure, I'm all for that, but not by "wasting" 16 e-cores...1 or 2 e-cores, fine.
E-Cores don't take up that much space compared to the rest of the CPU. Considering that there are geometric constraints, among other things, if throwing on 8 E-cores makes better use of the silicon area of the processor, then I'd rather have 8 E-Cores.

In any case, I'm also with Darkbreeze on the matter with you. So I'll just be on my way.
 
No it's not.

With a hybrid system, the application can use all of the cores. But if a high priority task comes in, then the scheduler puts it on a performance core. When that high priority task is done, the application can go back to using the high performance core. By restricting one or more cores from the application, you prevent the application from fully using the CPU.
No, if a high priority task comes in, it will take processor time away from anything that has lower priority than it does, no matter whether it is a hybrid or traditional system.
You don't need to leave any resources, e or p cores, free to do that.

https://learn.microsoft.com/en-us/windows/win32/procthread/scheduling-priorities
If a higher-priority thread becomes available to run, the system ceases to execute the lower-priority thread (without allowing it to finish using its time slice) and assigns a full time slice to the higher-priority thread.

You would only leave a core, either E or P, free if you want to make sure that you can still use the system even when a real-time app runs that takes over all available cores.
Basically to prevent this from happening.
You should almost never use REALTIME_PRIORITY_CLASS, because this interrupts system threads that manage mouse input, keyboard input, and background disk flushing.