News Cyberpunk 2077 adds core prioritization for hybrid CPUs, which would be great if it didn't cause other problems

If you were testing motherboards, sure. I'm testing GPUs, and I want maximum performance possible from the CPU, so I want the performance optimizations to be applied. The problem is that the MSI MEG Z790 Ace likely pushes things too far, or else the Cooler Master 360mm cooler just isn't quite able to handle the heat, so to speak.

Really, one of two things should happen:

1) All the motherboard manufacturers should do additional testing to verify their tweaks don't cause the CPU to overheat. And honestly, I'm really not doing much: I enable XMP in the BIOS after loading the defaults. If that crashes consistently, it's a mobo firmware issue as far as I'm concerned.

2) If Oodle has code that consistently crashes on Raptor Lake CPUs, whether it's overheating or something else, Oodle should debug this and figure out what needs to be done to prevent the crashing. It doesn't matter if "the code works on everything else!" I was a programmer, and if your code has problems on a popular subset of systems, you should try to fix it. Hell, it might be an Oodle bug in the first place that only shows up with Raptor Lake.

Frankly, both of these should happen as far as I'm concerned. Because if Oodle is causing the crash, that means other applications should trigger it as well — it's only a matter of hitting the right 'buttons' on the CPU, so to speak. And in the interim, Oodle shouldn't be okay with code that consistently causes crashes.
Well, not to pry, but I do see you are an editor here at Tom's Hardware. Perhaps you could pass this story on to the staff, and get a broader discussion going, so we can get clearer answers and a definite fix from either Intel, Microsoft, or the game developers whose games are affected?

As of yet, despite this issue affecting more and more games, and more and more consumers buying 13900Ks and 14900Ks, no tech outlet has written about it, and I've tried to get GamersNexus to cover it many times to no avail.
 
I'm not our CPU guy, which means I don't have access to as many platforms, but I've asked him about the issues I've experienced with games crashing quite a bit. It is an interesting topic to me, just because I've been dealing with this crap for over a year now (since upgrading to a Raptor Lake test PC in late 2022). I've often thought about trying to get more details, and I can at least give some people at TH a set of steps to try to see if they get problems — particularly with the higher end Raptor Lake CPUs.

Paul has a massive cooler for his CPU test PCs, so that might be why he doesn't encounter problems. LOL. But if I can get him to switch, when he has time, to something more mainstream and do some testing, that could be fun.
 
I have an Ice Giant Thermosiphon with 4 industrial-grade 3000 RPM Noctua fans as my CPU cooler. I have this exact same issue in Cyberpunk 2077 (and still Returnal) with an i9 13900k and now the i9 14900k I have just recently upgraded to.

Until I enabled the hybrid CPU utilization feature in the test builds, my CPU temps would skyrocket into the high 90s just during the intro cutscenes and loading screens of Cyberpunk 2077, which makes no sense whatsoever as the game engine isn't even actively loaded or working at that point. It's safe to say, IMO, that it's not related to the cooler in particular, unless your cooler is woefully inefficient.

If it would help, I can post many, MANY, *MANY* comments from different users in many different games using the i9 13900K and i9 14900K who have experienced the same issue and resolved it with the same fix (downclocking the P-cores in XTU).
 
More cooling isn't going to help if the issue is that the CPU hangs because the architecture/node is hitting the limit of what it can support (hits the bin limit). It's like saying you can prevent an X3D from blowing up by cooling it better even though the issue is the VSOC.
More cooling could even be producing this issue, since it allows for more vcore/amps/watts, and one of those might be reaching the limit of what the node or the specific CPU can handle.
 
Intel is going all in with their mobile chips, even getting a third class of cores in Meteor Lake. Apple does fine with macOS and hybrid cores. Is it Microsoft's fault that Windows is horrible at managing heterogeneous cores?
In fact, the macOS version of Cyberpunk does not suffer from the problems of the Windows version.
 
I have an Ice Giant Thermosiphon with 4 industrial-grade 3000 RPM Noctua fans as my CPU cooler. I have this exact same issue in Cyberpunk 2077 (and still Returnal) with an i9 13900k and now the i9 14900k I have just recently upgraded to.

Until I enabled the hybrid CPU utilization feature in the test builds, my CPU temps would skyrocket into the high 90s just during the intro cutscenes and loading screens of Cyberpunk 2077, which makes no sense whatsoever as the game engine isn't even actively loaded or working at that point. It's safe to say, IMO, that it's not related to the cooler in particular, unless your cooler is woefully inefficient.

If it would help, I can post many, MANY, *MANY* comments from different users in many different games using the i9 13900K and i9 14900K who have experienced the same issue and resolved it with the same fix (downclocking the P-cores in XTU).
F'ing hell.

I have been dealing with this crap for over a year. So, based on your comment, I decided to just download XTU and give that a shot. I swear, there should be some appropriate settings in the BIOS for this as well, but they're not as immediately obvious.

I set a maximum power limit of 300W and a max current of 350A. No more crashes, and stuttering is greatly reduced as well. The Last of Us, which previously would crash almost instantly after starting (unless I set affinity), is happily chugging along on the shader compile. The CPU is at 92C now, instead of reaching 97C and higher and crashing.
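For anyone who'd rather script the affinity trick than click through Task Manager on every launch, here's a rough Python sketch of the idea (not something from this thread, just an illustration): the exe path, the assumption that logical CPUs 0-15 are the Hyper-Threaded P-cores on a 13900K/14900K, and the fixed wait time are all placeholders you'd adjust for your own system.

```python
import subprocess
import time

import psutil  # pip install psutil

GAME_EXE = r"C:\Games\SomeGame\game.exe"  # hypothetical path -- point this at your game
P_CORE_THREADS = list(range(16))          # assumption: logical CPUs 0-15 = Hyper-Threaded P-cores
SHADER_COMPILE_SECONDS = 600              # rough guess at how long the compile step runs

proc = subprocess.Popen([GAME_EXE])
game = psutil.Process(proc.pid)
game.cpu_affinity(P_CORE_THREADS)         # keep the E-cores out of the shader compile
print(f"Pinned PID {proc.pid} to P-core threads {P_CORE_THREADS}")

time.sleep(SHADER_COMPILE_SECONDS)
if game.is_running():
    game.cpu_affinity(list(range(psutil.cpu_count(logical=True))))  # hand everything back for gameplay
    print("Affinity restored to all logical CPUs")
```

psutil's cpu_affinity() works on both Windows and Linux, so the same approach covers the Linux/Proton crowd too.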
 
Glad to have helped solve your issue! As I said, this problem has actually been shockingly widespread without any major tech outlet addressing it or reporting on it for a long time now, and it will only get worse as more people get higher end Intel CPUs and more UE5/modern and demanding games come out.
 
FWIW, having determined exactly what seemed to be the problem, I then went digging into the BIOS. The power limiting and current limiting stuff was buried under an extra sub-menu for CPU settings. I changed those (from the default 4096W!) and that has also fixed the problem. My issue was just that it was so inconsistent. Like, most games were fine, but a few weren't and so I figured out a workaround instead of looking for the appropriate BIOS tweak.

Now I need to see if this has impacted performance results in anything. I do know I have been having really odd stuttering that perhaps was related to the CPU overheating or maybe even "almost crashing" or something. This seems to have helped there as well.
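Tangent for the Linux folks: if you want to sanity-check what power limits the firmware actually handed to the OS (and catch a silly 4096W default), the standard intel_rapl powercap sysfs interface exposes the package PL1/PL2 values. A minimal read-only sketch, assuming the intel_rapl driver is loaded; it doesn't change anything, it just reports what's set:

```python
from pathlib import Path

# Package-0 RAPL domain; on typical Intel platforms constraint 0 is the
# long-term limit (PL1) and constraint 1 the short-term limit (PL2).
pkg = Path("/sys/class/powercap/intel-rapl:0")
print("domain:", (pkg / "name").read_text().strip())
for idx in (0, 1):
    name = (pkg / f"constraint_{idx}_name").read_text().strip()
    limit_w = int((pkg / f"constraint_{idx}_power_limit_uw").read_text()) / 1_000_000
    print(f"  {name}: {limit_w:.0f} W")
```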
 
My ASRock motherboard here has an Intel-spec setting for the CPU... but it doesn't work.
These new motherboards push the CPU beyond spec, making it crash or behave like a space heater.
My CPUs have a 35W TDP but get pushed beyond 130W if you don't lock it down in the BIOS settings.
 
Even after the second "overclock" on the same arch, Golden Cove still uses less power than even the X3D version of AMD's newest CPUs, unless you overclock on top of the factory overclock.

Your point would be about the power draw of the 12900K, which is just 26W compared to the 35-37W of 14th gen and Zen 4.
https://www.techpowerup.com/review/intel-core-i9-14900k/22.html
[Chart: CPU power consumption, single-threaded load]
I knew I shouldn't have engaged with you, as your willful ignorance regarding design and TDP is unending.

First of all, I'm not even sure what your point is with that graph. Saying the 12900K uses less than the 14900K is a given, seeing as the latter clocks significantly higher, and as we all know the efficiency curves are bad at high clocks. AMD's X CPUs threw efficiency out the window just like Intel does to maximize performance, hence the 15W difference between the 7900 and the 7900X.

When citing X3D, did it ever cross your mind to wonder why the 7950X3D uses 20W more than the 7800X3D? Since it clearly didn't, I'll explain it to you: the test on the 7950X3D uses the non-V-Cache CCD, which runs far outside the efficiency curve exactly like ADL/RPL do. AMD's performance returns outside the efficiency curve are lower than Intel's, so their choice to max out clocks is pretty bad in a lot of lighter workloads, where the 7950X will use more power than the 14900K.
The inefficiencies being that it uses less power??
The e-cores are the ones that use more power, damaging the overall efficiency of their CPUs.
Intel does that because they can afford to, it's still cheaper for them to have e-cores then the few sales they might lose to AMD.

You can see here how much less efficient (watt/performance) the e-cores are overall.
https://chipsandcheese.com/2022/01/28/alder-lakes-power-efficiency-a-complicated-picture/
This is where your lack of a grasp on TDP comes in. When under maximum boost and running outside of the efficiency curve, GM (Gracemont) is awful, just like GC (Golden Cove) on ADL. When you compare performance at base clock, GM is more efficient, which is the part that really matters.

Now if you start looking at RPL instead of ADL, it makes the concept of "just add more P-cores" flat out stupid. The only place I could find benchmarks with both SPR and RPL was here (I picked Cinebench because memory bandwidth barely impacts results), but as you can see the 13900K outperforms the 2465X by about 10.85% with a TDP about 5.42% higher. It's not a perfect test, but it's as close as there is to the hypothetical of adding more P-cores. We already know that at base clocks GM is more efficient than GC, and the same is likely true for RPC (Raptor Cove) when compared to the RPL GM implementation.
 
Let me remind you what you said, because you apparently keep typing until you forget what you were typing about...
Yeah this isn't really the reason we saw hybrid architecture (I do think this was inevitable no matter what though) when we did. Golden Cove uses a lot of power to get high clockspeeds so adding more balloons the power budget if the aim is to be competitive multithreaded performance wise.
Myth busted: Golden Cove (especially when the decision to go hybrid was made, i.e. on the 12900K) uses A LOT LESS power per core than the new Ryzen or new Intel generations do when pushing the highest clocks they can.

This is where your lack of a grasp on TDP comes in. When under maximum boost and running outside of the efficiency curve, GM (Gracemont) is awful, just like GC (Golden Cove) on ADL. When you compare performance at base clock, GM is more efficient, which is the part that really matters.
Oh, pleeeeease go ahead and explain to me how I lack a grasp on TDP when your whole argument about Golden Cove using a lot of power is based exactly on looking at it way, way above its efficiency curve.
 
Nope, it won't make any difference, nor will it work on AMD's hybrid CPUs either, because the Zen 4 and 4c cores don't behave in the same way as Intel's P and E cores. P and E cores are based on different architectures ...

It should, since AMD doesn't use any hardware scheduling between Zen 4 and 4c cores and relies entirely on CPPC2 to let Windows prioritize cores for single-threaded work; I'm not sure if AMD classifies 4c cores as class 1 efficiency cores to Windows, the way performance cores are class 0. ISA-identical is great and all, but the clocks of Zen 4c are so much lower that you will certainly see issues if threads are scheduled on 4c instead of Zen 4. In particular, if the main render thread somehow gets misplaced onto a 4c core, you'll run into a CPU-limited scenario, as a 4c core can't issue draw calls as quickly as a regular Zen 4 core simply due to the frequency difference.

This isn't an issue in mobile, power-limited scenarios, because all-core clocks will be below Zen 4c's maximum clock in workloads like gaming (especially if you use the iGPU). But for desktop 65W APUs, like the 8500G, this will undoubtedly be an issue.

Zen 4 core max boost: 5.0GHz
Zen 4c core max boost: 3.7GHz
https://www.amd.com/en/product/14086

L3 cache in mobile APUs has been half of desktop product L3 for some time now regardless of core type.
Mobile: 16MB max
Desktop: 32MB per CCD; X3D: 96MB per CCD

Which brings me to a slight tangent: AMD's 7900/7950X3Ds are hybrid products as well (in a different way), and scheduling on the wrong CCD also causes performance regression.

On my old 5900X, Windows would sometimes schedule a game's main thread on CCD1 instead of the fastest core on CCD0, and I'd lose 10-20 fps or more (plus wonderful microstuttering). ISA-identical cores, but not identical frequencies, nor completely resident on a single CCD - there were cross-CCD latency penalties at play too.
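If you want to see what the preferred-core machinery is actually working with, Linux exposes the per-core CPPC "highest_perf" values under sysfs (assuming the ACPI CPPC driver is present on your system — that's an assumption, not something from this thread). A quick sketch that ranks logical CPUs by that value; on a Zen 4 + 4c part the full Zen 4 cores should rank above the 4c cores, since highest_perf tracks maximum boost:

```python
import glob
import re

# Gather (highest_perf, cpu) pairs from the ACPI CPPC sysfs entries.
ranking = []
for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/acpi_cppc/highest_perf"):
    cpu = int(re.search(r"cpu(\d+)/acpi_cppc", path).group(1))
    with open(path) as f:
        ranking.append((int(f.read()), cpu))

# Higher highest_perf = core the scheduler should prefer for single-threaded work.
for perf, cpu in sorted(ranking, reverse=True):
    print(f"cpu{cpu}: highest_perf={perf}")
```

Windows leans on the same CPPC ranking for its favored-core decisions, which is why a bad ranking shows up as exactly the kind of fps loss described above.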
 
The second CCX is already slower than the first one, and when all cores are under load, all of them slow down to the same speed.
So what you describe might happen once in a while, but it won't be a big issue, if an issue at all.
Also, preferred cores have been a thing for quite a few years now and they work well, so there is no way a light workload's main thread ends up on a 'c' core.
 
Even after the second "overclock" on the same arch, Golden Cove still uses less power than even the X3D version of AMD's newest CPUs, unless you overclock on top of the factory overclock.

Your point would be about the power draw of the 12900K, which is just 26W compared to the 35-37W of 14th gen and Zen 4.
https://www.techpowerup.com/review/intel-core-i9-14900k/22.html
I truly don't wish to be contentious, but I find your frequent use of these TPU charts somewhat disingenuous at times. The one you linked here is for single-thread only. Name me a single instance when only one thread is ever used outside of benchmarking.

On that very same page, we have all the other charts that you fail to mention, including the multi-threaded one that shows a stock 14900k using almost exactly 2x the power of the 7950X3D for probably no more than 10% higher performance in the best-case scenarios. That is 8 Intel 'P' cores and 16 so-called 'E' cores using double the power of 16 full AMD 'P' cores. At full tilt, sure, but the single-core benefit Intel has going evaporates rapidly the more cores are used.

And let's be clear here: I want Intel to do better with regards to power consumption, because I want choices in the future that don't involve linking my PC up to a personal nuclear reactor or geothermal vent.
 
In my book, Intel's single-core advantage is a lot like AMD's X3D advantage: There are certain applications where it can help, and others where it doesn't help and can actually be a hindrance. Lots of games still have a primarily single-threaded nature, or at least one thread that does a lot more of the work, so that helps Intel... unless the game happens to be one that really likes X3D, in which case AMD wins.

But Intel's current "7nm-class" process is clearly inferior to TSMC N5, and Intel really pushed far on the voltage and frequency curve with both Raptor Lake and the RPL Refresh chips. I mean, I just finally figured out which BIOS setting I needed to adjust to keep my 13900K from crashing in certain workloads. (See above.) The fact that an "uncapped" power limit was causing the CPU to probably hit >350W is pretty insane, not to mention the 100C+ temperatures.

It's going to be very interesting to see how things develop with the P-cores going forward, though. You can make a lot of good arguments for saying that a lot of workloads don't really need more than eight P-cores — and Arrow Lake may even disable Hyper-Threading on the P-cores to get an additional performance boost for the threads that end up there! Anything that can scale really well with total core count meanwhile ends up on the far more efficient E-cores.

And they really are more power efficient. I'd have to do testing to figure it out, but I want to say all 16 E-cores on the 13900K running full tilt probably use half as much power as the eight P-cores running full tilt. Of course clocks (voltage and frequency) are a big part of that.
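A rough sketch of what that test could look like, for the curious (Linux-only; the CPU index ranges — 0-15 for the P-core threads, 16-31 for the E-cores on a 13900K — the plain Python spin loop as the load, and reading package power from RAPL are all simplifying assumptions, not a rigorous methodology):

```python
import multiprocessing as mp
import os
import time

# Package-0 energy counter (micro-joules); readable only by root on recent kernels.
ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def burn(seconds):
    # Pure-Python spin loop -- a light load compared to a real stress test,
    # but enough for a rough P-core vs E-core comparison.
    end = time.time() + seconds
    while time.time() < end:
        pass

def package_watts(cpus, seconds=30):
    # Pin one spinner per logical CPU in `cpus`, then derive average package
    # power from the RAPL energy counter over the run.
    procs = []
    for cpu in cpus:
        p = mp.Process(target=burn, args=(seconds,))
        p.start()
        os.sched_setaffinity(p.pid, {cpu})  # Linux-only affinity call
        procs.append(p)
    start_energy = int(open(ENERGY).read())
    start_time = time.time()
    for p in procs:
        p.join()
    joules = (int(open(ENERGY).read()) - start_energy) / 1_000_000
    return joules / (time.time() - start_time)

if __name__ == "__main__":
    print(f"P-core threads (0-15): {package_watts(range(16)):.0f} W")    # assumed 13900K layout
    print(f"E-cores (16-31):       {package_watts(range(16, 32)):.0f} W")
```

Package power includes the uncore and whatever the idle cores are doing, so treat the numbers as ballpark at best.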
 
I appreciate that there are probably still many cases where the single/primary thread rules, but I honestly think a typical PC setup has so much other nonsense going on that such situations are hard to come by. That said, I'm no expert here.

I'm very interested to see how things develop in this area, particularly with things like E-cores, but I think we can all agree it needs to happen.

In the case of the P-cores, I think part of Intel's problem is that they have made a rod for their own back, starting around the 9900K. Any new processor seemingly has to be clocked miles outside of its efficiency curve so that it doesn't end up being slower than its predecessor. AMD kind of did the same thing to a lesser extent, of course. I wish they'd both just knock it off already.
 
FWIW, having determined exactly what seemed to be the problem, I then went digging into the BIOS. The power limiting and current limiting stuff was buried under an extra sub-menu for CPU settings. I changed those (from the default 4096W!) and that has also fixed the problem. My issue was just that it was so inconsistent. Like, most games were fine, but a few weren't and so I figured out a workaround instead of looking for the appropriate BIOS tweak.

Now I need to see if this has impacted performance results in anything. I do know I have been having really odd stuttering that perhaps was related to the CPU overheating or maybe even "almost crashing" or something. This seems to have helped there as well.
When I started seeing my Asus Prime Z690-P throwing over 1.5V at the 12700K I used to have while on auto, just because I had raised the clocks a bit, I knew I couldn't trust it. What I do now is raise my clocks to the max stable without increasing the voltage, and then undervolt (using a general power setting, per-point voltage offsets, adaptive voltage, and LLC) as much as I can without losing stability.
Which for me is 6.0GHz for two cores, decreasing to 5.5GHz all-core, with 5.0GHz cache and 4.5GHz E-cores. But I still have the same stability issues at the beginning of games if I try to go to 5.6 all-core (which requires added volts) or 5.7 all-core (which requires a lot of added volts). Currently I see less than 200W peak starting CP2077 with a 3080 and a decent Gen 4 SSD.

My motherboard also seems to not give enough voltage to my 13900KF's L2 cache at reduced voltages, so I add 20mV. Also, my Prime Z690-P is terrible with memory performance, but it is a low-end Z690, so I can't expect everything. And it will dump over 1.45V into my memory controller, per HWiNFO, if I let the thing run at auto settings.

The auto voltages these motherboards apply are just a mess, especially with the i9. It's worth looking over what they are doing with HWiNFO.
 
I truly don't wish to be contentious, but I find your frequent use of these TPU charts somewhat disingenuous at times. The one you linked here is for single-thread only. Name me a single instance when only one thread is ever used outside of benchmarking.

On that very same page, we have all the other charts that you fail to mention, including the multi-threaded one that shows a stock 14900k using almost exactly 2x the power of the 7950X3D for probably no more than 10% higher performance in the best-case scenarios. That is 8 Intel 'P' cores and 16 so-called 'E' cores using double the power of 16 full AMD 'P' cores. At full tilt, sure, but the single-core benefit Intel has going evaporates rapidly the more cores are used.

And let's be clear here: I want Intel to do better with regards to power consumption, because I want choices in the future that don't involve linking my PC up to a personal nuclear reactor or geothermal vent.
Is it still unknown to the wider public that the P-cores and the E-cores are two different things????
How can you tell the performance and/or efficiency of the P-cores by looking at mixed results from running both types at the same time?
You have to look at results from only one type of core to get numbers for that type of core. Since no reviewer even deemed it interesting to compare the 8 P-cores of an Intel CPU to the main (faster) CCX of a Ryzen CPU, my only option and data point is the single-core results, since they are the only ones showing clean P-core numbers; everything else is data "polluted" by the second type of cores.
 
Yeah this isn't really the reason we saw hybrid architecture (I do think this was inevitable no matter what though) when we did. Golden Cove uses a lot of power to get high clockspeeds so adding more balloons the power budget if the aim is to be competitive multithreaded performance wise. ADL's tradeoff was made completely to keep the TDP in line while still competing. This has served Intel extremely well given the node disadvantage (moreso with RPL v Zen 4 than ADL v Zen 3) and having to deal with the scheduler side of things.
Not true at all. The power budget is irrelevant. 16 P-cores will be faster than 8 P-cores at the same power budget. E-cores are just space efficient: they take much less space than P-cores for their performance.
 
So now I'm curious, because I get this exact problem in certain games. The thing is, it has nothing to do with VRAM or the GPU! The real problem in my case is that the CPU appears to be either overheating or just getting into a bad state and crashing the running process.

For me, this happens during shader compilation when you first launch a game. I can name quite a few with this issue. Hogwarts Legacy, The Last of Us, Alan Wake 2 (not quite as frequent), Metro Exodus, Horizon Zero Dawn, Watch Dogs Legion (sometimes), and I'm sure there are many others that I've never tried and thus don't have personal experience with.

The solution, in every case I've encountered, is to set the game's affinity to just the P-cores, during shader compilation. Once that's done, reset the game's affinity to all cores. But I can see just looking at the LED POST codes on my motherboard (which show CPU temps after the initial boot process), that the CPU hits 99+ C when affinity isn't set, and anything in the high 90s seems likely to cause a crash.

I'm not entirely sure if it's just with my motherboard, or if it's something with the 13900K. I suspect it's a motherboard setting, like being too aggressive with boosting and voltages or current. And since motherboard BIOS code often gets at least partially shared between a lot of different models from the same vendor, and even across vendors, it could be that lots/most Z790 motherboards are impacted.
If it's crashing on stock settings, either the applied voltage is not enough or your 13900K is a dud. I've tested the 12900K, 13900K, and 14900K, and this never happened to me during shader compilation. Tinker around with the load-line settings and make sure it's stable under heavy workloads. Either that, or just set a temperature limit of 85C in the BIOS. Lower temperatures will make your system stable at lower voltages, so that's a workaround.
 
On that very same page, we have all the other charts that you fail to mention, including the multi-threaded one that shows a stock 14900k using almost exactly 2x the power of the 7950X3D for probably no more than 10% higher performance in the best-case scenarios.
That's great, and I'm glad you mentioned it. Yes, it is true, the 14900K uses exactly 2x the power of the 7950X3D for just 10% more performance.

But you know what you are actually missing from the picture? That the 7950X is using 60% more power than the 7950X3D for 4% more performance. Now do the math. In order for the 7950X or the 7950X3D to close that 10% gap to the 14900K, they would end up consuming MORE than the 14900K!! The 14900K is actually incredibly efficient. It loses by a small single-digit percentage to the 7950X / 7950X3D, but they are really close. Here are the numbers:

https://tpucdn.com/review/intel-cor...own-to-35-w/images/efficiency-multithread.png

At 200 watts the 14900K is more efficient than the 7950X. At 125 watts it's as efficient as the 7950X3D. So, since you seem to care about efficiency, why do you insist on pushing the CPU to 400 watts and THEN measuring its efficiency?
 
16 P-cores will be slower than 8 P-cores with 16 E-cores in the same power envelope, which is the point.
Of course, but that's irrelevant. You said thermals stopped them from adding more P-cores, which is obviously false, and your own links prove it. Didn't you just link a 24 P-core CPU? Lol

It's the size of the die that would be the issue. Uber expensive, and I'm not sure they can even scale the ring bus that big.

Even your claim isn't entirely true. It depends on the wattage per core.
 
If it's crashing on stock settings, either the applied voltage is not enough or your 13900K is a dud. I've tested the 12900K, 13900K, and 14900K, and this never happened to me during shader compilation. Tinker around with the load-line settings and make sure it's stable under heavy workloads. Either that, or just set a temperature limit of 85C in the BIOS. Lower temperatures will make your system stable at lower voltages, so that's a workaround.
It's not "stock," it's "stock + XMP" that seems to be the issue. Or maybe it is stock. But it's not the CPU, for sure, because changing the (unclamped in the BIOS!) power and current limits has fixed the problem. Which was already discussed later in the thread.
 
Of course, but that's irrelevant. You said thermals stopped them from adding more P-cores, which is obviously false, and your own links prove it. Didn't you just link a 24 P-core CPU? Lol

It's the size of the die that would be the issue. Uber expensive, and I'm not sure they can even scale the ring bus that big.

Even your claim isn't entirely true. It depends on the wattage per core.
Where did I ever say thermals?

I thought about explaining it, but having read through your posts I'm not going to waste my time.
 