News: 'Outpost: Infinity Siege' devs ask 13900K, 14900K owners to downclock their chips to prevent crashing

Then it's a game problem (game engine = game), not hardware; rather, it's the way the hardware acts with that game engine.

EDIT: everybody knows most board makers are too aggressive with the power limits. Your average user knows nothing about that, but more advanced users know how to adjust them.
Riddle me this: if there's a problem that crops up on a fairly small percentage of a specific type of hardware, and the resolution is getting the power limits under control, how is it software?
 
Then it's a game problem (game engine = game), not hardware; rather, it's the way the hardware acts with that game engine.
If the hardware is causing the game to crash, it's a hardware problem. The way you can tell: if the game reliably crashes at the same point, and after adjusting the hardware settings it no longer crashes at that point, then the hardware was probably at fault.

The first paragraph on the page says they know it's a game problem and are giving people a Band-Aid until they can fix the game.
Nope, that doesn't mean it's a game problem, because even hardware problems can have software mitigations. Most likely, that's exactly what they're working on.

Again, if downclocking your CPU avoids the crash, it's a hardware problem.
 
You can say anything you wish, but this still says it all!
Still trying to patch the problem!
UE5 is just crap!

Recently released title Outpost: Infinity Siege just got its second patch in a matter of days, rectifying certain crashes during cutscenes. However, the developers revealed in the patch notes that there are still high-priority issues causing severe crashes and screen blackouts in-game. The issues are bad enough that the developers at Team Ranger are recommending users downclock their Core i9 Raptor Lake or Raptor Lake Refresh CPUs to 5 GHz as a temporary measure.

That says game problem.
CPUs to 5 GHz as a temporary measure.
 
You can say anything you wish, but this still says it all!
They're still trying to patch the problem!
A good analogy is the security bugs that have come to light, like Spectre and Meltdown. These are actual hardware bugs, but they can be mitigated in software. Therefore, the potential for a software patch doesn't necessarily make it a software bug.

That says game problem.
If downclocking i9 CPUs is actually causing them not to crash, then it is actually a hardware problem! The reason the game is triggering it is just because it's spiking the CPU utilization high enough. I'm sure the way they're going to address it is simply by reducing the number of cores used by the game. That's not a fix, but rather a mitigation (i.e. workaround) of the hardware issue.

By your logic, Prime95 is buggy because it can cause an overclocked machine to crash. No, just because a piece of software can stress your hardware to the point of malfunction doesn't make it the software's fault.
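The kind of core-reduction workaround described here usually means restricting the process's CPU affinity. A minimal sketch, assuming a Linux host and Python; a Windows game would call SetProcessAffinityMask (or set thread affinities) instead, and the core numbers here are illustrative:

```python
# Sketch of a core-limiting mitigation: pin the current process to a
# subset of cores so a marginal CPU never lights up all of them at once.
import os

def limit_to_cores(cores):
    """Restrict the calling process to the given CPU set (Linux-only API)."""
    os.sched_setaffinity(0, set(cores))  # 0 = the calling process
    return os.sched_getaffinity(0)       # read back the effective mask

# Restrict this process to core 0 only.
print(limit_to_cores([0]))
```

Note this is a workaround, not a fix: it reduces the load that triggers the instability, while the underlying hardware marginality remains.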
 
Has anybody said it fixed the issue? No, none that I have seen.

Downclocking a processor to fix a game issue is a stupid suggestion. How in the world does clock speed crash a game that isn't broken?


Again from their post.

CPUs to 5 GHz as a temporary measure.

as a temporary measure.

Does this not say they are trying to fix it? LOL!

as a temporary measure.

To me that says: we have a problem and are working on it. Just like the patches coming out very often. That equals a rushed-out game using a very poor game engine just to start making money.

EDIT: that was my last post on this; it's clear you wish to make it an Intel problem when it's a game/game-engine problem.
 
Has anybody said it fixed the issue? No, none that I have seen.
Agreed. It would be good to get some confirmation of this.

Downclocking a processor to fix a game issue is a stupid suggestion. How in the world does clock speed crash a game that isn't broken?
I'd recommend reviewing this comment thread for further discussion of the issue:


Note that a more recent article has been published, where undervolting + disabling CEP are recommended by MSI as a workaround:

You're really sounding like an AMD fanboy.
Nope, non-partisan.

I am a fan of logic and I guess hardware and software which works properly. I have debugged a lot of software, including times when the root cause of the problem was the hardware, the OS, or the compiler. I've developed plenty of workarounds, as well, when fixing the root cause was out of my control.

Again from their post.
I don't know what you're hoping to accomplish by re-posting, but I think it's not productive. I've read it every time you posted it and taken the time to explain why I disagree with your interpretation.

I can accept that you disagree with me. Can you accept the same?
 
How would Intel have this data?
Intel has both their own CPU telemetry collection through Windows drivers, and I am sure Microsoft is letting them in on all the OS crashes caused by Intel hardware.
You think they validate every CPU at unlimited power with every piece of software under the Sun?
If they aren't, they should be if that's how they sell them.

Apparently, there's an instability in a single-digit percentage of Intel CPUs which leads to incorrect program execution and/or outright crashes.

Whichever way you look at it, Intel is responsible, because they are selling unlocked chips and endorsing overclocking with their K models; motherboard manufacturers are just giving users the means to do what Intel allows, and their only responsibility is to provide sensible defaults.

Those defaults are in turn driven by comparative benchmarking performed by the many hardware reviewers who leave everything at default settings instead of locking every motherboard and CPU they test to the same sensible settings. They are the ones who enabled motherboard makers to compete for benchmark scores using increasingly insane default BIOS settings.

Moreover, the majority of users, even those who build their own PCs, almost never enter the BIOS and touch any of the default settings. Because why would they? Reviewers tested their review samples at default settings, so it must work for them too, right? We all know that's not how any of this works.

Finally, software developers can only do so much if the CPU itself misbehaves.
 
I would agree with you, but publishers are responsible for releasing tested and stable software (games), and 9 times out of 10 they will set insane deadlines that developers cannot meet if they carry out a reasonable amount of testing, partly because there are so many hardware combinations to test.

It is publishers who are to blame much more than Intel or even motherboard manufacturers.
Most games released today will be stable enough to only require some patches and fixes for things developers could not be expected to catch.
Other games are flat out rushed out the door come hell or high water.
 
I think there's enough blame to go around for both of these things to be true.

However, I still think Intel is more responsible.

On a side note, I have a workstation with a Sapphire Rapids Xeon CPU. SPR has new instructions called AMX, which only work on Windows 11. I've had issues with the Visual Studio debugger mysteriously failing to step through code, but it works when I disable AMX or when I try debugging the same thing under Windows 10, where AMX is not supported.

Who's to blame? The obvious choice is Microsoft, because they wrote Visual Studio. But when you report the issue, they say "We have no workstations with an SPR CPU and can't reproduce your issue".

So who's to blame then? Intel, of course. When you release a CPU which requires changes to the way the OS and debugger work, you make damn sure that the vendors writing those get your support to implement and test it properly.
 
I would agree with you but publishers are responsible for releasing tested and stable software (games),
Their testing is for bugs in the software they make. They don't do enough testing (nor can they be expected to) to turn up hardware problems. However, I'm sure it happens that they trip over the occasional GPU/driver bug.

It is publishers who are to blame much more than Intel or even motherboard manufacturers.
Given that we don't even have an established precedent for game crashes caused by mainstream CPU/motherboard models, why would they devote time or resources to looking for them? Least of all when the crash only happens for a minority of users with a given setup! That means you need several copies of each hardware config, and that quickly gets very expensive!

Blaming this on game publishers is like blaming an automobile defect on car reviewers for missing it. No, if the defect is in the hardware, the blame lies with the hardware maker. They need to test their products and model the degradation curves well enough to know what the safe and supportable limits are. For failing to do that, I hope they get swamped with warranty claims.

Most games released today will be stable enough to only require some patches and fixes for things developers could not be expected to catch.
Other games are flat out rushed out the door come hell or high water.
I'm not defending game development houses for genuine software bugs, mind you. I think there's plenty of legit criticism that can be directed towards that industry.
 
I have a workstation with a Sapphire Rapids Xeon CPU. SPR has new instructions called AMX, which only work on Windows 11. I've had issues with the Visual Studio debugger mysteriously failing to step through code, but it works when I disable AMX or when I try debugging the same thing under Windows 10, where AMX is not supported.

Who's to blame? Obvious choice is Microsoft because they wrote Visual Studio. But when you report the issue they say "We have no workstations with SPR CPU and can't reproduce your issue".
You have to ask whether Visual Studio is supported on your hardware, and if the answer is "yes", then it should be incumbent on them to fix it. The only way I see MS getting let off the hook is if their platform support clearly excludes the class of machines you're running on.

BTW, an entry level SPR workstation doesn't even cost that much. More than a mainstream desktop, but you can get those things with as few as 6 cores. IMO, they have no good excuse not to procure the hardware if they lack it. I'd make a stink about it on Intel's developer forums, if I were you.

So who's to blame then? Intel, of course. When you release a CPU which requires changes to the way the OS and debugger work, you make damn sure that the vendors writing those get your support to implement and test it properly.
That might be true, but maybe not. To reach that conclusion, we need to know whether Intel actually did change anything about the way debugging works, or if maybe the Visual Studio debugger is just choking on those instruction opcodes in the binary. If debugging was actually impacted by AMX, we'd also need to know whether Intel published the requisite information and Microsoft ignored it.
 
You have to ask whether Visual Studio is supported on your hardware, and if the answer is "yes", then it should be incumbent on them to fix it.
They don't say it isn't:
https://learn.microsoft.com/en-us/visualstudio/releases/2022/system-requirements
BTW, an entry level SPR workstation doesn't even cost that much.
I know, that's why I have built myself a workstation with W5-2455X.
IMO, they have no good excuse not to procure the hardware if they lack it.
Or just borrow it from someone in the OS division.
I'd make a stink about it on Intel's developer forums, if I were you.
I will try that too, because it's really annoying.
That might be true, but maybe not. To reach that conclusion, we need to know whether Intel actually did change anything about the way debugging works, or if maybe the Visual Studio debugger is just choking on those instruction opcodes in the binary. If debugging was actually impacted by AMX, we'd also need to know whether Intel published the requisite information and Microsoft ignored it.
What AMX changes when it's enabled is the XSAVE/XRSTOR thread state size.

This affects OS context switching as well, and AMX requires OS support to be enabled, which Windows 11 already has.

However, it seems that under some unspecified conditions the VS debugger chokes on this increased thread state size and says there's no code to debug. I am sure it's AMX, because if I disable it in the system BIOS, the same project which I couldn't debug now debugs just fine without any other changes.
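For a sense of scale, the AMX components alone add roughly 8 KiB to each thread's XSAVE area. A back-of-the-envelope sketch, using the component sizes Intel documents for the AMX state (XTILECFG is 64 bytes; XTILEDATA is 8 tile registers of 1 KiB each) — purely illustrative arithmetic, not a probe of the actual machine:

```python
# Documented sizes of the two AMX XSAVE state components.
XTILECFG_BYTES = 64         # tile configuration register
XTILEDATA_BYTES = 8 * 1024  # 8 tile registers (TMM0-TMM7), 1 KiB each

# Extra per-thread context the OS (and a debugger reading thread state)
# must save/restore once AMX is enabled.
amx_state = XTILECFG_BYTES + XTILEDATA_BYTES
print(f"extra per-thread XSAVE state with AMX enabled: {amx_state} bytes")
```

That growth is exactly why a debugger that hard-codes assumptions about the thread-state buffer size can break on AMX-enabled systems while working fine everywhere else.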
 
I know, that's why I have built myself a workstation with W5-2455X.
Nice! What CPU cooler & motherboard?

What AMX changes when it's enabled is the XSAVE/XRSTOR thread state size.

This affects OS context switching as well, and AMX requires OS support to be enabled, which Windows 11 already has.

However, it seems that under some unspecified conditions the VS debugger chokes on this increased thread state size and says there's no code to debug. I am sure it's AMX, because if I disable it in the system BIOS, the same project which I couldn't debug now debugs just fine without any other changes.
Oh, so you're saying that your program doesn't even need to include any AMX instructions, in order for the debugger to fail? That's wild!

BTW, I'd like to see the registers window for inspecting the AMX state!
: D
 
Nice! What CPU cooler & motherboard?
Noctua NH-U14S DX-4677 and Supermicro X13SRA-TF. The things I regret about the board are that the BIOS is damn slow to initialize PCIe devices, that it lacks fan control (it's done by the BMC, which isn't "certified" with low-RPM fans), and that you need to boot a UEFI shell to flash BIOS updates.
Oh, so you're saying that your program doesn't even need to include any AMX instructions, in order for the debugger to fail? That's wild!
Yes, and it somehow only affects debugging of x86 native or mixed (C++/CLR) code. Debugging other stuff works.
BTW, I'd like to see the registers window for inspecting the AMX state!
: D
You might need a 21:9 monitor just for that.
 
Noctua NH-U14S DX-4677 and Supermicro X13SRA-TF. The things I regret about the board are that the BIOS is damn slow to initialize PCIe devices, that it lacks fan control (it's done by the BMC, which isn't "certified" with low-RPM fans), and that you need to boot a UEFI shell to flash BIOS updates.
Sometimes, the BIOS menus give you options to disable features you're not using, which can then speed up boot times. Also, in my limited experience, boards with BMC tend to have longer boot times. That's a major reason I'm going with a non-BMC board, for my next workstation.

Also, looking at some Supermicro boards, they don't seem to go above the initial manufacturer stock values. For instance, the X13SAE is limited to DDR5-4400, even though it supports up to 14th-gen CPUs, which allow much faster memory than that.

By contrast, the ASUS Pro WS W680-ACE supports up to DDR5-5600.

 
Sometimes, the BIOS menus give you options to disable features you're not using, which can then speed up boot times.
I am aware of that, but in this instance their support response is that I am using "non-certified" hardware with it, like consumer Samsung M.2 drives (Pro series), an ASUS RTX 4090, and an ASUS M.2 PCIe card. They only have enterprise hardware on their QVL. I am aware that excuse is bullshit, but there's nothing I can do now.
Also, in my limited experience, boards with BMC tend to have longer boot times. That's a major reason I'm going with a non-BMC board, for my next workstation.
Aware of that too, but BMC init can go in parallel; there's no need to slow down the boot. I also checked the POST code (92 is PCI bus init, and it's the one that takes forever during boot).
Also, looking at some Supermicro boards, they don't seem to go above the initial manufacturer stock values. For instance, the X13SAE is limited to DDR5-4400, even though it supports up to 14th-gen CPUs, which allow much faster memory than that.
The boards themselves should not add or remove any limits; the available memory speed should depend on the CPU you stick in it.

That said, I usually buy ASUS stuff, but their Z790 boards are really overloaded with unnecessary stuff. Like, 8x USB 2.0 ports, really? Not to mention the physical layout of the PCIe slots and the location of the M.2 slots are worse. This Supermicro board has the perfect layout and features; the only problem is that Supermicro is extremely consumer-unfriendly.
 
Downclocking a processor to fix a game issue is a stupid suggestion. How in the world does clock speed crash a game that isn't broken?
On the contrary, this is not only a sensible suggestion but also the only way to isolate the real software problems.
Have you ever tried to debug crashing software running on an unstable system?
It is simply impossible.

Again from their post.

CPUs to 5 GHz as a temporary measure.

as a temporary measure.

Does this not say they are trying to fix it? LOL!

as a temporary measure.

To me that says: we have a problem and are working on it. Just like the patches coming out very often. That equals a rushed-out game using a very poor game engine just to start making money.
For sure they are trying to fix software problems, but at the same time they NEED to exclude bug reports from people with unstable systems. To do so, the simplest and most reliable thing to say is to downclock the CPU to 5 GHz.
Once they have addressed all the bugs, they will give instructions to raise the clock speed again, and any remaining instability will be due to the systems.

For the developers, this is only a debugging methodology, not an accusation about Intel's CPUs.
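That triage idea can be sketched as a simple filter over incoming crash reports. Everything below (the field names, the CPU list, the helper itself) is hypothetical illustration of the methodology, not the developers' actual tooling:

```python
# Hypothetical crash-report triage: park reports from known-marginal CPUs
# running above the suggested 5 GHz cap, and focus software debugging on
# the rest. Names and thresholds are illustrative only.
AFFECTED_CPUS = {"i9-13900K", "i9-13900KS", "i9-14900K", "i9-14900KS"}
SUGGESTED_CAP_MHZ = 5000

def triage(report):
    """Return a bucket for one crash report (dict with 'cpu' and 'pcore_mhz')."""
    if report["cpu"] in AFFECTED_CPUS and report["pcore_mhz"] > SUGGESTED_CAP_MHZ:
        return "suspect-hardware"   # ask the user to downclock first
    return "investigate-software"   # worth a developer's time

print(triage({"cpu": "i9-14900K", "pcore_mhz": 5700}))  # suspect-hardware
print(triage({"cpu": "i9-14900K", "pcore_mhz": 5000}))  # investigate-software
```

Once reports from unstable systems are parked, whatever keeps crashing at 5 GHz is much more likely to be a genuine game bug.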


EDIT: that was my last post on this; it's clear you wish to make it an Intel problem when it's a game/game-engine problem.
As you wish.
 
This Supermicro board has the perfect layout and features; the only problem is that Supermicro is extremely consumer-unfriendly.
One superficial thing I like about Supermicro is how some of their boards are still that good ol' utilitarian green. I'd almost wear it as a point of pride not only to have a board with no RGB or gamer bling, but even for its PCB to sport that familiar color.
 
That is just bollocks. No reason to downclock the CPU here at all (14900K owner). I am not sure where they tested this game, but this is completely incorrect. I can run Cinebench R23 and any game at the same time, and it runs fine. I have never seen the CPU run over 48°C in any game, including Cyberpunk 2077, and I doubt this game is more demanding than Cyberpunk 2077, which can pull a lot of power. And the power limit in my motherboard is set to unlimited. In general, AMD CPUs run hotter in gaming than the 13900K/14900K. As I said, someone misinterpreted this, or the devs have a terrible setup they run it on. Please, someone tell them not to run the CPU with no cooling, or with a $5 Intel OEM cooler.

First of all, you are wrong! It has been confirmed that there ARE stability issues with high-end Intel CPUs. I myself have a delidded, liquid-metal, direct-die watercooled 14900KS, and it's VERY challenging to keep it stable at FULL clock speed without it requiring a ton of power. I can also run Cinebench R23 or Cyberpunk or some crap like that, but there are differences between tests and games. This CPU will often fail if running without a power limit. I cannot speak for the 14900K, but the KS at x59/x45 is another ball game to mess around with IF you want it to pass everything AND keep the high clocks all the time.

To begin with, I thought it was a fun experience and challenge for a pretty decent overclocker like myself to mess around with and stabilize, but this is NOT for the faint of heart! There were many times where I just cursed this CPU! Stock, this bastard pulls almost 400 W in Cinebench with no power limit. I have everything in cooling, the delid, etc., but this CPU is not stable in other things unless it has some power limits. With no limits, OCCT EXTREME can consume 420 W, which I actually can cool, but I cancelled the test! 😉 I can pass CB R23 and OCCT Extreme at 320 W, but one of the most challenging tests was actually the y-cruncher full system stress test. Now I am using the Intel Extreme profile recommendations: 400 A / 320 W. But then you run into another "issue": the CPU downclocks to 5800-5900, sometimes 5700 MHz, when gaming. Keep in mind this is without thermal issues OR reaching the BIOS-set power limits. So far I have only been able to remove this behavior by raising the 400 A to 511 A, but then some stuff that was stable before is now unstable, and it requires way more power to compensate! I have copied and pasted below a message I wrote on another board. I would like you to read it AND take up the challenge with the game demo mentioned!

😉 I must say I am disappointed with the CPU and can fully understand that it requires VERY experienced overclocking/BIOS modding to play around with, stay stable, and get the most performance possible, not to mention EXTREME measures to cool it. It's worth noting that if I set my CPU like a 14900K (5.7/4.4), there are no issues at all, or at least it's NO challenge with my cooling. But owning the KS, I refuse to downclock it and run it below its specs. What I am trying to say is that 5900/4500 and beyond is a REAL challenge if EVERY test and game should pass. Also, I have been an Intel fan most of my grown-up life, but I am SO disappointed now to discover that it's not 5900 MHz you are paying for: Intel forgot to mention it's UP TO 5900 MHz. This is especially true with their new profiles, including the EXTREME one. If choosing the PERFORMANCE profile (253 W) with mostly stock BIOS settings, the clocks are below a 14900K's most of the time, or at least they bounce a lot! I don't want this behavior; I did not think this was what I was paying for. At the end of the day, even though of course I love my machine, this CPU has some questionable marketing around its "potential" full speed. I can only imagine what an awful waste of money it would have been if I had regular cooling and a normal IHS. In my experience, sadly, it is NOT the game devs' fault. At first I did not believe it either; I could play everything and pass every test I tried, until y-cruncher and Titanic (read below). Now I run a setting where EVERYTHING passes and works, but still with occasional downclocking for no reason! 🙁

Anyway, read this and try it out yourself (cut and pasted from another post):

I have a 14900KS, delidded, with superior watercooling. But if I set 400 A and 320 W / 320 W, in some games it downclocks to, let's say, 5700, bouncing between 5900 and 5700. This is with no thermal issues and most settings set manually instead of auto or profiles. My mobo is an ASUS ROG Maximus Z790 Hero.

I know this sounds funny, but I found the ULTIMATE overclock test to pass! I can pass basically ALL stress tests: y-cruncher, OCCT Extreme, Cinebench, playing all games and whatnot!

But I stumbled over something called the Titanic Project 401 DEMO. This demo is UE5, and somehow it triggers the EXACT controversial Intel error like nothing I have ever seen or experienced before! It is EXTREMELY sensitive to the shader extraction error! Again, I can pass ALL kinds of torture tests, but this one is truly unique. You load the game, then "explore"; go to the main menu and explore the next possible start location; keep doing this until all "rooms" have been passed at least once. Sometimes it takes 10 "explore room" runs before the error occurs. Many times it cannot even enter the game! When I can pass them all, I consider my CPU extremely stable. Trust me, guys, this one will bring out a CPU error like NOTHING else! I have no other app, test, or game that is that sensitive, at least on my 14900KS at 5900 MHz.

I have two options!

Either I can run 5900 MHz stock with LLC6 but a 400 A / 320 W power limit. It requires adaptive voltage at +0.010 V with LLC6 to pass all rooms in this demo. But again, it will downclock here and there to 5700 MHz while in-game, though it works EVERY time.

IF I want the full 5900 MHz all the time in-game and to pass all rooms, it requires a 511 A / 320 W setting with LLC7 on AUTO. It does downclock, but only in between loading.

Anyway, my question is: why does the CPU downclock when it's not thermally limited? In HWiNFO64 I can see that in these games and demos both the amps and the watts never exceed the BIOS-set limits.

It has always behaved like this, especially with Intel's profiles. I find it kind of BS and fake marketing that the CPU suddenly is not a full 5900 MHz... but UP TO 5900 MHz.

I don't know if any specific BIOS option can prevent this, but so far I have only managed to remove this behavior by changing the 400 A to 511 A, and then it requires way more power (in specific stuff like this Titanic demo) even though the watt limit is the same 320 W.

I will say that I can get the CPU running great in basically everything with a lower setting, but it fails in this Titanic UE5 demo. I could say forget it and move on with my old settings at 6 GHz that work in everything else... but I want it perfect and 110% stable, especially because this is apparently the infamous Intel UE5 shader error in the Titanic DEMO on steroids. Also because UE5 is the future. I know it sounds crazy, and it is! I can pass Linpack, OCCT Extreme, and y-cruncher, but you should really try this Titanic demo. Why is the CPU downclocking in gaming when it's not reaching power or thermal limits? I REALLY encourage everyone to try this demo and jump from "explore room" to "explore room". It's truly amazing how sensitive it is.

https://titanichg.com/project-401
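For anyone who wants a quick, scriptable sanity check of the "incorrect program execution" failure mode discussed above, here is a minimal sketch of the digest-comparison idea that tools like y-cruncher use: run a deterministic workload repeatedly and compare the results. This is purely illustrative and far less sensitive than the real stress tools or the demo above, but a mismatch on any iteration would indicate silent computation errors:

```python
# Minimal consistency check: on stable hardware, a deterministic workload
# must produce bit-identical results on every run; any mismatch signals
# silent corruption (the failure mode behind the UE5 shader errors).
import hashlib

def workload():
    """Deterministic, moderately CPU-heavy integer work; returns a digest."""
    acc = 0
    for i in range(200_000):
        acc = (acc * 1103515245 + i) % (2**61 - 1)
    return hashlib.sha256(str(acc).encode()).hexdigest()

def check_stability(iterations=5):
    """Run the workload repeatedly; True only if every result matches."""
    reference = workload()
    return all(workload() == reference for _ in range(iterations - 1))

print("stable" if check_stability() else "UNSTABLE: results differ")
```

A real stability harness would run for hours with much larger, cache- and AVX-heavy workloads; this only shows the shape of the technique.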
 