[SOLVED] Ryzen 5600x low single-core performance PBO + Curve optimiser need help

Hiper1995

Commendable
Apr 2, 2019
Hello,
My specs are:
-Ryzen 5 5600X cooled with an Arctic Freezer 34 Duo
-MAG B550 Tomahawk
-16 GB DDR4 Patriot Viper B-die, 3200 CL16-18-18-36, overclocked to 3600 CL18-20-20-40
-GTX 1070 Ti
-Well-ventilated case with 5 case fans, but rather bad cable management
-Windows 10, latest version + updates

So, long story short: I just upgraded my CPU + mobo from a 2600X & Tomahawk B350 to a Ryzen 5 5600X and MAG B550 Tomahawk with an updated BIOS (last stable version; the latest version is a beta for Win11).
I've spent the last 3 days overclocking the CPU + RAM to get as much gaming performance as possible (mostly because I love to tinker, even though I know the gains probably weren't worth the time spent), and after probably 200+ restarts and a lot of experiments I arrived at the following configuration:

CPU:
PBO advanced
Power limits (manual): PPT 130 W, TDC 80 A, EDC 125 A
Scalar: Auto
AutoOC max boost override: +150 MHz
Temp throttle: Auto
Curve optimizer: negative 25, 24, 25, 15, 15, 25 (the 15s are my best and second-best cores).

Results:
- Cinebench R23: 11820 average MC, 1556 SC
- 4650-4700 MHz all-core boost that sticks like glue 90% of the time in BF2042, Valorant, CS:GO & Warzone (all tests run at 1080p, low settings)
- Around 85-93% usage on PPT, TDC & EDC
- Temperatures: 35-48°C idle, 63-68°C in games like BF2042/Warzone, 75-78°C in Cinebench, max 85°C in a Prime95 torture test; Vcore does not shoot past 1.37 V
- Stable after 8 hours of Prime95 Blend, 4 hours of torture test, several hours of Cinebench R23 MT runs, and 1 hour of OCCT at its most demanding settings.

My problem:

Why do I get a rather low and consistent single-core score in Cinebench no matter what settings I change, and why do I see literally zero SC boost in games above the 4650-4700 MHz range, even though I raised the maximum boost limit to basically 4.8 GHz? I was expecting at least one core to boost to that frequency for at least a few seconds, but it just does not happen, at all. (Funny thing: it boosts to 4.8 GHz only in the loading screen when I first open BF2042, haha.)
Also, if this helps: when I run the Cinebench SC test, the load does not stay on a single core - it is shared between my two best cores, and only rarely, for a few seconds, is just one core utilized (max 4.7 GHz) before it goes back to bouncing between them until the test finishes. I monitored this behavior with Ryzen Master, and in some YouTube videos people had a consistent frequency on only one core for 90% of the Cinebench SC test.
I tried messing with motherboard power limits, scalar, lower/higher AutoOC offset, lower/higher curve - I've tested almost every possible combination - and I just can't bring the SC score up or make it boost in games to what it should.

Does anyone have any insight into this? I'm open to every suggestion.
Thanks! (Please excuse my English, it's not my first language.)
 
Try this step by step (read till end):
  • Disconnect from the internet.
  • Uninstall the GPU driver with DDU (clean, and do not restart).
  • Uninstall all the processor entries (there should be 12 on yours; when it asks to restart, click No) and the chipset in Device Manager, like this:


  • Restart the PC into the BIOS and update to the latest BIOS again. Then enter the BIOS again after the update, load default or optimized settings, find HPET (High Precision Event Timer) and disable it, then save and exit.

  • Boot into Windows and install the latest chipset driver, reboot, set your power plan, and connect to the internet.

  • Install the latest NVIDIA driver.

    *Do all of this offline until the reboot after installing the chipset driver. You may also reboot into the BIOS after all of this to set XMP (and the previous settings you made). Download the needed files (highlighted word) before doing step 1, and do the steps in order.

  • Run cmd as admin, then run chkdsk /x /f /r; after that, run sfc /scannow.

  • Then check Windows Update (including optional updates), install anything available (except the chipset driver under optional updates), enable hardware-accelerated GPU scheduling (available in the latest Windows update) in Graphics settings, and reboot. It should look like this:



  • Make sure each PCIe power connector on the GPU gets its own cable from the PSU (one cable per slot - use separate main cables, not the daisy-chained splits), like this:
 
...
Power limits (manual): PPT 130 W, TDC 80 A, EDC 125 A
Scalar: Auto
AutoOC max freq: +150 MHz
....
What I found when tuning the OC for my 5800X is that a high AutoOC boost frequency can harm Cinebench scores; I limit mine to +100 MHz. That, coupled with the PPT, TDC and especially EDC settings, can affect performance significantly and in counter-intuitive ways, at least compared to how my 3700X behaved.

Basically, you want to tune performance with EDC. I set TDC and PPT to just over the rated values - maybe 5 or so (watts or amps, as it were) - for my CPU (yours is a 65 W TDP processor, so PPT=88, TDC=60, EDC=90). Then tune EDC to get the performance balance you want.
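For reference, those starting points can be sketched as a tiny calculation. This is only a sketch: the 1.35× PPT-from-TDP ratio is AMD's usual convention, and the 60 A / 90 A TDC/EDC figures are the stock values for 65 W parts quoted in this post - your board/BIOS may differ.

```python
# Rough sketch of the suggested PBO starting limits for a 65 W TDP part
# (e.g. a 5600X), using the numbers quoted above. Treat this as a starting
# point for tuning, not a guarantee for any particular board.
def pbo_starting_limits(tdp_w: int) -> dict:
    return {
        "PPT_W": round(tdp_w * 1.35),  # package power limit, watts
        "TDC_A": 60,                   # sustained current limit (stock, 65 W parts)
        "EDC_A": 90,                   # peak current limit -- the knob you then tune
    }

print(pbo_starting_limits(65))  # {'PPT_W': 88, 'TDC_A': 60, 'EDC_A': 90}
```

The idea, as described above, is to pin PPT and TDC just over the rated values and then move only EDC while watching your benchmark of choice.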

A higher EDC should boost your best cores harder in lighter workloads (like gaming). But, like a long-distance runner setting a bad pace, the processor tires out early in long multi-threaded workloads. So I'm running an EDC just under the rated value and getting the best CB20 MT scores (more important for me). Going the opposite way helps ST scores and gaming, but don't go too far - just a little bit at a time, and test, because when you hit the limit performance drops fast.

When it's set up right, under a heavy all-core workload your processor sits right on the EDC limit you set but never hits the TDC or PPT limits. That way you can tune EDC to do what you want.

Also, don't set too high a boost frequency, as it too can heat the processor up sooner, the same as too high an EDC. Tuning EDC and the boost frequency is usually the last step, as it's highly iterative.

Also, reducing the curve by 5-10 points or so on the 'worst' cores might help ST at the expense of long MT performance.

On my 5800X, where I settled gives me highly reliable 4900 MHz boosts in games. I could get reliable 5000-5050 MHz boosts by raising EDC 10 A or so, increasing the boost override to +200 MHz, and adding a scalar of 5 to give it more volts. But temps go to heck in CB20 MT and performance suffers. It's a trade-off process.

The principle is: ST needs more voltage and a high(er) EDC current to allow boosting to higher frequencies, but that hurts long MT performance. Give it more voltage only by tweaking the curve, per core. Don't change Vcore (leave it on Auto), and use a very small scalar, if any.

OH YES... I forgot one thing. You want to test for stability with a light, bursty workload. That's because of how Curve Optimizer works: it lowers voltage more at the high-frequency end, when the CPU is lightly loaded, than it does at the low-frequency end. You'll know you've set a core's curve too low when something like a Microsoft Defender virus scan, or just clicking around in Windows frenetically opening and closing apps and dragging windows, causes a crash.
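A minimal sketch of what such a light, bursty load could look like, using only the Python standard library. Illustrative only - the function names are made up, and a real check would use the Defender scans or frantic window-dragging described above.

```python
# Hypothetical sketch of a light, bursty single-thread load: short spikes of
# work separated by idle gaps, which is where a Curve Optimizer undervolt
# bites hardest (high boost clocks at reduced voltage).
import time

def burst(work_iters=200_000):
    # A short spike of integer work, enough to let one core shoot to max boost.
    acc = 0
    for i in range(work_iters):
        acc += i * i
    return acc

def bursty_load(seconds=60, idle_s=0.05):
    # Alternate bursts and idle gaps; a curve set too low tends to crash
    # during this kind of load rather than under sustained all-core stress.
    bursts = 0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        burst()
        time.sleep(idle_s)  # the idle gap lets the core drop to low-power states
        bursts += 1
    return bursts

if __name__ == "__main__":
    print("completed", bursty_load(seconds=5), "bursts")
```

If the machine survives sustained stress tests but falls over during something like this, the thread's advice suggests raising (making less negative) the curve on the offending core.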
 
Last edited:
Reactions: Koekieezz

Hiper1995

Hello, thanks a lot for the reply!
So basically you're saying I'll get higher fps / better 1% and 0.1% lows in games with better ST rather than with consistent MT performance? (Talking about games like Warzone and BF2042, which seem to boost on all cores pretty well.) I tried meddling a bit with the EDC/PPT/TDC values: TDC is always at around 60-70% usage in Cinebench, PPT around 85-87%, and EDC around 93%.
When I raised the EDC limit by 10, for example, I noticed the Cinebench score dropped consistently. When I set motherboard limits (around 220 EDC) I got even lower scores for some reason.
Also, from my understanding, the boost override setting should only raise the limit cores can boost to, if power/temperatures permit - but in games it seems to do nothing for me. With +0 MHz and +200 MHz on the same curve, both Cinebench and in-game frequencies stayed the same, hugging 4650-4675 MHz like glue without boosting even for a second to the theoretical new max of 4850 MHz. Temps permitted it (65°C), power limits allowed it, and voltage allowed it too judging by the graphs - so why does it not boost?
 
Hello, thanks a lot for the reply!
So basically you're saying I'll get higher fps / better 1% and 0.1% lows in games with better ST rather than with consistent MT performance? (Talking about games like Warzone and BF2042, which seem to boost on all cores pretty well.)
....
I think it's largely true, but it can vary considerably from game to game. Mostly, I don't watch FPS but core clocks in-game. That's because the games I play are all GPU-limited at 1440p for me anyway: Ghost Recon, RDR2, Cyberpunk, etc.

One thing to remember about games: even the most heavily threaded ones (like Cyberpunk, for instance) really rest their performance on one thread, maybe two. It's THAT thread that limits the CPU's contribution to FPS. It's usually lightly loaded, though not as light as clicking around in Windows or a virus scan. When I test, I look for at least one or two cores that can boost freely under light, bursty loads - that gives the best gaming performance.

I think you're mostly right about core performance boost: it's obvious only with lightly threaded, bursty workloads. Heavy loads at any thread count overtax power, current, or thermals too much, so the CPU starts pulling back and the effect shrinks to nil. That's not to say it doesn't try; it just can't if its temperature is too high. And of course that's where cooling shines with Ryzen: keep it cool and it keeps boosting longer to max clocks, and at higher mid-load clocks.

Also, be sure to set up HWiNFO64 to watch clocks properly. You can use Afterburner in-game, but it doesn't catch the max-clock boosts well. Those boosts are very short in duration.
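The point about short boost spikes and polling can be illustrated with a toy simulation (hypothetical numbers: a 20 ms spike somewhere in a 1-second clock trace):

```python
# Toy illustration of why a coarse polling interval misses short boost spikes.
# Hypothetical trace: 1 second at 4650 MHz with one 20 ms spike to 4850 MHz.
def max_observed(trace_ms, period_ms):
    # Sample the per-millisecond trace every `period_ms` and report the peak seen.
    return max(trace_ms[t] for t in range(0, len(trace_ms), period_ms))

trace = [4650] * 1000
for t in range(510, 530):          # 20 ms spike starting at t = 510 ms
    trace[t] = 4850

print(max_observed(trace, 1))      # fast polling catches the spike: 4850
print(max_observed(trace, 100))    # samples at 0, 100, ..., 900 ms miss it: 4650
```

This is why a monitor with a fast polling period (or a max-value sensor) reports higher peak clocks than an overlay sampling a few times per second.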
 
Last edited:

DimkaTsv

Proper
Nov 7, 2021
I've spent the last 3 days overclocking the CPU + RAM to get as much gaming performance as possible (mostly because I love to tinker, even though I know the gains probably weren't worth the time spent), and after probably 200+ restarts and a lot of experiments I arrived at the following configuration
Sorry, toxic mode on:
Just 3 days for CPU AND RAM? RAM alone takes more than a week to find stable values.
Toxic mode off . . .

  1. Why do I get a rather low and consistent single-core score in Cinebench no matter what settings I change?
  2. Why do I see literally zero SC boost in games above the 4650-4700 MHz range, even though I raised the maximum boost limit to basically 4.8 GHz? I was expecting at least one core to boost to that frequency for a few seconds, but it just does not happen, at all. (Funny thing: it boosts to 4.8 GHz only in the loading screen when I first open BF2042, haha.)
  3. Also, if this helps: when I run the Cinebench SC test, the load does not stay on a single core - it is shared between my two best cores, and only rarely, for a few seconds, is just one core utilized (max 4.7 GHz) before bouncing between them again until the test finishes. I monitored this with Ryzen Master, and in some YouTube videos people had a consistent frequency on only one core for 90% of the Cinebench SC test.
  4. I tried messing with motherboard power limits, scalar, lower/higher AutoOC offset, lower/higher curve - almost every possible combination - and I just can't bring the SC score up or make it boost in games to what it should.
Does anyone have any insight into this? I'm open to every suggestion.
  1. Idk, that seems about right. Your CPU isn't top of the mountain in silicon quality either, and this isn't a manual overclock.
  2. Easy. You use AutoOC - basically overclocked PBO - and the CPU chooses by itself what frequency to limit itself to, based on load and thread count. Lower load/thread count → higher frequency.
  3. Also easy... Compared to Intel (well, maybe besides the thread director in Alder Lake), Ryzen's thread management at low thread counts is just... better realised, I guess. Usually Windows fills cores one by one from most to least effective, but CPPC lets the CPU report its load state and tell Windows it has two top-performance cores - 1 and 2 - and Windows divides a one-thread load between those two to reduce consumption and increase efficiency under such loads.
Wanna see standard single-core boosts? Go into the BIOS and disable CPPC and CPPC Preferred Cores. Easy.
  4. Why? Personally, my 5600X has a hard limit at 108 W PPT, 138 A EDC and 73 A TDC... but I just set 91 W (actually just 65×1.4, which makes exactly 40% more... I like round numbers) / 128 A / 72 A, because beyond that the performance gain is minuscule.

P.S. My settings:
PBO curve: negative 30-20-20-25-25-30
PBO limit: +200 MHz
PBO offset: -0.066 V

And here is my performance in CB23... All-core boost is 4605-4625 MHz for the duration of the test.
The test was on a hot boot, though - the PC had been on for 40 hours already. A cold boot can enhance results a bit; I'd most likely get up to 11950 in multi-core and up to 1600 in single-core... it's RNG, though.
Here is also my CPU-Z result: https://valid.x86.fr/ig9e21
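As a quick sanity check of the arithmetic in the settings above (assuming the 5600X's stock 4650 MHz maximum boost, which the clock readings in this thread are consistent with):

```python
# The "40% over TDP" PPT and the boost ceiling implied by a +200 MHz PBO
# limit, using the numbers from this post. Assumed stock values: 65 W TDP
# and 4650 MHz max boost for a 5600X.
tdp_w = 65
ppt_w = round(tdp_w * 1.4)      # "actually just 65*1.4" -> 91 W
ceiling_mhz = 4650 + 200        # stock fmax + PBO limit offset
print(ppt_w, ceiling_mhz)       # 91 4850
```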
 
Last edited:

DimkaTsv

When set up right on a heavy all-core workload you processor is sitting right on the EDC limit you set but never hitting the TDC or PPT limit. That way you can tune EDC to do what you want.
Not with a 5600X. It can hit the 90 A stock EDC, and even 120 A, while still hitting the stock 76 W PPT... and even 88 W.
But in the 5600X's case you rarely want more than 120 A EDC... or more than about 90 W PPT. Beyond that it's just plain inefficient.

There are two types of load, though: EDC-limited and PPT-limited.
One is a light multithreaded load - you can see an example in CPU-Z.
The other is a really heavy load - you can see that one in Prime95.
You can see both in y-cruncher, depending on the preparation/calculation phases.
 
....
There are 2 types of load though... EDC limited and PPT limited.
....
That's what I'm talking about.

If pushed too hard early on, the processor heats up early and stops boosting as high later in heavy processing, so CB20/23 scores go down and performance in productivity and content-creation work suffers.

But that doesn't really matter for gaming, so pushing harder (by opening up PPT and TDC) is probably OK, since the higher boosting (better for FPS if CPU-limited) is usually very brief and doesn't heat the CPU up much.

As I said, I can get 5000-5050 MHz boost clocks on my 5800X by opening up PPT and TDC, raising the core boost override to +200 MHz, and lifting the EDC. But CB20/CB23 suffers and it generates tons of unnecessary heat in the process... with temps bouncing off my 90°C platform limit. Other than being cool to see, 5050 MHz boost clocks aren't helpful, so I prefer to lower PPT, TDC and the boost clocks and tweak EDC for the best CB20 scores, which is very helpful.
 
Last edited:

DimkaTsv

CB20/CB23 suffers and it generates tons of unecessary heat in the process
But... it will never boost to max frequency in something like a CB load.
My limit is 4850 MHz (well, 5600X for ya), and CB loads only up to 4625 MHz.
Well... at least with a decent cooling solution you have no heat problems on a 5600X, so there's no problem raising the limits to oblivion. I have never come close to 90°C; the max I saw was 85-86°C, and that was before I did some REALLY intense voltage optimisation. Basically, my voltages right now are tuned down to within 1 point in PBO CO and to one step (0.006 V) of global Vcore offset.

Tbh, I could just remove all limits, as the CPU hard-caps itself for me anyway for some reason (108 W PPT, around 135 A EDC and 73 A TDC, as I mentioned before)... maybe because I'm not playing with PBO health management. It still won't hit 90°C.
Actually, idk why it hard-caps, but with my tuning, after 40+ hours of semi-idle use (browser, Black Desert in the background, a few benches, work for uni, a tracker, Discord, etc.) my average voltage is 1.102 V. Even by my standards that is ACTUALLY low. And this average comes from a 0.8-1.25 V range - compare that to 1.25+ V at idle without the offset, with just curves + PBO overclock.

Maybe after I buy a new GPU (well, who knows when - they're insanely priced where I live) I'll look at a 5800X... but why, when I have such a pricelessly good CCD in my 5600X?

Btw, FYI: when you set a PBO overclock value, your curves shift up, so you may even want to lower them a bit. That can help keep temps in check.
To do it, I started from a known-good set of curves with known load voltages (without the PBO OC). Then I ran some tests, checked the voltages before/after, set the offset, and tested again until the voltages hit values I knew were stable. Why do you need to test? Because the offset reduces the Vcore-domain voltage, and that is not equal to what the cores actually draw, so I had to align it until the values matched the lower OC with no offset.
Actually, getting to the specific offset value took less time than I expected... around an hour.
 
Last edited:
But... they will never boost to max frequency in something like CB load.
...
Exactly... but it still generates tons more heat for me. So I try not to push EDC up any more than needed to get the best CB20 MT scores. Which, surprisingly, is also around 120 A - the same as a 5900X... the same as a 5600X. At least the ones some YouTubers have tweaked with Curve Optimizer.

I can't fathom why that is, as I'm positive a single core can't draw anywhere close to 120 A even short-term. There's a lot more going on in the boost algorithm than just this - like the 'hard caps' you're running into, which aren't really easy to explain with the facts I have.
 
Last edited:
It's just that CB is always somehow limited to a 120 A load... I only see more in the Linpack preparation phase or in CPU-Z.
Prime95 can also generate similarly wonky results. I'm really not interested in tuning for Prime95... or for Linpack either, I'm sure. That's because they DO generate wonky results, but also because they're 'unreal': they don't represent real-world processing workloads. CB20/23 does.

Prime95 and Linpack are useful for de-rating a system... to work with limited cooling, in an ultra-compact case with low airflow, or in a confined cabinet, for instance.
 

DimkaTsv

Prime95 can also generate similarly wonky results.
Nah, that one 100% doesn't hit EDC... ever.
It will always hit the PPT limit, though. And CB is a questionable real-world load as well... it's not like you're rendering video all the time.
Some people actually use these apps, though. They're also good for a stability check: fast and simple.
 
...
Some people actually use these apps. Also they are good for stability check. Fast and simple
I do too, but I'm not hard-over for P95 stability. Just because a system can't survive even 15 minutes of Prime95 or Linpack doesn't mean it won't work all day long encoding videos... or gaming. I'm more interested in 60 minutes of CB23... or running a HandBrake encoding queue for an hour or so as a stability test. Or just Folding@home for a day or so.

But it's all pretty irrelevant: set up right with PBO and Curve Optimizer, you won't crash in CB, P95, Linpack or F@H (unless cooling is completely inadequate). And set up wrong, you're more likely to crash just clicking around in Windows, or under other light, bursty loads. That's the way Curve Optimizer works.
 

DimkaTsv

I'm more interested in 60 min's of CB23
Believe me... a 10-run test with a 4-thread load (or maybe 6 for a 5800X) is the fastest way to find CPU instability, because if you missed even by 0.012 V it will crash REALLY fast - actually within runs 1-3 if the CPU is unstable.
Idk why, but 4-thread Prime95 is just a monster at finding errors... specifically 4 threads, as it loads the main 4 cores.
At the very least, a 5-10 minute test is much faster than 60 minutes of CB...
Frankly, that's why I walked away from memory OC... when a single test takes 5-6 hours to check for that one error you've been trying to remove for a week, it's notorious (btw, I never fixed it in the end... I reverted the OC).
Any instability can corrupt the system or your data, so better safe than sorry.
Also... passing a single-core load and a full 100% load doesn't mean you're stable.
2- and 4-core loads boost with much higher precision and far fewer limits, so they crash much more easily overall.
 
Last edited:
