Question: PC randomly reboots while gaming, but not during stress test?

Hi.
I built my PC a year ago and have been experiencing this problem ever since. After a while in game, a sudden black screen appears, and the PC reboots into Windows afterwards. The only clues I've found are a critical error (41 - Kernel-Power) in the Event Viewer and a critical hardware error (193 - LiveKernelEvent) in the Reliability History.

Here are my specs:
MB: MSI PRO B650-s wifi
CPU: AMD Ryzen 5 7600X
GPU: NVIDIA GeForce RTX 4070
RAM: Corsair 32GB KIT DDR5 5600MT/s CL40 Vengeance RGB Grey EXPO
Cooler: Endorfy Fera 5
Storage: Apacer AS2280Q4X 2TB
PSU: GIGABYTE P750GM

Just a few additions:
The PC has been to 2 different repair shops; both times it was returned with no reboots happening.
I have already replaced the PSU. I measured a peak CPU temperature of 96°C, but before the reboot temps are usually around 85°C, so it doesn't seem like an overheating issue.

I would appreciate any tips or suggestions.

NOTE: this is my second thread on this topic, because the first one has died already. Hope that's okay.
 
I bought a 5060 Ti and an ADATA XPG CORE REACTOR II VE (A-tier PSU). Neither changed anything.
Then the issue lies with rarer causes: either the MoBo VRMs or a mains electricity grid issue.

To "fix" MoBo VRMs - only new MoBo can do that.
Now, your current MoBo has decent 12 main phase VRM (in 12+2+1 configuration), but VRM components (especially capacitors) do age in time. And since VRM will get hot as power is delivered to CPU, it could be due to overheating VRMs as well. Especially since your VRM chokes doesn't have any kind of heatsink covering them what-so-ever.
So, if you go with new MoBo, get the one that has heatsinks covering whole VRM and not just MOSFETs. And preferably the one which has more main VRM phases as well (e.g 14, 16, 20 etc). Since more VRM phases means better power dispersion within components and individual components won't get as hot, among other benefits.
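As a rough back-of-the-envelope sketch of that point (every number below is an illustrative assumption, not a measurement of any particular board or CPU):

```python
# Back-of-the-envelope look at per-phase current and conduction (I^2 * R)
# losses for different VRM phase counts. All figures are assumed for
# illustration only.

def per_phase_losses(cpu_power_w, vcore_v, phases, r_eff_ohm):
    """Return (current per phase, conduction loss per phase), assuming an
    ideal, even current split between phases."""
    total_current_a = cpu_power_w / vcore_v
    phase_current_a = total_current_a / phases
    loss_per_phase_w = phase_current_a ** 2 * r_eff_ohm
    return phase_current_a, loss_per_phase_w

# Assumed: ~120 W CPU load at 1.25 V Vcore, 5 mOhm effective resistance per phase.
for phases in (12, 14):
    amps, watts = per_phase_losses(120, 1.25, phases, 0.005)
    print(f"{phases} phases: ~{amps:.1f} A and ~{watts:.2f} W of heat per phase")
```

Real phases never share current perfectly evenly, but this shows why the same CPU load runs each phase a bit cooler on a board with more phases.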

Main electricity grid issues are remedied by a UPS.
Preferably line-interactive, true/pure sine wave. I can explain UPSes in depth if you want.
 
Overheating does make sense, because it takes much longer to reboot now than before, when my room temperature was 5-10°C higher. I have the same damaged MoBo lying around; it has a bent pin on the CPU socket. Do you think it is worth trying it, with the risk of damaging my CPU, or should I just get a new one with a VRM heatsink?
 
I have the same damaged MoBo lying around; it has a bent pin on the CPU socket.
When the CPU socket has a bent pin and it is confirmed that the MoBo is toast, how do you plan to use that MoBo? The build won't power on or POST.

Do you think it is worth trying it, with the risk of damaging my CPU, or should I just get a new one with a VRM heatsink?
Better to get a new MoBo that, at the very least, has proper VRM cooling on it.

Here's nice Q&A about VRMs: https://www.lenovo.com/us/en/glossary/vrm/

If you are unsure about new MoBo VRM cooling or how many phases it has, post the make and model here. I can help you out with this.

Also, do note that while it may initially seem that a MoBo has good VRM heatsinks, e.g. on the Intel side the AsRock B760 Pro RS/D4,
specs: https://www.asrock.com/mb/Intel/B760 Pro RSD4/index.asp

But on closer inspection, the VRM heatsinks cover the chokes only partly.
MoBo review (in French): https://www.cowcotland.com/articles...ck-b760m-pro-rs-d4-du-matx-pour-pas-cher.html
 

Try dropping the RAM speed to 5200.
 
The MSI MAG B650 TOMAHAWK WIFI is my favourite: 14+2+1 phases, and the heatsink looks like it covers the whole choke area.
 
The MSI MAG B650 TOMAHAWK WIFI is my favourite: 14+2+1 phases, and the heatsink looks like it covers the whole choke area.
Do note: no PCI-E 5.0 support whatsoever with this MoBo.

Other good mid-range B650 MoBos include:
* AsRock B650 Steel Legend WiFi (best mid-tier MoBo)
* AsRock B650E PG RIPTIDE WIFI
* Gigabyte B650 Aorus Elite AX V2
* MSI B650 Gaming Plus WiFi

Further reading: https://www.techspot.com/bestof/amd-b650-motherboards/
 
My GPU has PCI-E 4.0. So now it's between the MSI MAG B650 TOMAHAWK WIFI and the MSI B650 Gaming Plus WiFi. Will the 2 extra main phases make a difference with temps?
 
My GPU has PCI-E 4.0
Yes, but for future use: the RTX 50-series and newer use PCI-E 5.0, though they are backwards compatible with PCI-E 4.0.
Though, any PCI-E 5.0 M.2 drive is wasteful to buy (e.g. the Samsung 9100 Pro), so best to stick with PCI-E 4.0 M.2 drives (like the 990 Pro).

Will the 2 extra main phases make a difference with temps?
Yes.

How much? Difficult to tell.
But the more VRM phases there are, the more the power is spread out, whereby VRM components won't get as hot.

Overall, with VRMs, the more the better.
 
Impossible to tell without a crystal ball. VRMs are already notoriously overbuilt for longevity for the layman; getting anything beyond a middle-of-the-road board with extra VRMs is only going to hurt the bank account, in my opinion.
 
Impossible to tell without a crystal ball.
Well, if you build two systems with identical components and the same cooling, where one uses a 12 main phase VRM MoBo and the other a 14 main phase VRM MoBo, then you can run tests to see how hot the VRMs get on each build and see the temp difference. But how much it would be is, as I said, difficult to tell.

getting anything beyond a middle-of-the-road board with extra VRMs is only going to hurt the bank account, in my opinion.
Same goes for any Core i7/i9 or Ryzen 7/9 CPU. Same with any GPU that costs more than ~300 bucks, or DDR5 RAM at 6400+ MT/s. Those high-end components have terrible value. Yet, people still buy them. And there's plenty of market for them.

And some go even further, buying extremely expensive components regardless of the price. E.g. there must be a reason why MSI has made Godlike MoBos for many years now, where the MoBo alone costs $1000-$1500.
Or those nutcases that buy a $300+ keyboard, while they could've been just fine with an $11 keyboard (like yours truly :pt1cable:).
 
Well, if you build two systems with identical components and the same cooling, where one uses a 12 main phase VRM MoBo and the other a 14 main phase VRM MoBo, then you can run tests to see how hot the VRMs get on each build and see the temp difference. But how much it would be is, as I said, difficult to tell.
It's not about the heat generated; it's about the perceived benefit that less generated heat would give, which is said to be VRM longevity. You would have to get at least tens of each of two boards that are identical except for some arbitrary VRM phase count difference to have a valid test. The test would probably last two decades before the boards start failing, and then you would have to do a lot of analysis on the boards to see if there is any statistical relevance to the longevity of the boards and whether the VRM can be confirmed as a primary contributor to said longevity. (See the sketch below for the kind of statistics that would involve.)
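As a sketch of the kind of statistics such a test would need (the board counts and failure numbers below are invented purely for illustration; this assumes scipy is available):

```python
# Invented numbers, purely to illustrate the statistics side of such a
# longevity test. Requires scipy.
from scipy.stats import fisher_exact

boards_per_group = 30
failures_12_phase = 4   # hypothetical failures after the test period
failures_14_phase = 2   # hypothetical failures after the test period

table = [
    [failures_12_phase, boards_per_group - failures_12_phase],
    [failures_14_phase, boards_per_group - failures_14_phase],
]
odds_ratio, p_value = fisher_exact(table)
print(f"two-sided p-value: {p_value:.2f}")
# With samples this small, even a 2x difference in failures is nowhere near
# statistical significance, which is the point being made above.
```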
 
And some go even further, buying extremely expensive components regardless of the price. E.g. there must be a reason why MSI has made Godlike MoBos for many years now, where the MoBo alone costs $1000-$1500.
Or those nutcases that buy a $300+ keyboard, while they could've been just fine with an $11 keyboard (like yours truly :pt1cable:).
As somebody with a 5090 and two 300 dollar keyboards, I take offense to that. 😆
 
You would have to get at least tens of each of two boards that are identical except for some arbitrary VRM phase count difference to have a valid test. The test would probably last two decades before the boards start failing, and then you would have to do a lot of analysis on the boards to see if there is any statistical relevance to the longevity of the boards and whether the VRM can be confirmed as a primary contributor to said longevity.
If VRM phase count didn't matter that much, if at all, all MoBos would have at most 4 main VRM phases, since adding more phases means more components and added cost. Yet, there are MoBos out there with 16, 20, 24 and even 26 main VRM phases. And the trend usually is: the more high-end the MoBo, the more VRM phases it has.

So, VRM phase count still matters. And a lot.

Also, testing VRMs doesn't take decades, since when a system is unstable due to too few VRM phases or VRM overheating, you'd see it right off the bat.
 
I went with the MSI B650 Gaming Plus WiFi and got the same results within the same time. I also tried a UPS as you suggested, and got the same results, but it was an old one with 650VA, so I'm not sure if that is valid for the test.
 
I also tried a UPS as you suggested, and got the same results, but it was an old one with 650VA, so I'm not sure if that is valid for the test.
What is the make and model of the UPS? As with PSUs, there are varying qualities of UPSes out there as well.
Though, 650VA is little, maybe a 350W UPS, which is too little to back up your PC under load. So the UPS may not even have engaged. (Rough sizing sketch below.)
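As a rough sizing sketch (the 0.6 power factor and the load estimate are assumptions for illustration; the real W rating is printed on the UPS label):

```python
# Rough UPS sizing check. The assumed power factor and the estimated system
# draw are illustrative guesses, not measured values.
ups_va = 650
assumed_power_factor = 0.6                  # typical ballpark for cheap consumer UPSes
ups_watts = ups_va * assumed_power_factor   # roughly 390 W usable

estimated_gaming_load_w = 450               # rough guess for a 7600X + RTX 4070 under load
verdict = "overloaded" if estimated_gaming_load_w > ups_watts else "within capacity"
print(f"UPS: ~{ups_watts:.0f} W usable, estimated load: {estimated_gaming_load_w} W -> {verdict}")
```

So even if the unit still works, a 650VA model that old is likely too small to hold the PC up under a gaming load.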

I went with the MSI B650 Gaming Plus WiFi and got the same results within the same time.
It is getting down to slim pickings as to what the issue could be. 🤔

Recap:
New, good quality PSU - no fix.
New GPU - no fix.
New MoBo - no fix.
UPS - no fix. (Assuming you have a proper UPS; without the make and model, I can't say for sure.)

That leaves only RAM and CPU.

Btw, did you sort out the thermals issue?


In terms of RAM, download and run memtest86,
link: https://www.memtest86.com/

Guide to install and use it: https://www.memtest86.com/tech_creating-window.html

1 full pass (all 15 tests) is the bare minimum. 2 full passes are better, while 4 full passes are considered acceptable.

Since it takes a while, best to let it run overnight.
1x 8GB ~1h per 1 full pass. ~4h for 4 full passes.
2x 8GB ~2.5h per 1 full pass. ~10h for 4 full passes.
2x 16GB ~5h per 1 full pass. ~20h for 4 full passes.

If there are no errors - RAM is sound.
If there are errors - replace the RAM.

For absolute testing of RAM, 32 full passes are needed (due to test #7).
Further reading: https://www.memtest86.com/tech_individual-test-descr.html
But no-one in their right mind is going to do 32 full passes, since it takes forever and then some, especially when you have a lot of RAM in the system. The consensus in tech support is that 4 full passes are enough. (If you want a time estimate, see the sketch below.)
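If you want to budget the time beforehand, here's a tiny sketch that just extrapolates linearly from the rough per-pass times listed above (real runtimes vary with RAM speed and CPU, so treat it as a planning figure only):

```python
# Rough MemTest86 runtime estimate, extrapolated linearly from the per-pass
# times quoted above (~2.5 h per pass for 16 GB total).
HOURS_PER_PASS_PER_GB = 2.5 / 16

def memtest_hours(total_gb, passes=4):
    return total_gb * HOURS_PER_PASS_PER_GB * passes

print(f"32 GB, 4 passes: ~{memtest_hours(32):.0f} h")            # ~20 h, matches the list above
print(f"32 GB, 1 pass:  ~{memtest_hours(32, passes=1):.0f} h")   # ~5 h
```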


For other possible fixes: format the OS drive and do a clean Windows installation. This will get rid of software issues.


It could be that it comes down to a faulty CPU.
Though, the symptoms point towards a power delivery issue, especially the ID 41. But that chain is now covered and those components should work without issues (electricity grid/UPS, PSU, MoBo, GPU), leaving only two: RAM or CPU.
Now, a faulty CPU can cause ID 41 as well, but since CPUs are robust, they are the least suspect item in usual troubleshooting.

So, do test the RAM and see if it has any errors or not. 4 full passes will do.
If no dice, try wiping the OS. It won't cost you anything, but it may fix the system, given that some software corruption is causing all this.
If still no dice, I'd look towards a new CPU.
 
My point is that past a certain number of phases of 60A+ VRMs, there is little to no benefit, and if you say there is, I would ask for the proof. Of course having a minimum level of performance VRMs is important, but how much benefit is there past that, and at what cost are those benefits worth? That's why a midrange board is usually more than enough VRMs for any socketable CPU on the motherboard.
 
My point is that past a certain number of phases of 60A+ VRMs, there is little to no benefit, and if you say there is, I would ask for the proof.
The proof is at the 5 minute mark;
or watch the whole video.

View: https://www.youtube.com/watch?v=DOHkJtUkj8Y


Probably not the proof you were looking for, but more VRM phases = more stable power for CPU. And for high-end chips, especially OC, you want as stable power as you can get.

Of course having a minimum level of performance VRMs is important, but how much benefit is there past that
Each additional phase helps stabilize the voltage delivered to the CPU.

There could be a point where X number of main phases hits diminishing returns and isn't cost effective, e.g. like it is with RAM transfer speeds (up to 3200 MT/s for DDR4 and up to 6400 MT/s for DDR5). But testing out VRM phases isn't as easy as testing out RAM transfer speeds, because power delivery changes greatly between individual CPU models.

With testing, one can find the sweet spot of VRM phases for each individual CPU, but it would take a lot of hardware and testing. And to my knowledge, no such in-depth testing has been done thus far. But maybe someone who has the means for it will do it in the future.
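For a very simplified picture of why interleaving phases smooths the output (this ignores inductor/capacitor values and duty cycle; the 500 kHz per-phase switching frequency is an assumed figure):

```python
# Interleaved VRM phases switch out of step, so the output sees an effective
# ripple frequency of N * f_sw, which is easier to filter. Very simplified:
# real ripple also depends on inductors, capacitors and duty cycle.
per_phase_switching_khz = 500   # assumed per-phase switching frequency

for phases in (8, 12, 14, 16):
    effective_khz = phases * per_phase_switching_khz
    print(f"{phases:>2} phases -> effective output ripple frequency ~{effective_khz} kHz")
```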

and at what cost are those benefits worth?
Value (price) is completely subjective.

You see little to no value in MoBos with more than an 8 or 12 main phase VRM. Yet, you see great value in the most expensive GPU (RTX 5090). Easily $2000. More like $2500.
I, on the other hand, see value in a high VRM phase count, since it has to do with power delivery. Yet, I see zero value in power-hogging, $1000+ GPUs.
 
I have a 240Hz 4K OLED gaming monitor and I wanted to experience it to its maximum potential for the time. I was originally not going to get a graphics card upgrade at all, but I had the opportunity for an MSRP Zotac card and took it. It was 2369.99 USD. I definitely won't need to upgrade again for another 5-6 years. I also run my 5090, as I have said before, at 80% power and have since undervolted it. It's much more efficient now than it was at stock, for what that is worth.

Additional VRM phases past a point do not affect stock CPU performance, and the premise that the motherboard or the CPU will last longer with additional phases past that point is dubious at best. If anything, more phases may mean more faults because of manufacturing inconsistencies in the phases themselves or the additional manufacturing for the PCB design of the motherboard.

Do not get me wrong, more VRM phases are generally "better"; however, there is little in the way of data for measurable long term benefits. The benefits are questionable and the costs are real. A 5090 is measurably better than other cards, and that's the difference in our stances. More stable power for the CPU does not increase performance in the context of the boards we are referring to and at stock settings. If you want to OC the CPU, by all means get that Asrock 2x12+2+1 110A SPS VRM setup on the Taichi Lite and go bananas, but to presume it will do any better than a motherboard at half the cost for long term durability is speculation at best.
 
More stable power for the CPU does not increase performance in the context of the boards we are referring to and at stock settings.
It's not all about increasing performance or getting the best performance. With power delivery, it is more about system stability if anything.

Same goes with PSUs as well. Sure, a mid-tier system may be fine with a mediocre quality PSU, but lower quality PSUs have issues with high ripple and/or keeping voltages within spec, and that causes system instability as well.

Or, gaming-wise, high FPS matters zero if you have stutters due to system instability.
Would you rather have 240+ FPS and stutters every 2-3 seconds?
Or ~120 FPS but no stutters?

OS wise, BSoD every day vs BSoD once a year, if even that?
System stability is important. Often more important than actual performance.

If anything, more phases may mean more faults because of manufacturing inconsistencies in the phases themselves or the additional manufacturing for the PCB design of the motherboard.
This is a possibility. But this is so with all components that are more complex than their less complex counterparts.
E.g. the RTX 5090 has more components on its PCB than, say, the RTX 5060 Ti, thus more points of failure, whereby it may not last as long as its cheaper alternative.

But here's a thought: is a mini-ITX MoBo the most durable MoBo, while E-ATX MoBos (especially dual-CPU MoBos) have the worst durability? Solely by their complexity and component count? 🤔

however, there is little in the way of data for measurable long term benefits. The benefits are questionable and the costs are real.
Yeah, there haven't been any in-depth longevity tests made on VRM phase counts. But the same is true of CPU/GPU undervolting. Many people do it to get lower temps, but there aren't any long-term studies on whether it affects the durability of the CPU/GPU or whether there are other downsides to starving the chip of power.
 
Are you claiming that motherboard VRMs and PSUs without tight voltage control contribute to BSODs? I have never heard this in my career. VRMs and PSUs either work or they do not. VRMs that run very hot can cause CPU performance losses, but to my understanding usually just cause stutters; they do not contribute to any provable BSOD in my experience. PSUs are also extremely binary. They either deliver the power requested or they do not. Better ripple control PSUs, to my understanding, reduce the odds of damage to hardware using them. I have never heard of a PSU with bad ripple causing system BSODs that, when swapped for a good PSU, had the BSODs solved. I have fixed thousands of PCs, but if true, just means I have learned something new. Please provide examples of these particular instances if you can.
 
VRMs and PSUs either work or they do not. VRMs that run very hot can cause CPU performance losses, but to my understanding usually just cause stutters; they do not contribute to any provable BSOD in my experience.
Please provide examples of these particular instances if you can.
Sure;

Can a failing power supply unit (PSU) lead to BSoDs, and how can I test it?

Yes, a failing PSU can cause BSoDs or system instability.
Source: https://www.lenovo.com/gb/en/glossary/bsod/

Given that Lenovo is both an OEM and an SI, I don't think they make empty claims on their website.

Also;
InvalidError said:
While it is possible for a dying PSU to cause BSODs from excessive supply noise, it is more common for failing PSUs to cause the system to randomly shutdown or reboot when hardware detects an invalid state somewhere.
Source: https://forums.tomshardware.com/threads/can-a-psu-cause-blue-screen-of-death.3325543/#post-20300936

Said by one of our knowledgeable mods.

And it has even been outright asked,
topic: https://forums.tomshardware.com/threads/can-a-psu-cause-bsod.1368371/

Another topic too, where the PSU used was one of the worst ones jonnyguru had seen in ages,
topic: https://forums.tomshardware.com/threads/psu-bsod.3750319/

A topic I was part of, where the OP had BSoD issues. But after going with a new, good quality PSU, the BSoD issues went "poof",
topic: https://forums.tomshardware.com/threads/constant-blue-screens-no-dump-logs.3817621/

And lastly, another topic where BSoDs were fixed by replacing the PSU with a good quality unit,
topic: https://forums.tomshardware.com/threads/need-help-to-diagnose-the-cause-of-bsod.3455140/

So, yes, a PSU can and has caused BSoDs.
The same is true with poor VRMs. Besides overheating, poor VRMs struggle with power stability, and that instability affects the CPU directly, resulting in a BSoD. (See what Darkbreeze said in the last topic I linked, in the "solution" reply.)

VRMs and PSUs either work or they do not.
When it comes to PSUs, there is far more going on than just working or not working.

Voltage regulation is a BIG deal. No PSU, no matter how great, can constantly output exactly +12.00V, +5.00V and +3.30V. Instead, the voltages fluctuate based on the components' demand, and it's up to the PSU to keep up with that demand while keeping the voltage levels as close to 12/5/3.3V as possible. The tighter the voltage regulation, the better the PSU (usually).
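For reference, the ATX spec allows roughly ±5% on the main rails. A minimal sketch that checks hypothetical sensor readings against that window (software sensor values are themselves approximate, so a multimeter or scope is the real test):

```python
# Check made-up sensor readings against the usual ATX ~+/-5% tolerance window.
NOMINALS = {"+12V": 12.0, "+5V": 5.0, "+3.3V": 3.3}
TOLERANCE = 0.05  # 5 %

readings = {"+12V": 11.86, "+5V": 5.07, "+3.3V": 3.28}  # hypothetical HWiNFO values

for rail, nominal in NOMINALS.items():
    low, high = nominal * (1 - TOLERANCE), nominal * (1 + TOLERANCE)
    status = "OK" if low <= readings[rail] <= high else "OUT OF SPEC"
    print(f"{rail}: {readings[rail]:.2f} V (allowed {low:.2f}-{high:.2f} V) -> {status}")
```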

Better ripple control PSUs, to my understanding, reduce the odds of damage to hardware using them.
High ripple increases the temperature of the caps, and if the temperature increase is e.g. 10°C, that can cut the caps' lifespan by 50%. High ripple can also cause system instability (just like loose voltage regulation) and in turn cause BSoDs, among other things.
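That 50% figure is the usual "10°C rule of thumb" for electrolytic caps. A minimal sketch, assuming a generic 5,000 h at 105°C rating (illustrative, not a specific capacitor):

```python
# "10 C rule of thumb" for electrolytic capacitors: expected life roughly
# doubles for every 10 C below the rated temperature, and halves for every
# 10 C of extra heat. The rating below is illustrative.
def cap_life_hours(rated_hours, rated_temp_c, actual_temp_c):
    return rated_hours * 2 ** ((rated_temp_c - actual_temp_c) / 10)

rated_hours, rated_temp_c = 5000, 105
for temp_c in (65, 75, 85):
    print(f"{temp_c} C: ~{cap_life_hours(rated_hours, rated_temp_c, temp_c):,.0f} h")
# 75 C vs 65 C gives half the hours, i.e. a 10 C rise cuts the estimate by ~50%.
```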

I have fixed thousands of PCs, but if true, just means I have learned something new.
In the PC repair/troubleshooting scene, one never knows all the culprits. Instead, there is always room to learn more. :)

E.g. I was under the impression that when the CPU is faulty, the PC won't POST; and that when the PC does POST and boots into the OS, the CPU, MoBo and RAM are fine.
But lately (a year or two ago) I learned that even a faulty CPU can boot into the OS, but will throw BSoDs. (I tried to look up the topic where I learned that, but couldn't find it, since I have so many replies on the TH forums.)
 
I will start by saying I do not have the time to go through the links right now, but I will post my opinion on the text provided in your post first, before going through the examples. Thanks for putting in the work for the sources, I appreciate it.

An unexpected shutdown from a PSU is not a BSOD. If the PC shuts down due to inadequate or insufficiently stable power, no BSOD is seen and there is no time for a dump. Again, PSUs are either working sufficiently and the PC is in an on state, or they cross the line of working insufficiently and the PC shuts down nearly instantly with no time for a BSOD. I guess I should have clarified what I meant by a BSOD. An unexpected shutdown is not a BSOD; a BSOD is the visual debug screen in blue that writes a dump file to the PC before restarting or powering down. As far as the VRMs are concerned, who is to say that the "poor VRMs are struggling" versus them being faulty? I would argue that the motherboard was faulty rather than the VRMs being so poor at voltage regulation, compared to the demands of the socketed CPU, that the system throws a BSOD. If the voltage regulation were so poor that a BSOD would trigger, safety mechanisms would cause a shutdown long before the PC takes its time creating a dump file and showing a debug message visually to the user.
 
What is the make and model of the UPS?
It's a Trust; no idea what model it is. It may be this one (it looks exactly like it, mine just has 650VA):
I am expecting poor quality; it was bought about 15 years ago.

That leaves only RAM and CPU.
I did run one full test in the past. But correct me if I'm wrong, shouldn't trying one RAM stick at a time rule it out? I did that, and it crashed with both sticks individually.
Btw, did you sort out the thermals issue?
Repasting the cooler again and removing the foil from the M.2 NVMe heatsink, which I hadn't removed before, did decrease temps by a little bit, but did not help with the crashes. However, I did some more testing, and it is almost certainly something to do with temps. Decreasing my room temperature and adding more coolers did slow down and even stop the crashes. I am actively monitoring temps in HWiNFO, and they are fine. Now I've ruled out VRM overheating. Isn't there something else that could be overheating?

I reinstalled my OS twice throughout the year.
 