Question: DRIVER_POWER_STATE_FAILURE (BSOD) ntoskrnl.exe?

Oct 12, 2023
Hi!

I've been facing issues with my PC recently, and I can't seem to pinpoint the cause. For the past few days it has been shutting off regularly whilst gaming, between roughly 23:30 and 01:30.
The issue has followed me across two Windows installations now, as I reinstalled Windows to see if it would disappear. I've been having frequent stuttering problems in games, and I think this bluescreen is some sort of indication of why my system is having these instability issues... and maybe the minidump can lead somebody to a cause? Because I can't get my head around it.

My setup:
OS: Windows 11 Pro
GPU: MSI RTX 4080 Suprim X
CPU: Ryzen 7800X3D
RAM: G.Skill Trident Z5 Neo RGB F5-6000J3040G32GX2-TZ5NR
MOBO: MSI x670-P Pro
PSU: Corsair RMX Shift 1200W

I've tried the following:
• Clean install of Windows 10 & 11
• Updated all drivers, and tried rolling back some drivers to pinpoint the problem
• Run the latest BIOS revision by MSI
• Disabled EXPO to see if the instability miraculously disappears, but it hasn't
• And this: https://www.tomshardware.com/how-to/fix-driver-power-state-failure-error


I want to add that my PC's temperatures are fine: 68°C on the GPU and 65-74°C on the CPU whilst gaming, though under heavier loads the CPU can run up to 84°C under a Be Quiet! AIO.

If anybody here with the skill to debug my minidump file can potentially see which piece of hardware in my case is causing this, I would greatly appreciate it. The instability has been giving me headaches, and not knowing the cause makes it even more frustrating, as this system isn't old at all; the GPU is only a recent addition! :/

Here's a link to my minidump, as this is the first time in a while that my PC created one. Those blackscreen freezes that suspiciously look a lot like a bluescreen crash have been happening more often with no dumps created; if any more get created I can upload them: https://drive.google.com/file/d/1wJcD6_goY9H6boGHHUxLXwZyzOPDOobI/view?usp=sharing
 
Last edited:
Welcome to the forums, newcomer!

Clean install of Windows 10 & 11
Where did you source the installers for the OSes?

Run the latest BIOS revision by MSI
BIOS version for your motherboard?

MOBO: MSI x670-P Pro
Is this the motherboard in your build?

Updated all drivers, and tried rolling back some drivers to pin-point the problem
Can you walk us through how you did so? Did you use DDU, and then later install the drivers from an elevated prompt?
 

Hey, thanks.

• The installers are sourced with the Microsoft Media Creation Tool from their official website; I wouldn't download my OSes anywhere else.


• BIOS version: 7D67v1A

• Correct, that is my motherboard.

To answer your last question: indeed I am using DDU. I uninstall the graphics drivers within Safe Mode and then reinstall them within the regular Windows environment, ensuring that my PC isn't connected to the network so it doesn't auto-download drivers from the internet. This issue persisted through both the newest and the oldest drivers available for my card.

Uninstalling any other drivers I've done through Device Manager and/or through their uninstaller tools (chipset, for example); currently they are all on their latest updates as well as firmware. I've also tried a few older BIOS revisions, which didn't help, so I updated back to the latest.
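(For reference, a quick way to cross-check which third-party driver packages are still actually installed after all that is pnputil, which ships with Windows 10 and 11:
Code:
:: list every third-party driver package in the driver store
pnputil /enum-drivers
)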

Since this error is so sporadic, and also time-based around the same timestamps mentioned in my main post, I am hoping somebody can debug the dumps I left behind in the Google Drive link in my main post. There are 4 dumps there with their corresponding BSODs, two of which are DRIVER_POWER_STATE_FAILURE.


This issue persisted on my Windows 10 installation and persists on my current Windows 11 installation; I installed Windows 11 because people recommended trying it to see if it would help my issues.
 
Last edited:
Hi!

I've got another dump file available, as the same type of crash just happened again, though this time it gives no error at all; it just created a dump file. Again the same process name: ntoskrnl.exe.


I really hope somebody can help me out with these dumps and pinpoint which device in my PC is failing, because this can't be a software problem, and it's been slowly getting worse.
 
First off (and before you have another BSOD), can you please copy the file C:\Windows\Memory.dmp to a temporary location (a temp folder perhaps) in case we need it later. That's the full kernel dump for the most recent BSOD, and it will be overwritten if another BSOD occurs. The bugcheck for this BSOD is a DPC_WATCHDOG_VIOLATION bugcheck with an argument 1 value of 0x1, and those can only be fully debugged with a kernel dump. That's why I'm asking for it to be saved - although I may not need it.
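If it helps, a minimal sketch of that copy from an elevated Command Prompt (the C:\DumpBackup location is just an example):
Code:
:: back up the current full kernel dump before the next BSOD overwrites it
mkdir C:\DumpBackup
copy C:\Windows\MEMORY.DMP C:\DumpBackup\MEMORY.DMP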

The other four dumps you uploaded earlier point in two entirely different directions. One is easy to analyse and the other less so. First the easy one...

Two of your dumps are DRIVER_POWER_STATE_FAILURE bugchecks. These occur because a device took too long completing a power transition (from a low-power idle state to a high-power running state, or vice versa). In the dump we get both the IRP (I/O Request Packet) of the failing power transition and the device object address of the failing device...
Code:
DRIVER_POWER_STATE_FAILURE (9f)
A driver has failed to complete a power IRP within a specific time.
Arguments:
Arg1: 0000000000000003, A device object has been blocking an Irp for too long a time
Arg2: ffffcf01deb10050, Physical Device Object of the stack
Arg3: ffffa78848d97178, nt!TRIAGE_9F_POWER on Win7 and higher, otherwise the Functional Device Object of the stack
Arg4: ffffcf01e52748a0, The blocked IRP
We can use a sort of shorthand to display both the IRP and the device involved by using the !devstack command on the device object address in argument 2...
Code:
15: kd> !devstack ffffcf01deb10050
  !DevObj           !DrvObj            !DevExt           ObjectName
  ffffcf01e52268d0  \Driver\partmgr    ffffcf01e5226a20  InfoMask field not found for _OBJECT_HEADER at ffffcf01e52268a0

  ffffcf01e53a1080  \Driver\disk       ffffcf01e53a11d0  InfoMask field not found for _OBJECT_HEADER at ffffcf01e53a1050

  ffffcf01e520fda0  \Driver\EhStorClass  ffffcf01e5217ba0  InfoMask field not found for _OBJECT_HEADER at ffffcf01e520fd70

> ffffcf01deb10050  \Driver\storahci   ffffcf01deb101a0  Cannot read info offset from nt!ObpInfoMaskToOffset

!DevNode ffffcf01de9dc4e0 :
  DeviceInst is "SCSI\Disk&Ven_&Prod_CT1000MX500SSD1\7&2ff5684&0&010000"
  ServiceName is "disk"
At the top is a summary of the IRP. You can see that the drivers involved in the IRP are related to storage drives, so we know that the device failing the power transition is a storage drive. At the bottom is the key data from the device node (obtained via the device object). You can see that the device is a disk (Windows calls all HDD and SSD devices a disk) and that its identity is CT1000MX500SSD1. That's a Crucial MX500 1TB SATA SSD.

These two dumps then suggest that there has been a problem with that SSD handling a power transition. That may indicate a problem with the drive, but it may also simply be that the drive has no power transition capability. Try setting the "Turn off hard disk after" value to 0 in the power options. That will stop Windows from trying to put drives in a low-power state.
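The same setting can also be applied from an elevated prompt with powercfg, if you prefer; a sketch, where 0 means never:
Code:
:: stop Windows powering down disks, on AC and on battery
powercfg /change disk-timeout-ac 0
powercfg /change disk-timeout-dc 0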


The other two dumps - and probably the latest one too - point at a graphics problem. Two of these three dumps are VIDEO_MEMORY_MANAGEMENT_INTERNAL bugchecks with an argument 1 value of 0x17. That indicates that a graphics card command unexpectedly failed and for unknown reasons. Both dumps clearly show a graphics operation in progress and both show references to nvlddmkm.sys, your Nvidia graphics driver.

The problem here is going to be either a driver failure (in nvlddmkm.sys) or a graphics card hardware error. The version of nvlddmkm.sys that you're running dates from August 2023 and may not be current...
Code:
0: kd> lmDvmnvlddmkm
Browse full module list
start             end                 module name
fffff800`690b0000 fffff800`6ca1f000   nvlddmkm   (deferred)            
    Image path: nvlddmkm.sys
    Image name: nvlddmkm.sys
    Browse all global symbols  functions  data
    Timestamp:        Sat Aug  5 01:45:28 2023 (64CD7F88)
    CheckSum:         0386F181
    ImageSize:        0396F000
    Translations:     0000.04b0 0000.04e4 0409.04b0 0409.04e4
    Information from resource tables:
Look first for an updated driver and download it. Also download DDU and use that tool to uninstall all traces of earlier graphics drivers (it will reboot the system) and then install the latest driver. See whether that helps.
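As a quick sanity check after the reinstall, driverquery should show nvlddmkm with a link date newer than the August 2023 one above (assuming the default output format):
Code:
:: confirm the Nvidia kernel driver and its link date after reinstalling
driverquery | findstr /i nvlddmkm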

The most recent dump is a DPC_WATCHDOG_VIOLATION bugcheck with an argument 1 value of 0x1. This means that all the DPCs that were running collectively ran for too long (a DPC is a Deferred Procedure Call; they form the back-end of device interrupt processing and their code is part of the device drivers). As I've mentioned, we need a kernel dump to debug those fully, because the minidump only contains status for the failing processor.

That said, and since we already suspect nvlddmkm.sys or the graphics card: the failing processor in this minidump is also running a graphics operation, and it calls nvlddmkm.sys. Here's the call stack from that dump...
Code:
7: kd> knL
 # Child-SP          RetAddr               Call Site
00 ffff9681`73df9c88 fffff801`0ec28859     nt!KeBugCheckEx
01 ffff9681`73df9c90 fffff801`0ec280c1     nt!KeAccumulateTicks+0x239
02 ffff9681`73df9cf0 fffff801`0ec26151     nt!KiUpdateRunTime+0xd1
03 ffff9681`73df9ea0 fffff801`0ec25b7a     nt!KeClockInterruptNotify+0xc1
04 ffff9681`73df9f40 fffff801`0ecae6dc     nt!HalpTimerClockInterrupt+0x10a
05 ffff9681`73df9f70 fffff801`0ee1489a     nt!KiCallInterruptServiceRoutine+0x9c
06 ffff9681`73df9fb0 fffff801`0ee15107     nt!KiInterruptSubDispatchNoLockNoEtw+0xfa
07 fffffe08`44b0dd70 fffff801`5086c172     nt!KiInterruptDispatchNoLockNoEtw+0x37
08 fffffe08`44b0df08 fffff801`508825a0     nvlddmkm+0x9bc172
09 fffffe08`44b0df10 00000000`000007d2     nvlddmkm+0x9d25a0
0a fffffe08`44b0df18 fffffe08`00000000     0x7d2
0b fffffe08`44b0df20 00000000`00000653     0xfffffe08`00000000
0c fffffe08`44b0df28 fffff801`0ed21e54     0x653
0d fffffe08`44b0df30 00000000`00000050     nt!KiExitDispatcher+0xb4
0e fffffe08`44b0e2e0 ffffab82`ac53bb20     0x50
0f fffffe08`44b0e2e8 00000000`00000001     0xffffab82`ac53bb20
10 fffffe08`44b0e2f0 ffffab82`ac53bb20     0x1
11 fffffe08`44b0e2f8 00001ac0`00000001     0xffffab82`ac53bb20
12 fffffe08`44b0e300 ffffab82`add23202     0x00001ac0`00000001
13 fffffe08`44b0e308 00000001`00006c85     0xffffab82`add23202
14 fffffe08`44b0e310 00000000`0000000a     0x00000001`00006c85
15 fffffe08`44b0e318 fffff801`5086b9c3     0xa
16 fffffe08`44b0e320 00000002`caa10000     nvlddmkm+0x9bb9c3
17 fffffe08`44b0e328 00000000`00000000     0x00000002`caa10000
You can see nvlddmkm.sys being called repeatedly (which is normal) but you can also see a lot of garbage call addresses (the ones without symbols). I'm inclined to suspect that these garbage addresses are coming from nvlddmkm.sys (mostly because we already suspect it).

If I need to look deeper into this dump I'll ask you to upload the saved kernel dump, but we can hold off for now because I suspect that clean installing the latest Nvidia driver with DDU may solve this problem too.
 
  • Like
Reactions: Sterrenstoof
If I need to look deeper into this dump I'll ask you to upload the saved kernel dump, but we can hold off for now because I suspect that clean installing the latest Nvidia driver with DDU may solve this problem too.

I can't seem to find the Memory.dmp file; is it supposed to be located in the Windows folder? It seems to be missing.

It's kinda as I suspected. I just got this RTX 4080 a few weeks ago and started having micro-stuttering problems in games almost immediately after. I have indeed reinstalled the driver multiple times using DDU, both older and newer driver revisions, and even did this for the latest release a couple of nights ago hoping it would fix things, only to be met with the same videocard BSOD yesterday, and the DRIVER_POWER_STATE_FAILURE the day before.

That Crucial MX500 in my case is an old SSD, and it's apparently also connected to the wrong port on my mobo: it's sitting on the ASMedia port, and I don't know if that could be the reason it fails. I'll back up its files ASAP and get it replaced, as it's an aging device and I'd much rather have a larger NVMe drive replace all the current storage I have.
 
From what you're saying, and having tried multiple drivers with DDU each time, I would RMA the 4080. I think the problems have to be with the card. Use the dumps and your testing of various drivers with DDU as evidence.

What you say about the SSD makes perfect sense. I don't think the 0x9F BSODs are indicative of an imminent drive failure - unless the System log contains many bad block messages for the drive? - but backing it up is always wise. An NVMe is going to be so much faster.

The lack of the Memory.dmp file isn't important in this case, but be sure that your dump setting for 'Write debugging information' is set to 'Automatic memory dump' and that the 'Overwrite any existing file' box IS checked. That will ensure that a kernel dump is written as well as a minidump (only one kernel dump is ever stored, so you don't have to worry about drive space). There are other scenarios in which a kernel dump is required to fully diagnose the problem.
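For reference, those dialog settings map to registry values you can check from an elevated prompt; CrashDumpEnabled should be 7 (Automatic memory dump) and Overwrite should be 1:
Code:
reg query "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v CrashDumpEnabled
reg query "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v Overwrite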
 
Last edited:
  • Like
Reactions: Sterrenstoof
I appreciate your in-depth analysis of my minidumps; it gives me the answer I was looking for, because this issue has been present for quite a while now and has been getting worse.

I'll replace the SATA SSDs with a new 4TB NVMe and use my Kingston Fury as my main drive, seeing that drive is only about 4 months old and is the fastest drive I have. I will send back my RTX 4080; kinda weird that such a new device can have so many problems, an unlucky purchase I suppose. If they require any dump files, I've got them all backed up on my Google Drive now :)


I'll keep you up to date here on the process once I get to it.
 
  • Like
Reactions: ubuysa
Update 1:

The system has had its GPU replaced. I still had my old one lying around, which is just from last generation and had some graphical glitches, though nothing like the massive stuttering before; I'll have to test whether this issue re-occurs on this card.

The Crucial SSD is still disabled, and will stay disabled till I order a new NVMe drive to replace my current SATA storage.

I'll upload any BSODs straight back into this topic in the next update if they occur.
 
  • Like
Reactions: ubuysa
Update 2:

Still haven't received a BSOD or blackscreen, though I'm facing my micro-stutter issue just as badly as I did on the 4080; actually it was never as bad as it is right now. Coincidentally it feels identical to the other graphics card, and this card hasn't had stuttering issues before; I can't remember having them, aside from the artifacts at times.

Those power transitions that also affected the SSD: is it possible that the PSU can be the cause after all? This SSD worked fine for years on a different PSU, and the current PSU is the replacement I got for the 4080, which is around 6-7 weeks old. HWMonitor doesn't show any strange behavior on the voltages; sometimes the 12V is a bit lower, sometimes a bit higher, but I think this is normal behavior. Checking my Windows logs shows 9 Kernel-Power 41 (63) errors, five of which have a corresponding minidump of a BSOD (the ones I put here before), but the other 4 never created one.

I haven't been turning my PC off like before; I always put it to sleep, because I remember having black screens at cold boot that never reached the Windows loading part, which caused the system to just restart itself in the middle of a boot.

I have even begun considering a motherboard replacement, because one way or another I don't believe it could be the PSU. But that's why I am asking here: could the PSU, despite outputting normal readings in HWMonitor, be causing all of this mess after all? Or is my thought process ridiculous?
 
That you're still having stuttering means that the graphics card was likely not the root cause. Also, the earlier dumps pointing in two different directions is (and was then) a concern. I don't believe in two separate problems occurring at the same time, so these problems are most likely linked somehow.

The error 41 message is written at startup; it only indicates that Windows wasn't shut down properly the last time. It's got nothing to do with power; they chose that description (I assume) to indicate that the power went off before Windows had shut down. That there were no dumps written suggests a hardware cause that crashed Windows in a way the kernel didn't detect.
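As an aside, you can list those event 41 entries (with timestamps, to compare against the dump times) from a command prompt; the /c:10 count here is arbitrary:
Code:
wevtutil qe System /q:"*[System[(EventID=41)]]" /c:10 /rd:true /f:text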

I feel sure that this is a hardware problem. One way to confirm that is to start Windows in Safe Mode. In Safe Mode only critical Windows components are loaded and (usually) no third-party drivers are loaded. It's a stripped-down minimal Windows system. Its only purpose is to see whether this stripped-down Windows system will crash or BSOD, if it does then you can be pretty certain that you have a hardware problem. Of course, if it won't crash in Safe Mode then it's a software problem and we have tools that will help us find out what.

In Safe Mode you won't be able to do any useful work. Many of your devices will not function properly (or at all) - your display will be low-res for example, because you'll be using only the Windows basic display driver. You do need to use the PC as much as you're able to in Safe Mode in order to try and make it crash or BSOD.

I would suggest that you start Windows in Safe Mode without networking at first because that loads no networking drivers. Then try Safe Mode with networking later on.
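If getting into Safe Mode via Shift+Restart is awkward, one alternative (a sketch; run from an elevated prompt, and remember to remove the flag when you're done) is to set it with bcdedit:
Code:
:: boot into Safe Mode (minimal) on every restart until the flag is removed
bcdedit /set {current} safeboot minimal
:: or, for the later test, Safe Mode with networking:
bcdedit /set {current} safeboot network
:: return to normal booting afterwards:
bcdedit /deletevalue {current} safeboot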
 
  • Like
Reactions: Sterrenstoof
The system hasn't had a bluescreen since the GPU was removed and the SSD was disabled, though the instability persists. I know it's a good method of testing whether any hardware is failing; it's just that there has got to be a different way to do this than going down the Safe Mode route, as I want to be able to play something in my free time.

I'll leave this here: my latest LatencyMon report. I had walked away from the PC without it doing anything graphical; the screen had gone to sleep, and when I went to wake it up I think this spike happened? Or it happened prior.

View: https://imgur.com/a/4344DmE


It's a ridiculously high spike, and I had spikes into the 2500-3000 range on the 4080 too whilst gaming. I do suspect that might be normal under gaming load, as I am not that experienced when it comes to latency, but could this be the culprit behind all my system hitches?
 
It's highlighting the graphics driver as the longest running DPC, that's probably related to the graphics instability you're still getting. There does seem to be something that's not right in the graphics area still. Is it possible to try the graphics card in another PCIe slot?

I just re-read your first post and saw that you have already clean installed both Win10 and Win11. Was that from bootable media, deleting existing UEFI partitions (via a custom install)? If it was, then Safe Mode is probably pointless, because clean installing Windows and still having the issue proves you have a hardware issue.

If you clean installed some other way, please try installing from bootable media, choose a custom install, and delete all UEFI partitions. Select (highlight) the unallocated space that results and then click the Next button. The installer will create the correct partition structure and install Windows.
 
Yeah, I always use bootable media (USBs) to do my installs, and I format the drives that I install Windows on. Anything I really need I back up on a separate drive and restore afterwards, but those are usually personal files and nothing that's running software.

Gonna see what GPU-Z says, because this might be that PCIe bug I've seen online, even though I doubt it.

I really don't get it, and tbh I am starting to think it's the motherboard. I'll have to see if my GPU even fits in the bottom slots; GPUs have become so huge.
 
@ubuysa I am currently uploading a BSOD dump file to my Google Drive; it happened again, with the 3080 too this time.

I am also uploading the memory dump along with it. I have no idea what it shows, but I was playing GTA5 when it happened.

Seeing that the issue is definitely not the graphics card itself, yet the dumps keep blaming the card, makes me speculate. I am probably replacing all my SSDs this weekend, and coincidentally that means I'll have to do yet another Windows installation, so that works out perfectly: I'll have another fresh install of Windows to test on. If that coincidentally solves the issue, that'd be great; otherwise my eyes are all on the motherboard afterwards.

 
Last edited:
The latest BIOS update with AGESA 1.0.0.8 has so far lowered the DPC spikes; they're still hitting 1000 while gaming, and I don't know if that's normal.

Yes, I am trying a BETA BIOS update from MSI for my board, and thus far the experience has been slightly better.
 
The problem in that latest dump - the 0x133 with an argument 1 value of 0x1, indicating that a collection of DPCs ran for too long - needs the kernel dump to debug properly, and you provided that so thanks!

Without going into boring details: we use the kernel dump to dump the event trace buffers and then write out the DPC/ISR event log entries as an .etl file that the Windows Performance Analyzer can read. In there we look at the DPC/ISR times and sort the collection of DPCs by total run time; the one at the top is the longest running. Here's the WPA output for your kernel dump...
The DPC at the top is nvlddmkm.sys with a run time twice as long as the next longest running DPC (tcpip.sys). The problem here is clearly graphics related - and we already know it's a hardware issue.

You've tried multiple drivers using DDU each time. You've tried a different graphics card (using DDU again). And you still have the problem, so we need to look at other hardware. That leaves the motherboard or the PSU as the most likely - and the only way to test those is by swapping them I'm afraid.
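For anyone wanting to reproduce the extraction step described above, it looks roughly like this in WinDbg against the kernel dump; the logger Id and output path below are examples, not fixed values:
Code:
0: kd> !wmitrace.strdump
    ... note the Id of the Circular Kernel Context Logger ...
0: kd> !wmitrace.logsave 2 C:\Temp\dpcisr.etl
The saved .etl file is then opened in Windows Performance Analyzer and the DPC/ISR table sorted by total duration.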
 
Last edited:
  • Like
Reactions: Sterrenstoof

I'll keep an eye on it for a little while. The latest BIOS, which includes AGESA 1.0.0.8, has decreased these spikes in LatencyMon heavily; I have seen one 4000 spike in 3 hours of heavy usage in the user latency part, compared to spikes up to 5000 within a 10-minute span of LatencyMon before.

DPC spikes from Nvidia's driver have barely hit 1000 (under gaming load).

If this issue continues to exist, the mobo will be replaced, and if it still persists, probably the PSU, though that'd be the last thing I'd expect to be faulty atm, unless they delivered a faulty unit out of the box. :/

Appreciate your help!
 
Last edited:
I woke up in the night thinking about your problem. Don't worry, there's no additional charge for out of hours consultations! 🤣

Whilst nvlddmkm.sys is the longest running DPC there (by far), the DPCs for tcpip.sys and Wdf01000.sys do run longer than Microsoft recommend (100 microseconds). Wdf01000.sys runs for almost 600 microseconds and tcpip.sys for over 600 microseconds. That got me wondering whether the games in which you have problems access the Internet in real time? If so, it's possible that a networking problem is slowing the graphics operation down, making nvlddmkm.sys run for too long, and collectively they cause a 0x133 BSOD.

The tcpip.sys driver is a Windows driver, but lower down in the driver stack will be your third-party network adapter driver. Check that the latest driver for your network adapter is installed (a quick check is sketched below). Better still, try a different adapter. If you connect via WiFi, try connecting via cable (even if it means moving the PC temporarily). If you connect via cable, then try connecting via WiFi. If it's a networking issue, swapping adapters should make the effects go away.
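One quick way to read off the installed adapter driver version without digging through Device Manager (a sketch; the 'Realtek' match string assumes the adapter in this build):
Code:
wmic path win32_pnpsigneddriver where "DeviceName like '%Realtek%'" get DeviceName,DriverVersion,DriverDate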

The Wdf01000.sys driver is the high-level Windows Driver Foundation driver; a whole host of third-party drivers are written on top of the WDF libraries, and any one of these lower down in the driver stack could be causing problems that end up slowing nvlddmkm.sys down, especially if they are in the network stack too (and thus delaying tcpip.sys). Try temporarily disabling (unchecking) all third-party drivers in the network adapter properties box, especially things like security products etc.

I might be barking up the wrong tree here. I might even be barking in the wrong forest! But if the problem is not where you're looking then it must be somewhere else...
 
Last edited:
  • Like
Reactions: Sterrenstoof
🤣 It can't be good that this makes you wake up from your sleep!

Yeah, the games I play are usually always online; I barely play anything singleplayer besides Spider-Man, Uncharted and RDR2. Out of those 3 games, RDR2 has the biggest micro-stutter issue whilst being the most optimized of the bunch, so...

I am actually gonna attempt updating the drivers! I've disabled the WiFi feature of my motherboard since I'm a cable user (fibre connection) and I prefer the full 1 gigabit, but yeah, I could attempt it once I get the new SSD in place with Windows 10 back on the system.

I fetch my drivers from the Realtek site: https://www.realtek.com/ja/componen...0-1000m-gigabit-ethernet-pci-express-software as my motherboard's vendor hasn't got updated revisions on their site yet. The one from MSI's site shows the same issue, at least.

UPDATE: I got the latest version 😀
 
Ah. Generic drivers, such as those from Realtek, may not always be best on every platform. I'm not sure what revisions the motherboard vendor's driver doesn't have, or whether you need them(?) but the motherboard vendor's driver will certainly be fully compatible with your motherboard.

As a test, I would be tempted to uninstall the Realtek driver and try the motherboard vendor's driver...
 
Yeah, you've got a point. I'll retry the drivers on this latest BIOS I have; maybe now it solves the issue? It's worth a second shot.

This is also a long shot, but could USB devices like mice & keyboards cause some sort of hitches and increase the latency too? I've been having hitches with my mouse even on a wired connection, and my keyboard has this weird behavior where its volume controller only sometimes controls the volume. (I did update my keyboard's firmware recently, whilst I already had the issues, to see if it solved anything, but it hadn't.)

So could it be that my USB ports are throwing some sort of errors as well? I know AMD has been having issues with USB ports for generations now, from Ryzen 1000 through 2000, 3000, 5000, and now even 7000; it's not the culprit in every case, but it seems to be a cause in many.

I am also removing the SATA SSD that's currently on the ASMedia controller. I've read on the MSI forums and on Reddit that this controller on some motherboards is junk, and that even when you disable it within the BIOS it can still cause all kinds of issues. I haven't had time for it yet, but will today.
 
Last edited:
Do you have the chipset drivers from your motherboard vendor's website installed? Again, when having problems you want the board vendor's drivers installed.

From what you're saying now, I would be more likely to suspect the motherboard. Or the PSU. 😎
 
Last edited:
Just installed the ones from the MSI site, and so far in the first 5 minutes of gameplay I haven't noticed any difference; I still get the occasional stutter.

Removed the SATA SSD that gave the initial BSOD for this topic, so that one is out of the loop entirely and can't cause issues anymore.

I'll keep this thread up to date. My new SSD hasn't arrived yet, and when it does my last SATA drive will be removed too, so the system will only have NVMe to rely on. But even when I suggest that the PSU could be the cause, I just get strange looks. This PSU replaced my old PSU because the old one had a few issues of its own.
It'd be ridiculous if this new unit is faulty... it's the last thing I'll replace if I have to, but if it really is the issue I'll be pissed lol.