How to fix CrowdStrike BSODs in three minutes — fix requires manual changes, but they are simple

Blows my mind that banks and a whole bunch of others are just applying updates across the board without first testing them in-house to make sure they won't cause issues.

I mean, they were lucky this was not a virus or backdoor that slipped its way into the update process.

Totally unacceptable that companies just update their machines without checking anything and just blindly trust updates.
 
Doesn't work for me. My fiancée works remotely for Rogers Communications. I can't get her laptop to boot into Safe Mode at all. I was given the recovery key and admin password from their IT to try, but no luck.

No access to the BIOS as of yet (to get into Boot Options); I didn't get that password from them. A level 2 tech should be calling at some point, but I'm not expecting it to be today.
 
Totally unacceptable that companies just update their machines without checking anything and just blindly trust updates.
Most companies don't have the resources to check every single update for every server or workstation in a dev environment. Do you know how many updates that would be? Especially for CrowdStrike, which can be updated more than once a day. Their updates are transparent and happen in the background without user interaction, which makes that even harder. You'd need people on staff whose entire job is just that.
 
This is why I always have a bootable USB with a Linux distro (usually Mint) on hand, so that I can access my disks and files when Windows just won't work. In this case, once you know which file to delete, you can easily navigate to it and delete it without having to do the Windows recovery dance.
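Roughly what that looks like from a Mint live session -- the device name below is just an example (check yours with lsblk), and the file pattern is the one CrowdStrike published for this incident:

    # Find the Windows NTFS partition (device name is machine-specific)
    lsblk -f

    # Mount it (assuming /dev/sda3 is the Windows volume; adjust to match lsblk)
    sudo mkdir -p /mnt/windows
    sudo mount /dev/sda3 /mnt/windows

    # Delete the bad CrowdStrike channel file(s)
    sudo rm /mnt/windows/Windows/System32/drivers/CrowdStrike/C-00000291*.sys

    # Unmount cleanly before rebooting into Windows
    sudo umount /mnt/windows

The catch is BitLocker: if the volume is encrypted, a plain mount like this won't see the filesystem at all, which is exactly the question below.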
Can you also do this with BitLocker?
 
Most companies don't have the resources to check every single update for every server or workstation in a dev environment. Do you know how many updates that would be? Especially for CrowdStrike, which can be updated more than once a day. Their updates are transparent and happen in the background without user interaction, which makes that even harder. You'd need people on staff whose entire job is just that.

Banking systems went down and airlines had to cancel 2,000 flights; just imagine the cost of that versus the cost of proper support staffing.

It's unacceptable and irresponsible to perform blind updates.
 
Banking systems went down and airlines had to cancel 2,000 flights; just imagine the cost of that versus the cost of proper support staffing.

It's unacceptable and irresponsible to perform blind updates.
My opinion is that the update system should be changed to group updates into monthly batches, and to never push them straight to live machines without admin confirmation.
 
My opinion is that the update system should be changed to group updates into monthly batches, and to never push them straight to live machines without admin confirmation.
I don't think that would work, unfortunately. There are a lot of exploits that get found in the wild, meaning they're already in use. If you switch to a monthly batch, attackers will just wait for the start of a new month to deploy their next exploit(s) to maximize the time they can use them.
 
Sounds like it is much more complex if you have BitLocker on it.

Can you even imagine being the guy who walks into a datacenter with cabinets as far as you can see, multiple servers per cabinet, and knowing you have to physically touch every device... even if the fix is "easy"?
It's been fantastic for sure; we've been throwing BitLocker keys and local admin logins around to everyone, left, right, and center, in plain text. The cleanup from this is going to be awesome. Eh, at least we've been able to get most people up and running. Well, as long as they had a working recovery partition. Otherwise it's the fun process of using cmd to unlock the boot disk, finding the Windows volume's drive letter, and deleting the affected CrowdStrike file with cmd, over the phone. Good, good times; it's like the '90s all over again.
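For reference, that phone walkthrough from the recovery command prompt goes roughly like this -- drive letters vary per machine (X: is usually WinRE itself), the recovery key is a placeholder, and the file pattern is the one CrowdStrike published:

    rem See which volumes are BitLocker-locked and which drive letters they got
    manage-bde -status

    rem Unlock the Windows volume with the 48-digit recovery key
    manage-bde -unlock C: -RecoveryPassword <48-digit recovery key>

    rem Delete the bad CrowdStrike channel file(s), then reboot normally
    del C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys

If del can't find that path, the Windows install probably came up under a different letter; dir each candidate drive for \Windows\System32\drivers\CrowdStrike until it shows up.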
 
It's unacceptable and irresponsible to perform blind updates.
That's not a blind update -- CrowdStrike is an EDR, which is similar to AV. What was updated was supposedly the equivalent of "definitions". The problem seems to be that the kernel driver they wrote did two things it shouldn't have:

1. It parsed the downloaded definitions itself (so it was effectively handling untrusted input at kernel ring 0 privilege)
2. It didn't have proper exception handling (like a try/catch block) and crashed, taking the OS down with it

So no, updating definitions (and even drivers if necessary) is acceptable. Writing crappy code shouldn't be acceptable, but nowadays it's not only accepted, it's well paid and made so easy by various tools that even idiots can do it.

While we're at it, the CEO of CrowdStrike was the CTO at McAfee back in 2010 when they had their moment of fame with a similar issue. As long as incompetence is rewarded with even better positions and higher salaries instead of a firing, this is going to keep happening.