Latest Windows 11 security patch might be breaking SSDs under heavy workloads — users report disappearing drives following file transfers, includin...

Writing 50GB+ to a drive with a certain controller that's over 60% full does sound niche in terms of how many Windows Insiders would hit it, but given the size of the Windows Insider program it's hard to imagine this wasn't triggered before wide release, especially since the SK Hynix P41 Platinum is one of the best drives you can (or could) get for your money.
 
I can say the drive being over 60% full isn't a hard requirement. Just a few days ago, I was swapping from a 1TB 990 Pro (boot drive) to a 2TB 990 Pro using EaseUS's cloning software. The first two times, it locked up the destination drive; Windows couldn't see it again until I restarted.

I thought it might have been a thermal lockup, because the drive was getting insanely hot even with a copper plate on it (at the time, it was in an external enclosure). So I got a new external enclosure, deliberately a lower-end one, with the idea that it would bottleneck the transfer and keep the SSD's temperature down.

It ended up working, but now I'm wondering if it wasn't actually this bug all along, and I just got lucky on the third try.
 
Only 3 things to say to this…

backup

BackUp

BACKUP
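To that point: a backup you haven't verified isn't really a backup. A minimal Python sketch of the idea (single file, hypothetical paths) that copies and then confirms the copy by comparing SHA-256 hashes:

```python
import hashlib
import shutil

def backup_and_verify(src: str, dst: str) -> bool:
    """Copy src to dst, then confirm the copy by comparing SHA-256 hashes.
    Returns True only if both files hash identically."""
    shutil.copyfile(src, dst)

    def digest(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in 1 MiB blocks so huge files don't blow up memory.
            for block in iter(lambda: f.read(1 << 20), b""):
                h.update(block)
        return h.hexdigest()

    return digest(src) == digest(dst)
```

Real backup tools do a lot more (retries, metadata, incremental copies), but the copy-then-verify loop is the part that catches a flaky drive before you delete the original.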
Backing up a drive might cause the problem.

Now I have to wonder if that's what happened to a 512GB 2.5-inch Crucial MX500 SATA SSD that got bricked after I tried cloning a Windows install from another drive to it. Non-recoverable; I can't even see the drive when booting into repair mode from a Windows install disk now.
 
Thankfully SSDs don't seem to die on me, so I've never had to try this method myself (essentially a specific power-cycle sequence), but it seems pretty safe to try on a drive you've already given up on as dead.
 
I believe the most likely cause is overheating. Some people have cases with limited airflow around the SSDs. These high-end M.2 drives need a heatsink and airflow across them. When a drive gets too hot during a file transfer, it overheats and is no longer visible to Windows. They reboot, the drive cools off, and it can be accessed again. If it gets hot enough, the drive is ruined and no longer usable even after a reboot.
Just a theory.
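One way to check that theory on your own machine is to watch the drive's composite temperature during a big transfer, e.g. via smartmontools' `smartctl -j -a /dev/nvme0`. A hedged Python sketch of the parsing side (the sample JSON below is trimmed to just the one field we read; real smartctl output has many more, and the 70 C threshold is a rough rule of thumb, not a spec):

```python
import json

# Trimmed, hypothetical sample of `smartctl -j -a /dev/nvme0` output.
SAMPLE = '{"temperature": {"current": 72}}'

def drive_temp_celsius(smartctl_json: str) -> int:
    """Pull the current composite temperature out of smartctl JSON output."""
    return json.loads(smartctl_json)["temperature"]["current"]

if __name__ == "__main__":
    temp = drive_temp_celsius(SAMPLE)
    # Many consumer NVMe drives start throttling somewhere around 70-80 C.
    if temp >= 70:
        print(f"WARNING: drive at {temp} C - consider a heatsink or more airflow")
```

Polling this in a loop while copying 50GB+ would show whether the drive drops out at a consistent temperature (thermal) or at a random one (more likely the bug).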
 
I had this happen to my main drive late last week. It's a 4TB Samsung 990 Pro. It's not recoverable (at least not through the Win11 recovery methods; I tried for hours). The data is there, but it won't boot. Luckily I have a 2TB previous drive with Win10 on it, so I swapped that in... still deciding whether to reformat the 4TB and do a fresh install or what. I have the files I need, just not sure what to do with that drive now... probably wipe and start over, but I may wait until there's a fix, since wiping and starting over would just re-trigger the Win11 bug.
 
I also had this happen with a 4TB Samsung 990 Pro recently. Originally I thought it was the separate SSD I was cleaning up while copying large file sets to my main drive... Then I got flakiness, random reboots, and drive-not-found errors. A full power-off and power-on got it back... but now I'm worried about losing it on a World of Warcraft update that rebuilds the CASC files, or on a new game install.

This sort of thing is really frustrating. Not because things should never, ever break - but because default consumer Windows 11 feels less like a stable OS and more like a rolling-release Linux distro... without the real benefits of one. I mean, Fedora pushes a new version every six months, but I have to approve that upgrade, and breakage is rare. Heck, I've been playing around with CachyOS for a few things, and it seems more reliable. That's, frankly, terrifying: Windows was positioned for years as the reliable, compatible experience... and it's not there anymore.

Microsoft has been trying to separate apps from core Windows (which is good), while jamming in AI and telemetry out the wazoo. They don't seem to have a clear vision for what Windows is - there's plenty of rearranging deck chairs on the Titanic, like replacing the Blue Screen of Death with a Black Screen of Death and killing the smiley - but I can't think of a sustained period where I've thought "the Windows 11 experience is improving". And if it's not improving, just changing and breaking? That's not the computer I want.

Windows has struggled with hardware bugs like this, but also with other system-level components. Windows HDR is a mess. NTFS is still the default filesystem, with ReFS only really recommended for data drives.

There are bright spots. WSL is great. They're slowly crawling toward a package manager with winget (UnigetUI is a great UI for it). Windows Terminal is a huge improvement, and while there are better terminal apps (like Tabby), if I could get Windows Terminal on Linux, I'd strongly consider it as my go-to. But these aren't enough if the base OS is just... flaky.
 
I’m fine because I haven’t installed the update yet, but it’s concerning that Microsoft still hasn’t said anything about this issue. It isn’t mentioned in the known issues for KB5063878. Is a fix being worked on?
 
When I push my SSDs to move 1.5TB or more of data... so far, so good.
Enterprise hardware has more controlled write bursts. My Toshiba has very friendly speeds, writing at only 800MB/s; the Seagate one barely hits 2,000MB/s... The last time I moved tons of data on Windows 11, I saw memory take huge chunks of the data before writing it out, 700 to 800MB... maybe people are losing data in the memory pool.
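If the worry is data sitting in the OS write cache ("memory pool") when the drive drops out, the standard mitigation in copy tools is to flush and fsync so the data actually reaches the device before the copy is declared done. A rough Python sketch of that idea (function name and chunk size are my own):

```python
import os

def copy_with_flush(src: str, dst: str, chunk: int = 8 * 1024 * 1024) -> None:
    """Copy src to dst, then force buffered data out of the caches.
    Without the fsync, large writes can sit in RAM for a while before
    reaching the disk; a hang or crash in that window loses whatever
    was still cached."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            buf = fin.read(chunk)
            if not buf:
                break
            fout.write(buf)
        fout.flush()             # push Python's userspace buffer to the OS
        os.fsync(fout.fileno())  # ask the OS to push its cache to the device
```

It trades some throughput for durability, which is exactly the trade you want mid-migration.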
 
For the past several years there has been a bug in Win11, across several different builds, where copying files starts out great and then performance tanks (1-2GB/s drops to 50-60MB/s) and CPU kernel times jump to 100%. The CPU spike was the telltale sign of this bug vs. cache exhaustion, but you could only see it if you turned on kernel time reporting in Task Manager. Once the problem starts, it takes a reboot to restore performance; after a reboot you'd get 10-20GB of good copy performance before it tanked again. It doesn't matter if it's reading or writing, local drive-to-drive, or across the network. It's been very frustrating in an environment where we upgraded desktops to 10Gb/s NICs because the users deal with very large files.

I just tested a system where the bug was consistent, and it doesn't happen any longer - and it has the Aug patch. So my conclusion is that MS is trying to fix something that's been very difficult to fix, and broke something else in the process.

Edit to add: this CPU spike behavior was the same regardless of core count. It even happened on an AMD workstation with 96 cores. All 96 cores would spike to 100% when performing file copies. That was one big clue we weren't dealing with cache exhaustion.
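For anyone who'd rather catch that cliff programmatically than eyeball Task Manager, here's a rough Python sketch that times each chunk of a copy and records when throughput drops below a floor (function name and default threshold are my own, not from any tool mentioned above):

```python
import time

def copy_with_rate_check(src: str, dst: str,
                         chunk: int = 4 * 1024 * 1024,
                         floor_mb_s: float = 100.0):
    """Copy src to dst in chunks, timing each write.
    Returns a list of (chunk_index, MB/s) for every chunk that fell
    below floor_mb_s - the kind of cliff where 1-2 GB/s suddenly
    becomes 50-60 MB/s mid-copy."""
    slow = []
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        i = 0
        while True:
            buf = fin.read(chunk)
            if not buf:
                break
            t0 = time.perf_counter()
            fout.write(buf)
            dt = time.perf_counter() - t0
            rate = len(buf) / dt / 1e6 if dt > 0 else float("inf")
            if rate < floor_mb_s:
                slow.append((i, rate))
            i += 1
    return slow
```

Note this measures write-into-cache speed unless you fsync per chunk, but the sudden sustained drop described above is big enough to show up either way.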
 
Could the same be true of copying large amounts of data to a new USB drive? I tried to upgrade from my 512GB USB drive to a TB drive, and the copy kept failing, until the drive was no longer readable. The drives were Samsung.
 
I mean, install a few games and you have likely met this scenario.