News HDD component supplier ‘collapses’ and 600 jobs will be lost, say reports

There are just too many markets where cost per GB is king and data reliability is paramount. Until NAND-based solutions can fully address those needs, things will not change.
But it's the cost per GB that will rise as HDD demand falls, because fixed costs will have to be spread across a lower sales volume.
And that rise will cause demand to collapse further. It's a vicious circle.
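Just to put rough numbers on that fixed-cost effect, here's a toy calculation (every figure is made up purely to show the shape of the curve, not real industry numbers):

```python
# Toy model: per-TB cost when fixed costs are spread over fewer drives.
# All numbers below are invented for illustration only.
FIXED_COSTS = 2_000_000_000      # $/year for fabs, R&D, tooling, etc. (assumed)
VARIABLE_COST_PER_DRIVE = 30     # $ of materials/assembly per drive (assumed)
CAPACITY_GB = 20_000             # a 20 TB drive

for units_per_year in (200_000_000, 100_000_000, 50_000_000):
    per_drive = FIXED_COSTS / units_per_year + VARIABLE_COST_PER_DRIVE
    per_tb = per_drive / CAPACITY_GB * 1000
    print(f"{units_per_year:>12,} drives/year -> ${per_tb:.2f} per TB")
```

Halving the volume doesn't halve the fixed costs, so the cost per TB creeps up - which is exactly the feedback loop being described.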
 
But it's the cost per GB that will rise as HDD demand falls, because fixed costs will have to be spread across a lower sales volume.
Just how much do you believe demand will fall? Do you even know what HDD sales volumes are currently at or what the NRE costs for them are?

I think we all understand the economics of volume manufacturing here. Some of us also understand that there are certain things HDDs do that NAND just can't. That sets a floor on HDD sales volumes, which keeps your "vicious circle" scenario from happening.
 
When you experienced it - what kind of flash was it, and about how long did it take for data loss to occur?
I observed it using the badblocks tool on Linux. The failing blocks hadn't been written to since the drive was manufactured. It was a Micron-branded TLC drive, and I think it was 3-4 years old at the time.

The other issue I had was with an eReader that has 32 GB of storage. I went about 6-8 months without using it, because I had left it at the office during the pandemic, and when I turned it back on, all of the books and PDFs I had copied onto it had disappeared. It was made in late 2017, I think, so it's probably TLC. Of course, early TLC chips weren't as good as the TLC chips we have today.

I bought an external QLC drive (Crucial X6) a couple of years ago, before I knew about the increased danger of bit rot with QLC. I probably won't do that again. : P I don't keep anything on it that isn't backed up, and I try to power it up at least once a week if I haven't used it. The longest it has gone unplugged was about two weeks, and I didn't notice any loss. I have no idea what to expect, though. Can loss occur in a month? A week?
If external SSDs were that temperamental, I think people would be screaming about it. I'd wager you could get more than a year, but probably not much more. If it were me, I wouldn't go more than a couple of months without powering it on for several hours. If you want to be sure the drive is healthy, use a tool designed to do a "surface scan" of hard disks - does Windows still have chkdsk?
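If you'd rather not hunt for a dedicated tool, a read-everything check is easy enough to script. Here's a minimal Python sketch that walks a mounted drive and tries to read every file, reporting anything that throws a read error (the path is whatever your external drive mounts as - the example below is made up):

```python
import os
import sys

def check_readable(root):
    """Try to read every file under `root`; report anything that fails to read."""
    bad = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    while f.read(1024 * 1024):   # read in 1 MiB chunks
                        pass
            except OSError as exc:
                print(f"unreadable: {path} ({exc})")
                bad.append(path)
    return bad

if __name__ == "__main__":
    # e.g. python check_readable.py /media/me/CrucialX6   (example path only)
    failures = check_readable(sys.argv[1])
    print(f"{len(failures)} unreadable file(s)")
```

It's not a true surface scan (it only touches allocated data, i.e. the files you actually care about), but it will catch the kind of read errors described below.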

If you're experiencing bit rot to the point of data loss, what should happen is that you get read errors when trying to access files where some blocks have gone bad. The drive could also go into read-only mode if you continue using it (but what choice do you really have?).

The mistake I made with that Micron drive was that I ran badblocks (a surface-scan tool for Linux) before I ran fstrim on it. If the drive had ever been trimmed, badblocks shouldn't have tripped over those unallocated blocks, because the drive would know they hold no real data and would've just returned zeros. Because I stupidly left badblocks running after it started hitting errors, it used up the drive's pool of spare blocks. The only way I could continue using the drive was with a Micron tool that shrinks the usable capacity, which replenished the pool of spare blocks.
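For anyone wanting to avoid the same mistake, the safer order on Linux is roughly: trim first, then scan read-only, and abort as soon as errors show up instead of letting the scan chew through the spare-block pool. A rough sketch (device and mount paths are placeholders; both commands need root, and check `man badblocks` on your system for the exact options):

```python
import subprocess

# Placeholders - substitute whatever your drive actually shows up as.
MOUNTPOINT = "/mnt/external"
DEVICE = "/dev/sdX"

# 1. Trim first, so blocks that were never written (or have been freed)
#    are known-empty to the drive and won't trip up the scan.
subprocess.run(["fstrim", "-v", MOUNTPOINT], check=True)

# 2. Then a read-only surface scan that gives up after a few bad blocks
#    (-s shows progress, -v is verbose, -e sets the abort threshold).
subprocess.run(["badblocks", "-sv", "-e", "5", DEVICE], check=True)
```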

Sorry I can't give more details, but it was about 2 years ago.
 