News Backblaze: HDDs Tend to Fail Before Three Years of Operation

Status
Not open for further replies.
When Seagate disks start clicking, they go straight in the bin automatically... Western Digital and Toshiba drives last much longer. Here I reuse Toshiba drives in PS3s; some of them date from when the PS3 launched lol
 
The headline totally misinterprets the statistics. Hard drives that fail tend to fail quickly. That's all they really found.

They didn't count the hard drives that last 10 years, and most of them do. And, of course, we're ignoring what causes most of these hard drive failures anyway: bad power supplies.
 
  • Like
Reactions: Lafong and SSGBryan
The only drives I ever had fail on me were Seagates. Never again!!!!

WD Red (CMR) drives in my NAS still going strong after 10+ years.
That reminds me of how I heard Google sourced its data center drives; keep in mind this is like 10-year-old info. Their interpretation of the data was that disks either were bad and failed quickly, or didn't fail for 10+ years. So they believed they could get cheaper storage with greater reliability by sourcing used drives for their data centers.

So for a while, that's where the Gmail unlimited storage came from.
 
The only drives I ever had fail on me were Seagates. Never again!!!!

WD Red (CMR) drives in my NAS still going strong after 10+ years.
I had multiple Maxtor drives die. When Seagate bought them, I knew never to buy a Seagate. I always bought WD Blacks until old-school SSDs, and then M.2 drives, became the craze. WD missed the boat once SSDs became commonplace.
 
  • Like
Reactions: gggplaya
I tend to grow out of a drive before it dies; I have several on a shelf in the 1-2TB range. I still have a decent number of 4TB drives, the longest currently sitting at 55,000 hours, and quite a few others well over 30,000 hours. Overall I have had great luck. Oh crap, I forgot about the 1TB Hitachi in my desktop... it's the champion for me at 79,037 hours. It's so reliable I pretty much forget about it... lol
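If anyone wants to pull those hour counts themselves, here's a minimal sketch using smartmontools' JSON output. It assumes smartctl is installed and that /dev/sda (a placeholder) is the drive you care about; adjust for your system.

```python
# Minimal sketch: read a drive's power-on hours via smartmontools.
# Assumptions: smartctl (smartmontools 7+, for --json) is installed,
# you run with root rights, and /dev/sda is a placeholder device path.
import json
import subprocess

def power_on_hours(device: str = "/dev/sda") -> int:
    """Return the drive's total power-on hours from SMART data."""
    out = subprocess.run(
        ["smartctl", "-a", "--json", device],
        capture_output=True, text=True,
    ).stdout
    return json.loads(out)["power_on_time"]["hours"]

print(f"Power-on hours: {power_on_hours():,}")
```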
 
Constant on/off power cycles from normal civilian operation don't help.

Keeping the drives in one state, either on or off, is better for longevity.
What if the on/off cycle is off for a week, on for a few hours? Is it still better to keep them spinning or to let it spin down for that week? I have a NAS where I dump files once a week. I have it spin down after an hour of inactivity and it rarely sees activity otherwise.
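For what it's worth, the spin-down-after-an-hour part is easy to script on Linux with hdparm; below is a minimal sketch, assuming hdparm is installed, root rights, and /dev/sda as a placeholder for the real NAS drive. The -S encoding is quirky: values 1-240 count in 5-second units and 241-251 in 30-minute units, so 242 means one hour.

```python
# Minimal sketch: set a drive's standby (spin-down) timeout via hdparm.
# Assumptions: Linux with hdparm installed, run as root; /dev/sda is a
# placeholder for the actual NAS drive.
import subprocess

def set_spindown(device: str, minutes: int) -> None:
    """Translate a timeout in minutes into hdparm's -S encoding and apply it."""
    if minutes <= 20:
        value = (minutes * 60) // 5        # 1-240: units of 5 seconds
    else:
        value = 240 + round(minutes / 30)  # 241-251: units of 30 minutes
    subprocess.run(["hdparm", "-S", str(value), device], check=True)

# Spin down after 60 minutes of inactivity (equivalent to hdparm -S 242):
set_spindown("/dev/sda", 60)
```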
 
What if the on/off cycle is off for a week, on for a few hours? Is it still better to keep them spinning or to let it spin down for that week? I have a NAS where I dump files once a week. I have it spin down after an hour of inactivity and it rarely sees activity otherwise.
Turning a computer on is the equivalent of putting it through a power surge.

Best to leave them on, period.
 
The headline totally misinterprets the statistics. Hard drives that fail tend to fail quickly. That's all they really found.

They didn't count the hard drives that last 10 years, and most of them do. And, of course, we're ignoring what causes most of these hard drive failures anyway: bad power supplies.
Can you expand on what you mean by bad power supplies? I've thought about that as a cause of failure for sensitive electronics, but never really considered it a source of HDD failure.
 
It's not clear to me how BB determines a drive failure. On any "enterprise" array I've worked on, the system will predictively fail drives way before the platters grind to a halt. I believe most of that determination comes from SMART data, and I think that's tunable to some extent. My point is, perhaps Seagate is throwing red flags earlier and is therefore giving a warning before data loss occurs.

That's just me playing devil's advocate to make a point, because frankly I've had way more Seagate failures than any other manufacturer.
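To make that concrete: the red flags those arrays watch are mostly a handful of SMART attributes, like reallocated and pending sectors. Here's a rough sketch of that kind of check for ATA drives, assuming smartmontools' JSON output and a placeholder /dev/sda; the "any nonzero raw value" rule is my simplification, not how any vendor actually tunes it.

```python
# Rough sketch of a SMART-based early-warning check for ATA drives,
# in the spirit of what enterprise arrays do before a drive fully dies.
# Assumptions: smartctl (smartmontools 7+) installed, root rights,
# /dev/sda is a placeholder; "nonzero raw value" is an illustrative rule.
import json
import subprocess

# Attribute IDs commonly treated as predictors of failure:
WATCHLIST = {
    5: "Reallocated_Sector_Ct",
    187: "Reported_Uncorrect",
    197: "Current_Pending_Sector",
    198: "Offline_Uncorrectable",
}

def red_flags(device: str = "/dev/sda") -> dict:
    """Return watched SMART attributes whose raw value is nonzero."""
    out = subprocess.run(
        ["smartctl", "-A", "--json", device],
        capture_output=True, text=True,
    ).stdout
    table = json.loads(out)["ata_smart_attributes"]["table"]
    return {
        WATCHLIST[a["id"]]: a["raw"]["value"]
        for a in table
        if a["id"] in WATCHLIST and a["raw"]["value"] > 0
    }

print("Red flags:", red_flags() or "none")
```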
 
Can you expand on what you mean by bad power supplies? I've thought about that as a cause of failure for sensitive electronics, but never really considered it a source of HDD failure.
Basically, power always spikes when you turn on a device. The ATX specification allows around ±5% tolerance on the rails, so a drive designed for 5V can legitimately see up to about 5.25V. A bad power supply will drift beyond that range, and the turn-on transient spikes higher still; a good one stays well inside it.

Overvolting damages components. That's part of why the on/off cycles matter.
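Quick arithmetic on what that ±5% tolerance allows per rail, nothing more than the figure above applied to the common voltages:

```python
# Quick arithmetic: allowed voltage bands under a +/-5% ATX tolerance.
for nominal in (3.3, 5.0, 12.0):
    lo, hi = nominal * 0.95, nominal * 1.05
    print(f"{nominal:>4}V rail: {lo:.2f}V to {hi:.2f}V")
```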
 
What if the on/off cycle is off for a week, on for a few hours? Is it still better to keep them spinning or to let it spin down for that week? I have a NAS where I dump files once a week. I have it spin down after an hour of inactivity and it rarely sees activity otherwise.
Better to keep them in one state, on or off, for as long as you can.

Pick a state and keep it that way.

Don't go turning them on and off every 10-30 minutes.

I usually leave my HDDs running 24/7/365 like a server.

It's fine, and the power draw is minimal.

It also doubles as a lightweight room heater.
 
  • Like
Reactions: dalauder
What if the on/off cycle is off for a week, on for a few hours? Is it still better to keep them spinning or to let it spin down for that week? I have a NAS where I dump files once a week. I have it spin down after an hour of inactivity and it rarely sees activity otherwise.
The most common point for electronics to fail is at turn-on. That's when all the components have to go from cold to full operation in a fraction of a second. A good rule of thumb is to turn everything on in the morning and leave it running until you are totally finished for the day.
 