Nero

Archived from groups: comp.sys.ibm.pc.hardware.storage

Does anyone know on average how many years hard drives last? Hard
drives that take a lot of use like gaming, video editing and stuff
like that? I know a hard drive can die after a year the same way it
can die after many years, but I'm just trying to find out an average.
 
Guest

Previously Nero <nero@savingtheworld.net> wrote:
> Does anyone know on average how many years hard drives last? Hard
> drives that take a lot of use like gaming, video editing and stuff
> like that? I know a hard drive can die after a year the same way it
> can die after many years, but I'm just trying to find out an average.

HDDs should last several years under heavy load, if they are cooled
well. If they run hot, they will die young even if not used at all.

The average is in the manufacturer's documentation in the form of the
MTBF (Mean Time Between Failures) and the component lifetime.
The latter is the period for which the MTBF holds, i.e. after that the
failure rate may increase.
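
As a rough back-of-the-envelope illustration (my own toy numbers, not
from any datasheet), a constant failure rate turns an MTBF figure into
an annualized failure rate like this:

import math

def annualized_failure_rate(mtbf_hours):
    # Assumes a constant failure rate (exponential lifetimes) while the
    # drive is within its component lifetime, which is what MTBF covers.
    hours_per_year = 8760.0
    return 1 - math.exp(-hours_per_year / mtbf_hours)

# Hypothetical datasheet figure of 500,000 hours MTBF:
print("%.2f%% of drives per year" % (100 * annualized_failure_rate(500000.0)))  # ~1.74%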

Arno
 
Guest

Nero <nero@savingtheworld.net> wrote:

> Does anyone know on average how many years hard drives last? Hard
> drives that take a lot of use like gaming, video editing and stuff like that?

The use has no effect on the life at all. All that matters is the time it's on
for.

> I know a hard drive can die after a year the same way it can
> die after many years, but I'm just trying to find out an average.

Most find that drives are replaced because
they are too small well before they die.

You'd think that wouldn't be so true of, say, 160 GB drives,
but I'm finding that it's still true, mainly because I have replaced
the VCRs with hard drive recorders and still upgrade them before they die.

I haven't actually had a drive die in more than a full decade,
and plenty of drives older than that are still going strong.
 

Nero

On Sat, 30 Jul 2005 14:26:00 +1000, "Rod Speed" <rod_speed@yahoo.com>
wrote:

>Nero <nero@savingtheworld.net> wrote:
>
>> Does anyone know on average how many years hard drives last? Hard
>> drives that take a lot of use like gaming, video editing and stuff like that?
>
>The use has no effect on the life at all. All that matters is the time it's on
>for.

Is that right? I had heard a hard drive takes more wear and tear from
being turned off and on, and that it's better to leave them on. Although I
must admit most of the hard drives I've had die are the ones I've
left on all the time, lol (mostly the ones in my TiVo). Good thing I
now have my system set to hibernate after a couple of hours.
 
Guest

Nero <nero@savingtheworld.net> wrote
> Rod Speed <rod_speed@yahoo.com> wrote
>> Nero <nero@savingtheworld.net> wrote:

>>> Does anyone know on average how many years hard drives last? Hard
>>> drives that take a lot of use like gaming, video editing and stuff like
>>> that?

>> The use has no effect on the life at all.
>> All that matters is the time it's on for.

> Is that right? I had heard a hard drive takes more wear and
> tear from being turned off and on, and that it's better to leave them on.

Correct. But leaving the drive off makes it last even longer.

> Although I must admit most of the hard drives I've had die are
> the ones I've left on all the time, lol (mostly the ones in my TiVo).

Likely they aren't being cooled properly.

> Good thing I now have my system to hibernate after a couple of hours.

I don't hibernate at all myself, except with the dinosaur
in the kitchen that can go for days without being used.
 
Guest

Rod Speed wrote:

>Nero <nero@savingtheworld.net> wrote:
>
>> Does anyone know on average how many years hard drives last? Hard
>> drives that take a lot of use like gaming, video editing and stuff like that?
>
>The use has no effect on the life at all. All that matters is the time it's on
>for.

Clueless.
 
Guest

Just to clarify what Arno wrote: MTBF and average lifetime are
unrelated.

Failure rates of disk drives (or any mechanical devices) follow a
"bathtub" pattern. The initially high failure rate (DOA, infant
mortality) drops quickly (within hours/days) to a baseline, which stays
more or less constant (generally for years under proper usage and
conditions), then starts to rise slowly due to wear.
Now the MTBF is the inverse of the failure rate in steady state, i.e.
at the baseline of the bathtub curve. It is only the height of the
bathtub bottom, not the length of the bathtub that determines MTBF. The
length is the lifetime, which is rarely specified (because
manufacturers would get into too much hot water if they did.) Generally
the warranty period is a good indication of the expected lifetime since
manufacturers like to brag about their warranty but don't want to spend
much on warranty replacements of course.

You can have a very high MTBF, i.e. a very reliable drive, but it may
wear out within a year (deep but short bathtub.) Conversely, you can
have a population that dies slowly but constantly over 20 years (long
but shallow bathtub - low MTBF but high average life.) By definition,
when the devices start to wear out, you're out of the steady state and
those failures don't count against the MTBF.
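
A minimal numeric sketch of that distinction (toy numbers of my own,
assuming exponential failures along the flat bottom of the bathtub and
a hard wear-out at the end of the design life):

import math

steady_state_failure_rate = 1.0 / 1000000   # failures per drive-hour
service_life_hours = 5 * 8760               # designed to wear out after ~5 years

mtbf = 1.0 / steady_state_failure_rate      # 1,000,000 h, i.e. about 114 years
p_random_failure = 1 - math.exp(-service_life_hours * steady_state_failure_rate)

print("MTBF: %.0f h (~%.0f years)" % (mtbf, mtbf / 8760))
print("Chance of a random failure before wear-out: %.1f%%" % (100 * p_random_failure))
# A drive can have a 114-year MTBF and still be built to wear out after 5 years.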

MTBF is usually monitored by ORT (Ongoing Reliability Test) processes
that take a sample of new drives (after burn in to get rid of infant
mortality) and run them for a couple of weeks or months, so reliability
is only proven for that long. Experience, component data and field data
are used to extrapolate from there on. If some new component is
introduced (which is the case for practically any new product), there
may be surprises if the component doesn't meet what the supplier
promised, or if the designers specified a component that's not quite
right for the intended function, or if there are interactions that
nobody foresaw. Happens all the time.
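
For what it's worth, the arithmetic behind such a sample-based estimate
is simple (a sketch with made-up numbers, assuming a roughly constant
failure rate over the test window):

drives_on_test = 500
test_hours_per_drive = 24 * 30 * 4           # roughly four months of continuous running
failures_observed = 3

total_device_hours = drives_on_test * test_hours_per_drive
mtbf_estimate = total_device_hours / max(failures_observed, 1)

print("%d device-hours, %d failures -> MTBF estimate of roughly %d h"
      % (total_device_hours, failures_observed, mtbf_estimate))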

Just remember that wear generally rises exponentially with temperature.
Switching a drive on generates a lot of heat from friction and inrush
current.
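
The usual back-of-the-envelope model for that temperature dependence is
an Arrhenius-style acceleration factor (just a sketch; the activation
energy below is a commonly assumed ballpark, not vendor data):

import math

BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV per kelvin

def arrhenius_acceleration(t_cool_c, t_hot_c, ea_ev=0.6):
    # How much faster temperature-driven wear proceeds at t_hot_c than at
    # t_cool_c, assuming Arrhenius behaviour; ea_ev is an assumed activation
    # energy, not a vendor figure.
    t_cool = t_cool_c + 273.15
    t_hot = t_hot_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_cool - 1.0 / t_hot))

print("%.1fx faster ageing at 55 C than at 40 C" % arrhenius_acceleration(40, 55))
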
I concur with Rod. The only drives that died on me over the last couple
of years were notebook drives. No surprise there (bleeding edge + power
cycles + abuse). But I still have a couple of 2 and 4 GB SCSI drives
running in my Linux boxes. They must be mid-'90s vintage.

Ralf-Peter
 
Guest

Nero wrote:
> Does anyone know on average how many years hard drives last?

Are modern consumer hard drives likely to last as long as older drives did?
They're now cost-reduced and not built to the same standard IMO.

I've seen many >10 GB drives fail in under a year, but older (<4 GB) drives
seem to go on long after they're obsolete.

> Hard drives that take a lot of use like gaming, video editing and stuff
> like that?

So long as there's adequate cooling, the usage pattern shouldn't affect
drive longevity.

How cool is cool enough?

Check the manufacturer's specs, but as a general rule: keep it under 45C.
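
If you want to see where a drive actually sits, SMART reports the
temperature. A minimal sketch using smartmontools (the device path and
the Temperature_Celsius attribute name are assumptions; some vendors
report it differently):

import subprocess

# Needs smartmontools installed and enough privileges to query the drive.
out = subprocess.check_output(["smartctl", "-A", "/dev/sda"], text=True)

for line in out.splitlines():
    if "Temperature_Celsius" in line:
        # The raw value column usually starts with the temperature, e.g. "38".
        print("Drive temperature:", line.split()[9], "C")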

--
Mike
 
Guest

RPR <rohbeck@yahoo.com> wrote

> Just to clarify what Arno wrote:
> MTBF and average lifetime are unrelated.

> Failure rates of disk drives (or any mechanical devices) follow
> a "bathtub" pattern. The initially high failure rate (DOA, infant
> mortality) drops quickly (within hours/days) to a baseline, which
> stays more or less constant (generally for years under proper
> usage and conditions), then starts to rise slowly due to wear.
> Now the MTBF is the inverse of the failure rate in steady state,
> i.e. at the baseline of the bathtub curve. It is only the height of the
> bathtub bottom, not the length of the bathtub that determines MTBF.

It isn't even that. MTBFs are purely calculated, not measured,
and bear no real relationship to actual average life at all.

> The length is the lifetime, which is rarely specified (because
> manufacturers would get into too much hot water if they did.)

It isn't even possible to specify it. The best that can be done
is to see what the average life turns out to be, and that's quite
difficult to quantify, and that data is useless by then anyway
because you can't buy the drive new anymore.

> Generally the warranty period is a
> good indication of the expected lifetime

Like hell it is. 3 years was close to universal with mass market
commodity drives. Then most manufacturers decided that the
inevitable cost of the extra 2 years of warranty prevented them
from being as aggressive on the price of the drive, so most of
them changed to 1 year instead for that reason.

Now we have seen Seagate decide to offer 5 years in the hope that
that would increase their sales volume enough to come out ahead.

> since manufacturers like to brag about their warranty but don't
> want to spend much on warranty replacements of course.

It's much more complicated than that.

> You can have a very high MTBF, i.e. a very reliable drive, but it may
> wear out within a year (deep but short bathtub.) Conversely, you can
> have a population that dies slowly but constantly over 20 years (long
> but shallow bathtub - low MTBF but high average life.)

And you can get some duds like the IBM 75GXPs and the
60GXPs and the Fujitsu MPGs, which died like flies, and the
MTBF had absolutely no predictive value on that whatsoever.

> By definition, when the devices start to wear out, you're out of
> the steady state and those failures don't count against the MTBF.

You clearly don't have a clue about MTBF at all.

> MTBF is usually monitored by ORT (Ongoing Reliability Test)
> processes that take a sample of new drives (after burn in to
> get rid of infant mortality) and run them for a couple of weeks
> or months, so reliability is only proven for that long.

MTBFs aren't even measured at all like that.

> Experience, component data and field
> data are used to extrapolate from there on.

Wrong again.

> If some new component is introduced (which is the case for
> practically any new product), there may be surprises if the
> component doesn't meet what the supplier promised, or if
> the designers specified a component that's not quite right
> for the intended function, or if there are interactions that nobody foresaw.

And some drives, like the IBM 75GXPs and the 60GXPs, can turn
out to have a fundamental design problem that doesn't show up
for quite a while in the field, and the MTBF is completely irrelevant to that.

> Happens all the time.

Doesn't actually happen that often with hard drives.

> Just remember that wear generally rises
> with the exponential of the temperature.

That's utterly mangled as well.

> Switching a drive on generates a lot
> of heat from friction and inrush current.

And that is why the power cycles are
specified separately in the datasheets.

> I concur with Rod. The only drives that died on me over the last
> couple of years were Notebook drives. No surprise there (bleeding
> edge + power cycles + abuse). But I still have a couple 2 and 4 G
> SCSI drives running in my Linux boxes. They must be mid-90's vintage.

I've got some IDE drives of that size in dinosaurs that don't warrant
anything bigger, though I did bin the 2 GB because it's irritatingly slow.
The 4 GB is a bit marginal in that area, but since it only affects the
boot time and I don't normally boot, just return from hibernation,
I haven't bothered to replace it with something faster.
 

nick

On Tue, 2 Aug 2005 07:51:51 +1000, "Rod Speed" <rod_speed@yahoo.com>
wrote:

>RPR <rohbeck@yahoo.com> wrote
>
>> Just to clarify what Arno wrote:
>> MTBF and average lifetime are unrelated.
>
>> Failure rates of disk drives (or any mechanical devices) follow
>> a "bathtub" pattern. The initially high failure rate (DOA, infant
>> mortality) drops quickly (within hours/days) to a baseline, which
>> stays more or less constant (generally for years under proper
>> usage and conditions), then starts to rise slowly due to wear.
>> Now the MTBF is the inverse of the failure rate in steady state,
>> i.e. at the baseline of the bathtub curve. It is only the height of the
>> bathtub bottom, not the length of the bathtub that determines MTBF.
>
>It isn't even that. MTBFs are purely calculated, not measured,
>and bear no real relationship to actual average life at all.


The MTBF is the mean time between failures. It's an indication of how
often an administrator with a large number of drives ought to expect to
change a drive. Let's say the MTBF is 10,000 hours and he has 1,000 drives;
then one drive is expected to fail every 10 hours, on average.
It ought to be measured with a very large number of drives running at the
manufacturer, unfortunately not for a long time ...
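
A one-line version of that arithmetic (a sketch, assuming a constant
failure rate across the whole fleet):

mtbf_hours = 10000.0
drives = 1000

# With a constant failure rate the fleet fails at drives / MTBF per hour,
# i.e. one expected failure every MTBF / drives hours.
print("Roughly one failure every %g hours" % (mtbf_hours / drives))   # 10 hours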

Nick
 
Guest

Mike Redrobe <mike@redrobe.net> wrote
> Nero wrote

>> Does anyone know on average how many years hard drives last?

> Are modern consumer hard drives likely to last as long as older drives did?

Depends entirely on what you call 'older drives'.

> They're now cost reduced

Nope.

> and not built to the same standard IMO.

Bullshit.

> I've seen many >10 GB drives fail in under a year, but older (<4 GB) drives
> seem to go on long after they're obsolete.

Quite a few of those drives failed too.

>> Hard drives that take a lot of use like gaming, video editing and stuff like
>> that?

> So long as there's adequate cooling, the usage pattern shouldn't affect drive
> longevity.

The main exception to that is power cycles.

> How cool is cool enough?

> Check the manufacturer's specs,

I won't run at the max of those myself.

> but as a general rule: keep it under 45C.

But exceeding that short term, while still staying under 50C, is fine too.
 
Guest

Nero wrote:

> Does anyone know on average how many years hard drives last? Hard
> drives that take a lot of use like gaming, video editing and stuff
> like that? I know a hard drive can die after a year the same way it
> can die after many years, but I'm just trying to find out an average.

That's not "a lot of use". "A lot of use" is a datacenter where a few racks
full of drives are getting hammered 24/7.

The drive will last however long it lasts--typical failure modes in the real
world are electronics failures that have no relation to usage patterns and
physical crashes that again have no relation to usage patterns. Wear of
the bearings on the head positioning mechanism is seldom a factor, and that
is the only thing that would be affected by heavy use differently than by
light use.

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
 
Guest

Nick <nick@no-domain> wrote
> Rod Speed <rod_speed@yahoo.com> wrote
>> RPR <rohbeck@yahoo.com> wrote

>>> Just to clarify what Arno wrote:
>>> MTBF and average lifetime are unrelated.

>>> Failure rates of disk drives (or any mechanical devices) follow
>>> a "bathtub" pattern. The initially high failure rate (DOA, infant
>>> mortality) drops quickly (within hours/days) to a baseline, which
>>> stays more or less constant (generally for years under proper
>>> usage and conditions), then starts to rise slowly due to wear.
>>> Now the MTBF is the inverse of the failure rate in steady state,
>>> i.e. at the baseline of the bathtub curve. It is only the height of the
>>> bathtub bottom, not the length of the bathtub that determines MTBF.

>> It isn't even that. MTBFs are purely calculated, not measured,
>> and bear no real relationship to actual average life at all.

> The MTBF is the mean time between failures.

Duh.

> It's an indication of how often an administrator with
> a large number of drives ought to expect to change a drive.

Nope.

> Let's say the MTBF is 10,000 hours and he has 1,000 drives;
> then one drive is expected to fail every 10 hours, on average.

Wrong.

> It ought to be measured with a very large number of drives
> running at the manufacturer, unfortunately not for a long time ...

It ain't even that.
 
Guest

Hello,

>> The MTBF is the mean time between failures.
>
> Duh.
>
>> It's an indication of how often an administrator with
>> a large number of drives ought to expect to change a drive.
>
> Nope.
>
>> Let's say the MTBF is 10,000 hours and he has 1,000 drives;
>> then one drive is expected to fail every 10 hours, on average.
>
> Wrong.
>
>> It ought to be measured with a very large number of drives
>> running at the manufacturer, unfortunately not for a long time ...
>
> It ain't even that.

http://en.wikipedia.org/wiki/Failure_rate

Roman
 
Guest

Previously Mike Redrobe <mike@redrobe.net> wrote:
> Nero wrote:
>> Does anyone know on average how many years hard drives last?

> Are modern consumer hard drives likely to last as long as older drives did?
> They're now cost-reduced and not built to the same standard IMO.

> I've seen many >10 GB drives fail in under a year, but older (<4 GB) drives
> seem to go on long after they're obsolete.

Actually I think that the overall reliability has dramatically
increased. However, it seems to me that a decade or more ago drives
had a tendency to die either young or very old, while today they sort
of die all the time. Also, people have many more of them, which
certainly affects perception.

Arno

>> Hard drives that take a lot of use like gaming, video editing and stuff
>> like that?

> So long as there's adequate cooling, the usage pattern shouldn't affect
> drive longevity.

> How cool is cool enough?

> Check the manufacturer's specs, but as a general rule: keep it under 45C.

I would advise < 45C when under heavy load. That comes down to
something like 20C...35C under no load, depending on your set-up.

Arno
 
Guest

>> MTBF is usually monitored by ORT (Ongoing Reliability Test)
>> processes that take a sample of new drives (after burn in to
>> get rid of infant mortality) and run them for a couple of weeks
>> or months, so reliability is only proven for that long.
>MTBFs aren't even measured at all like that.

That's how we verify MTBF at my employer. I get the ORT reports every
month and convey the info to my customers.
How do you guys do it, assuming you also work in the mass storage
industry?

Ralf-Peter
 
Guest

RPR <rohbeck@yahoo.com> wrote:

>>> MTBF is usually monitored by ORT (Ongoing Reliability Test)
>>> processes that take a sample of new drives (after burn in to
>>> get rid of infant mortality) and run them for a couple of weeks
>>> or months, so reliability is only proven for that long.

>> MTBFs aren't even measured at all like that.

> That's how we verify MTBF at my employer.

'verify' isn't the same thing as the original calculation of the MTBF.

> I get the ORT reports every month
> and convey the info to my customers.

That's a separate matter entirely from the MTBF, which doesn't change in the datasheet.