Class-Action Lawsuit Against Seagate Built On Questionable Backblaze Reliability Report

It isn't just consumer drives that are crap. I have personally had several Seagate drives of various sizes fail, often with firmware failures. I work with servers. We started getting Seagate drives dropping like flies on large SANs from Dell; we were replacing them every couple of months and rebuilding the RAID. Dell was using "enterprise" level Seagate drives and they were failing too. Talking with Dell engineers, they basically admitted it was a firmware issue. How about that: the same issue that was on their consumer models. Around that time I went to Dell's training HQ in Round Rock and their VM training servers had gone completely down due to drive failures in the SANs. Imagine that.
 


The article is not covering the reliability rates of all drives; it is covering a lawsuit that is based on spotty information from one company. Why include useless information that has no bearing on this?

The point of the article was to expose a frivolous lawsuit utilizing very dodgy statistics, not to talk about the reliability rates of all hardware.
 


Lucky you. We have nothing but Dell PowerEdges and SANs, and our drives are a mix of WD, Seagate and Hitachi, whatever they decide to use. We have had all three die out.

There is no way you have that level of failure rate; that is too consistent. The odds of failure are the same for everyone, and unless you are just that unlucky you would not have Seagates keep dropping.
 
Wow. I would place as much faith in the Backblaze report on HDDs as I would in a "real life" failure test on shock absorbers/struts that drove cars around a track with speed bumps every 100 yards for 100 laps a day.

The only thing Backblaze did right was not take their state-of-the-art mounting system and invert it, remounting the drives when they fell on the floor.

It's a good thing they didn't show you the gerbil-wheel power generation systems, so you wouldn't suspect power issues.
 
On a side note, I recently trashed a 1TB Barracuda that kept coming up in chkdsk, then wasn't recognized after boot. I don't know how old it was. I replaced it with a 2TB Barracuda that was in a LaCie external drive I wasn't using anymore; I will see how that goes. My Samsung Spinpoint has been solid for a while. No major biases, but I don't think of Seagate as my first choice for purchases; I go with WD instead.
 


Amusingly enough, the company I work for supports several SOHO Linux servers that run software (mdadm) RAID 1 with Blue, Red and Black drives. (Won't talk about Green since we don't buy them because of that annoying power saving feature.) Blues and Reds both hold up in the RAID just fine... yet each Black (I know of three) had to be sent for warranty at least twice by now. Never touching those drives again. (Still prefer WD over Seagate though, free courier pickup and drop-off beat going to a service center by a long shot)
 

What are you trying to cherry-pick here? Nearly all the 2TB and 3TB drives were installed in Pod 2.0, yet of those 10 models only one hit above a 10% first-year failure rate. Only two models hit above the 7-8% three-year failure rate, and BOTH of those models were manufactured by Seagate. Now, the 10% failure rate on the 2TB Seagate isn't too bad, but the almost 30%?! failure rate on the 3TB Seagate is horrid. Now you'll scream about it being the Pod's fault, but if that were truly the case you'd see growth year over year. I mean, if vibration really is the #1 source of the problem, then failure rates should increase over time, as cumulative vibration damage compounds with normal wear-out.

If you analyze the data you'll see that most of the models follow that trend of a slow rise in failure rates year over year. However, there are a few anomalies. The Seagate 3TB XT failure rates actually DROP year over year, which is in stark opposition to the vibration theory (the Toshiba 3TB also trends like this). The second anomaly is the Seagate 3TB 7200.14, which increases in the second year and then DROPS significantly in the third year. Also, when you look at the first-year failure rates (the highest failure window for poorly manufactured drives), Seagate owns the #1, #2, and #4 spots, with Toshiba stealing away #3.
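For anyone who wants to sanity-check figures like these, here is a minimal sketch (Python, my own illustration with hypothetical numbers, not Backblaze's actual tallies) of how an annualized failure rate is typically computed from failure counts and drive-days:

def annualized_failure_rate(failures, drive_days):
    # AFR (%) = failures / (drive_days / 365) * 100
    drive_years = drive_days / 365.0
    return 100.0 * failures / drive_years

# Hypothetical per-model, per-year tallies: (failures, drive_days)
models = {
    "Model A, year 1": (120, 1_500_000),
    "Model A, year 2": (95, 1_100_000),
    "Model B, year 1": (900, 1_200_000),
}

for label, (failures, days) in models.items():
    print(label, "->", round(annualized_failure_rate(failures, days), 1), "% AFR")

Run that calculation per model per year and the year-over-year trends I'm describing fall right out of the published data.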

It's really hard to argue the Pod is the source of all the problems when specific drives are the outliers. If more models had failed as badly as the Seagates, I'd be willing to buy the vibration theory. However, as another commenter pointed out, the drives WITHOUT head-parking features that protect against things like vibration and bumps actually performed the BEST in the study! That's just further evidence the vibration theory is being pushed by Seagate fans or apologists in an attempt to disqualify the failure rates.

If you want to argue the study is faulty you need a better theory, like other commenters have offered, such as normal consumer-drive features accelerating drive failure in a server environment. That said, with pricing similar between WD and Seagate for economy drives, why would you not go with a drive that performs at least as well in a 'pristine' environment and better in an 'abuse' environment? That's what we call a win-win situation: not buying certain Seagate models.


Pins hold a physical position but they do NOT secure a drive against vibration. It's impossible, because the pin has to be significantly smaller than the screw hole or you'd risk permanently damaging the screw threads, and you have to allow for alignment errors in the holes. In fact, because the pin is round there is a very, very small point of contact, which means the pin design is going to magnify drive vibration. Sure, that vibration won't affect the SATA connector, but we've already disqualified that notion as the failure point because of the year-over-year failure rates.


Do you know anything about electronics and individual components? Power-cycling failure is common everywhere because electronics are more susceptible to surges of power than to steady-state operation. But hey, I'll quote Google's Failure Trends in a Large Disk Drive Population report:
"higher power cycle counts can increase the absolute failure rate by over 2%"

Guess what failure rates Google saw?
An early failure spike in the first 3 months, dropping a bit by the end of the first year, then increasing in years two and three. Funny, very similar to the Backblaze data. The percentages? Around 2% at year one and around 7.5% at year three (reading off a graph). Again, similar to Backblaze, so the Pod design must not play a huge factor in the cause of failure.

Another choice quote "Failure rates are known to be highly correlated with drive models, manufacturers and vintages. Our results do not contradict this fact." So yes, some drives are just worse than others, Backblaze was just the first to dime them out.
 


Relevance? If you read the beginning pages, you will find the minimum thresholds required for inclusion in the reporting. As long as the drives were sold in sufficient quantity to be statistically reliable, what is the relevance of how "popular" each one was? The basis of this lawsuit is that the drive in question failed to sustain an acceptable rate of reliability; it was not a popularity contest.

For the claimant to "make a case", he has to prove that the drives in question failed at a rate higher than the accepted norm in the industry. Whether one manufacturer sells 20,000 drives, 100,000 drives or 100 million drives is irrelevant.
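To put a number on "statistically reliable": here is a rough sketch (Python, my own illustration, not something from the report) of how the uncertainty around an observed failure rate shrinks with sample size, which is exactly why unit volumes past a reasonable threshold stop mattering:

import math

def failure_rate_ci(failures, n, z=1.96):
    # Approximate 95% confidence interval for a failure proportion
    # (normal approximation to the binomial).
    p = failures / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Hypothetical model with an observed ~5% annual failure rate
for n in (500, 20_000, 1_000_000):
    lo, hi = failure_rate_ci(round(0.05 * n), n)
    print(n, "drives:", round(lo * 100, 2), "to", round(hi * 100, 2), "% AFR")

At 20,000 drives the interval is already only about plus or minus 0.3 percentage points; selling 100 million adds precision that no longer changes the conclusion.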

As for cars... if you wanted to make a comparable analysis, you would have to weigh the failure rate of a single component in the car.





The problems here are:

1. You have Dell (or whomever) basing their buying decisions on whichever HDD manufacturer submits the low bid. Again, as it's not the manufacturer name but the model line that is the better quality indicator, is it a low-quality line or a higher-quality line? Opinions on reliability are rarely based upon actual data. How many "discussions" have we read here on these forums arguing whether laptop Brand A is better than laptop Brand B, and vice versa? And the reality is that neither A nor B actually makes a laptop; both buy their laptops from the same ODM, and they are made in the same factory using the same component suppliers.

2. IBM's testing (circa mid-90s) showed that a 10°C increase in temps can result in a reduction in HDD life of up to 50%. As with Backblaze's hokey rubber-band setups, one has to wonder whether adequate air throughput is being provided in these areas.

But while the proximate issue in this case will be the performance of consumer drives in consumer (not server) environments, the elephant in the room is the decision by Backblaze to use drives equipped with the "head parking" protection feature in a server environment, and the belief by many that this somehow has relevance to consumer drives in consumer applications.

Again, this goes to the brand loyalty issue, whereby opinions are often held based upon anecdotal experience without any basis in statistical fact. Like political commentary, anything that supports the unsubstantiated opinion, applicable or not, is given great significance, while anything contrary, no matter how unassailable, is summarily dismissed. That's how we wind up with silly lawsuits like this.
 


Using the Backblaze info is not relevant to the case, then. They are using consumer drives in a server environment, not their intended use. That is one of my points.

My other point is that no one drive company is superior. Each has had their own lines that fail horribly and each has had their own lines that do very well.

That was my point, that it is another frivolous lawsuit.
 
I have installed and used all types of hard drives in all sorts of workloads for several decades. This general "Seagate drives are bad" mentality comes from people who don't understand what they're doing. Seagate drives are perfectly fine; I've had some last 10 years, and only one that died an early death. I knew an admin who ran compute clusters using Seagate drives exclusively because the warranty was painless, and if you lose a cluster you're getting lots of angry phone calls. I've also used WD and Maxtor drives without any problems, and this is in things like DVRs, servers, and workstations. The only drives I ever had issues with are Hitachis.

I've found that when people blame a manufacturer, it's usually that the HDD is dying because of another fault in the system that isn't being diagnosed. As an example, we had a server that killed a Seagate, a WD, and a Hitachi; each drive lasted about 2 weeks before it died. Normally drives get replaced with the same type and you would blame the drive. It turned out the drives were fine; it was the PSU that was killing them.

So those claiming any brand is better than another really haven't got a clue.
 

There is some validity to your statement, but you're doing the same thing you're accusing everyone else of doing: presenting anecdotal evidence as fact. You're saying Seagate is perfectly fine because the few you've handled didn't die at an abnormally high rate. The real problem sits with particular models, and manufacturers do mess things up for an entire product line. This isn't a Backblaze theory either; Google confirmed it with their data, and so have other drive studies.

Case in point: back in 2009 the Barracuda 7200.11 had a massive firmware issue. This issue persisted for over a year in manufacturing before Seagate admitted it. To top it off, the issue affected three other models as well. They did finally offer a firmware upgrade, but that was after millions of drives had been manufactured and placed in the wild.

Does that mean every Seagate drive is bad? No; in fact, Seagate used to be a leader in reliable, quality hard drives. However, in the great hard drive buyout period both WD and Seagate bought other companies and continued using those third-party facilities for quite a while. Seagate had the misfortune (or made the stupid mistake) of buying Maxtor, who had a rather spotty track record as a manufacturer. In the end, when you want to be a name brand and sell drives off your name, you have to deal with the consequences when you make a bad product. The best solution is probably not taking that drive from the new factory you just bought and slapping your label on it without an engineering review, quality control review, and product testing.

You also don't make a good case for your admin friend running Seagate clusters. The whole reason he ran them was the "painless warranty" which is kind of amusing. "I buy Seagate because I know they fail, and Seagate knows they fail, so I get hassle free swapouts."

The difference between clusters, static storage, and datacenters like Google and Backblaze is that only the datacenters offer large enough numbers to find trends. Even something like a cluster could easily be skewed by getting a good or bad lot from the manufacturer. Short of that, the only other option is crowdsourcing things like store reviews to check for models that appear problematic.
 


And Google once released a study showing that "temperature and activity levels were much less correlated with drive failures than previously reported". So who should we trust? Sure, things should run as cool as possible, but I am very hesitant to believe that 10°C can make that much of a difference for a modern consumer HDD.




I disagree. First of all, I'll still take WD over Seagate any day even if Seagate *was* as reliable as WD (they aren't), simply because of the free in-warranty courier service, which Seagate does not offer. Second, throughout the last 3 years, 90% of our support tickets related to failing hard drives were Seagates in a hurry to kick the bucket within a month after their warranty ended, and none of them were 7200.11 series. And while the WD drives I've had to send in for warranty usually had "soft" issues like bad sectors (or even just pending ones that would refuse to be remapped OR written to) or dropping from RAID for no apparent reason (and subsequently failing WD diagnostics, but not SMART), Seagate drives died with scratching sounds, failing motors or, in one case, a smoking PCB. Can I blame any other component in the system? Hell no; we use quality PSUs (mostly Corsair CX430), proper mounts, and quality electrical setups (UPS everywhere). The only good Seagate drive I know is the one in my personal rig that's been holding up like a boss since I bought it in 2011, before I knew anything about the market. It still won't make me consider another drive from them ever again.
 
What are you trying to cherry-pick here? Nearly all the 2TB and 3TB drives were installed in Pod 2.0, yet of those 10 models only one hit above a 10% first-year failure rate. Only two models hit above the 7-8% three-year failure rate, and BOTH of those models were manufactured by Seagate. Now, the 10% failure rate on the 2TB Seagate isn't too bad, but the almost 30%?! failure rate on the 3TB Seagate is horrid. Now you'll scream about it being the Pod's fault, but if that were truly the case you'd see growth year over year. I mean, if vibration really is the #1 source of the problem, then failure rates should increase over time, as cumulative vibration damage compounds with normal wear-out. If you analyze the data you'll see that most of the models follow that trend of a slow rise in failure rates year over year. However, there are a few anomalies…

This is not cherry-picking; it is refusing to accept partial data as fact. If the data is split out by chassis revision, where is it? Have you downloaded the data? I have. There is no indication of which drives are in which pods, and when company representatives responded to my questions they indicated there is no data on which pods the drives failed in.

We aren't talking about normal vibrational stresses that build up on these components over time; this is almost catastrophic, which could explain the massive losses in the early stages of deployment. If the problem stems from placing the entire weight of the drive on the SATA connector and then vibrating it, we would expect the devices that cannot sustain that type of ridiculous condition to fail very quickly.

This could be due to the method by which the PCB is connected to the SATA port, or to different materials employed in the design. If one manufacturer has a more robust connector than another, that is great news, but it is irrelevant to an end user. There could even be variations between units of the same product; it would depend upon the manufacturing and materials, which for all we know could vary.

The SATA port is not designed to be the mount point for an HDD in any case, so if that is the source of even 5 failures, they are irrelevant. Perhaps we should hit them with hammers and add those drives to the list as well? At the very least, the base configuration and installation should be sufficient and fall within manufacturer guidelines to be considered as a point of comparison.

This article is coverage of the lawsuit. I employed one easy-to-understand example, and a picture, that laypeople can grasp quickly. As the article states, this is one among many, many concerns with the techniques employed by Backblaze. The article does not state that vibration is the only reason to be skeptical of the results; there are many, and vibration is just one of the likely sources of failure. In either case, your statements have done nothing to remove unnatural amounts of vibration as one of the likely culprits behind the failure rates.

It's really hard to argue the Pod is the source of all the problems when specific drives are the outliers. If more models had failed as badly as the Seagates, I'd be willing to buy the vibration theory. However, as another commenter pointed out, the drives WITHOUT head-parking features that protect against things like vibration and bumps actually performed the BEST in the study! That's just further evidence the vibration theory is being pushed by Seagate fans or apologists in an attempt to disqualify the failure rates.

It is very easy to argue that the pod is a source of the problem. The company itself has stated, specifically, that its Pod 3.0 has fewer failures than Pod 2.0, specifically because of reduced vibration. That is their comment, not mine; they already stated the painfully obvious, but they cannot quantify it by releasing data that shows what the actual statistical impact was.

Also, drives from other manufacturers failed well above the normal rate too, which is why you will see every single vendor, even those that fared better than others, distancing themselves from these reports. There are no winners. I inquired with the other companies when the reports first began, and none had anything to say. No comment. Don't you think the "winners" would be happy to point out just how relevant the data is?

None do, because this is some of the most technically inaccurate data one can gather short of hitting the drives with hammers.

Pins hold a physical position but they do NOT secure a drive against vibration. It's impossible, because the pin has to be significantly smaller than the screw hole or you'd risk permanently damaging the screw threads, and you have to allow for alignment errors in the holes. In fact, because the pin is round there is a very, very small point of contact, which means the pin design is going to magnify drive vibration. Sure, that vibration won't affect the SATA connector, but we've already disqualified that notion as the failure point because of the year-over-year failure rates.

Pins secure the SATA port from vibration, and also prevent the SATA port from bearing the weight of the drive, as stated in my post. I’m not sure of your point here.

Who is the “We” behind your statement in the last sentence here? Your statements have done nothing to remove the possibility that intense vibration is one factor behind these failures.

Do you know anything about electronics and individual components? Power-cycling failure is common everywhere because electronics are more susceptible to surges of power than to steady-state operation.

I asked if you had read Backblaze's opinion of their high rate of failure (which they assume is higher than normal, as everything in their environment is). I did not state that power cycling does not adversely affect failure rates, but I did include a theory on what could be magnifying the issue.

Another choice quote "Failure rates are known to be highly correlated with drive models, manufacturers and vintages. Our results do not contradict this fact." So yes, some drives are just worse than others, Backblaze was just the first to dime them out.

I agree that some drives are worse than others, and I don't think anyone has claimed that they are all similar in failure rates. The crux of this argument is that there are so many factors in the flawed deployment, some so blatantly obvious as to be mind-numbing, that we should question the results.
 

All I see is a list of maybes and what-ifs for why the Backblaze data might be flawed. Yet I see ZERO data supporting Seagate as a reliable brand, specifically for the models with horrid failure rates in the Backblaze data. All Seagate has to do is out their own failure rates from datacenters to put Backblaze in their place; it's not like they don't know these numbers. If Seagate had solid data proving the high-failure-rate models are in fact excellent, they would have released it already, and they haven't. Consumers also back Backblaze's 'problem' models with poor ratings at stores.

Until new data presents itself, the Backblaze figures are the most reliable and accurate hard drive reliability information available that identifies models and brands. It's up to the manufacturers to show their drives are a good product. Why do you think other datacenters don't release numbers on specific drives? Because Seagate et al. give them a hefty discount NOT to report that information.
 


There is hard evidence that the Backblaze environment is flawed; I even provided you a simple picture of just one reason. Even by the company's own admission, the data is not applicable outside its own use case.

By the company's own admission the chassis is flawed. What part of that do people not understand, when it's printed in black and white? Man, I don't get it.

I did not proclaim that Seagate is more or less reliable than anyone. It is possible they are better, worse, or just the same as some other vendor, but we need real data to quantify who is better or worse - not this.

I agree that the vendors hide data; I call that out all the time. The few that do release data, I applaud, and I even inject any released failure data into articles.

I have publicly denounced the lack of vendor-provided field failure rate data, and also publicly applauded the few who do provide it (Intel, OCZ). But, as I have stated before and will again, I don't believe we should accept obviously flawed data in the absence of good data.

No one wants to see another Google report more than I do. They know how to conduct a study, and actually use screws instead of rubber bands.

I think the public's willingness to embrace the Backblaze data stems either from misinformation (though it's really easy to get informed; just read their blog) or, more likely, from backlash against vendor silence on field failure rates. Personally, I think those that don't release data are probably getting what they deserve. But none of the vendors in the Backblaze list release it, so it's a culture of silence.

However, the data stickler in me says I have to point out the flaws. I refuse to copy/paste someone's press release without taking a deeper look. People always want their source of news to look deeper, but sometimes they put forth an incredible backlash when an outlet actually takes the time to investigate instead of just copy/pasting hogwash. Go figure.
 


I still do not see how a scenario that HDDs were not meant for is considered reliable and accurate information on an HDD's reliability. A consumer drive does not belong in a server or datacenter environment, much as you wouldn't run a consumer CPU with non-ECC RAM in a server or datacenter environment.

Could you imagine seeing consumer non-ECC RAM put into an environment it was not designed for, not even mounted properly to boot, and then using those failure rates? Do you think it would make sense? What about looking at the performance of ECC RAM in consumer applications? Guess what: it would perform well below consumer RAM, because ECC slows it down a lot.

That alone invalidates these results. The drives were not properly mounted, they were used in a scenario they were not intended for, and the company has even made statements to that effect while avoiding releasing other data.

I am not per se defending Seagate; I am calling out a lawsuit that is trying to use very sketchy data to make money for nothing.

At a shop I worked at, I had a customer, a dentist, who had 4 different drives fail at the same time: one in his desktop, two in his server, and one external he used as a backup. It sucked because he lost important client information, but that can happen. They were all WD drives. Does that mean he should use the Backblaze data to sue for his loss? No, he was just unlucky.
 
"They were all WD drives. Does that mean he should use the backblaze data to sue for loss?"

--You bring up one of the most relevant points of all. Technically, if this data were worth anything, you could sue ALL of the vendors using it. No one lived up to their specs, according to this data. The problem isn't the drives in this case; it's the environment.
 


This is more "enterprise" nonsense. ECC is an overrated excuse for overpriced RAM modules that most servers don't need. Bad example, but it goes well along the lines of the "reasons" Seagate put up...

 


So I assume you have worked on datacenters where the information has to be 100% correct, then? Most high-end servers and most company servers require ECC. Most lower-end, small servers do not.

But hey, go ahead and argue with years of IT professionals and people who design this stuff for a living.

My point is that using a product in unintended ways will return unintended results. Sure, you can do Baja-style racing with a Honda Civic. It will break faster than it would in its normal intended use, normal street driving.

I get it. You have it out for Seagate because the Dell servers you had used Seagates that failed. As I said, I have quite a few servers; hell, I am at one of our smaller sites and I have 3 720xds with 12 HDDs each and a SAN with 12 HDDs. If I go up to my office I have 5 servers, 2 SANs and an IronPort. I have had Seagates, Hitachis and WDs fail in them. It doesn't mean it is more relevant, because again, the data being used for this lawsuit is flawed data.
 

Sure, use case. If I test 50,000 cars by running them over potholes, even though the manufacturer doesn't recommend it, I expect higher than normal failure rates (just like the drives). However, when a couple of the models fail at exponentially higher rates than the others, you can reasonably conclude they aren't manufactured as well. A lot of people prefer items that are simply made better, even if that exceeds their normal use. Because why not, if the price is the same?


You don't get it because you think people are just being stupid. What you don't realize is that they see a product that can take more abuse and make the smart choice to purchase it, because it has a higher chance of lasting past the warranty period, with possible resale value that saves them even more money. Or maybe, like Backblaze, they're running an ad hoc RAID setup with cheap drives that aren't designed for servers. A good comparison is Intel's K processors, which are unlocked for overclocking. A lot of people buy a K and never overclock. Why? By your logic they're just being stupid. But those people look at the price, see it's virtually identical, and buy the K because it's got better thermal limits than the binned non-K CPUs. Safety, peace of mind, call it what you will, but it's hardly people just being stupid.


For the public it's more often vindication of the same reviews they left on stores for failed drives. For once it's not a small sample size and anecdotal evidence; it's a large-volume study, even if it's not laboratory conditions.

For me I don't see the vibration in the chassis as a big deal and I think it's very comparable to a lot of home scenarios. There are plenty of slide in hard drive docks designed for the home user to backup their computer and the home user is going to be moving the drive in and out far more often than in a server scenario. External enclosures can also have the same logic applied as they're moved and jostled. Plus, let's say it is related to stress on the SATA connector. Every time you connect a cable or remove it you place stress on the connector and PCB. Why wouldn't you want the one that holds up the best?

Think about it this way, I give you two options...
A) $100 performs functions A, B, C and withstands 10 pounds of force
B) $100 performs functions A, B, C and withstands 15 pounds of force

Now you should probably never run into a situation where you apply more than 8 pounds of force. So product A should be fine but why not get product B instead? It does the same thing, but it's just a little bit tougher. I mean, you probably won't ever apply more than 8 pounds of force but what about that one time you slip and apply 11 pounds?
 


An external drive is a different use case and is designed for different uses. As I said, there are drives for certain uses. The Seagate drives in consumer desktop models do not have, nor need, head parking, since most consumer desktops do not move and the HDD is mounted properly, using screws or an HDD bracket, not the SATA connector.

An external HDD is expected to be moved, so it is designed with that in mind, although that doesn't guarantee it will never die.

A WD Green should never, ever be used in a RAID. But if Backblaze used them in their RAID setups and the drives kept dropping, does that mean the drives are faulty, or that they were using the drives for an unintended purpose? It is the latter: taking a low-power consumer drive and using it in an unintended situation.

If you take a drive, do not mount it properly, and put it in a situation it was not designed for, it will fail. And I would expect an HDD without head parking, designed for a stable, non-moving environment, to fail in a situation it was not designed to be used in.

I find it interesting, since you guys run rampant when TH does a review that doesn't fit your standards, yet this data seems off to no one. Everyone just accepts it even though there are too many variables that are just off.
 

Who is "you guys"? You act as if you're different, but you want to discount the Backblaze data (in fact, you're looking for any possible loophole) even though Seagate (or anyone else) can't provide data to contradict the report. You come off as a Seagate fan, or as someone with an axe to grind against Backblaze.

Was the test a lab test? No
Did it put drives through torture test type conditions? Yes
Was it fair to all the drives involved? Yes (at least in the same storage capacities)

And that last one is really the key. Seagate fans can bemoan that the conditions were abusive and try to poke holes in the data, but the fact remains that certain Seagate models couldn't stand up to the competition. Why buy an inferior product even IF you might not duplicate the torture test conditions? An inferior product is still an inferior product.

But hey, if you buy on Newegg at least you'd get a buyer-beware from customer reviews: 3 eggs across 1511 reviews, with 35% of those users giving the drive 1 egg. Yeah, the Barracuda LP ST32000542AS got a 3-egg rating as well. Consumers know, Backblaze knows, and some people just live in denial.
 
The Backblaze 'study' cannot be considered scientifically rigorous; it has simply confirmed existing biases in the consumer community. This lawsuit should be thrown out as entirely frivolous and without merit.

From a scientific point of view, this is similar to the crazies who claimed the MMR vaccine causes autism. That was based on a similarly flawed 'study' that also lacked all scientific rigor.

Tbh, people who bring these kinds of garbage class-action lawsuits should have to pay costs when they are thrown out. This kind of crap needs to be discouraged; it's a waste of court resources and lawyers' fees.
 


I discount any method that tries to utilize a product for unintended purposes, yes. I will not accept any of the data for consumer-grade drives in an enterprise-grade situation. It is an unrealistic situation for the HDDs, and the mounting of said HDDs was done improperly as well.

I also don't trust "user" reviews on sales sites, because the reviews can be about the site itself. I have gone through bad reviews for products and found 1-egg reviews caused by FedEx/UPS not delivering on time, or by the price being too high, having nothing to do with the actual product or its performance.

My only point is that there is no one perfect drive manufacturer. I just had a drive fail in one of my servers at work; it is a WD. I have WD, Toshiba and Seagate drives in my servers and have had drives from all of them fail, none more than the others.

This lawsuit is frivolous and a waste of people's time and money. It serves no purpose to anyone and is trying to utilize faulty data.

 