Can Heterogeneous RAID Arrays Work?

Page 2

homerdog

Distinguished
Apr 16, 2007

Absolutely.
 

Eloquent

Distinguished
Feb 6, 2004
Uh... no, it couldn't "easily take a week" to replace the drive. If you can't find one locally, you can Newegg it with overnight or two-day shipping.

Some of us use the enterprise warranty on our drives, which is why it's annoying that Seagate's web site now states "advance replacement with a credit card is no longer available, so you'll have to send in your drive first." The shipping they provide is ground, or you can pay $25 for second-day air. So if I ship it ground from CA to the east coast, and they ship the replacement ground from the east coast back to me, that's being without the drive for two weeks. If all I did was buy a new drive whenever one started to fail, I wouldn't worry about the warranty.

As you may know, modern drives are actually designed to "wear out": some lubricated surfaces have a finite wear life, and that is essentially what produces the MTBF rating. Of course, the real lifetime may be nothing like the MTBF. Whether a given mechanism is any good is only understood years after that generation ships, and if you buy two current-generation drives from the same manufacturer at the same time, you run the risk that both share an unknown flaw. It's my data; I'd rather not take that chance.
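For a sense of scale: under a simple exponential failure model (an assumption; real wear-out failures aren't constant-rate), a spec-sheet MTBF translates to a surprisingly low predicted failure probability, which is exactly why observed failure rates can look nothing like the rating. The 1.2M-hour figure below is illustrative, not a quoted Seagate spec:

```python
import math

# Minimal sketch: probability a drive fails within t hours under a
# constant-failure-rate (exponential) model with a given MTBF rating.
# The MTBF value here is an illustrative assumption, not a real spec.
mtbf_hours = 1_200_000

for years in (1, 3, 5):
    hours = years * 8760
    p_fail = 1 - math.exp(-hours / mtbf_hours)
    print(f"{years} yr: P(failure) ~ {p_fail:.1%}")
```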

Anyway, if you "never" lose more than one drive at a time within a warranty replacement period, then RAID 6 isn't necessary. However, people who have run RAID 5 and then lost a second drive while the first was offline (a window of anywhere from a few days to a few weeks) now run RAID 6.
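To put a rough number on that degraded window, here's a back-of-envelope sketch. It assumes independent failures at a constant annual rate; the 3% AFR and the drive counts/windows are illustrative assumptions, not measured figures:

```python
# Probability that a second drive fails while a RAID 5 array is degraded.
# Assumes independent failures at a constant annual failure rate (AFR);
# the AFR and the (drives, window) pairs below are illustrative only.
afr = 0.03  # assumed annual failure rate per drive

for n_surviving, window_days in [(3, 3), (3, 14), (7, 14)]:
    p_one = afr * window_days / 365             # per-drive prob. in the window
    p_second = 1 - (1 - p_one) ** n_surviving   # at least one survivor fails
    print(f"{n_surviving} surviving drives, {window_days}-day window: "
          f"~{p_second:.2%} chance of a second failure")
```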
 

NIB

Distinguished
Apr 28, 2004
All of which use the same friggen disks

What disks? What brand? Are you sure they're not rebranded Seagate/WD/whatever?

RAID will run degraded for as long as it takes to replace the drive, which, depending on circumstances, can vary from half an hour or so to maybe ten days for a home user who has to mail the drive in without cross-shipping.

The larger disks become, the longer it takes to reconstruct the failed disk's data onto a spare (especially when the system is online and carrying its normal workload). Performance hasn't increased at the same rate as capacity, so the window in which you might lose two disks from the same RAID is much longer than it was five years ago.

Don't forget that it isn't only how long it takes you to replace a disk in a RAID, but also how long it takes to reconstruct the data onto that disk.
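A quick sketch of why that window has stretched: capacity has grown far faster than sustained transfer rate, so even the best-case (idle array) rebuild time keeps climbing. The sizes and speeds below are rough generational assumptions, not benchmarks:

```python
# Best-case rebuild time = drive capacity / sustained transfer rate.
# A loaded production array rebuilds far slower than this idle lower bound.
generations = [
    ("~2003, 120 GB drive", 120e9, 50e6),  # (label, bytes, bytes/sec - assumed)
    ("~2008, 1 TB drive",   1e12,  90e6),
]

for label, size_bytes, rate_bps in generations:
    hours = size_bytes / rate_bps / 3600
    print(f"{label}: best-case rebuild ~{hours:.1f} h")
```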
 

boonality

Distinguished
Mar 8, 2008
NIB, those disks could easily be rebranded or even refurbished; vendors do what they can to squeeze us as customers. I'm certainly not doubting your statement; in fact, I'd say you have a very valid point. As far as reconstructing the disks goes, though, we're only talking a matter of a few hours even for 1TB drives. The longest I've ever seen a mirror take to rebuild (on a hot swap) was about 12 hours, and that was for a drive in a 2.2TB LUN. I don't know how big the disk was, but I doubt it was any bigger than 250GB, since we're talking about 15K RPM SCSI disks in an old Dell PowerVault. Most mirrors on, say, a 74GB 15K RPM SCSI disk take an hour or two to rebuild after hot-swapping a bad disk.
 

gzzz

Distinguished
Jan 4, 2009
Hi all,
I stumbled on this thread because I was running a RAID 1 with two Seagate 400GB drives. One of them failed, and when I RMAed it, Seagate gave me a 500GB drive.

Since I'm running RAID 1, does that mean running a heterogeneous array won't really affect my performance that much?
Reading the posts above, it would seem a heterogeneous RAID 1 would probably be better?
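For what it's worth, a RAID 1 with mismatched members generally sizes itself to the smaller disk and writes at the pace of the slower one, while reads can be served by either member. A minimal sketch of that arithmetic, with sizes and speeds that are illustrative assumptions rather than measurements of those Seagate models:

```python
# RAID 1 with mismatched members: usable capacity is limited by the smaller
# disk, writes by the slower disk; reads can be served by either member.
# Sizes/speeds below are illustrative assumptions, not specs of these drives.
members = [
    {"name": "Seagate 400 GB",       "size_gb": 400, "seq_mb_s": 65},
    {"name": "Seagate 500 GB (RMA)", "size_gb": 500, "seq_mb_s": 75},
]

usable_gb = min(m["size_gb"] for m in members)
write_mb_s = min(m["seq_mb_s"] for m in members)
read_mb_s = max(m["seq_mb_s"] for m in members)

print(f"usable: {usable_gb} GB, writes ~{write_mb_s} MB/s, "
      f"single-stream reads up to ~{read_mb_s} MB/s")
```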
 
