Seagate 8TB Archive HDD Review

Status
Not open for further replies.
Guest
Wow, 8 TB for about $250 seems great! I've used SMR drives. They're awesome if you write rarely and read often, and if your writes aren't massive. But if you write a lot, throughput eventually crawls from 200 MB/s down to 30 MB/s, and the head starts shuttling every few seconds between the persistent buffer and the shingled zones.

In a couple of years, SSDs will come in 10 TB+ capacities, but meanwhile this is a very good deal.

Also, a power loss can lead to corrupted sectors with SMR, so you have to be more careful.

Five of these in RAID 6 give 24 TB of storage space, very awesome.
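For what it's worth, the RAID 6 arithmetic checks out; a minimal sketch (drive count and size taken from this thread):

```python
def raid6_usable_tb(drives: int, drive_tb: float) -> float:
    """RAID 6 dedicates two drives' worth of space to parity,
    so usable capacity is (n - 2) * drive size."""
    if drives < 4:
        raise ValueError("RAID 6 requires at least 4 drives")
    return (drives - 2) * drive_tb

print(raid6_usable_tb(5, 8))  # five 8 TB drives -> 24 TB usable
```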
 

PaulAlcorn
Senior Editor
Feb 24, 2015
Actually, Seagate uses a section of the platter to back up the volatile cache, so these drives are less likely to experience data loss than a typical desktop HDD.
 
Guest
Yeah, except I've experienced it. It happens when power is lost while data is being transferred from the buffer (the on-drive buffer track) to a shingled area. It shows up as corrupted sectors, but they aren't truly damaged: after you use the drive for a while, the corrupted-sector count drops back to 0.

From some engineers:

1) After a power loss while running with the write cache enabled (i.e. normally; see hdparm for a description), you can get a full track (~2 MB, or about 500 sectors) of bad sectors: the old data was damaged when the overlapping track was written, but the new data hadn't been written yet. Those sectors stay bad until you rewrite them with valid data.
2) I've heard from the guy we work with at Seagate that they were worried about how long the startup code could take under certain failure-recovery situations, risking drive timeouts like the one you saw.
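A quick check of the numbers in point 1 (a sketch; the 4 KiB physical-sector size is my assumption, though it matches 4K-native drives of this generation):

```python
TRACK_BYTES = 2 * 1024 * 1024   # ~2 MB track, per the engineer's note
SECTOR_BYTES = 4096             # assumed 4 KiB physical sectors

# Sectors per track: close to the ~500 bad sectors quoted above.
print(TRACK_BYTES // SECTOR_BYTES)  # -> 512
```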
 

PaulAlcorn
Senior Editor
To my understanding, the data is still held in the media-cache section of the platter until it is committed to the new band in the home location. This is why the sectors show up temporarily as corrupted: the corrupted sectors are in the home location. However, a copy of the data still exists in the media cache, and during idle time, or when the drive basically gets around to it, it will rewrite the affected band, copying the valid data back from the media cache and thus 'repairing' the corrupted sectors. Did you experience permanent data loss, or just temporary? Of course, things don't always happen as they 'should' in real life.
 
Guest
In my case, it was permanent. 4,728 sectors were flagged as bad on a 5 TB model. Eventually the sectors were reclassified as OK, but the data loss and corruption were permanent. I have overall data redundancy and checksums, so I didn't actually lose any data because of this.
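The checksum safety net mentioned here can be as simple as recording SHA-256 digests and re-verifying them after a scare (a minimal sketch; the function and manifest names are mine, not from any particular tool):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks so large archives don't need to fit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_manifest(root: Path) -> dict[str, str]:
    """Map every file under root to its digest; diff two runs to spot corruption."""
    return {str(p): sha256_of(p) for p in sorted(root.rglob("*")) if p.is_file()}
```

Run `build_manifest` before the drive goes into storage and again after an incident; any differing digest pinpoints a damaged file.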

The drive is marketed as "Archive HDD", and in that context, it's excellent. Most people don't write that much non-sequential data for extended periods of time. For most consumer use, this drive is a very good choice.

The persistent cache is about 20-25 GB, so you can actually write a fair amount of random data without any performance degradation.
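That headroom is easy to estimate (a sketch; the 20 GB cache figure is from this post, while the 50 MB/s sustained random-write rate is purely an illustrative assumption):

```python
def seconds_until_cache_full(cache_gb: float, write_mb_s: float) -> float:
    """How long random writes can run before the on-platter persistent
    cache fills and throughput falls to shingle-rewrite speed."""
    return cache_gb * 1024 / write_mb_s

# ~20 GB cache absorbing an assumed 50 MB/s of random writes:
print(round(seconds_until_cache_full(20, 50) / 60, 1))  # ~6.8 minutes
```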
 

Eggz
Distinguished
Sep 3, 2013
Are these reliable simply for offloading system drives and then keeping in a safe until the next offload (maybe monthly)? I'm seeing reliability concerns, but I'm not sure whether they're tied to constant operation.

I just want to back up my RAID periodically to something that's offsite except during backups, and in the event I need to pull from an archive, during recovery.

I don't mind spending more on a helium drive from HGST if it means the data is safer, but is it?
 

JPNpower
Honorable
Jun 7, 2013
Meh, it has flaws. Less active usage may be better for reliability, but then again, constant operation means the drive is checked for errors continuously, so problems would surface sooner. So as usual, the answer is to use multiple drives/services: onsite, offsite, online, etc.
 

Eggz
That can't be the end answer. There'd be an infinite regress of backups: back up the backup, then back that up, and then back that up, and so on.

What's a good stopping point? It seems like backing up working files is sufficient. What would warrant backing up archives outside enterprise applications?
 

MidnightDistort
Honorable
May 11, 2012
The way I back up my files: I already have a 3 TB drive (my main backup), a 2 TB drive for the most commonly needed files, and 40-250 GB drives as my main drives. They're older drives, but at least if any of them dies it won't be a huge loss. A drive error check may be in order if you're concerned about data loss, and backup drives should be used only as backups, meaning they should only be running while backing up data. At least that's how I see it.

I've already lost a 120 GB drive (died), and my 160 GB drive no longer works properly, so I can't count on it for everyday use; eventually I'll need the 2 TB for everyday use. That means I'd like another high-capacity drive, which would be ideal, since several 80 GB drives are all I'm using right now, and with more TV shows on my 2 TB drive I'll need additional drives. I like the easy access, with no need to run the DVD.
 

JPNpower
It's your choice. Drives are inherently unreliable. If your data isn't really important, 1 or no backups is probably enough. If you care, add more backups. It's just statistics at this point. There is no perfect system.

What do I do? Two active sites (work/home) for important recent data, along with USB stick(s) and possibly cloud. Two passive sites (external HDDs) for important old data, and one passive site for less important old stuff. Some long-term data like photos is uploaded to Backblaze as well.
 

Eggz
I think I'll impose a "two-copy" rule.

I'll just back up my RAID until it's full and I need to offload. Initially, one backup will be enough. But once all 4 TB of my RAID 1 is full, I'll have to delete things from the RAID to make space, which will leave only the archived copy unless I back that up.

So for now,

System SSD + Data RAID + RAID Backup

But then once I need to offload and make space on the RAID, it will be:

System SSD + Data RAID + RAID Backup + Emergency Recovery Archive

System SSD = 750 GB

Data RAID = 2 x 4 TB in RAID 1 (for single drive fault tolerance)

RAID Backup = Single 4 TB Drive

Emergency Disaster Recovery Archive = An 8 TB drive to hold twice as much as the Data RAID

That should keep things pretty safe if I keep the Emergency Disaster Recovery Archive in a fireproof safe. There's nothing worth backing up on the SSD.
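Sketching the tiers above as a quick sanity check (capacities in TB, taken from the post; this is just arithmetic, not any tool):

```python
system_ssd = 0.75       # 750 GB system drive (not backed up)
data_raid_usable = 4    # 2 x 4 TB in RAID 1 mirrors down to 4 TB usable
raid_backup = 4         # single 4 TB backup drive
edr_archive = 8         # 8 TB emergency archive, sized for 2x the RAID

# The archive really does hold two full generations of the RAID:
print(edr_archive >= 2 * data_raid_usable)  # -> True
```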
 

JPNpower
Do you commute to work or something similar? If you do, I suggest only doing the "RAID Backup" once a month or so, and bringing it to a secure place at work for a good offsite backup. Keep the EDR in a fireproof safe at home. In between the monthly backups, rely on USB flash drive(s).
 

Eggz
I think that will work, except I'm not sure any USB flash drive will have nearly enough space. My data source for backing up is 4 TB, which is about half full.
 

JPNpower
Go surf Amazon. 256 GB monster thumb drives are being sold for peanuts. Do you generate more than 256 GB of new data every few weeks to a month? I doubt it. But if so, use those 1 TB portable HDDs, which are also being sold for (relative) peanuts.
 

Eggz
I definitely could, but having 20 TB of drives should be fine, and I like the simplicity of having only four storage devices for the entire solution. I can see the USB sticks turning into a messy drawer where I'll probably lose drives.
 

JPNpower
It's less secure, though. The system you have protects only against random hard-drive failure/corruption. Your immediate data is not safe from a house catastrophe, power surges, etc. All you have is that "fireproof" archive and no offsite copy.
 