News WD Fesses Up: Some Red HDDs Use Slow SMR Tech

Large, cost-efficient SSDs can't get here soon enough! (Of course, then I'll be equally disappointed to find out even the TLC SSDs are being swapped out for QLC - flash's SMR-equivalent)
If you want large and cost-efficient, there's no way you're getting TLC. TLC is quickly becoming limited to just the more expensive, performance-oriented drives.

And flash makers are already talking about going to 5 bits per cell.

Also, you need to be aware that ever-denser flash chips have ever-shorter power-off data retention times. If you use a NAS for something like periodic backups and mostly leave it powered off, you're not going to want SSDs in it.
 
Wish you would have taken just a little more time to explain how shingled HDDs work.
Definitely check out the other posts in this thread. Especially #13 and #17.

I still can't get past how it's possible for a platter to have separate read and write "Tracks".
They don't. It's just that the read head can "focus" on a narrower area than the write head. So, it's like the write head has some "over-spray", to use spray paint as an analogy. The write of each concentric track therefore slightly overlaps the tracks next to it. If you always write from the inside out or the outside in, then part of each track will remain accessible.
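If it helps to make that concrete, here's a tiny toy model of the idea (the widths and pitch are made-up numbers purely for illustration, nothing like real drive geometry):

```python
# Toy model of shingled tracks: the write head is wider than the read head,
# so each newly written track overlaps the tail of the one before it.
# All widths here are made-up illustrative units, not real drive geometry.

WRITE_WIDTH = 3   # "over-spray" width of the write head
READ_WIDTH  = 1   # narrower area the read head can focus on
PITCH       = 2   # spacing between track starts; < WRITE_WIDTH, so tracks overlap

def write_span(n):
    """Physical span (start, end) covered when track n is written."""
    start = n * PITCH
    return (start, start + WRITE_WIDTH)

def read_span(n):
    """Narrow span the read head needs; it survives as long as no later
    track is written over it."""
    start = n * PITCH
    return (start, start + READ_WIDTH)

# Writing tracks in order 0, 1, 2, ...: each write clobbers the tail of the
# previous track, but the narrow readable part at the start of each survives.
for n in range(4):
    print(f"track {n}: write covers {write_span(n)}, read only needs {read_span(n)}")

# Re-writing track 1 *after* track 2 exists would overwrite part of track 2's
# readable span, which is why a shingled band must be written strictly in order.
```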
 
That's the sequential write speed. Sequential writes are not the issue with SMR - it's random writes, especially small ones.

Basically, whenever you write even 1 byte in a given ~100 MB chunk*, you have to read and re-write the whole thing (or at least everything from the block you touched onward).

So, if you're just using these drives to hold photos, music, and other stuff that you write once and basically don't touch, it'll probably be fine. If you're using them to hold drive images, which is a perfectly sequential operation, then they'd be great. But, if you're using them for data involving lots of small writes and frequent modifications and deletions, performance is going to be in the toilet.

* Note: I haven't found good info on exactly how big the chunks are. It's drive-specific and I think manufacturers are not forthcoming about it. However, the info I've seen suggests chunk sizes on the order of 100 MB.
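To put a rough number on that, here's a back-of-the-envelope sketch using the ~100 MB chunk size assumed above (again, real zone sizes are drive-specific and not published):

```python
# Rough sketch of the worst-case read-modify-write cost in a drive-managed
# SMR zone. The 100 MB zone size is an assumption, not a published figure.

ZONE_SIZE_MB = 100

def mb_rewritten(write_offset_mb, zone_size_mb=ZONE_SIZE_MB):
    """Worst case: everything from the modified block to the end of the zone
    must be read and written back, because later tracks overlap earlier ones."""
    return zone_size_mb - write_offset_mb

# Modifying a 4 KB block near the start of a zone can force ~100 MB of rewriting:
print(f"4 KB logical write at offset 0 -> ~{mb_rewritten(0):.0f} MB physically rewritten")

# An append-style write near the end of the zone costs almost nothing extra:
print(f"write appended at offset 99 MB -> ~{mb_rewritten(99):.0f} MB rewritten")
```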

The myth that SMR drives are slow at random writes is WRONG. If you have heard of LBA re-mapping (which is also used by SSDs), you will know that the slow-random-write issue can be largely hidden from the customer. With LBA re-mapping, an LBA is no longer fixed to a physical location; instead, it is dynamically mapped to one. In a random-write workload, only the LBAs are random: the SMR drive can record those writes to adjacent physical locations (PBAs). This makes random writes as fast as sequential writes - even faster than on a CMR drive.
LBA mapping comes with a cost: the SMR drive needs spare time to do housekeeping. It has to shuffle the LBAs back into sequential order (within large blocks, on the order of 100 MB to 200 MB). Without that step, a sequential (LBA) read is like a random read in the physical world, and performance will suffer.
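For anyone curious, here's a toy sketch of the kind of translation layer being described - not any vendor's actual firmware logic, just the general concept:

```python
# Toy sketch of drive-managed LBA-to-PBA remapping, as described above.
# Purely illustrative; real firmware is far more involved.

class ToySMRDrive:
    def __init__(self):
        self.mapping = {}    # logical block address -> physical block address
        self.next_pba = 0    # sequential write pointer on the media
        self.media = []      # pretend platter, only ever appended to in order

    def write(self, lba, data):
        """Random LBAs are simply appended at the sequential write pointer,
        so the physical write pattern stays sequential even when LBAs are not."""
        self.mapping[lba] = self.next_pba
        self.media.append(data)
        self.next_pba += 1

    def read(self, lba):
        return self.media[self.mapping[lba]]

drive = ToySMRDrive()
for lba in (9000, 17, 53021, 2):          # "random" logical addresses
    drive.write(lba, f"data@{lba}")

print(drive.mapping)    # scattered LBAs all map to consecutive PBAs 0..3
print(drive.read(17))
# The catch: a later sequential read of LBAs in order now hits scattered PBAs
# until the housekeeping pass described above restores a sequential layout.
```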
 
Do you have (or are you able to obtain) information on whether EFRX drives are out of production? Is it time for current owners of EFRX arrays to buy some spare drives to keep for future use?

The SMR drive needs spare time to do housekeeping.

Which means the drive can decide, of its own accord, to begin housekeeping and start clicking just when I want to sleep... Thanks, but no thanks.
 
Do you have (or are you able to obtain) information on whether EFRX drives are out of production? Is it time for current owners of EFRX arrays to buy some spare drives to keep for future use?



Which means the drive can decide, of its own accord, to begin housekeeping and start clicking just when I want to sleep... Thanks, but no thanks.

I don't think EFRX is out of production yet, but I think it is a good idea for current EFRX owners to keep some spare drives.
Device-managed SMR will begin housekeeping (garbage collection) whenever it sees fit. That's not a problem for typical desktop, laptop, or NAS users, because of their low workloads. TRIM support is also important for SMR drives, so that the drive knows which data is no longer needed and can return that space to the overprovisioning pool.
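Very roughly, that housekeeping pass does something like the following sketch (the zone contents, idle check, and TRIM handling here are illustrative assumptions, not measured drive behaviour):

```python
# Illustrative-only sketch of idle-time housekeeping on a drive-managed SMR disk.
# The idle check, zone layout, and TRIM handling are assumptions, not vendor facts.

def housekeeping(zones, trimmed_lbas, drive_is_idle):
    """Fold randomly placed live data back into LBA order and reclaim space
    that TRIM has marked as no longer needed."""
    if not drive_is_idle:
        return zones    # defer the work; this is the "when it sees fit" part

    cleaned = []
    for zone in zones:
        # Drop blocks the host has TRIMmed, returning space to overprovisioning.
        live = [(lba, data) for lba, data in zone if lba not in trimmed_lbas]
        # Rewrite the surviving data in LBA order so sequential reads are fast again.
        cleaned.append(sorted(live))
    return cleaned

zones = [[(9000, "a"), (17, "b"), (2, "c")]]
print(housekeeping(zones, trimmed_lbas={9000}, drive_is_idle=True))
# -> [[(2, 'c'), (17, 'b')]]
```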
 
Any which way, reluctant as I am, given that the performance can get bad enough in some circumstances to confuse the NAS into thinking the drive is going bad, I'm returning the pair of WD drives I bought. $16 return shipping.

Or maybe I should've just stuck with them and kept RMAing them any time the NAS thought a drive was bad, just because SMR was so slow.

I almost wish I had stalled a little longer still; I mean, I've put off this project for so long already. Irritating, to say the least. Even more so now that it's confirmed that Seagate and Toshiba are also doing it.
 
The myth that SMR drives are slow at random writes is WRONG.
I find it funny how you start out saying I'm wrong, as if to dismiss all of my concerns, but then proceed to enumerate a number of substantial caveats.

If you have heard of LBA re-mapping (which is also used by SSDs), you will know that the slow-random-write issue can be largely hidden from the customer.
Yes, I agree that it's too simplistic to state that random writes are slow. You're right that they can be hidden with remapping + garbage collection. I should instead say that sustained random writes are slow. So, it can't be used for something like a database server. And probably not even video surveillance, which is mostly sequential, but definitely 24/7.

LBA mapping comes with a cost: the SMR drive needs spare time to do housekeeping.
You're also right to point out that, prior to garbage collection, sequential reads of randomly-written data will be slow. However, this means the normal random-write penalty is merely shifted to reading - it doesn't disappear - at least, not without some idle time for the drive to get its * ahem * blocks together.

Thanks for the reply, but it in no way lets WDC off the hook for pulling this sneaky move and failing to let customers choose the drive technology that best suits their needs and budget. As the noted problems with ZFS have exposed, SMR is still not right for all SOHO file servers.
 
Do you have (or are you able to obtain) information on whether EFRX drives are out of production? Is it time for current owners of EFRX arrays to buy some spare drives to keep for future use?
You can always buy an enterprise drive, to replace a failed disk. I doubt we'll ever reach a point where every drive employs SMR. So, I wouldn't "stock up". Although, if your drives are currently 5400 RPM, this could mean you end up buying a more expensive and slightly hotter/louder 7200 RPM replacement.

I'll bet there aren't any physical differences between the SMR drives and current "CMR" models (other than the number of platters per TB). The SMR behavior is probably just enabled in firmware. So, manufacturers can continue to offer "CMR" to market segments requiring it and willing to pay the additional premium.

Which means the drive can decide, of its own accord, to begin housekeeping and start clicking just when I want to sleep... Thanks, but no thanks.
Not only that, but when the drive nears full capacity, your write performance will tank. You can see this with SSDs that use an SLC/MLC buffer. Except, since we're talking about a hard disk, the performance will be far worse.
 
Or maybe I should've just stuck with them and kept RMAing them any time the NAS thought a drive was bad, just because SMR was so slow.
That won't work. In order to get a return authorization, you have to run manufacturer-provided tests, which surely won't find fault with a drive that's still functioning according to their specs. Yes, this means you'll have to pull the disk from your NAS and install it in a Windows PC and run their chosen utility to see if they consider it faulty.

I am going through this process with another WDC drive that actually posted SMART errors.
 
Does anybody know if the WD Gold drives are still CMR?
I was getting ready to purchase 8 drives for a new RAID 10 build. I want to use WD Gold 2 TB drives, but now I'm unsure what to do.
Any thoughts on how to figure this out would be welcome.
The spec sheet on the Red Pro lists an "Interface Transfer Rate = 164 MB/s", while the Gold sheet lists a "Data Transfer Rate = 200 MB/s".

(If I've posted this in the wrong place let me know and I will post it where it belongs. Thanks)
 
Does anybody know if the WD Gold drives are still CMR?
I was getting ready to purchase 8 drives for a new RAID 10 build. I want to use WD Gold 2 TB drives, but now I'm unsure what to do.
Any thoughts on how to figure this out would be welcome.
The spec sheet on the Red Pro lists an "Interface Transfer Rate = 164 MB/s", while the Gold sheet lists a "Data Transfer Rate = 200 MB/s".

(If I've posted this in the wrong place let me know and I will post it where it belongs. Thanks)
WD Gold drives are 7200 RPM drives. There are no 7200 RPM SMR drives. That said, unless you're trying to win a competition for spending the most money on 8 TB of usable space, your choice of eight 2 TB drives in a RAID 10 configuration doesn't make any sense. The only reason to choose such a configuration is that it's not your money and you're being told to buy it for a company; otherwise, nearly $1,000 for 8 TB of space indicates bad planning.
 
Does anybody know if the WD Gold drives are still CMR?
I bought a 4 TB drive in January (WD4003FRYZ) that's definitely not SMR. It's been a good drive, so far.

As mentioned before, there are use cases that will never work for SMR - mostly enterprise-type workloads. So, if WDC ever does ship SMR Gold drives, I don't expect them to be subtle about it.

I was getting ready to purchase 8 drives for a new RAID 10 build. I want to use WD Gold 2 TB drives, but now I'm unsure what to do.
Uh, for one thing, I'd use fewer larger drives, unless you really need that amount of throughput. An array with fewer drives will be more reliable and possibly also cheaper.

I'm also a big advocate of RAID-6, if your controller or enclosure supports it. RAID-6 is definitely better than RAID-5 + hot spare, because the latter setup gives you no redundancy during the rebuild. With RAID-6, you'll still have redundancy until you can replace the failed drive.

So, if you were planning on a RAID-6 with 8x 2 TB drives, then you'd need only 5x 4 TB drives for the equivalent amount of usable space (i.e., 12 TB). For 8 TB of usable space, you'd only need 4x 4 TB drives - and you could lose any two of them without losing data (the same is not true of RAID 10).
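If you want to sanity-check the arithmetic yourself, it's simple enough to script (the drive counts and sizes below are just the examples from this thread):

```python
# Quick usable-capacity check for the configurations discussed in this thread.
# Ignores formatting overhead; plug in your own drive counts and sizes.

def usable_tb(drives, size_tb, level):
    if level == "raid10":
        return drives // 2 * size_tb      # half the drives are mirrors
    if level == "raid5":
        return (drives - 1) * size_tb     # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * size_tb     # two drives' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

print(usable_tb(8, 2, "raid10"))   # 8 TB  - the proposed 8x 2 TB RAID 10
print(usable_tb(8, 2, "raid6"))    # 12 TB - the same drives as RAID-6
print(usable_tb(5, 4, "raid6"))    # 12 TB - fewer, larger drives
print(usable_tb(4, 4, "raid6"))    # 8 TB  - and you can lose any two drives
```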

(If I've posted this in the wrong place let me know and I will post it where it belongs. Thanks)
You could post in the storage forum, but part of your question is directly in line with the subject of the article, and we often have digressions that are way more off-topic, anyhow.
 
WD Gold drives are 7200 RPM drives. There are no 7200 RPM SMR drives.
Perhaps right now, but there's no fundamental reason for it. At any point, they could pull a similar sketchy move on some of the 7200 RPM models.

It might not be very likely, since 7200 RPM is more of a performance-optimized segment, whereas lower-RPM models are more focused on optimizing GB/$, but who knows?

your choice of eight 2 TB drives in a RAID 10 configuration doesn't make any sense.
Yeah, depending on needs (e.g. throughput) and constraints (such as RAID levels supported by the controller), there are probably better options.
 
WD Gold drives are 7200 RPM drives. There are no 7200 RPM SMR drives. That said, unless you're trying to win a competition for spending the most money on 8 TB of usable space, your choice of eight 2 TB drives in a RAID 10 configuration doesn't make any sense. The only reason to choose such a configuration is that it's not your money and you're being told to buy it for a company; otherwise, nearly $1,000 for 8 TB of space indicates bad planning.

Thanks for your reply and observations. One point: This is for a server where a large number of VERY LARGE video files are stored/edited. My current RAID 5 array is 5.5 TB and is running out of space, not to mention obsolete. Most of what's on the RAID 5 is soon to be completed and archived to make room for new work. That said, we still need more space. I was going to use RAID 6, and may still, but RAID 10 seemed like a better solution. Cheers.
 
I bought a 4 TB drive in January (WD4003FRYZ) that's definitely not SMR. It's been a good drive, so far.

I'm also a big advocate of RAID-6, if your controller or enclosure supports it. RAID-6 is definitely better than RAID-5 + hot spare, because the latter setup gives you no redundancy during the rebuild. With RAID-6, you'll still have redundancy until you can replace the failed drive.

So, if you were planning on a RAID-6 with 8x 2 TB drives, then you'd need only 5x 4 TB drives for the equivalent amount of usable space (i.e., 12 TB). For 8 TB of usable space, you'd only need 4x 4 TB drives - and you could lose any two of them without losing data (the same is not true of RAID 10).

Thanks for the info and for clarifying a misconception for me. In doing research for replacing my current RAID 4 + 1 hot spare array, it seemed like RAID 10 was the way to go, both for reliability and recovery and also to give me more room. My little video editing and production operation REALLY needs more data space for development. My original plan was a 7-disk RAID 6 array + 1 hot spare, or 10 TB (if I did the math right). Either array would be managed by a MegaRAID 9361-8i and backed up daily. After reading your reply, I'm going to rethink the whole thing. Thanks again.
 
Thanks for your reply and observations. One point: This is for a server where a large number of VERY LARGE video files are stored/edited. My current RAID 5 array is 5.5 TB and is running out of space, not to mention obsolete. Most of what's on the RAID 5 is soon to be completed and archived to make room for new work. That said, we still need more space. I was going to use RAID 6, and may still, but RAID 10 seemed like a better solution. Cheers.
Your video editing rig's scratch drives and your storage drives should not be in the same array. You indicated that you have a method for archiving your work, so you don't need the level of redundancy of RAID 10 or 6. By trying to make one array do both scratch and storage, you're reducing the performance of your editing rig while unnecessarily driving up the costs with unneeded redundancy. Your scratch drive should be a large SSD or two. Then you have one large local mechanical drive for current project files, and every night the new files on that drive get archived to another storage medium. You should also be maxing out your RAM.

If this is a server, and not the actual editing rig, then unless your network is 10 Gb, it is likely to be the bottleneck, not the storage array.
 
Large, cost-efficient SSDs can't get here soon enough! (Of course, then I'll be equally disappointed to find out even the TLC SSDs are being swapped out for QLC - flash's SMR-equivalent)

Micron 5210 ION QLC archival (cold storage) SATA drives are selling in the UK for $680 + tax for the 7.96 TB model, or $380 + tax (£308 + tax) for the 3.96 TB.

That's twice the price of WD's enterprise drives (or three times the price of the Reds), and for 90% of use cases they're a perfect fit (0.2 DWPD at 4 KB random writes, or 0.8 DWPD for large sequential ones, with a 5-year enterprise warranty). That makes them a good proposition for big (slow), cold-ish RAID arrays.

If you need a section with greater endurance (editing, etc.), then it's worthwhile setting up a separate storage pool for that purpose.

I was surprised when I found these - they're cheaper than Samsung's consumer 860 QVO drives - and Micron's 5100/5300 SATA enterprise ranges for higher duty cycles aren't much more expensive.
 
I still can't get past how it's possible for a platter to have separate read and write "Tracks".

There aren't. It comes down to the size of the "spot" covered by the write and read heads.

If you imagine a track layout edge-on as '-o- -o- -o- -o-', then the '---' width is what the write head lays down and the 'o' is the width accessed by the read head.

Shingling overlaps what's written and puts the 'o's adjacent to each other - just like shingling roof tiles (which is where the name comes from) - but the cost is that you can't rewrite an individual sector, as you'd destroy the data on the tracks on either side of it. That means you have to rewrite an entire data block if you need to rewrite a single sector.

From a usage point of view, the drive hides everything and translates whatever LBA you specify into wherever it wants to store the blocks. Essentially, this makes DM-SMR drives much the same as VERY, VERY slow SSDs in terms of getting at the data and how it's stored (where the location in the flash has nothing to do with the LBA). In a further analogy, SMR drives do things like preening to optimise their internal layout during quiet times, just like SSDs do.
 
Your video editing rig's scratch drives and your storage drives should not be in the same array. You indicated that you have a method for archiving your work, so you don't need the level of redundancy of RAID 10 or 6. By trying to make one array do both scratch and storage, you're reducing the performance of your editing rig while unnecessarily driving up the costs with unneeded redundancy. Your scratch drive should be a large SSD or two. Then you have one large local mechanical drive for current project files, and every night the new files on that drive get archived to another storage medium. You should also be maxing out your RAM.

If this is a server, and not the actual editing rig, then unless your network is 10 Gb, it is likely to be the bottleneck, not the storage array.

So now that you think I'm some kind of idiot, I'll go into more detail than I intended regarding my operation. I have 5 people who work with me. We all work on twin-processor server/workstations: all ASRock Rack, with 128 GB of RAM, EVGA 2080 GPUs, and 2x 2 TB Samsung 860s in each machine for development and editing. At the end of the day, the work is uploaded onto the server, where it is backed up overnight. It's clunky, redundant, and seems silly, but I'm THAT paranoid. Two copies of all work are kept on the server: the current file and the previous day's file. That's why I need data space on the server. Nobody works on the server; it would never take the strain. When I wrote "This is for a server where a large number of VERY LARGE video files are stored/edited," I should have said 'stored, then edited' - not on the server, but on the individual workstations.
Additionally, I have 2 systems that are dedicated to nothing but rendering the final product. Oh, and one more thing: my network has adequate capacity. The server does do other duties, but not on the video production side.

Thanks for your time.
 
So now that you think I'm some kind of idiot, I'll go into more detail than I intended regarding my operation. I have 5 people who work with me. We all work on twin-processor server/workstations: all ASRock Rack, with 128 GB of RAM, EVGA 2080 GPUs, and 2x 2 TB Samsung 860s in each machine for development and editing. At the end of the day, the work is uploaded onto the server, where it is backed up overnight. It's clunky, redundant, and seems silly, but I'm THAT paranoid. Two copies of all work are kept on the server: the current file and the previous day's file. That's why I need data space on the server. Nobody works on the server; it would never take the strain. When I wrote "This is for a server where a large number of VERY LARGE video files are stored/edited," I should have said 'stored, then edited' - not on the server, but on the individual workstations.
Additionally, I have 2 systems that are dedicated to nothing but rendering the final product. Oh, and one more thing: my network has adequate capacity. The server does do other duties, but not on the video production side.

Thanks for your time.

I was in no way trying to imply you are an idiot; not sure where that came from. The confusion comes in when there is no context for how your server is being used. Now that you've explained the setup better, it makes more sense. If the server is only being used to store the previous two days of work, then you really don't need RAID 10.

8 TB doesn't look like a big enough capacity increase over what you have to be worth your while. You don't want your drives anywhere near full, as that will hurt performance. Dropping your existing 5.5 TB of data on your new 8 TB array will put it at about 70% capacity the day you install it. For the $960 you are proposing to spend on eight 2 TB drives, you could buy five 8 TB Seagate Exos drives and have 32 TB of storage capacity in a RAID 5 array. If you don't need that much, you can save money by buying smaller drives or fewer drives. You don't want a huge number of spindles in your RAID array, as they'll generate more heat and more vibration, both of which can lead to premature failures.
 
My original plan was a 7-disk RAID 6 array + 1 hot spare, or 10 TB (if I did the math right).
If you're using RAID-6 and you're around it on a daily basis, I wouldn't bother with a hot spare. The only thing a hot spare buys you is the time between when you notice the failed drive and when you have the opportunity to manually replace it. The chance of two more disks failing in that small window is probably negligible. You could still buy the extra disk, so you at least have it on hand.

Again, I'd recommend using fewer, larger disks. The biggest benefit of fewer disks is reliability. The two main downsides of that route are:
  1. Lower aggregate transfer rate.
  2. Longer rebuild time.
As for cost, if you go for the same usable capacity, I'm not sure whether it would even be more expensive.

Regarding rebuild times, I've observed between 9 and 11 hours to rebuild arrays using 4 TB 7200 RPM disks.

I think the transfer rate of fewer, larger disks would still be more than adequate for video. Check the datasheets of the drives to see for sure. Often, RAID controllers will deliver faster read performance by skipping the redundancy, so read speeds of RAID 5 or RAID 6 will often approach that of RAID 0. Check your controller documentation (or other users' performance data) to be sure. If it matters, that is.
 
Your scratch drive should be a large SSD or two.
Yeah, this is sound advice. I've done a little editing and a decent NVMe drive is night-and-day, as scratch storage. At least, for my purposes.

If your motherboard doesn't have any M.2 slots, you can get SSDs that plug into normal PCIe slots, or a PCIe carrier for one or more M.2 drives. Be careful with M.2 carriers for multiple drives: make sure all of the lanes they use are active in the slot in which you install them (often, you'll see physical x16 slots with only x4 lanes active).

If this is a server, and not the actual editing rig, then unless your network is 10 Gb, it is likely to be the bottleneck, not the storage array.
This is also a good point.
 
Micron 5210 ION QLC archival (cold storage) SATA drives are selling in the UK for $680 + tax for the 7.96 TB model, or $380 + tax (£308 + tax) for the 3.96 TB.
I'm not finding any such drive that's qualified for "archival" or "cold storage". Their product brief doesn't say how long data is retained without power, but it's probably much less than on magnetic storage (i.e., HDDs). IMO, these are only suitable for near-line, at best - not cold storage.

I also wouldn't recommend them to anyone who doesn't need their performance advantages over mechanical disks. For video editing, they're not fast enough to be ideal scratch drives, and I doubt you need that amount of sequential performance for archiving.
 
I just purchased a WD Red 6 TB drive to add to my Drobo 5D RAID. The Drobo uses all 3 TB Red drives from 2018 or earlier. It was running out of space and wanted me to expand it. After installing the drive into an empty RAID bay, the Drobo went into a critical state and now cannot be accessed.

Marketing this drive the same as their previous WD Red NAS drives, when it's not compatible in RAID arrays with those previous Red NAS drives, is negligent and dangerous for users' data.

I'm now waiting on tech support tickets to see if I can find a solution to access my data again. You can read what happened in more detail in my Reddit post: https://www.reddit.com/r/drobo/comments/g58m2l/added_5th_drive_to_my_5d_now_nothing_seems_to/
 
I just purchased a WD Red 6 TB drive to add to my Drobo 5D RAID. The Drobo uses all 3 TB Red drives from 2018 or earlier. It was running out of space and wanted me to expand it. After installing the drive into an empty RAID bay, the Drobo went into a critical state and now cannot be accessed.

Marketing this drive the same as their previous WD Red NAS drives, when it's not compatible in RAID arrays with those previous Red NAS drives, is negligent and dangerous for users' data.

I'm now waiting on tech support tickets to see if I can find a solution to access my data again.
Sorry to hear that.

Please let us know how it turns out, especially whether SMR was responsible for the problem.

Good luck.
 