News Western Digital launches 32TB hard drive in SATA and SAS flavors — Ultrastar DC HC690 delivers sequential performance up to 257 MiB/s

The article said:
Although a 3% performance loss doesn’t sound like much, it would eventually make a noticeable difference in a long read or write operation.
The problem is that you want performance to increase as drives get larger. Otherwise, it takes even longer to do things like scrub, rebuild, and back them up. The longer those operations take, the greater the chance of (another) failure happening before they complete!
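To put rough numbers on that, here's a back-of-envelope calculation using the quoted peak rate. This is illustrative only; real scrubs and rebuilds run well below the peak sequential rate.

```python
# Best-case time for one full sequential pass over a 32 TB drive at the
# quoted 257 MiB/s peak. Real scrub/rebuild times will be longer.

CAPACITY_TB = 32                    # drive capacity, decimal terabytes
PEAK_MIB_S = 257                    # quoted peak sequential rate, MiB/s

capacity_bytes = CAPACITY_TB * 10**12
rate_bytes_s = PEAK_MIB_S * 2**20

hours = capacity_bytes / rate_bytes_s / 3600
print(f"Best-case full pass: {hours:.1f} hours")  # roughly 33 hours
```

So even in the ideal case, you're looking at well over a day per drive, and that's before any real-world slowdown.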

I had read that SMR isn't good in RAID arrays, which is how I assume a data center would be using these drives?
It's like a QLC SSD, where there's a low-density buffer. Once you exhaust that, write performance takes a major hit. Some RAID controllers don't like it when operations have super long latency and they treat such a drive as if it failed.

There are other ways to use these drives in a datacenter context. If you're doing object-based storage, you can use object-level replication, which allows each drive to run as an independent storage volume.
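A minimal sketch of that idea (all names are hypothetical; each "drive" is just an independent key-value store, and whole objects are replicated at the application layer instead of being striped by RAID):

```python
import hashlib

class Drive:
    """Toy stand-in for one independent storage volume (one HDD)."""
    def __init__(self, name):
        self.name = name
        self.objects = {}

def put_object(drives, key, data, replicas=3):
    """Replicate an object onto `replicas` drives chosen by hashing the key.
    No RAID: each chosen drive holds a complete, independent copy."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    targets = [drives[(h + i) % len(drives)] for i in range(replicas)]
    for d in targets:
        d.objects[key] = data
    return [d.name for d in targets]

def get_object(drives, key):
    """Any surviving replica can serve the read."""
    for d in drives:
        if key in d.objects:
            return d.objects[key]
    raise KeyError(key)

drives = [Drive(f"hdd{i}") for i in range(8)]
put_object(drives, "photo.jpg", b"...bytes...")
assert get_object(drives, "photo.jpg") == b"...bytes..."
```

Losing a drive here just means re-replicating its objects from the surviving copies, with no array-wide rebuild.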
 
HDD sequential speeds haven't increased that much since the 1TB days... luckily, they're mostly being used for data hoarding, so it doesn't really matter...
 
HDD sequential speeds haven't increased that much since the 1TB days... luckily, they're mostly being used for data hoarding, so it doesn't really matter...
They tend to increase roughly with the square root of areal density. The dual-actuator drives have doubled throughput, but they basically act like two HDDs merged into one package, where each set of heads can only access half the capacity. That also adds cost and requires OS support.
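If that square-root relationship holds, the arithmetic is discouraging:

```python
# If sequential throughput tracks the square root of areal density,
# doubling density only buys ~41% more throughput. Illustrative only.
density_gain = 2.0
throughput_gain = density_gain ** 0.5
print(f"{throughput_gain:.2f}x throughput for {density_gain:.0f}x density")
```

So a drive with twice the capacity takes roughly 41% longer to read end-to-end, not the same time.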

I already said what the problems are with it. If you actually care about the data on them, then you should care about things like scrubbing and rebuild times. Even for backups, it becomes problematic if you can't even complete a backup of your data before you already want to start the next one.
 
11 disks in 3.5"!
Why doesn't the industry try to introduce a new standard form factor, to push past the current constraints of high-capacity storage?
 
11 disks in 3.5"!
Why doesn't the industry try to introduce a new standard form factor, to push past the current constraints of high-capacity storage?
How many platters do you think it should have?

At some point, it makes sense just to increase the number of drives, because throughput doesn't scale by adding platters. Only one platter can be read/written at a time (except for dual-actuator drives). Also, putting too much data in a single unit just multiplies the cost of a hardware failure.

I'm not saying I think the current 3.5" form factor is optimal, but it's probably not far off.
 
Only one platter can be read/written at a time (except for dual-actuator drives).
Why aren't they in RAID0 together?

Consumers need to buy more disks. But manufacturers are obliged to keep increasing the capacity of their units; it's the only way to make progress (along with improving performance).
They can't just say: “Well, we'll stop here, and you'll have to buy more units.”

I just notice that they keep trying to cram more platters into the same form-factor constraints, to shrink the magnetic cell size again and again, and to increase density on the same platter size (isn't there the same pitfall as MLC => QLC => PLC when it comes to long-term data retention?), but they've never tried to increase the case height or disc diameter by proposing a new standard.
At some point, won't they have to, once they reach the technical limits of the existing approaches, without costs exploding?

Maybe 5.25"?
 
Why aren't they in RAID0 together?
Probably because you can do that externally, if that's what you want. However, the way you get more IOPS out of it is actually by decoupling them. Also, I'm not sure hyperscalers use RAID. I think they prefer object-level storage with replication, in which case you're better off not RAIDing them.
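A rough model of why decoupling helps (the service-time number below is an illustrative assumption, not a measurement):

```python
# Why independent actuators beat RAID0 for random I/O: each actuator can
# serve its own request concurrently, while striping doesn't shorten the
# seek + rotational latency of any single request.

SERVICE_MS = 12.0  # assumed avg seek + rotational latency per random request

one_actuator_iops = 1000 / SERVICE_MS
# Two actuators serving independent queues: two requests in flight at once.
two_actuators_iops = 2 * one_actuator_iops
print(f"{one_actuator_iops:.0f} vs {two_actuators_iops:.0f} IOPS")
# Sequential bandwidth is ~2x either way (RAID0 or two separate volumes);
# it's the random IOPS that depend on how you deploy them.
```

That's why hyperscalers would rather address the two halves separately than hide them behind a stripe.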

manufacturers are obliged to keep increasing the capacity of their units; it's the only way to make progress (along with improving performance).
They can't just say: “Well, we'll stop here, and you'll have to buy more units.”
Everyone understands this, but conventional magnetic storage has basically hit a wall. Merely wanting higher density doesn't make it so.

Maybe 5.25"?
Changing the form factor means a lot of infrastructure also has to change. A 5.25" drive would also spin slower and have a longer stroke time, both of which would hurt IOPS. I'm not so much arguing against it as pointing out the tradeoffs.
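Here's a quick estimate of both effects. All the RPM and seek numbers are illustrative assumptions; the 5.25" figures are hypothetical, since no such modern drive exists:

```python
# Why a bigger platter hurts random IOPS: lower RPM (edge-speed and
# vibration limits) plus a longer average stroke. Assumed numbers only.

def random_iops(rpm, avg_seek_ms):
    rot_latency_ms = 60_000 / rpm / 2        # half a revolution on average
    return 1000 / (avg_seek_ms + rot_latency_ms)

iops_35 = random_iops(rpm=7200, avg_seek_ms=8.5)    # typical 3.5" drive
iops_525 = random_iops(rpm=3600, avg_seek_ms=14.0)  # hypothetical 5.25"
print(f'3.5": {iops_35:.0f} IOPS, 5.25": {iops_525:.0f} IOPS')
```

Roughly half the random IOPS, on a drive that would also hold far more data per spindle.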
 
Probably because you can do that externally, if that's what you want.
Even an SSD's controller reads and writes all its internal channels/flash chips simultaneously. Why can't an HDD's heads do the same? It would be an easy way to improve the performance of a single device. Unless there's a technical difficulty? I'm curious to find out.
 
Ok, I read an interesting answer here.
The tracks are too small to be exactly in the same position on all platters at the same time.

The other answers perplex me. A single microprocessor/cache could easily handle that, and the amplifiers and other circuitry could easily be duplicated with today's advanced technologies, if R&D wanted to move in this direction.
 
Even an SSD's controller reads and writes all its internal channels/flash chips simultaneously. Why can't an HDD's heads do the same?
SSDs have about 1000x as many IOPS as HDDs. So, maybe for datacenter use cases, it was more important for them to get higher IOPS than to have the simplicity of both actuators acting like a single drive. The bandwidth is the same, either way.
 
Ok, I read an interesting answer here.
The tracks are too small to be exactly in the same position on all platters at the same time.
Oh, I misunderstood the question. I thought we were still talking about dual-actuator drives.

Yeah, you can't have the heads read more than one platter at a time, like they said. I didn't know that until somewhat recently. It's unfortunate.

Back in 2010 or so, WD introduced this micro-actuator technology where the heads each contained a tiny actuator for fine tracking. I wonder if they were thinking that might enable concurrent read/writes on multiple platters.
 
SSDs have about 1000x as many IOPS as HDDs. So, maybe for datacenter use cases, it was more important for them to get higher IOPS than to have the simplicity of both actuators acting like a single drive. The bandwidth is the same, either way.
Read/write performance scales with capacity, until it reaches the bandwidth limit. The smallest capacity often has half the sequential read and write speed of the next size up, because only one channel is in use.
I thought HDDs also took advantage of all their heads to improve performance. It's unfortunate.
 
I had read that SMR isn't good in RAID arrays, which is how I assume a data center would be using these drives?
It's not good in RAID, nor in most other uses; basically, any use of SMR where the drive pretends to be a normal hard disk is a bad idea. Are these "SMR drive firmware emulating a CMR drive" units still being sold (given how ill-conceived they are)? I have to admit I haven't seen one for a while.

I expect that 99%+ of the SMR drives sold go to data centers and run in "host-managed SMR" mode, where the host knows about and manages the SMR zones, rather than the disk firmware.

There are a number of usage scenarios with very little downside to SMR operated this way, and it delivers a 10-25% density improvement, which is directly reflected in space and power, a large part of the total cost.

But most people will never see this, because it doesn't make sense to spend all that time and money setting this up for small operators; my guess is that "merely" 10,000 drives probably still counts as small!

But massive cloud datacenters and hyperscalers? Some of them love these drives, which is why new models keep coming out. They're very much not for most users, but the people who DO want them buy them in units of "how many shipping containers". And if one vendor doesn't provide them, the orders will go to one of their competitors...
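To make "host-managed" concrete, here's a toy model of the zone discipline the host has to follow. It mirrors the zoned-storage model in spirit only, not any real drive API:

```python
# Toy host-managed SMR zone: writes must land sequentially at the zone's
# write pointer; overwriting in place is rejected, and reclaiming space
# means resetting the whole zone. The host, not the firmware, enforces this.

class Zone:
    def __init__(self, size):
        self.size = size
        self.write_pointer = 0
        self.data = bytearray(size)

    def append(self, buf):
        """Sequential-only write at the current write pointer."""
        if self.write_pointer + len(buf) > self.size:
            raise IOError("zone full")
        self.data[self.write_pointer:self.write_pointer + len(buf)] = buf
        self.write_pointer += len(buf)

    def write_at(self, offset, buf):
        if offset != self.write_pointer:
            raise IOError("non-sequential write rejected by host-managed zone")
        self.append(buf)

    def reset(self):
        """The only way to reclaim space: wipe the zone and start over."""
        self.write_pointer = 0

z = Zone(size=256 * 2**20)        # e.g. a 256 MiB zone
z.append(b"log segment 1")
try:
    z.write_at(0, b"overwrite")   # rejected: not at the write pointer
except IOError as e:
    print(e)
```

Log-structured workloads (object stores, backups, cold archives) already write this way, which is why the density win comes nearly for free there.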
 
Maybe 5.25"?
Been there. Done that. I still have a full height 5.25in 30MB SCSI drive up in the roof.

Modern 3.5 inch drives are small, especially compared with old 14 inch Winchesters.

 
I had read that SMR wasn't good in raids
SMR is bad news for ZFS, e.g. as used in FreeNAS, TrueNAS, etc. This goes back to 2020, when Western Digital "sneaked" SMR technology into their so-called NAS drives without disclosing this pertinent information in their data sheets. Seagate and Toshiba weren't totally immune from this subterfuge either.
https://www.servethehome.com/wd-red-smr-vs-cmr-tested-avoid-red-smr/

N.B. SMR does not affect all NAS operating systems equally, but resilvering a TrueNAS array with SMR drives can take days, versus just a few hours with CMR/PMR drives. You probably already know this if you're using TrueNAS.
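The days-vs-hours gap is easy to reproduce on paper. The rates below are assumptions for illustration, not benchmarks:

```python
# Illustrative resilver arithmetic: CMR can rebuild at something near its
# sustained sequential rate, while a drive-managed SMR disk whose CMR cache
# is exhausted can drop to a small fraction of that. Assumed rates only.

capacity_bytes = 8 * 10**12   # e.g. an 8 TB member disk

cmr_rate = 180e6              # ~180 MB/s sustained (assumed)
smr_rate = 15e6               # ~15 MB/s once the cache thrashes (assumed)

cmr_hours = capacity_bytes / cmr_rate / 3600
smr_days = capacity_bytes / smr_rate / 86400
print(f"CMR: ~{cmr_hours:.0f} hours, SMR: ~{smr_days:.1f} days")
```

And the array is degraded, with no redundancy to spare, for that entire window.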
 
Back in 2010 or so, WD introduced this micro-actuator technology where the heads each contained a tiny actuator for fine tracking. I wonder if they were thinking that might enable concurrent read/writes on multiple platters.
You're right; I can see here (from here) that two active heads are planned.

And yes: single-, dual- and triple-stage actuation. I've heard that before.
 