Last things first: you're thinking of home or small-office networking, not datacenter scale. An HDD RAID array can saturate a 10 Gigabit network, sure. Not 100 Gigabit, much less 800 Gigabit. But I didn't even mean bandwidth for its own sake.
The more important, specific problem I'm talking about is how long it takes to rebuild redundancy after a drive failure. The longer a rebuild takes, the greater the probability that another drive fails in the process. Eventually, the risk of multi-drive failures compounds to the point that the probability of losing the whole array becomes non-negligible.
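To make the compounding concrete, here's a rough sketch of that risk, assuming independent drive failures with a constant hazard rate derived from the annualized failure rate (AFR). The array size, AFR, and rebuild durations below are illustrative numbers I picked, not measurements:

```python
import math

def second_failure_prob(n_drives, afr, rebuild_hours):
    """P(at least one of the remaining drives fails during the rebuild
    window), assuming independent failures at a constant hazard rate."""
    lam = -math.log(1 - afr) / 8760             # per-hour rate from AFR
    p_one = 1 - math.exp(-lam * rebuild_hours)  # one drive fails in window
    return 1 - (1 - p_one) ** (n_drives - 1)

# Hypothetical: 12-drive array, 2% AFR, 24h vs. 96h rebuild
print(second_failure_prob(12, 0.02, 24))
print(second_failure_prob(12, 0.02, 96))
```

For rebuild windows this short relative to drive lifetime, the risk scales nearly linearly: a 4x longer rebuild means roughly 4x the chance of a second failure mid-rebuild.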
From what I've gleaned, it seems highly scalable storage systems have largely moved away from RAID and towards object storage and replication. You get more aggregate IOPS by decoupling individual drives, and replication mitigates the risk of array loss. However, the longer it takes to restore redundancy, the more replicas you need. By the time you get to 3 copies, you're paying a very high premium for reliability.
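That premium is easy to quantify. Comparing 3-way replication against a parity scheme (I'm assuming a 10 data + 2 parity RAID-6-style geometry purely for illustration):

```python
# Usable fraction of raw capacity (illustrative geometries, not a
# recommendation for any particular system)
replicas = 3
raid_data, raid_parity = 10, 2   # assumed 10+2 parity layout

print(f"3-way replication: {1 / replicas:.0%} usable")
print(f"Parity (10+2):     {raid_data / (raid_data + raid_parity):.0%} usable")
```

Roughly 33% usable capacity versus 83%: you're buying about 2.5x as much raw storage for the same usable space.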
That said, I'm no storage expert, but I know how areal density works (i.e. sequential throughput increases as the square root of capacity), which means rebuild times will keep getting worse as HDD capacities scale. We got a one-time boost from dual-actuator drives, but it's not obvious that we're going to see three-actuator drives or more. I also think there are reasons the industry moved away from high-RPM drives, and I don't expect those to return.
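The sqrt() relationship implies full-drive rebuild time (capacity divided by throughput) also grows as the square root of capacity. A quick sketch, using a made-up baseline of a 4 TB drive at ~180 MB/s sustained (these are assumptions, not vendor specs, and real rebuilds run well below the sequential floor):

```python
import math

base_tb, base_mbs = 4, 180   # assumed baseline: 4 TB at ~180 MB/s

# If throughput ~ sqrt(capacity), rebuild time ~ sqrt(capacity) too
for tb in (4, 8, 16, 32, 64):
    mbs = base_mbs * math.sqrt(tb / base_tb)
    hours = tb * 1e6 / mbs / 3600   # TB -> MB, then seconds -> hours
    print(f"{tb:3d} TB  ~{mbs:4.0f} MB/s  best-case rebuild ~{hours:5.1f} h")
```

So a 16x capacity increase only buys a 4x throughput increase, and the best-case rebuild window stretches 4x.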
So, following the one trend I think we can predict with a fair degree of certainty, I do think rebuild times might pose the greatest long-term risk to HDDs' role in scalable storage. There will probably always be a market for HDDs, but the key questions are how quickly it'll shrink, and by how much.
There's another issue that came to light with recent deployments of HAMR drives: sensitivity to vibration. As areal density increases, you'd naturally expect this to become a bigger problem as well: