News HDDs Will Be Extinct by 2028, Says Pure Storage Exec

As density increases, durability goes down. Not sure how much farther they can push it if they want Enterprise drives to last more than a few years.
As density increases so does the spare area. In the Enterprise space 1DWPD SSDs are classified as "read intensive." Even a 1.92TB "read intensive" drive has a write endurance of 3.5PB. Whereas an Enterprise grade 3.5" HDD has a write endurance of 550TB/year for 5 years. That comes out to 2.75PB of endurance. Increasing size only increases write endurance. Get to a 12.8TB "mixed-use" drive, or 3DWPD, and you have 70PB write endurance which is vastly more than a 3.5" HDD.
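The endurance figures quoted here are straight arithmetic: TBW = capacity × DWPD × 365 × warranty years. A quick sketch to check them (drive sizes and DWPD ratings are the ones from the post; the 5-year warranty period is assumed, since that's the usual enterprise term):

```python
# Write endurance follows from: TBW = capacity_TB * DWPD * 365 * years.
# Capacities and DWPD ratings are the ones quoted above; 5-year term assumed.

def write_endurance_pb(capacity_tb: float, dwpd: float, years: int = 5) -> float:
    """Total rated writes in PB over the warranty period."""
    return capacity_tb * dwpd * 365 * years / 1000

print(write_endurance_pb(1.92, 1))    # 1 DWPD "read intensive": ~3.5 PB
print(550 * 5 / 1000)                 # HDD at 550 TB/year for 5 years: 2.75 PB
print(write_endurance_pb(12.8, 3))    # 3 DWPD "mixed use": ~70 PB
```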
 

PBme

Reputable
Dec 12, 2019
62
37
4,560
You can already get 8TB Samsung SSD here https://www.amazon.com/dp/B089C3TZL9/

current price is $448 USD, down from about $660 roughly 6 months ago iirc

you can also find these same drives on ebay used for good prices as well sometimes.

currently, 4TB SSD's (SATA and M.2) are in the $240 USD ballpark, and 2TB drives are hitting $99. Hopefully in the next several years we might see the 8TB hit the $250 mark, and see more options available on the market too.

Keep in mind, that enterprise SSD's have had 16TB and 30TB capacities for a while now, in the U.2 form factor, but they cost many thousands of dollars even used.

I too look forward to replacing my HDD storage RAID with SSD's
The prices are correct, but it is incorrect to use them to imply a pricing trend and predict further drops. SSD prices have dropped significantly in the past few months due to overproduction, and companies are currently dumping them. That is a very temporary change in what has been a very slow price reduction in SSDs over the past 5-6 years.

That 8TB SSD was in the mid-to-high $700s for the past two years and dropped massively in the last month (and is currently $470, so it jumped up $20 in the past few hours). The same goes for all the drives; they have had big drops in the past month. That means that if you are in the market, you should buy now. But it is not something that should be used to make predictions about what the price per TB will be a few years from now.

Even in this temporary dumping situation, and even using the $450 price (which it isn't right now), I recently paid less for 36TB, including 10% tax (Seagate Exos 18TB). It would cost me $1,980 to get 32TB with these SSDs.
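To make the $/TB comparison concrete, using only the prices quoted in this post (the HDD figure is an upper bound, since the 36TB of Exos drives cost less than the SSD total):

```python
TAX = 1.10                          # the 10% tax mentioned above

ssd_total = 4 * 450 * TAX           # 4x 8TB SATA SSDs at $450 each
ssd_per_tb = ssd_total / 32
print(round(ssd_total), round(ssd_per_tb, 2))   # $1,980 total, ~$61.88/TB

# The 36TB of Exos 18TB drives cost *less* than that $1,980 total, so:
hdd_per_tb_upper = 1980 / 36
print(round(hdd_per_tb_upper, 2))   # under $55.00/TB
```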

While SSDs will progress and should get cheaper, they have been doing so at a slower rate than large HDDs have. That HDD price reduction rate should continue now that even larger HAMR drives are starting to ship.

So while I too would love to have SSDs replace my large personal storage drives, unless there is some SSD breakthrough in the very near future, I do not see it happening in the foreseeable future.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
810
20,060
You know what would be nice? If they let us decide how many bits per cell we want on our storage, and thereby how much durability (P/E cycles) we get.

Don't decide for us that everything is going to be "4-bit per cell" or "5-bit per cell" so you can inflate the Storage Capacity #'s.

Give us consumers "The Choice" as to how we want it.

If you can do an SLC cache, why can't we have #-bits per cell up to some maximum per-cell bit limit? This way we, the end users, can decide how many P/E cycles we are willing to risk our data on.

I know, novel concept, but some people might want 1-bit per cell for reliability reasons.

And stop being cheap and shoving DRAM-less SSDs down our throats. Spend that extra money and include the DRAM; we want it, we need it, it's worth it.
 
Last edited:
  • Like
Reactions: bit_user

bit_user

Polypheme
Ambassador
I'll disagree that the speed of HDDs is an issue. RAID configurations will max out whatever network connectivity you have available.
Last things first: you're thinking of home or small office networking, not the datacenter. A HDD RAID can max out a 10 Gigabit network, sure. Not 100 Gigabit, much less 800 Gigabit. But I didn't even mean bandwidth for its own sake.

The more important, specific problem I'm talking about is how long it takes to rebuild redundancy, after a drive failure. The issue is that the longer it takes to do a rebuild, the greater the probability is that another drive fails in the process. Eventually, the risk of multi-drive failures compounds to the point that the probability of array failures becomes non-negligible.
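The compounding risk described here can be put in rough numbers. This is my own toy model, not anything from the thread: treat drive failures as independent events with a constant annualized failure rate (AFR), and ask how likely it is that at least one surviving drive fails before the rebuild finishes. The 11-drive / 2% AFR figures are illustrative assumptions.

```python
import math

def p_failure_during_rebuild(n_drives: int, afr: float, rebuild_hours: float) -> float:
    """P(at least one of n_drives fails within the rebuild window),
    assuming independent drives with a constant annualized failure rate."""
    rate = -math.log(1 - afr)            # per-drive hazard rate, per year
    years = rebuild_hours / (24 * 365)
    return 1 - math.exp(-n_drives * rate * years)

# 11 surviving drives at 2% AFR: a 1-day rebuild vs a 1-week rebuild.
print(p_failure_during_rebuild(11, 0.02, 24))      # ~0.0006
print(p_failure_during_rebuild(11, 0.02, 24 * 7))  # ~0.0043
```

The absolute numbers are small, but they scale linearly with rebuild time, which is the point: a week-long rebuild carries roughly 7x the risk of a day-long one.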

From what I've gleaned, it seems highly-scalable storage systems have largely moved away from RAID, and towards object storage & replication. You get more aggregate IOPS by decoupling individual drives, and replication mitigates against array-loss. However, the longer it takes to restore redundancy, the more replicas you need. By the time you get to 3 copies, you're paying a very high premium for reliability.

That said, I'm no storage expert, but I know how areal density works (i.e. throughput increases as the sqrt() of capacity), which means rebuild times will continually get worse the more that HDD capacities scale. We got a one-time boost with dual-actuator drives, but it's not obvious that we're going to see 3-actuator drives or more. I also think there are reasons the industry moved away from high-RPM drives, and I don't expect those to return.
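A back-of-envelope sketch of that sqrt() relationship (the 4TB / 180 MB/s calibration point is my own illustrative assumption, not a figure from the thread): if sequential throughput grows only as sqrt(capacity), then a full-drive rebuild takes time ∝ capacity / throughput ∝ sqrt(capacity).

```python
import math

def rebuild_hours(capacity_tb: float, ref_tb: float = 4.0,
                  ref_mbps: float = 180.0) -> float:
    """Hours to write a full drive, with sequential throughput scaled as
    sqrt(capacity) from an assumed reference drive (4 TB at ~180 MB/s)."""
    mbps = ref_mbps * math.sqrt(capacity_tb / ref_tb)
    return capacity_tb * 1e6 / mbps / 3600   # TB -> MB, seconds -> hours

for tb in (4, 16, 36):
    print(tb, round(rebuild_hours(tb), 1))   # 4x the capacity, 2x the rebuild
```

So even with density-driven throughput gains, a 36TB drive takes roughly three times as long to rewrite end-to-end as a 4TB one.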

So, by following the one trend I think we can predict with a fair degree of certainty, I do think rebuild times might pose the greatest long-term risk to HDDs' role in scalable storage. There will probably always be a market for HDDs, but the key questions are how quickly it'll shrink, and by how much.

There's another issue that came to light, regarding recent deployments of HAMR drives, and that's sensitivity to vibration. As areal density increases, you'd naturally expect this to become a greater issue, as well.
 

bit_user

Polypheme
Ambassador
As density increases so does the spare area.
That approach will naturally run out of gas. I don't know what the theoretical limit is on NAND cell density, but I believe it does exist, and a general trend we've seen is that smaller cells have had lower endurance and data retention times (i.e. requiring more refreshes, compounding the problem).

NAND cell designs have definitely improved, I think most recently with 3D NAND. However, the concern seems valid, if we're talking about long-term trends.

Get to a 12.8TB "mixed-use" drive, or 3DWPD, and you have 70PB write endurance which is vastly more than a 3.5" HDD.
I think HDD endurance isn't really an issue, because you tend to use them rather differently than SSDs.
 

bit_user

Polypheme
Ambassador
The prices are correct, but it is incorrect to use them to imply a pricing trend and predict further drops. SSD prices have dropped significantly in the past few months due to overproduction, and companies are currently dumping them. That is a very temporary change in what has been a very slow price reduction in SSDs over the past 5-6 years.
This.

The way GB/$ works is like this: density increases, but so do production costs. To get more GB/$, you need density to increase faster than production costs. The big wins (SLC -> MLC, and 2D -> 3D) each produced a step-change in pricing. However, those low-hanging fruits have already been picked.

Each additional bit they try to cram in a cell adds a diminishing amount of density. And increasing layer-count of 3D NAND also increases production time, which increases cost. The higher 3D layer counts get, the more linear the cost-increases become. So, that's unlikely to be a big source of cost-savings. And if you look at process node shrinks for density improvements, the cost of new nodes is increasing almost exponentially.
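The diminishing returns per extra bit are easy to quantify: adding the b-th bit to a cell multiplies density by b/(b-1), while the number of voltage levels to distinguish doubles each time (which is what hurts endurance and retention). A quick sketch (the SLC/MLC/TLC/QLC/PLC names are the standard industry terms, not from the post):

```python
NAMES = {1: "SLC", 2: "MLC", 3: "TLC", 4: "QLC", 5: "PLC"}

def density_gain(bits: int) -> float:
    """Relative density multiplier from adding the bits-th bit per cell."""
    return bits / (bits - 1)

for bits in range(2, 6):
    print(f"{NAMES[bits - 1]} -> {NAMES[bits]}: "
          f"x{density_gain(bits):.2f} density, {2 ** bits} levels per cell")
```

So SLC -> MLC doubled density, but TLC -> QLC added only 33% while doubling the levels the controller has to resolve.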

So, I'm not bullish on NAND GB/$ improving anywhere close to the rate we've seen, in the past.
 
Last edited:

LabRat 891

Honorable
Apr 18, 2019
70
53
10,610
I was doubting this bold statement by the exec, but this Pure Storage has some interesting origins/investors. They may well know more than we do about the situation at hand.

Pure Storage was founded in 2009 under the code name Os76 Inc.[2] by John Colgrove and John Hayes.[3] Initially, the company was setup within the offices of Sutter Hill Ventures, a venture capital firm,[2] and funded with $5 million in early investments.[4] Pure Storage raised another $20 million in venture capital in a series B funding round.[4]

The company came out of stealth mode as Pure Storage in August 2011.[5] Simultaneously, Pure Storage announced it had raised $30 million in a third round of venture capital funding.[6] Another $40 million was raised in August 2012, in order to fund Pure Storage's expansion into European markets.[7] In May 2013, the venture capital arm of the American Central Intelligence Agency (CIA), In-Q-Tel, made an investment in Pure Storage for an un-disclosed amount.[8] That August, Pure Storage raised another $150 million in funding.[9] By this time, the company had raised a total of $245 million in venture capital investments.[9] The following year, in 2014, Pure Storage raised $225 million in a series F funding round, valuating the company at $3 billion.[10]
Thanks Wikipedia, you're not always useless
 

bit_user

Polypheme
Ambassador
I was doubting this bold statement by the exec, but this Pure Storage has some interesting origins/investors. They may well know more than we do about the situation at hand.


Thanks Wikipedia, you're not always useless
When I read that description, I'm reminded that all-flash arrays were still pretty exotic, back then. Today, they're a mainstay of the storage industry and everyone is doing them. So, what those initial investors predicted did indeed come to pass!

What was a good investment 10+ years ago doesn't say anything about their trajectory, going forward. The company IPO'd and the founders + investors got their payday. That's all history, now.

If you look at the state of the enterprise/cloud storage market, it's clear they're just trying to drum up some publicity/business. That's what's behind this pronouncement, I'm sure. Say something a little bit controversial, and get everyone talking about it.
 

Eximo

Titan
Ambassador
As density increases so does the spare area. In the Enterprise space 1DWPD SSDs are classified as "read intensive." Even a 1.92TB "read intensive" drive has a write endurance of 3.5PB. Whereas an Enterprise grade 3.5" HDD has a write endurance of 550TB/year for 5 years. That comes out to 2.75PB of endurance. Increasing size only increases write endurance. Get to a 12.8TB "mixed-use" drive, or 3DWPD, and you have 70PB write endurance which is vastly more than a 3.5" HDD.

Well, it does seem that the write endurance of the largest hard drives is getting lower, but if they are speccing out a system with endurance in mind, they are likely not to use those drives and to stick to lower capacities, trading capacity for endurance. They could go further and use SSDs, but the cost increases a lot, and it really always comes down to cost. Or they could just buy twice as many hard drives and still come out ahead.

There are limits in shrinking to get more space and then adding more chips. Package size standards for one, and more chips means larger flash controllers, and potentially a lot more power use despite the efficiency gains. Also more points of failure.
 
  • Like
Reactions: bit_user
Well, it does seem that the write endurance of the largest hard drives is getting lower, but if they are speccing out a system with endurance in mind, they are likely not to use those drives and to stick to lower capacities, trading capacity for endurance. They could go further and use SSDs, but the cost increases a lot, and it really always comes down to cost. Or they could just buy twice as many hard drives and still come out ahead.

There are limits in shrinking to get more space and then adding more chips. Package size standards for one, and more chips means larger flash controllers, and potentially a lot more power use despite the efficiency gains. Also more points of failure.
Write endurance for HDDs is based on the actuator, which itself is only rated for 550TB/year in the highest-end enterprise HDDs. Go down to something like a WD Red instead of the Gold and the actuator is only rated for 300TB/year. I've never seen an endurance rating placed on 10k or 15k HDDs, but they have low capacity and have basically been replaced by SSDs anyway. That said, any HDD has a much higher failure rate than an SSD (especially enterprise SSD). I can tell you from my own experience running a data center that I've had a total of 3 SSD failures in 5 years, but over a dozen HDD failures in less time. The reason it is less time is that our SANs are all-flash now, so the only HDDs we have running are in our NASes.
 
  • Like
Reactions: bit_user

Eximo

Titan
Ambassador
Write endurance for HDDs is based on the actuator, which itself is only rated for 550TB/year in the highest-end enterprise HDDs. Go down to something like a WD Red instead of the Gold and the actuator is only rated for 300TB/year. I've never seen an endurance rating placed on 10k or 15k HDDs, but they have low capacity and have basically been replaced by SSDs anyway. That said, any HDD has a much higher failure rate than an SSD (especially enterprise SSD). I can tell you from my own experience running a data center that I've had a total of 3 SSD failures in 5 years, but over a dozen HDD failures in less time. The reason it is less time is that our SANs are all-flash now, so the only HDDs we have running are in our NASes.

The cost of the three SSDs vs the hard drives lost? That is what it really comes down to. If I can buy three hard drives with something like 5 times the capacity per drive of 1 enterprise SSD, and I am after bulk storage and not performance, the hard drives win.

Having SSDs as cache is perfectly fine. I just don't agree with the sentiment that hard drives will be extinct by 2028. It is much too soon, and the durability doesn't justify the costs.
 
The cost of the three SSDs vs the hard drives lost? That is what it really comes down to. If I can buy three hard drives with something like 5 times the capacity per drive of 1 enterprise SSD, and I am after bulk storage and not performance, the hard drives win.

Having SSDs as cache is perfectly fine. I just don't agree with the sentiment that hard drives will be extinct by 2028. It is much too soon, and the durability doesn't justify the costs.
$2200 total for the SSDs brand new. We lost 1x 7.68TB Micron 9300 Pro, 1x 7.68TB WD Gold, and 1x 240GB M.2 Micron 5100 Pro. For the HDDs, costs were probably close to the same. I cannot say for sure on the HDDs, as some were purchased before I got to the company. However, we lost 5x 146GB 10k HPE, 2x 900GB 10k Seagate, 2x 900GB 10k HPE, 1x 8TB HGST He8, and 3x 14TB WD Gold. The cost of just the HGST & WD Golds new is about $1800. All the SSDs and the WD Gold HDDs were replaced under warranty. None of the other HDDs were, as they only had 3-year warranties.

As I said earlier, all our SANs are all-flash, and the performance difference between them and our old 24x 10k SAN is night and day. I could not imagine running all the workloads we have right now on that old SAN.
 
Mar 18, 2023
19
1
15
I have recovered 100% of the data from any failed HDD, but only 25% or less from any failed SSD/flash. For my long-term storage and backups, HDD is a must.
 
Mar 18, 2023
19
1
15
$2200 total for the SSDs brand new. We lost 1x 7.68TB Micron 9300 Pro, 1x 7.68TB WD Gold, and 1x 240GB M.2 Micron 5100 Pro. For the HDDs, costs were probably close to the same. I cannot say for sure on the HDDs, as some were purchased before I got to the company. However, we lost 5x 146GB 10k HPE, 2x 900GB 10k Seagate, 2x 900GB 10k HPE, 1x 8TB HGST He8, and 3x 14TB WD Gold. The cost of just the HGST & WD Golds new is about $1800. All the SSDs and the WD Gold HDDs were replaced under warranty. None of the other HDDs were, as they only had 3-year warranties.

As I said earlier, all our SANs are all-flash, and the performance difference between them and our old 24x 10k SAN is night and day. I could not imagine running all the workloads we have right now on that old SAN.
What about data recovery HDD vs SSD? I have been able to recover data without much problem from failed HDD, not so much with SSD
 
  • Like
Reactions: bit_user

USAFRet

Titan
Moderator
I have recovered 100% of the data from any failed HDD, but only 25% or less from any failed SSD/flash. For my long-term storage and backups, HDD is a must.
That is a non-issue for a sufficiently clued-in user.
Data should always be backed up.

A dead drive should never be more than, "Oh crap, I need a new drive."

Your data should never be at risk.


My backups do live on HDDs, but that is only due to size vs cost, not the physical longevity of the drives.
 
What about data recovery HDD vs SSD? I have been able to recover data without much problem from failed HDD, not so much with SSD
This is an enterprise setup where everything has a backup and all storage is redundant. When a 14TB drive dies, I just take it offline and the system rebuilds the array automatically onto the spare drive. It takes about 24 hours for the LUN to be back in full use. For personal use, you should have things backed up regularly, so recovery effort is minimal.
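For what it's worth, that 24-hour figure implies a sustained rebuild rate consistent with a single large HDD's sequential throughput (rough arithmetic on the numbers in the post above):

```python
# Implied average throughput for rebuilding a 14TB drive in ~24 hours.
capacity_tb = 14
hours = 24
mb_per_s = capacity_tb * 1e6 / (hours * 3600)   # TB -> MB, hours -> seconds
print(round(mb_per_s))   # ~162 MB/s sustained
```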
 
D

Deleted member 2947362

Guest
Until durable 2TB SSDs fall to around £49-53, I'll stick with 2TB HDDs for the money.

For stuff that I don't require to load or write as fast as an SSD, I won't pay any more than I need to, so HDDs are still very much sought after in my books, simply for price and storage size.
 
Until durable 2TB SSDs fall to around £49-53, I'll stick with 2TB HDDs for the money.

For stuff that I don't require to load or write as fast as an SSD, I won't pay any more than I need to, so HDDs are still very much sought after in my books, simply for price and storage size.
I agree with you. For things that aren't accessed regularly, or that don't benefit from the speeds of an SSD, there really isn't a point to storing them on an SSD. For example, playing MP3s isn't going to be a better experience on an SSD vs a HDD, so it doesn't make sense to use expensive SSD storage for them. That said, if you can only have a single drive in your system, an SSD is far superior to a HDD.
 

USAFRet

Titan
Moderator
For example, playing MP3s isn't going to be a better experience on an SSD vs a HDD, so it doesn't make sense to use expensive SSD storage for them.
There was a guy here recently who swore up and down there is an audible difference in audio files between HDD and SSD.
That is why he had a 4x 8TB SATA III SSD for his music collection.

People believe a lot of weird crap.
 
There was a guy here recently who swore up and down there is an audible difference in audio files between HDD and SSD.
That is why he had a 4x 8TB SATA III SSD for his music collection.

People believe a lot of weird crap.
Wow that is pretty wild. Also that is A LOT of storage for music. I have about 30k MP3s and that is only ~100GB. Having 32TB for music is absolutely insane, unless those are all studio master recordings that are like 150MB each.
 
  • Like
Reactions: bit_user

Tac 25

Estimable
Jul 25, 2021
1,391
421
3,890
There was a guy here recently who swore up and down there is an audible difference in audio files between HDD and SSD.
That is why he had a 4x 8TB SATA III SSD for his music collection.

People believe a lot of weird crap.

whoa, that's weird. First time I've heard of that.

anyway, I'm not really picky about audio. My PC only has two small speakers. As long as I hear the background music and the sounds of combat in Genshin, I'm satisfied. And for music, I mostly own the original CDs, so the audio is crisp and clear when I want to listen to some J-pop.