News HDD component supplier ‘collapses’ and 600 jobs will be lost, say reports

A few years ago I bought a 4TB HDD for media storage. Last month I bought a 4TB SATA SSD. They're not quite as cheap yet, but perform so much better that it doesn't make sense to skimp.

As the story says, storage isn't going away, just its form. Hopefully the workers don't get left behind.
 
As the story says, storage isn't going away, just its form. Hopefully the workers don't get left behind.
Plenty of storage applications still depend on HDDs. Mostly offline/nearline storage and where GB/$ is the dominant factor. Most of the data in the cloud sits at rest, which means it just needs to be accessible enough, and speed isn't otherwise an overriding concern. That rules out tape, but makes it amenable to HDD storage.

The other point I have to keep reminding people is that SSDs aren't suitable for cold storage. You can't put them in a drawer or on a shelf and just forget about them, the way you can with HDDs (to some extent), tape, or optical. If you're not periodically powering them on, they'll lose their contents. I've personally experienced this, and it's only going to become more common as NAND cell sizes continue to shrink and more bits keep getting packed into them (e.g. QLC, PLC).
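If you do park an archive on SSDs anyway, a periodic integrity check at least tells you when rot has set in instead of letting you discover it years later. A minimal sketch of the hash-manifest approach I mean, under the assumption of a single archive mount; both paths are hypothetical:

```python
import hashlib
import json
from pathlib import Path

ARCHIVE = Path("/mnt/archive")             # hypothetical archive mount
MANIFEST = Path("archive_manifest.json")   # hypothetical manifest location

def sha256(path: Path) -> str:
    """Hash a file in 1 MiB chunks so big files don't exhaust RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest() -> None:
    """Record a hash for every file; run once, right after writing the archive."""
    hashes = {str(p): sha256(p) for p in ARCHIVE.rglob("*") if p.is_file()}
    MANIFEST.write_text(json.dumps(hashes, indent=2))

def verify_manifest() -> None:
    """Re-hash everything and flag mismatches; run on each power-up."""
    for name, expected in json.loads(MANIFEST.read_text()).items():
        if sha256(Path(name)) != expected:
            print(f"CORRUPT: {name}")
```

Reading everything back may also give the drive's firmware a chance to refresh weak cells, though I wouldn't count on that alone.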
 
A glaring omission from the story is just who is picking up the slack. Is this going to become a single-supplier type of situation? Are they based in mainland China, perhaps?

If you see where I'm going with this, it could potentially give a supplier or a country considerable leverage over pricing and availability. It should definitely be a concern for us all, because we all use cloud services and they're dependent on HDDs.
 
All I want is a cheap but large-capacity 3.5in-format SSD to replace my spinners. Cutting-edge technology isn't needed, and neither is speed, just reliability, since once configured, the storage pool would saturate the SATA3 bus anyway.
 
All I want is a cheap but large-capacity 3.5in-format SSD to replace my spinners. Cutting-edge technology isn't needed, and neither is speed, just reliability, since once configured, the storage pool would saturate the SATA3 bus anyway.
There are some QLC datacenter drives, like Solidigm's P5336, that are north of 60 TB. Not cheap, though.

I've never heard of a 3.5" SATA SSD, BTW. Instead of reviving 3.5", datacenter drives are transitioning towards the E1 and E3 family form factors.

Also, I worry when you say you need it to be reliable. No drive, of any technology, is perfectly reliable. You need to be able to tolerate at least single-drive loss, if not entire array loss.
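To put rough numbers on it: a back-of-the-envelope sketch of annual survival odds, assuming a hypothetical 2% per-drive annual failure rate and independent failures (real failures correlate, and rebuild windows add risk, so treat this as optimistic):

```python
from math import comb

AFR = 0.02   # assumed per-drive annual failure rate (hypothetical)
N = 4        # drives in the array

def p_survive(tolerated: int) -> float:
    """Probability that no more than `tolerated` of the N drives fail in a year."""
    return sum(
        comb(N, k) * AFR**k * (1 - AFR)**(N - k)
        for k in range(tolerated + 1)
    )

print(f"single drive:         {1 - AFR:.4f}")   # 0.9800
print(f"RAID-5 (tolerates 1): {p_survive(1):.6f}")
print(f"RAID-6 (tolerates 2): {p_survive(2):.8f}")
```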
 
Well, despite this news, HDD prices simply cannot go up! After all, how much can they realistically charge for a hammer and a chisel?
 
A few years ago I bought a 4TB HDD for media storage. Last month I bought a 4TB SATA SSD. They're not quite as cheap yet, but perform so much better that it doesn't make sense to skimp.

As the story says, storage isn't going away, just its form. Hopefully the workers don't get left behind.
Yeah, some story. Most people who bought an HDD for mass storage "a few years ago" were buying 16-18TB ones, not 4TB ones – at the price of today's 4TB SSDs.

I just checked my order history and last time I bought a 4TB HDD was a decade ago.
 
Yeah, some story. Most people who bought an HDD for mass storage "a few years ago" were buying 16-18TB ones, not 4TB ones – at the price of today's 4TB SSDs.

I just checked my order history and last time I bought a 4TB HDD was a decade ago.
Yeah, I've got 85 TB worth in main USB drives; GOD only knows how much in 4TB and smaller, but they're all sitting in a box for some rainy day that ain't coming. I wish we had 50 TB drives available at a fair per-TB price; otherwise we have to build arrays to get the capacity, and that's a whole other set of issues and costs. The WD 18TB was the deal to beat this season. I didn't pounce only because the 20 TB should be the deal next year. A year can be a long time, but history repeats, and I'm counting on it.
 
Yeah, some story. Most people who bought an HDD for mass storage "a few years ago" were buying 16-18TB ones, not 4TB ones – at the price of today's 4TB SSDs.

I just checked my order history and last time I bought a 4TB HDD was a decade ago.
I bought some in 2017 or 2018, but that's because I wanted WD Gold and balked at the price of the higher-capacity models. I needed at least 4 drives, for redundancy, and don't need a ton of raw capacity.
 
I bought some in 2017 or 2018, but that's because I wanted WD Gold and balked at the price of the higher-capacity models. I needed at least 4, for redundancy, and don't need a ton of raw capacity.
Yeah, but not at the price of a 4TB SSD back then. That was my point: it's comparing apples to oranges.

Even today, buying an HDD for storage is considerably more cost-effective than the alternatives. You can get a cheap 18TB HDD for $200, which is nearly half the price of the least expensive 8TB SSD (QLC, SATA) out there. So people actually interested in affordable capacity are still buying HDDs in droves.
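Worked out per TB, the gap is even starker than the sticker prices suggest. A quick sketch, where the ~$380 SSD price is my assumption inferred from "nearly half":

```python
# Cost per TB at the prices mentioned above.
hdd_price, hdd_tb = 200, 18
ssd_price, ssd_tb = 380, 8   # assumed price of the cheapest 8TB QLC SATA SSD

hdd_per_tb = hdd_price / hdd_tb
ssd_per_tb = ssd_price / ssd_tb
print(f"HDD: ${hdd_per_tb:.2f}/TB")                 # ~$11.11/TB
print(f"SSD: ${ssd_per_tb:.2f}/TB")                 # ~$47.50/TB
print(f"SSD costs {ssd_per_tb / hdd_per_tb:.1f}x more per TB")
```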
 
As demand for HDDs falls, they will become more expensive and less profitable due to fixed costs, which will cause demand to collapse further.
 
As demand for HDDs falls, they will become more expensive and less profitable due to fixed costs, which will cause demand to collapse further.
I think the non-NAS consumer market for HDDs has virtually collapsed. Cloud and industrial demand should continue apace, the past year notwithstanding.

The cost problem that HDDs have isn't lack of volume, as your assertion would suggest. Rather, it's that it's becoming increasingly difficult and more expensive to increase density.

Let me give you another angle to consider: the past year was also an anomaly for NAND storage. The price per GB has been artificially depressed, and NAND production has been ramped down accordingly. Some price rebound is projected, and we might further see densities increase while GB/$ doesn't improve. That's happened before.


So many people seem over-eager to proclaim the death of HDDs, but they misread market anomalies as if they're "the new normal" and overlook the longer-term trends. HDDs will continue to be the stalwarts undergirding our massive & growing cloud infrastructure, for many years to come.
 
Well, despite this news, HDD prices simply cannot go up! After all, how much can they realistically charge for a hammer and a chisel?
Well, they can:
They did it with helium.
They're doing it with dual actuators.
And HDDs are marvels of hammer and chisel.
Other than that, I agree with you.
 
All I want is a cheap but large-capacity 3.5in-format SSD to replace my spinners. Cutting-edge technology isn't needed, and neither is speed, just reliability, since once configured, the storage pool would saturate the SATA3 bus anyway.
Why?
You could buy an adapter to fit a 2.5" SSD in a 3.5" enclosure if you really wanted to (I just found some on AliExpress).
Nowadays every motherboard has at least one M.2 slot, and the M.2 format is almost too good to be true.
And M.2 drives are faster.
And I guess there are ways to house plenty of M.2 drives.
 
Why?
You could buy an adapter to fit a 2.5" SSD in a 3.5" enclosure if you really wanted to (I just found some on AliExpress).
I think @das_stig 's point was to take the larger form factor of a 3.5" enclosure and use the additional volume to fit more NAND chips, in order to achieve greater capacity. Cost per GB would be similar to smaller drives, but the advantage is that you'd have a single 3.5" unit that might be double the capacity of what the 2.5" drives hold.

However, it seems the venerable 2.5" form factor can hold plenty, if we're talking about 15 mm height instead of 7 mm. I just checked, and Solidigm sells two such drives packing 15.36 TB and one drive boasting a whopping 30.72 TB.

Nowadays every motherboard has at least one M.2 slot, and the M.2 format is almost too good to be true.
If you're running a fileserver, you probably want more drives than that, for the redundancy benefits of RAID-5. For that, the 6 SATA ports that used to be basically standard were often adequate, though my old fileserver used 8 because I had 5 HDDs, a boot SSD, an optical drive for backups of high-value data, and a spare port I could use to connect old drives and transfer off their contents (I know USB could be used for this, but I don't trust USB with an old HDD that has errors).

And M.2 drives are faster.
And I guess there are ways to house plenty of M.2 drives.
Heh, I got you on a technicality. M.2 is just a connector + form factor. There are both SATA and NVMe drives available in the M.2 form factor, and not every M.2 slot supports both! If it's a SATA drive, then it's no faster than a cabled SATA drive (sadly).
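If anyone's unsure what their M.2 drive is actually running, on Linux lsblk will report the transport per device; a quick sketch (device names will vary):

```python
import json
import subprocess

# lsblk reports the transport (sata, nvme, usb, ...) for each block device.
out = subprocess.run(
    ["lsblk", "--json", "-o", "NAME,TRAN,MODEL,SIZE"],
    capture_output=True, text=True, check=True,
)
for dev in json.loads(out.stdout)["blockdevices"]:
    print(dev["name"], dev.get("tran"), dev.get("model"), dev["size"])
```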

Anyway, for something like a fileserver, SATA is fine. Even a single SSD can saturate a 5 Gigabit ethernet connection, while a RAID-5 could easily saturate 10 Gigabits. I certainly have nothing against NVMe, except that it's much cheaper to pack a bunch of SATA ports into a mainstream board than it is a comparable number of PCIe lanes.
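Back-of-the-envelope on that saturation claim, with assumed (round-number) sustained sequential read rates:

```python
# Assumed sustained sequential reads, MB/s (hypothetical round figures).
sata_ssd = 550        # one SATA SSD, near the SATA3 ceiling
raid5_drives = 4      # RAID-5 reads stripe across all members

def gbps(mb_s: float) -> float:
    return mb_s * 8 / 1000

print(f"1x SATA SSD:          {gbps(sata_ssd):.1f} Gbps (vs 5 GbE)")
print(f"4x SSD RAID-5 reads:  {gbps(sata_ssd * raid5_drives):.1f} Gbps (vs 10 GbE)")
```

So roughly 4.4 Gbps from a single drive and well past 10 Gbps from a small array, before protocol overhead.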
 
Plenty of storage applications still depend on HDDs. Mostly offline/nearline storage and where GB/$ is the dominant factor. Most of the data in the cloud sits at rest, which means it just needs to be accessible enough, and speed isn't otherwise an overriding concern. That rules out tape, but makes it amenable to HDD storage.

The other point I have to keep reminding people is that SSDs aren't suitable for cold storage. You can't put them in a drawer or on a shelf and just forget about them, the way you can with HDDs (to some extent), tape, or optical. If you're not periodically powering them on, they'll lose their contents. I've personally experienced this, and it's only going to become more common as NAND cell sizes continue to shrink and more bits keep getting packed into them (e.g. QLC, PLC).

I can say from personal use over the last year that the HDD market is far from dead. I upgraded my storage capacity to 120TB x2 using enterprise-grade HDDs. There is absolutely a market for those with large storage needs, like enterprise/cloud, creators, the scientific community, etc. I get that most home users don't need the capacity my household does. And while users like me may be niche, we are not going away anytime soon, for all the reasons you mentioned above. It's just the average home users who have mostly disappeared from the market.

In my case I currently use two separate machines (the wife's and mine) running StableBit DrivePool (JBOD) to ensure our data is safe, instead of the more typical RAID setup on a single machine. I have used RAID extensively over the years but honestly find this solution to be a safer and mostly more convenient option, though it is a bit more time-consuming. If a drive fails or a machine gets totally fried by a failed PSU, I have it all on the other machine. Sadly, I have had exactly one PC completely die from a PSU going bad...it literally killed every powered component in my setup save my water pump...

With my pool setup, I'll only have to rewrite the failed drive's files, though typically I tend to freshen up the entire data pool whether or not a failure happens. Currently, a full rewrite is my gravest point of failure with this setup, along with the scenario where drives holding the same data fail on both machines.

Thus, in the next year or so, I am looking at getting a Thunderbolt- or USB4-attached 10-bay enclosure, roughly, to further minimize my chance of losing our data. I plan to have it at her folks' place most of the time, only bringing it around every month or so for backups (her folks have a slow connection, or I would just build a remote node), then quickly returning it. That's another BIG bit a lot of people forget to do: have a copy of all your data off-site. It only takes one house fire/lightning strike/etc. to eliminate all your data in the blink of an eye if you don't have off-site storage.

At the end of the day, you hit the nail on the head. Between the cost per GB and the inherent issues with NAND as cold storage, HDDs are sticking around for a long time. Until NAND can match HDDs' cost and cold-storage data retention, HDDs and tape will continue to afford users/corporations vastly greater storage capacity at a fraction of the cost of NAND.

As demand for HDDs falls, they will become more expensive and less profitable due to fixed costs, which will cause demand to collapse further.
While true to a degree, as things currently stand that is hardly a big issue at the moment. The HDD market only died in the consumer space. NAND will continue to be too cost-prohibitive and more than a little problematic for cold storage. I'd be amazed to see HDDs completely die in the next 15 years, for all the reasons bit_user and I both brought up. There are just too many markets where cost per GB is king and data retention is paramount. Until NAND-based solutions can address those issues fully, things will not change.

I certainly have nothing against NVMe, except that it's much cheaper to pack a bunch of SATA ports into a mainstream board than it is a comparable number of PCIe lanes.
It doesn't help that Intel and AMD have cut the number of lanes they offer on mainstream CPUs/boards over the years, either. Long gone are the days of a mainstream board/CPU having 40 lanes total, with two full x16 slots, or boards using multiplexers to give you 4 or more slots at 'full' speed.
 
It doesn't help that Intel and AMD have cut the number of lanes they offer on mainstream CPUs/boards over the years, either. Long gone are the days of a mainstream board/CPU having 40 lanes total, with two full x16 slots, or boards using multiplexers to give you 4 or more slots at 'full' speed.
I think you're comparing AM2+, but that had a PCIe root node in the motherboard chipset and was limited to PCIe 2.0. Also, I have an 890FX board and it supports only x36 PCIe 2.0 lanes; are you sure they ever supported 40?

In AM5, AMD provides 24 CPU-direct PCIe 5.0 lanes + 4x PCIe 4.0 lanes for the chipset. The chipset can fan that out into as many as 12x 4.0 + 8x 3.0 lanes, though ultimately bottlenecked by that x4 connection to the CPU. That works out to x36 total PCIe 4.0+ lanes + x8 3.0 lanes. Very comparable lane count, and a heck of a lot more I/O bandwidth!
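Tallying the raw bandwidth (per-lane rates rounded; the chipset's downstream lanes share the x4 uplink, so I only count CPU-direct links):

```python
# Approximate usable bandwidth per lane, GB/s, by PCIe generation.
GB_S_PER_LANE = {2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

old_platform = 40 * GB_S_PER_LANE[2]                  # 40x PCIe 2.0 lanes
am5 = 24 * GB_S_PER_LANE[5] + 4 * GB_S_PER_LANE[4]    # AM5 CPU-direct only
print(f"40x PCIe 2.0 lanes: {old_platform:.0f} GB/s") # ~20 GB/s
print(f"AM5 CPU-direct:     {am5:.0f} GB/s")          # ~102 GB/s
```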

Yes, you don't get dual x16 PCIe slots, but the fact that you can run two x8 PCIe 5.0 cards sort of makes up for it. Plus, multi-GPU is basically a thing of the past.

As for Intel, Sandy Bridge supported only x20 PCIe 2.0 CPU-direct lanes (not counting DMI), although using the Xeon E3 equivalent would unlock another x4. Intel now has even more: x20 CPU-direct PCIe lanes, plus a DMI connection that amounts to another x8 PCIe 4.0 lanes.

I think your main beef is with motherboard makers. If they'd drop a PCIe switch on that x16 link, you could have plenty of x16 slots. However, the market just isn't there for such a product. People who need lots of PCIe lanes are few enough and most are willing & able to pay the premium for a Xeon W or ThreadRipper.
 
I think your main beef is with motherboard makers. If they'd drop a PCIe switch on that x16 link, you could have plenty of x16 slots. However, the market just isn't there for such a product. People who need lots of PCIe lanes are few enough and most are willing & able to pay the premium for a Xeon W.
Quite true. I have always used a lot of lanes. Sadly, that market has all gone HEDT or higher. Users like me became too niche to support in the mainstream. And the death of SLI made the issue worse, as it shrank the PCIe-hungry crowd even further.

I think with the trend toward NVMe drives, increasing PCIe lanes or adding PCIe bridges again would be beneficial. Think of how nice a motherboard with eight or more NVMe drives would be. Yes, we'd likely have to switch from the parallel mounting we mostly use now to a perpendicular mount, like AIBs/GPUs use, to have the 'room' to do so.
 
I'm curious about the home consumer market for high capacity storage.

Not corporate; not home "businesses".

Is this multi-TB storage requirement primarily due to video files?

Are there people with terabytes of text files, PDFs, Word, Excel?

I think one million high bit rate MP3s would occupy 4 or 5 TB, but I'd think that would be an extremely unusual household.

My "original" data is:

69% video files
15% audio files; nearly all mp3
10% Macrium image files of boot drives containing no personal data
3% pictures (jpegs and tiffs)
3% everything else

The "everything else" includes less than .5 percent of traditional "text files"....Word, Excel, txt, PDF.

Is all of that atypical?

Do any of you admit to being a hoarder?
 
I'm curious about the home consumer market for high capacity storage.
I use mine for backups of other machines, CD rips (using FLAC codec), my own photos and videos, some recordings of 16 Mbps terrestrial HDTV broadcasts, various stuff I've downloaded over the years, and my other personal files.

My fileserver is currently running at 8 TB of usable storage. The disk configuration is 4x 4 TB HDDs in a RAID-6, enabling me to recover after losing up to 2 drives. A full scrub takes 13 hours.
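For anyone checking the math on that config, a quick sketch:

```python
drives, size_tb, parity = 4, 4, 2        # RAID-6 reserves two drives' worth
usable_tb = (drives - parity) * size_tb
print(f"usable capacity: {usable_tb} TB")                 # 8 TB

scrub_hours = 13
# A scrub reads every member drive in full, so the implied per-drive rate is:
mb_s = size_tb * 1e6 / (scrub_hours * 3600)
print(f"implied scrub rate: {mb_s:.0f} MB/s per drive")   # ~85 MB/s
```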
 
The other point I have to keep reminding people is that SSDs aren't suitable for cold storage. You can't put them in a drawer or on a shelf and just forget about them, the way you can with HDDs (to some extent), tape, or optical. If you're not periodically powering them on, they'll lose their contents. I've personally experienced this, and it's only going to become more common as NAND cell sizes continue to shrink and more bits keep getting packed into them (e.g. QLC, PLC).
When you experienced it, what kind of flash was it, and about how long did it take for data loss to occur?
I bought an external QLC drive (Crucial X6) a couple of years ago, before I knew about the increased danger of bit rot with QLC. I probably won't do that again. : P I don't keep anything on it that's not backed up, and I try to plug it in at least once per week if I haven't used it. The longest it has gone unplugged was about two weeks, and I didn't notice any loss. I have no idea what to expect, though. Can loss occur in a month? A week?
 