News Seagate demonstrates 3D magnetic recording for 120+ TB HDDs — dual-layer media stacks data bits to boost capacity

8086

Distinguished
Mar 19, 2009
56
30
18,560
Once upon a time, tape drives were the primary form of storage, but once replaced, they hung around for many decades as a backup form of bulk storage. The same will hold true for hard drives in the decades to come.
 
  • Like
Reactions: bit_user and KyaraM

PEnns

Reputable
Apr 25, 2020
632
661
5,760
We've been hearing and reading countless news stories about those gigantic 50 to 100+ TB HDDs for over 10 years now... will this be any different?
 

Notton

Prominent
Dec 29, 2023
343
299
560
It'd be nice if NAND dropped to HDD levels of price/GB, but that only happened against 2.5" HDDs in 2023, during the NAND price crash.
 

jasonf2

Distinguished
That's pretty useless since SSDs will replace HDDs before 2023.
That isn't exactly true. There are a number of applications where rust drives still have significant advantages. Flash doesn't have great write tolerance, and its cost per terabyte is also higher than that of spin drives. So for applications like video recording systems, which have constant overwrite cycles, and mid-to-low-tier archival storage setups, spin drives will remain viable for the foreseeable future. Those use cases have a huge need for capacity, and that is why you are seeing a race in the spin drive market for bigger and bigger drives. The obvious latency and bandwidth differences between spin drives and SSDs have all but removed them from the main-tier storage market in PCs, but they are still alive and kicking in the NAS, server, and security markets.
 

jasonf2

Distinguished
I wonder how throughput is being tackled with these. It looks like SATA will die before HDD's, as ironic as that sounds.
SATA and SCSI both still have very viable uses. While PCIe is much better at handling SSD workloads (due to latency and bandwidth), SATA and SCSI have clear advantages when it comes to spin drives. It would not make sense to tie up the PCIe lanes needed for an M.2 port per spin drive, especially when running banks of them. They are also built for the linear nature of the read/write head. SCSI also supports hot swapping, which is pretty limited in direct-to-PCIe connections. That is really important in the server realm in RAID configurations, where failed drives are pulled, replaced, and recommissioned while the machine is hot. SATA and SCSI are such close cousins that SATA can run on a SCSI rack, and it isn't unusual to use cheap SATA drives when reliability isn't a big concern and cost is. So I am pretty sure that as long as spin drives are available, SATA and SCSI will be too. There just isn't any real need to add much to the bus standards for spin drives. That said, there have been some NVMe spin drives made; I think Seagate put one out, but I don't know how viable they have been or what the point of them actually was.
 

atomicWAR

Glorious
Ambassador
That's pretty useless since SSDs will replace HDDs before 2023.
LOL, neat trick, since we are in 2024 now. Boot drives have gone all SSD for a while now, but for bulk storage, not so much. Until I can buy a 20TB SSD for under 400 dollars...there will always be a need for HDDs and their spinning rust platters. We have 120TB in both my and my wife's rigs to store our data, so users like us...will need HDDs for the foreseeable future. Going all SSD would have required a second mortgage on our house to afford that much space.
 
  • Like
Reactions: Amdlova
That's pretty useless since SSDs will replace HDDs before 2023.
Yeah no, HDDs are still a thing for bulk storage, and for simplicity compared to tapes: just plug and play, and they don't take a long time to read/write unless it's an SMR HDD.

SSDs won't ever replace HDDs unless multi-level-cell durability gets better and SSDs get that cheap, and that's pretty much it. Don't say HDDs are far more fragile when, in reality, either kind of device will fail if it isn't treated carefully. Only if there were SLC (not SLC cache, true SLC) that is dense enough but also cheap would SSDs win the race, since you get the durability of a single-level cell, which is why even used SLC/MLC SSDs are still expensive nowadays compared to cheapo QLC drives.

My 2013 1TB HDD (Seagate) is still at 100% health after numerous clean formats (diskpart clean), while my SSD, bought only a year ago (a Team MP34, and yes it has DRAM), has already fallen to 94% after a few diskpart cleans on just a split partition. So there you have it: on durability, cost, and function, HDD; on fast access, responsiveness, and simplicity, SSD.
 
  • Like
Reactions: bit_user

jasonf2

Distinguished
Yeah no, HDDs are still a thing for bulk storage, and for simplicity compared to tapes: just plug and play, and they don't take a long time to read/write unless it's an SMR HDD.

SSDs won't ever replace HDDs unless multi-level-cell durability gets better and SSDs get that cheap, and that's pretty much it. Don't say HDDs are far more fragile when, in reality, either kind of device will fail if it isn't treated carefully. Only if there were SLC (not SLC cache, true SLC) that is dense enough but also cheap would SSDs win the race, since you get the durability of a single-level cell, which is why even used SLC/MLC SSDs are still expensive nowadays compared to cheapo QLC drives.

My 2013 1TB HDD (Seagate) is still at 100% health after numerous clean formats (diskpart clean), while my SSD, bought only a year ago (a Team MP34, and yes it has DRAM), has already fallen to 94% after a few diskpart cleans on just a split partition. So there you have it: on durability, cost, and function, HDD; on fast access, responsiveness, and simplicity, SSD.
I am not arguing with your premise as a whole. The only thing I would note is that, overall, spin drives aren't really more reliable than SSDs, due to their mechanical nature. SSD degradation comes from the limited write cycles of the flash memory used, so when you buy an SSD, its finite life is not only calculable but can also be monitored as bad flash cells are taken offline. Spin drives are a bit less nuanced as they pretty much operate at 100% until they fail. If you have any bad sectors on a spin drive it is toast. So when a spin drive indicates any degradation, it needs to be replaced ASAP. Over the last couple of years I have read a few articles that looked at actual real-world reliability: under normal workloads, SSDs were less failure-prone overall. That said, in any workload with large numbers of write cycles an SSD is toast pretty quickly, whereas a spin drive is going to run to mechanical failure regardless of what you do with it. There have also been solid-state techs like 3D XPoint that aren't as write-sensitive as flash, but they never really saw large-scale adoption due to cost.
 
I am not arguing with your premise as a whole. The only thing I would note is that, overall, spin drives aren't really more reliable than SSDs, due to their mechanical nature. SSD degradation comes from the limited write cycles of the flash memory used, so when you buy an SSD, its finite life is not only calculable but can also be monitored as bad flash cells are taken offline. Spin drives are a bit less nuanced as they pretty much operate at 100% until they fail. If you have any bad sectors on a spin drive it is toast. So when a spin drive indicates any degradation, it needs to be replaced ASAP. Over the last couple of years I have read a few articles that looked at actual real-world reliability: under normal workloads, SSDs were less failure-prone overall. That said, in any workload with large numbers of write cycles an SSD is toast pretty quickly, whereas a spin drive is going to run to mechanical failure regardless of what you do with it. There have also been solid-state techs like 3D XPoint that aren't as write-sensitive as flash, but they never really saw large-scale adoption due to cost.
Hard drives having moving parts does give them a limited lifespan, but in my experience, hard drives tend to give at least a few months of warning before complete failure. A few bad sectors aren't an immediate death sentence; a disk scan can find them and mark them as not to be used. My employer owns thousands of machines like digital jukeboxes and game machines, so I deal with that on a daily basis. One issue that we've faced with SSDs is that there can be bad batches with firmware issues that don't show up immediately. That can get extremely problematic, and you get little to no warning.

I'm also laughing about all the people replying to the first post as if it was serious and not a joke.
 
  • Like
Reactions: bit_user and t3t4

t3t4

Great
Sep 5, 2023
90
36
60
Well, I'm looking forward to it at least. If the price is right and the read/write speeds are there, and "IF" the tech can prove its endurance compared to our current magnetic drives, well then sign me up, Scotty!

I have 90-some TB spread across multiple drives; it sure would be nice to consolidate that ever-growing number into one reasonable location.
 

USAFRet

Titan
Moderator
Well, I'm looking forward to it at least. If the price is right and the read/write speeds are there, and "IF" the tech can prove its endurance compared to our current magnetic drives, well then sign me up, Scotty!

I have 90-some TB spread across multiple drives; it sure would be nice to consolidate that ever-growing number into one reasonable location.
Buy 2, because you need a backup.
 
The amount of data lost and/or the annoyance of rebuilds when there's a drive failure had me resisting larger-capacity drives for a long time (until last year my storage was 8x 4TB in RAID 6). The individual drive cost factor doesn't matter to businesses, which is why the biggest drives possible will pretty much always be best for them, but for a home user multiple smaller drives are always going to be a better choice. For example, my storage server boxes have had 2-disk redundancy for over a decade, so buying 8x 20TB drives would make infinitely more sense than buying 3x 120TB.
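To put rough numbers on that comparison (a quick sketch only; the drive counts and 2-disk redundancy come from the example above, and it assumes a RAID 6-style layout where a rebuild rewrites roughly one full drive):

```python
# Rough comparison of two ways to get ~120TB usable with 2-disk redundancy.
# Assumption: RAID 6-style layout, so usable capacity = (drives - 2) * size,
# and a rebuild has to rewrite about one full drive's worth of data.

def usable_tb(drives: int, size_tb: int, parity: int = 2) -> int:
    """Usable capacity for an array with `parity` drives of redundancy."""
    return (drives - parity) * size_tb

for drives, size_tb in [(8, 20), (3, 120)]:
    raw = drives * size_tb
    usable = usable_tb(drives, size_tb)
    print(f"{drives} x {size_tb}TB: raw {raw}TB, usable {usable}TB, "
          f"rebuild rewrites ~{size_tb}TB per failed drive")
```

Same usable space either way, but the 3x 120TB route burns far more raw capacity on redundancy, and every rebuild has to move six times as much data.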
 
  • Like
Reactions: bit_user

t3t4

Great
Sep 5, 2023
90
36
60
The amount of data lost and/or the annoyance of rebuilds when there's a drive failure had me resisting larger-capacity drives for a long time (until last year my storage was 8x 4TB in RAID 6). The individual drive cost factor doesn't matter to businesses, which is why the biggest drives possible will pretty much always be best for them, but for a home user multiple smaller drives are always going to be a better choice. For example, my storage server boxes have had 2-disk redundancy for over a decade, so buying 8x 20TB drives would make infinitely more sense than buying 3x 120TB.
Back when 1TB Hitachi drives were first hitting the consumer market, I got one. I thought I'd never need more storage than that. Oh how wrong I was. Then it failed... Then I got another at almost $400 a pop! Then it failed too!!!! Then I stayed away from large capacity drives for many, many years.

The only reasonable way to get the mega amounts of storage needed with current tech is to build an array or use multiple small individual drives. It sucks either way! I don't want to light up 3+ drives in an array just to access a single file, nor do I want to go through 5 or more individual drives just to find the file I need. We definitely need larger capacities in single drives, but I fear the early-adopter price tag, and I for one will not relive those 1TB early-adopter days. I will NOT put up with that regardless of tech or price.

So whichever direction this goes, I'll be waiting for test data that proves it. We already know how questionable optical media is and has been; some of us, like myself, had to learn that the very worst way possible. I am very much looking forward to much larger-capacity HDDs, but I'm now three times bitten and forever shy! The tech must prove itself thoroughly to me. But "IF" it can do that, I'll be all aboard that train!
 

bit_user

Polypheme
Ambassador
SATA and SCSI both still have very viable uses.
SCSI is dead, unless you mean iSCSI or SAS (Serial-Attached SCSI). I never see/hear people referring to SAS as "SCSI", though.

SATA and SCSI have clear advantages when it comes to spin drives.
I call BS on this.

It would not make sense to tie up the PCIe lanes needed for an M.2 port per spin drive,
Servers don't use M.2, generally speaking. They use U.2 and, increasingly, E1 form factors (E1.S and E1.L).


I'm seeing some adoption of E3.S, as well.

I'm not sure if it's common to have a partially-populated U.2 port, but you certainly could reduce the lane count.

They are also built for the linear nature of the read/write head.
Haven't you ever heard of command queuing? Normally, the OS sends multiple, overlapping requests to the drive, and its controller firmware works out the best order to perform them, according to the optimal seek pattern for the head. It's hard for the OS to do, since it doesn't know the platter orientation, nor the seek/access times for moving the head different distances.
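As a toy illustration of what the drive firmware does with a queue (purely a simplified model using greedy nearest-track-first ordering; real NCQ/TCQ also accounts for rotational position and other details):

```python
# Toy model of command queuing: the drive reorders queued requests to
# minimize head travel, something the OS can't do well because it doesn't
# know the drive's geometry or seek characteristics.

def reorder_greedy(head: int, requests: list[int]) -> list[int]:
    """Service the nearest pending track first (simplified seek model)."""
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda track: abs(track - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # queued request tracks
print("FIFO order:   ", queue)
print("Drive reorder:", reorder_greedy(head=53, requests=queue))
```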

SCSI also supports hot swapping, which is pretty limited in direct-to-PCIe connections.
Dude, the server industry long ago embraced NVMe for SSDs. Don't you think they would've figured out NVMe hot swap by now?

There just isn't any real need to add much to the bus standards for spin drives. That said, there have been some NVMe spin drives made; I think Seagate put one out, but I don't know how viable they have been or what the point of them actually was.
Well, as implied by your subsequent comment, NVMe does support mechanical HDDs.

One argument for this is quite clear: reduce hardware cost & complexity. By changing the drive interface to NVMe, you get rid of SATA & SAS controllers from server boards/SoCs. Also, software support for the drivers is no longer needed.

If you consider that an AMD EPYC server has 128+ PCIe lanes, you easily have more than enough lanes to connect however many HDDs you might want to stuff into a chassis.
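A rough sketch of the lane math (the reserved-lane count and one-lane-per-HDD allocation are assumptions for illustration; a single PCIe lane is already far more bandwidth than a mechanical drive can use):

```python
# Back-of-the-envelope PCIe lane budget for NVMe HDDs on a server platform.
# Assumptions: 128 lanes total (single-socket EPYC), one x1 link per HDD,
# and some lanes reserved for NICs, boot SSDs, and other devices.

TOTAL_LANES = 128
RESERVED_LANES = 32          # NICs, boot drives, HBAs, etc. (assumed)
LANES_PER_HDD = 1            # PCIe 4.0 x1 ~2 GB/s vs ~0.3 GB/s HDD peak

available = TOTAL_LANES - RESERVED_LANES
print(f"HDDs supportable on remaining lanes: {available // LANES_PER_HDD}")
```

Even with a quarter of the lanes set aside for everything else, that leaves more drive connections than a typical chassis has bays.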

Spin drives are a bit less nuanced as they pretty much operate at 100% until they fail.
This is not true. If you look at the SMART stats, there are ample pre-failure warning signs.
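For example, here is a minimal sketch of the kind of check a monitoring script might do. The attribute names are standard SMART attributes as reported by tools like `smartctl -A`; the sample values are invented for illustration.

```python
# Minimal sketch: flag a drive when SMART attributes that commonly precede
# failure start climbing. The sample values below are made up.

WATCHLIST = [
    "Reallocated_Sector_Ct",
    "Current_Pending_Sector",
    "Offline_Uncorrectable",
    "Reported_Uncorrect",
]

sample_drive = {                      # hypothetical raw SMART values
    "Reallocated_Sector_Ct": 24,
    "Current_Pending_Sector": 8,
    "Offline_Uncorrectable": 0,
    "Reported_Uncorrect": 0,
}

warnings = [name for name in WATCHLIST if sample_drive.get(name, 0) > 0]
if warnings:
    print("Pre-failure warning signs:", ", ".join(warnings))
else:
    print("No pre-failure indicators in the watched attributes.")
```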

If you have any bad sectors on a spin drive it is toast.
This is not necessarily true. RAID controllers tend to be paranoid and will drop a disk from the array if it ever encounters a failed read, but that doesn't mean the drive is "toast".
 
Last edited:
  • Like
Reactions: thestryker

bit_user

Polypheme
Ambassador
Well, I'm looking forward to it at least. If the price is right and the read/write speeds are there, and "IF" the tech can prove it's endurance compared to our magnetic drives, well then sign me up Scotty!
The article throws doubt on whether/when it will happen. Furthermore, it sounds like the layers must be written separately. That holds a dismal prospect for write speeds, where you could see a 2-4x capacity increase accompanied by zero increase in write performance. That will push RAID rebuild times into the multi-week range!!!
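A quick sanity check on the math (the ~280 MB/s sustained write rate is an assumed ballpark for today's large HDDs, and real rebuilds also compete with normal I/O, so these are best-case figures):

```python
# Best-case time to sequentially rewrite an entire drive during a rebuild,
# assuming write speed stays flat while capacity scales 2-4x.

SUSTAINED_WRITE_MBPS = 280   # assumed sequential write rate, MB/s

for capacity_tb in (24, 60, 120):
    seconds = capacity_tb * 1e12 / (SUSTAINED_WRITE_MBPS * 1e6)
    print(f"{capacity_tb:>3} TB drive: ~{seconds / 86400:.1f} days to rewrite")
```

Roughly five days for 120 TB is the best case with the array otherwise idle; under normal load a rebuild can easily run several times longer, which is where multi-week estimates come from.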

I hope it's possible at least to read multiple layers in one pass. That would at least be something.
 
Last edited:

t3t4

Great
Sep 5, 2023
90
36
60
The article throws doubt on whether/when it will happen. Furthermore, it sounds like the layers must be written independently. That holds a dismal prospect for write speeds, where you could see a 2-4x capacity increase accompanied by zero increase in write performance. That will push RAID rebuild times into the multi-week range!!!

I hope it's possible at least to read multiple layers in one pass. That would at least be something.
Fingers crossed it can at least match current speeds. I don't really need more speed, just more storage in one location. I guess we'll just hurry up and wait and see.
 

NatalieEGH

Distinguished
Nov 23, 2012
49
7
18,545
That's pretty useless since SSDs will replace HDDs before 2023.
First, it is already 2024. As your post is less than 48 hours old, I am assuming you are either quoting something from an article years ago to show it was wrong, or you are joking. There are many that still use HDDs, including consumers, let alone businesses and governments.

I have 256TB+ of main HDD storage in my system. It consists of 2x 8x16TB HDDs (RAID 10) plus a few older HDDs that I use for games, personal data files, temp storage when moving things around, and the Windows pagefile. It cost me a bit over $4000. How much would such a system cost with SSDs? Does it all fit in my tower? No; that is why I built a NAS with capacity for 36 hot-swap 3.5" HDDs, to also allow for expansion. (It actually also has 2x 2.5" mounting points, not really bays.)

Oh, and outside of my main system, I am only using 16x 16TB drives in the 36-drive cabinet.
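For reference, a rough sketch of how that raw capacity breaks down (drive counts from the description above; in RAID 10 everything is mirrored, so usable space is half of raw):

```python
# RAID 10 capacity sketch for 2 arrays of 8x 16TB drives each.
# RAID 10 stripes across mirrored pairs, so usable capacity is half of raw.

ARRAYS = 2
DRIVES_PER_ARRAY = 8
DRIVE_TB = 16

raw = ARRAYS * DRIVES_PER_ARRAY * DRIVE_TB
usable = raw // 2
print(f"Raw capacity:     {raw} TB")    # 256 TB
print(f"Usable (RAID 10): {usable} TB") # 128 TB
```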

People were making guesses on just how soon magnetic tape media would disappear. It has possibly disappeared for consumer systems, except for those few that still have Apple IIs and similar computers from the 1970s and early 1980s. I bought my son a computer with a cassette drive connection for doing LOGO (turtle graphics) in the mid-to-late 1980s.

Many large corporate and government sites still use magnetic tape for archival purposes, for reasons of cost. A tape capable of storing 18TB uncompressed (45TB compressed) costs less than $100. Those tapes can also hold multiple archives (if they do not need all the storage each day), with a directory of files for quickly finding and recovering a given file. For sites with massive storage requirements that still need the ability to completely recover a system quickly, the entire system can be archived on just a few tapes, versus all the HDDs, SSDs, or M.2 NVMe drives that would otherwise be required (and NVMe drives, admittedly, are getting as big as most SSDs, just more expensive).
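Using the figures in this post (a quick sketch only: under $100 per 18TB-native tape, and the roughly $4000 / 256TB raw HDD build described above):

```python
# Cost-per-TB comparison from the figures quoted in this thread.
# Tape: <$100 for 18TB native (45TB compressed); HDD: ~$4000 for 256TB raw.

tape_native = 100 / 18       # $/TB, uncompressed
tape_compressed = 100 / 45   # $/TB, assuming ~2.5:1 compressible data
hdd = 4000 / 256             # $/TB, raw capacity of the HDD build above

print(f"Tape (native):     ${tape_native:.2f}/TB")
print(f"Tape (compressed): ${tape_compressed:.2f}/TB")
print(f"HDD (raw):         ${hdd:.2f}/TB")
```

That is media cost only, of course; the tape drive or library itself is a significant up-front cost that those per-TB numbers leave out.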
 
  • Like
Reactions: bit_user