News Seagate demonstrates 3D magnetic recording for 120+ TB HDDs — dual-layer media stacks data bits to boost capacity


t3t4

Great
Sep 5, 2023
92
36
60
First, it is already 2024. As your post is less than 48 hours old, I am assuming you are either citing something from an article years ago to show it was wrong, or you are joking. There are many who still use HDDs, including consumers, let alone businesses and governments.

It was a joke; wasn't it obvious from the number 2023? I dunno, but I thought it was kinda funny. Just like that crystal storage tech we were promised at least a decade ago. We can store data in DNA, but we still can't get crystal tech off the ground?

Yes, the whole world still uses HDDs and half of it still uses tape. Laugh where ya can, smile at the rest, but always enjoy your day. Cheers 🍻
 

stoatwblr

Distinguished
Sep 12, 2011
39
8
18,535
Yes, the whole world still uses HDDs and half of it still uses tape. Laugh where ya can, smile at the rest, but always enjoy your day. Cheers 🍻
On a more serious note, HDD unit sales are 90+% down on what they were a decade ago, as SSD has essentially taken over the consumer market

HDD is now a niche market which has pivoted to the bit-barns, just like tape did 20 years ago
 

bit_user

Polypheme
Ambassador
On a more serious note, HDD unit sales are 90+% down on what they were a decade ago
Source?

as SSD has essentially taken over the consumer market

HDD is now a niche market which has pivoted to the bit-barns, just like tape did 20 years ago
SSD is a less-than-ideal backup medium, so the areas they haven't penetrated will tend to be HDD-dominated into the foreseeable future.

Most of the data in the cloud is still on HDDs. I don't consider that a "niche market". There are other bulk data applications, like video surveillance, that also still favor HDDs.

Many people seem to think the only things that matter about storage are cost and performance. They forget about important details like data retention.
 
  • Like
Reactions: t3t4

jasonf2

Distinguished
SCSI is dead, unless you mean iSCSI or SAS (Serial-Attached SCSI). I never see/hear people referring to SAS as "SCSI", though.


I call BS on this.


Servers don't use M.2, generally speaking. They use U.2 and increasingly E1 form factors (E1.S and E1.L):

I'm seeing some adoption of E3.S, as well.

I'm not sure if it's common to have a partially-populated U.2 port, but you certainly could reduce the lane count.


Haven't you ever heard of command queuing? Normally, the OS sends multiple, overlapping requests to the drive, and its controller firmware works out the best order to perform them, according to the optimal seek pattern for the head. It's hard for the OS to do, since it doesn't know the platter orientation, nor the seek/access times for moving the head different distances.


Dude, the server industry has long ago embraced NVMe for SSDs. Don't you think they would've figured out NVMe hot swap, by now?
(dated 2019, I might point out)​


Well, as implied by your subsequent comment, NVMe does support mechanical HDDs.

One argument for this is quite clear: reduce hardware cost & complexity. By changing the drive interface to NVMe, you get rid of SATA & SAS controllers on server boards/SoCs. Also, driver support for those controllers is no longer needed.

If you consider that an AMD EPYC server has 128+ PCIe lanes, you easily have more than enough lanes to connect however many HDDs you might want to stuff into a chassis.


This is not true. If you look at the SMART stats, there are ample pre-failure warning signs.


This is not necessarily true. RAID controllers tend to be paranoid and will drop a disk from the array if it ever encounters a failed read, but that doesn't mean the drive is "toast".

To start, there hasn't been an active parallel SCSI implementation in 20 years. Any assumption that I was talking about anything other than SAS (Serial Attached SCSI) overlooks my comments on cross-compatibility with SATA, which parallel SCSI definitely did not have.

In regard to PCIe lane implementation, there would be absolutely no advantage to tying up individual lanes for a spinning drive. For an SSD, yes.

NCQ was originally implemented in SATA and SAS. Not sure what your point here is.

As to the rest of it, dedicating a PCIe lane to something that tops out at maybe 200 MB/s is fine but serious overkill. My argument is that it doesn't bring any advantage to the table performance-wise, not that it doesn't work. Also, how you choose to run your machine is fine, but any spinning drive with bad sectors is in the process of failing, in my opinion, and should be repaired/replaced. Worn-out blocks on an SSD are normal and are not indicative of failure.

SATA and SAS are still very relevant for spinning drives. It is the spinning drive whose relevance is questionable.
 

bit_user

Polypheme
Ambassador
In regard to PCIe lane implementation, there would be absolutely no advantage to tying up individual lanes for a spinning drive. For an SSD, yes.
As I said, the benefit is in cost & simplicity, at a system level. Modern servers generally have enough PCIe lanes that devoting one per spinning disk isn't unreasonable and avoids the need for separate SATA or SAS controller hardware. It also simplifies software, because then all of your storage is just NVMe.
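To put rough numbers on it, here's a back-of-envelope sketch of the lane budget; every figure below is an illustrative assumption, not any particular board's spec:

```python
# Rough lane budget for an NVMe-attached HDD chassis.
# All numbers are illustrative assumptions, not a real server's specification.

TOTAL_LANES = 128        # e.g. a single-socket EPYC-class platform
RESERVED_LANES = 32      # assumed set-aside for NICs, boot SSDs, accelerators
LANES_PER_HDD = 1        # one x1 link per spinning disk

PCIE2_X1_MBPS = 500      # ~500 MB/s usable per PCIe 2.0 lane, one direction
HDD_PEAK_MBPS = 300      # outer-track sequential rate of a fast modern HDD

hdd_slots = (TOTAL_LANES - RESERVED_LANES) // LANES_PER_HDD
headroom = PCIE2_X1_MBPS / HDD_PEAK_MBPS

print(f"HDDs on x1 links: {hdd_slots}, per-drive bandwidth headroom: {headroom:.1f}x")
```

Even with only a PCIe 2.0-class x1 link per drive, the link isn't the bottleneck.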

NCQ was originally implemented in SATA and SAS. Not sure what your point here is.
You claimed that "SATA and SCSI" are "built for the linear nature of the read/write head", which is silly, especially in the LBA era, and ignores that SCSI innovated Tagged Command Queuing, which some PATA drives & controllers subsequently copied. The whole point of command queuing was to further decouple the protocol from the linear nature of the read/write head!
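For anyone who hasn't seen it from the host side, here's a minimal sketch of what "multiple, overlapping requests" looks like; the path, offsets, and queue depth are placeholder assumptions, and it only illustrates the principle:

```python
# Keep many reads in flight at once so the kernel's I/O scheduler and the
# drive's own command queue can reorder them to minimise head movement.
# The path, offsets, and queue depth below are placeholder assumptions.
import os
import random
from concurrent.futures import ThreadPoolExecutor

PATH = "/path/to/large/file/on/hdd"   # placeholder
BLOCK = 4096
QUEUE_DEPTH = 32                       # overlapping requests kept in flight

fd = os.open(PATH, os.O_RDONLY)
offsets = [random.randrange(0, 10 * 2**30, BLOCK) for _ in range(1024)]  # ~10 GB span

def read_at(offset):
    # pread() is thread-safe on a shared fd; each call is an independent request.
    return len(os.pread(fd, BLOCK, offset))

with ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
    total = sum(pool.map(read_at, offsets))

os.close(fd)
print(f"read {total} bytes with up to {QUEUE_DEPTH} requests in flight")
```

With queue depth 1, the drive can only service requests in the order they arrive; with 32 in flight, it can sort them by seek distance, which is exactly the decoupling command queuing was invented for.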

As to the rest of it, dedicating a PCIe lane to something that tops out at maybe 200 MB/s is fine but serious overkill.
The PCIe version was never specified. PCIe 2.0 tops out at about 500 MB/s per lane (uni-dir), which would be a good fit for HDDs that can easily reach 300 MB/s and beyond. Here's a 24 TB, 7.2k RPM drive that goes up to 298 MB/s:

SATA and SAS are still very relevant for spinning drives. It is the spinning drive whose relevance is questionable.
I expect mechanical hard disks will easily outlast the SATA/SAS interface, in datacenter applications.

As for what happens in client machines... I guess we'll have to wait and see. I sure wish U.2 had caught on, because it would be much easier to cool hot PCIe 5.0 SSDs in a 2.5" form factor than the way a lot of M.2 slots are situated, which might or might not receive particularly much airflow within the chassis. And if U.2 had caught on for SSDs, then it would be a natural way for consumer HDDs to transition over to NVMe.
 
Last edited:
  • Like
Reactions: thestryker
As I said, the benefit is in cost & simplicity, at a system level. Modern servers generally have enough PCIe lanes that devoting one per spinning disk isn't unreasonable and avoids the need for separate SATA or SAS controller hardware. It simplifies software, because then all of your storage is just NVMe.
Could even add in some PCIe 2.0 switches to expand ports, since that's plenty of bandwidth and they're not very expensive.
 
  • Like
Reactions: bit_user

jasonf2

Distinguished
As I said, the benefit is in cost & simplicity, at a system level. Modern servers generally have enough PCIe lanes that devoting one per spinning disk isn't unreasonable and avoids the need for separate SATA or SAS controller hardware. It also simplifies software, because then all of your storage is just NVMe.

You still need RAID, which makes this point kind of moot.
You claimed that "SATA and SCSI" are "built for the linear nature of read/write head", which is silly, especially in the LBA era, and ignores that SCSI innovated Tagged Command Queuing, which some PATA drives & controllers subsequently copied. The whole point of command queuing was to further decouple the protocol from the linear nature of the read/write head!

SATA and SAS implementations were extensions of their parallel predecessors, which developed around spinning-drive read/write-head optimization. NCQ only really came into play when large memory buffers in the drives themselves started to be implemented to reduce latency. NCQ is an extension, not a hardware protocol, and wasn't implemented until SATA 1.0. If we are arguing semantics, SCSI was originally designed as a universal system interface for peripherals. It just never really caught on that way. Regardless of original intent, all of these protocols ended up developing around and catering to their primary use devices as their multiple versions developed over time.
The PCIe version was never specified. PCIe 2.0 tops out at about 500 MB/s per lane (uni-dir), which would be a good fit for HDDs that can easily reach 300 MB/s and beyond. Here's a 24 TB, 7.2k RPM drive that goes up to 298 MB/s:
PCIe 5.0 is good for about 3.94 GB/s per lane (sans overhead). I would say that is overkill for a 300mpbs hd. SAS is good for 12 Gbps but can logically split that over 65,535 devices. In practice it is actually usable with up to 30 or 40 drives per SAS channel, but RAID is usually integrated directly into the card with the SAS controller.
I expect mechanical hard disks will easily outlast the SATA/SAS interface, in datacenter applications.

As for what happens in client machines... I guess we'll have to wait and see. I sure wish U.2 had caught on, because it would be much easier to cool hot PCIe 5.0 SSDs in a 2.5" form factor than the way a lot of M.2 slots are situated, which might or might not receive particularly much airflow within the chassis. And if U.2 had caught on for SSDs, then it would be a natural way for consumer HDDs to transition over to NVMe.
Probably.
 

bit_user

Polypheme
Ambassador
You still need RAID, which makes this point kind of moot.
No, traditional RAID has fallen out of favor with most hyperscalers. Rebuild times have gotten too long, and RAID doesn't protect against system-level outages.
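As a rough worked example (assuming ~250 MB/s sustained, which is an assumption rather than any specific drive's spec): rebuilding a 24 TB member means re-reading roughly 24,000,000 MB, i.e. 24,000,000 / 250 ≈ 96,000 seconds ≈ 27 hours, and that's before controller overhead or foreground load stretches it further.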

You talk very authoritatively, but in point after point, you've demonstrated quite a lack of knowledge about this industry and the way such customers actually deploy storage.

Lots of un-sourced, revisionist history in your claims. I never said "NCQ", I just said "command queuing". You're the one who decided to make this about NCQ, specifically.

In point of fact, PATA drives with command queuing delivered sizeable performance benefits, especially in concurrent workloads like Windows boot. In one case, I recall such a drive literally halving Win XP boot times. I would fully expect SCSI drives with command queuing to have delivered comparable gains in database workloads.

PCIe 5.0 is good for about 3.94 GB/s per lane (sans overhead). I would say that is overkill for a 300mpbs hd.
What's "mpbs"? Should be MB/s.

SAS is good for 12 Gbps but can logically split that over 65,535 devices.
How many drives do you expect a single chassis to host? The question isn't whether PCIe is overkill, but rather whether it's the most cost-effective option for the servers people actually deploy.
 
Last edited:

stoatwblr

Distinguished
Sep 12, 2011
39
8
18,535
Source?


SSD is a less-than-ideal backup medium, so the areas they haven't penetrated will tend to be HDD-dominated into the foreseeable future.

Most of the data in the cloud is still on HDDs. I don't consider that a "niche market". There are other bulk data applications, like video surveillance, that also still favor HDDs.

Many people seem to think the only things that matter about storage are cost and performance. They forget about important details like data retention.

HDD is a less-than-ideal backup medium too. If you want long-term (archival) storage, use tape, and if you're cycling your backups every couple of years then it really doesn't matter whether you're using SSD or HDD. (In any case, never assume at-rest data won't succumb to bitrot; RAIT exists for a reason, despite the paranoia of LTO writing algorithms.)

WRT "most of the data in the cloud" , most of it is on tape, not HDDs - this is one of the reasons why you pay through the nose for _reading_ data. All the big bitbarns use various forms of hierarchical storage systems which put the vast bulk onto tape - which stays in the robots and effectively becomes very slow nearline storage.

In terms of data retention, that's mostly at-rest data which, for the most part, is seldom accessed, and if it takes 30-60 seconds for the robot to load a tape and spool to the exact sector where the data is located(*), that's an acceptable delay. There are a bunch of prefetching options one can engage so that the tape is already in the drive by the time XYZ file is requested, particularly if historical reports are being compiled.

(*) Yes, LTO can do this with LTFS. Database-backed backup solutions such as Bacula can also seek to the exact file location. On average it takes more time to retrieve and load a tape than to actually extract any given file on it - IF you have appropriate software in play (and datamongers/bit-barns _do_). Tape is cheaper than anything else by a couple of orders of magnitude.

For smaller-than-bit-barn operations, especially ones like the environment I work in, several petabytes of HDD simply aren't fast enough to keep up with the demands of ever-increasing computational loads, and because they have to be kept spinning, power draw and server-room heating have to be factored in. A rack full of several hundred HDDs all seeking like mad takes "interesting" performance hits as the load increases and resonances start to coincide. Moving to SSD reduces power draw for the same storage capacity (and simultaneously reduces cooling plant load) or allows us to pack more into the same space - and the lack of vibration means they last longer. I've literally had drives destroy their head pivot bearings due to the volume of random read seeking going on (yes, I could short-stroke them, but that makes them even less cost-effective).

As I've written a number of times over the last decade, the approximate "jumping off" point from HDD to SSD has stayed at "when SSDs are about 4x the price of their HDD equivalent" (i.e. domestic or enterprise drives). TCO calculations are far more important than upfront price, and as at least one previous employer discovered, taking the cheap upfront option ends up costing dearly when power prices spike or there's a heatwave - and that's quite apart from SSDs usually lasting at least twice as long in service as HDDs. Yes, I've had HDDs with over 120,000 hours on them (an XYZ project forced them to be kept online for years past their use-by date - it happens a lot in space science), but they always had me walking on eggshells, whilst SSDs of similar age would invariably show very little wear (write once, read randomly forever).
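To show what I mean by TCO rather than sticker price, a toy comparison can be structured like this; every input is a made-up placeholder, not a quote or a measurement:

```python
# Toy TCO model: purchase price plus electricity over the service life.
# Every number here is a made-up placeholder, not a quote or a measurement.

def tco_per_tb_year(price_per_tb, watts_per_tb, years, kwh_price, cooling_factor=1.4):
    """Cost per TB per year: purchase cost amortised over the service life,
    plus drive power draw multiplied by an assumed cooling overhead."""
    energy_kwh = watts_per_tb * 24 * 365 * years / 1000
    return (price_per_tb + energy_kwh * kwh_price * cooling_factor) / years

hdd = tco_per_tb_year(price_per_tb=15.0, watts_per_tb=0.4, years=5,  kwh_price=0.30)
ssd = tco_per_tb_year(price_per_tb=60.0, watts_per_tb=0.1, years=10, kwh_price=0.30)

print(f"HDD ~${hdd:.2f}/TB/year vs SSD ~${ssd:.2f}/TB/year (placeholder inputs)")
# The crossover moves with electricity price, cooling overhead, and service life,
# and this toy model leaves out failure-handling labour, rack space, and performance.
```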

I won't argue that many early SSDs were awful, but that's a long time in the past, and the biggest current risk is letting the likes of Seagate/WD make clients think that HDD levels of reliability are acceptable when it's trivial to exceed them by significant multiples, along with HDD FUD claims. (I'm reminded of the boss who insisted that ALL systems run Sendmail because he didn't understand Postfix - even today. You can imagine the nightmare involved in trying to keep that kind of mess up, running, and protected from spammers whilst not bombarding innocent victims with bounce messages and not "losing" messages down filtering blackholes - a large business risk, and one he failed to address for over two decades.)