News Seagate demonstrates 3D magnetic recording for 120+ TB HDDs — dual-layer media stacks data bits to boost capacity


t3t4

Great
Sep 5, 2023
First, it is already 2024. As your post is less than 48 hours old, I am assuming you are either citing something from an article years ago to show it was wrong, or you are joking. There are many who still use HDDs, including consumers, let alone businesses and governments.

It was a joke; wasn't it obvious from the number 2023? I dunno, but I thought it was kinda funny. Just like that crystal storage tech we were promised at least a decade ago. We can store data in DNA, but we still can't get crystal tech off the ground?

Yes, the whole world still uses HDDs, and half of it still uses tape. Laugh where ya can, smile at the rest, but always enjoy your day. Cheers 🍻
 

stoatwblr

Distinguished
Sep 12, 2011
Yes, the whole world still uses HDDs, and half of it still uses tape. Laugh where ya can, smile at the rest, but always enjoy your day. Cheers 🍻
On a more serious note, HDD unit sales are down 90+% from what they were a decade ago, as SSDs have essentially taken over the consumer market.

HDDs are now a niche market that has pivoted to the bit-barns, just like tape did 20 years ago.
 

bit_user

Polypheme
Ambassador
On a more serious note, HDD unit sales are down 90+% from what they were a decade ago
Source?

as SSDs have essentially taken over the consumer market

HDDs are now a niche market that has pivoted to the bit-barns, just like tape did 20 years ago
SSDs are a less-than-ideal backup medium, so the areas they haven't penetrated will tend to remain HDD-dominated for the foreseeable future.

Most of the data in the cloud is still on HDDs. I don't consider that a "niche market". There are other bulk data applications, like video surveillance, that also still favor HDDs.

Many people seem to think the only things that matter about storage are cost and performance. They forget about important details like data retention.
 

jasonf2

Distinguished
SCSI is dead, unless you mean iSCSI or SAS (Serial-Attached SCSI). I never see/hear people referring to SAS as "SCSI", though.


I call BS on this.


Servers don't use M.2, generally speaking. They use U.2 and increasingly E1 form factors (E1.S and E1.L).

I'm seeing some adoption of E3.S, as well.

I'm not sure if it's common to have a partially-populated U.2 port, but you certainly could reduce the lane count.


Haven't you ever heard of command queuing? Normally, the OS sends multiple, overlapping requests to the drive, and its controller firmware works out the best order to perform them, according to the optimal seek pattern for the head. It's hard for the OS to do, since it doesn't know the platter orientation, nor the seek/access times for moving the head different distances.
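Here's a minimal sketch of that idea, assuming a Linux box and Python (the /dev/sdX path, block size, and queue depth are placeholders, nothing from the article): fire off a batch of overlapping random reads and let the drive's firmware pick the service order.

```python
import os
import random
import threading

DEVICE = "/dev/sdX"   # placeholder: point this at a real device or big file
BLOCK = 4096          # bytes per read
QUEUE_DEPTH = 32      # overlapping requests in flight

def read_at(fd, offset):
    # Each pread() is an independent request; with many in flight, the drive
    # firmware (not the OS) decides the service order via its command queue.
    os.pread(fd, BLOCK, offset)

fd = os.open(DEVICE, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)
offsets = [random.randrange(0, size - BLOCK) // BLOCK * BLOCK
           for _ in range(QUEUE_DEPTH)]

# Note: without O_DIRECT the page cache can absorb repeats; this is only
# meant to show the request pattern, not to benchmark anything.
threads = [threading.Thread(target=read_at, args=(fd, off)) for off in offsets]
for t in threads:
    t.start()
for t in threads:
    t.join()
os.close(fd)
```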


Dude, the server industry embraced NVMe for SSDs long ago. Don't you think they would've figured out NVMe hot swap by now?
(dated 2019, I might point out)


Well, as implied by your subsequent comment, NVMe does support mechanical HDDs.

One argument for this is quite clear: reduce hardware cost & complexity. By changing the drive interface to NVMe, you get rid of SATA & SAS controllers on server boards/SoCs. Also, software support for those drivers is no longer needed.

If you consider that an AMD EPYC server has 128+ PCIe lanes, you easily have more than enough lanes to connect however many HDDs you might want to stuff into a chassis.
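Rough lane-budget math, with purely illustrative reservations for NICs and boot drives (none of these counts come from a specific product):

```python
# Back-of-the-envelope lane budget; every count below is an illustrative
# assumption, not a specific server design.
total_lanes = 128        # single-socket EPYC-class CPU
reserved = 16 + 8 + 4    # hypothetical: NIC x16, boot SSDs x8, misc x4
lanes_per_hdd = 1        # one NVMe HDD per x1 link

available = total_lanes - reserved
print(f"Drive bays supported at x1 each: {available // lanes_per_hdd}")  # 100
```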


This is not true. If you look at the SMART stats, there are ample pre-failure warning signs.
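For example, something like the sketch below can poll a few of those indicators via smartmontools' smartctl. The -A and -j flags are real smartctl options; the specific attribute names and the JSON layout shown are assumptions that vary by vendor and smartctl version, so treat it as a sketch rather than a robust monitor.

```python
# Sketch: flag common SMART pre-failure indicators using smartctl's JSON
# output (`smartctl -A -j <device>`, smartmontools 7+). Attribute names and
# raw-value semantics vary by vendor, so treat this as illustrative only.
import json
import subprocess

WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable", "Reported_Uncorrect"}

def smart_warnings(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", "-j", device],
                         capture_output=True, text=True, check=False)
    data = json.loads(out.stdout)
    hits = []
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        if attr["name"] in WATCH and attr["raw"]["value"] > 0:
            hits.append((attr["name"], attr["raw"]["value"]))
    return hits

if __name__ == "__main__":
    print(smart_warnings())
```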


This is not necessarily true. RAID controllers tend to be paranoid and will drop a disk from the array if it ever encounters a failed read, but that doesn't mean the drive is "toast".

To start, there hasn't been an active parallel SCSI implementation for 20 years. Any assumption that I was talking about anything other than SAS (Serial Attached SCSI) overlooks my comments on cross-compatibility with SATA, which parallel SCSI definitely did not have.

In regard to PCIe lane implementation, there would be absolutely no advantage to tying up individual lanes for a spin drive. For an SSD, yes.

NCQ was originally implemented in SATA and SAS. Not sure what your point here is.

As to the rest of it, dedicating a PCIe lane for something that tops out at maybe 200 MB/s is fine, but serious overkill. My argument is that it doesn't bring any advantage to the table performance-wise, not that it doesn't work. Also, how you choose to run your machine is fine, but any spin drive with bad sectors is in the process of failing, in my opinion, and should be repaired/replaced. Worn-out blocks on an SSD are normal and are not indicative of failure.

SATA and SAS are still very relevant for spin drives. It is the spin drive whose relevance is questionable.
 

bit_user

Polypheme
Ambassador
In regard to PCIe lane implementation, there would be absolutely no advantage to tying up individual lanes for a spin drive. For an SSD, yes.
As I said, the benefit is in cost & simplicity, at a system level. Modern servers generally have enough PCIe lanes that devoting one per spinning disk isn't unreasonable and avoids the need for separate SATA or SAS controller hardware. It also simplifies software, because then all of your storage is just NVMe.

NCQ was originally implemented in SATA and SAS. Not sure what your point here is.
You claimed that "SATA and SCSI" are "built for the linear nature of read/write head", which is silly, especially in the LBA era, and ignores that SCSI innovated Tagged Command Queuing, which some PATA drives & controllers subsequently copied. The whole point of command queuing was to further decouple the protocol from the linear nature of the read/write head!

As to the rest of it, dedicating a PCIe lane for something that tops out at maybe 200 MB/s is fine, but serious overkill.
The PCIe version was never specified. PCIe 2.0 tops out at about 500 MB/s per lane (uni-dir), which would be a good fit for HDDs that can easily reach 300 MB/s and beyond. Here's a 24 TB, 7.2k RPM drive that goes up to 298 MB/s.
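For reference, the per-lane figure is just line rate times encoding efficiency; a quick sanity check (numbers only, no protocol overhead):

```python
# Quick sanity check: per-lane throughput = line rate x encoding efficiency.
# PCIe 2.0 signals at 5 GT/s with 8b/10b encoding.
pcie2_lane_gbytes = 5 * 8 / 10 / 8       # ~0.5 GB/s per lane, per direction
hdd_gbytes = 0.3                         # ~300 MB/s sequential HDD

print(f"PCIe 2.0 x1: {pcie2_lane_gbytes:.2f} GB/s")
print(f"A ~300 MB/s HDD uses about {hdd_gbytes / pcie2_lane_gbytes:.0%} of it")
```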

SATA and SAS are still very relevant for spin drives. It is the spin drive whose relevance is questionable.
I expect mechanical hard disks will easily outlast the SATA/SAS interface, in datacenter applications.

As for what happens in client machines... I guess we'll have to wait and see. I sure wish U.2 had caught on, because it would be much easier to cool hot PCIe 5.0 SSDs in a 2.5" form factor than the way a lot of M.2 slots are situated, which might or might not receive particularly much airflow within the chassis. And if U.2 had caught on for SSDs, then it would be a natural way for consumer HDDs to transition over to NVMe.
 
As I said, the benefit is in cost & simplicity, at a system level. Modern servers generally have enough PCIe lanes that devoting one per spinning disk isn't unreasonable and avoids the need for separate SATA or SAS controller hardware. It simplifies software, because then all of your storage is just NVMe.
Could even add in some PCIe 2.0 switches to expand ports since it's plenty of bandwidth and they're not very expensive.
 

jasonf2

Distinguished
As I said, the benefit is in cost & simplicity, at a system level. Modern servers generally have enough PCIe lanes that devoting one per spinning disk isn't unreasonable and avoids the need for separate SATA or SAS controller hardware. It also simplifies software, because then all of your storage is just NVMe.

You still need RAID, which makes this point kind of moot.
You claimed that "SATA and SCSI" are "built for the linear nature of read/write head", which is silly, especially in the LBA era, and ignores that SCSI innovated Tagged Command Queuing, which some PATA drives & controllers subsequently copied. The whole point of command queuing was to further decouple the protocol from the linear nature of the read/write head!

SATA and SAS implementations were extensions of their parallel predecessors, which developed around spin-drive read/write-head optimization. NCQ only really came into play when large memory buffers in the drives themselves started to be implemented to reduce latency. NCQ is an extension, not a hardware protocol, and wasn't implemented until SATA 1.0. If we are arguing semantics, SCSI was originally designed as a universal system interface for peripherals. It just never really caught on that way. Regardless of original intent, all of these protocols ended up developing around and catering to their primary devices as their multiple versions developed over time.
The PCIe version was never specified. PCIe 2.0 tops out at about 500 MB/s per lane (uni-dir), which would be a good fit for HDDs that can easily reach 300 MB/s and beyond. Here's a 24 TB, 7.2k RPM drive that goes up to 298 MB/s.
PCIe 5.0 is good for like 3.94 GB/s per channel (sans overhead). I would say that is overkill for a 300mpbs hd. SAS is good for like 12 Gbps but can logically split that over 65,535 devices. In practice it is actually usable with up to 30 or 40 drives per SAS channel, but RAID is usually integrated directly on the card with the SAS controller.
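For what it's worth, a quick check of those link rates, assuming SAS-3's 12 Gbit/s signaling and ignoring protocol overhead:

```python
# Same arithmetic for the newer links quoted above (no protocol overhead).
pcie5_lane = 32 * 128 / 130 / 8   # 32 GT/s, 128b/130b -> ~3.94 GB/s per lane
sas3_link = 12 * 8 / 10 / 8       # 12 Gb/s, 8b/10b    -> ~1.2 GB/s per link

print(f"PCIe 5.0 x1 : {pcie5_lane:.2f} GB/s")
print(f"SAS-3 link  : {sas3_link:.2f} GB/s")
print("Either one dwarfs a ~0.3 GB/s spinning drive.")
```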
I expect mechanical hard disks will easily outlast the SATA/SAS interface, in datacenter applications.

As for what happens in client machines... I guess we'll have to wait and see. I sure wish U.2 had caught on, because it would be much easier to cool hot PCIe 5.0 SSDs in a 2.5" form factor than the way a lot of M.2 slots are situated, which might or might not receive particularly much airflow within the chassis. And if U.2 had caught on for SSDs, then it would be a natural way for consumer HDDs to transition over to NVMe.
Probably.
 

bit_user

Polypheme
Ambassador
You still need RAID, which makes this point kind of moot.
No, traditional RAID has fallen out of favor at most hyperscalers. Rebuild times have gotten too long, and RAID doesn't protect against system-level outages.
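A rough illustration of the rebuild-time problem, using purely illustrative numbers (24 TB drive, 250 MB/s sustained):

```python
# Illustrative rebuild-time arithmetic: numbers are assumptions, not a quote
# from any vendor. Even a best-case sequential rewrite of one big drive takes
# the better part of a day, and real rebuilds run slower because the array
# keeps serving foreground I/O.
capacity_tb = 24                 # hypothetical drive size
sustained_mb_s = 250             # optimistic average throughput

hours = capacity_tb * 1e12 / (sustained_mb_s * 1e6) / 3600
print(f"Best-case rebuild: {hours:.1f} hours")   # ~26.7 hours
```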

You talk very authoritatively, but in point after point, you've demonstrated quite a lack of knowledge about this industry and the way such customers actually deploy storage.

Lots of unsourced, revisionist history in your claims. I never said "NCQ", I just said "command queuing". You're the one who decided to make this about NCQ, specifically.

In point of fact, PATA drives with command queuing delivered sizeable performance benefits, especially in concurrent workloads like Windows boot. In one case, I recall such a drive literally halving Win XP boot times. I would fully expect SCSI drives with command queuing to have delivered comparable gains in database workloads.

PCIe 5.0 is good for like 3.94 GB/s per channel (sans overhead). I would say that is overkill for a 300mpbs hd.
What's "mpbs"? Should be MB/s.

SAS is good for like 12 Gbps but can logically split that over 65,535 devices.
How many drives do you expect a single chassis to host? The question isn't whether PCIe is overkill, but rather whether it's the most cost-effective option for the servers people actually deploy.
 