News Seagate Demonstrates HDD with PCIe NVMe Interface

InvalidError

Titan
Moderator
It reduces how many protocols we need in order to talk to storage (well, the article says as much too, but I'm sure a lot of people will rush here after seeing only the headline).
This stuff will hit datacenters first, where its main benefit is going to be allowing server gear manufacturers to use a PCIe 4.0/5.0 x4/x8/x16 switch to fan out directly to 40-80 HDDs instead of having to use layers of fancy SAS or other hardware.
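Back-of-the-envelope, with assumed round figures (~2 GB/s usable per PCIe 4.0 lane, ~4 GB/s per 5.0 lane, ~280 MB/s peak sequential per top-end HDD), the uplink math looks like this:

```python
# Rough bandwidth budget for fanning HDDs out of one PCIe switch uplink.
# All figures are round assumptions, not measurements.
GBPS_PER_LANE = {"4.0": 2.0, "5.0": 4.0}  # usable GB/s per lane
HDD_PEAK_GBPS = 0.28                      # ~280 MB/s peak sequential per drive

def drives_per_uplink(gen: str, lanes: int) -> int:
    """How many HDDs can stream flat-out before the uplink saturates."""
    return int(GBPS_PER_LANE[gen] * lanes / HDD_PEAK_GBPS)

for gen, lanes in [("4.0", 4), ("4.0", 8), ("5.0", 8), ("5.0", 16)]:
    print(f"PCIe {gen} x{lanes}: ~{drives_per_uplink(gen, lanes)} HDDs at full tilt")
```

Since HDDs rarely all stream sequentially at once, 40-80 drives behind one switch leaves plenty of headroom even on the narrower uplinks.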
 

truerock

Distinguished
Jul 28, 2006
A single interface to the CPU for external data access is a great idea.
There still would need to be some organization around connectors, I suppose.
NVMe and Thunderbolt/USB overlap quite a bit. I guess it wouldn't take too much effort to simplify and establish a unified standard.
Keyboards, SSDs, video monitors, lasers, etc.
 

InvalidError

Titan
Moderator
NVMe and Thunderbolt/USB overlap quite a bit. I guess it wouldn't take too much effort to simplify and establish a unified standard.
The problem with unified standards is that they end up being extremely wasteful: you don't want to use a full USB3.2-gen2x2/USB4/PCIe 5.0x4/TB5/whatever port with a bunch of alt-mode support on an HDD that can barely make use of a PCIe 3.0 x1-wide interface.

IMO, there needs to be a SATA successor using PCIe 4.0+/NVMe with combined 12V power + data in x1 and x4 flavors. For backwards compatibility, the x1 ports could support SATA/SAS via adapters that pass data lines straight through and provide the 5VDC converter for legacy devices.
 
The NVMe interface will allow the hard disk's cache to be accessed much, much faster than any SATA or SAS interface (you can calculate how many times faster), and hybrid SSD/HDDs will be blazing fast as well. This is coming to the desktop soon!
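Taking up that invitation to do the math, here is a rough comparison using nominal usable-bandwidth figures (real-world throughput runs a bit lower on every interface):

```python
# "How many folds": nominal usable bandwidth of NVMe/PCIe links vs. SATA/SAS.
links_mbps = {
    "SATA III (6 Gb/s)":  600,
    "SAS-3 (12 Gb/s)":   1200,
    "PCIe 3.0 x1":        985,
    "PCIe 4.0 x1":       1970,
    "PCIe 4.0 x4":       7880,
}
sata = links_mbps["SATA III (6 Gb/s)"]
for name, mbps in links_mbps.items():
    print(f"{name}: ~{mbps} MB/s ({mbps / sata:.1f}x SATA III)")
```

A PCIe 4.0 x4 link is roughly 13x SATA III, so a hit in the drive's DRAM or NAND cache stops being interface-bound.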
 
The problem with unified standards is that they end up being extremely wasteful: you don't want to use a full USB3.2-gen2x2/USB4/PCIe 5.0x4/TB5/whatever port with a bunch of alt-mode support on an HDD that can barely make use of a PCIe 3.0 x1-wide interface.

IMO, there needs to be a SATA successor using PCIe 4.0+/NVMe with combined 12V power + data in x1 and x4 flavors. For backwards compatibility, the x1 ports could support SATA/SAS via adapters that pass data lines straight through and provide the 5VDC converter for legacy devices.
While NVMe is designed around PCIe-based SSDs, I don't believe NVMe actually requires a PCIe-based PHY layer, since NVMe is also meant to take over from the AHCI protocol, especially considering that AHCI worked over PCIe via SATA Express.

Maybe we could still use SATA ports, assuming the controllers are generic enough to only handle bit integrity before handing it off to the OS.
 

InvalidError

Titan
Moderator
While NVMe is designed around PCIe-based SSDs, I don't believe NVMe actually requires a PCIe-based PHY layer

Maybe we could still use SATA ports, assuming the controllers are generic enough to only handle bit integrity before handing it off to the OS.
NVMe doesn't have a PHY layer of its own; it is just a device protocol spec that can be encapsulated to run on any suitable L2-L3 protocol stack.

I doubt the SATA cables and connectors were ever intended for 16-32Gbps (look at USB3.2-gen2's shielded micro-headers just for 10Gbps lanes), and with 12VO signaling the beginning of the end for 3.3V and 5V from the PSU, a new connector or two would kill 2-3 birds with one stone. Imagine being able to have 4.0x4 NVMe SSDs in a 2.5" form factor: your SSD doesn't have to live buried under the GPU anymore, and you don't need a PSU cable plus a motherboard cable (or two cables from the motherboard) going to the SSD either.
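That encapsulation point is exactly what NVMe over Fabrics already does in practice, with TCP, RDMA, and Fibre Channel transports defined. A minimal conceptual sketch (class names are illustrative, not any real library's API):

```python
# NVMe is a queue/command protocol, not a PHY: the same submission queue
# entry (SQE) can ride PCIe memory transactions or an NVMe-oF capsule.
from abc import ABC, abstractmethod

class NvmeTransport(ABC):
    """Anything that can carry NVMe commands qualifies as a transport."""
    @abstractmethod
    def submit(self, sqe: bytes) -> None: ...

class PcieTransport(NvmeTransport):
    def submit(self, sqe: bytes) -> None:
        pass  # controller DMA-fetches the SQE from a host-memory queue

class TcpTransport(NvmeTransport):
    def submit(self, sqe: bytes) -> None:
        pass  # NVMe/TCP wraps the same SQE in a command-capsule PDU
```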
 
A 2.5" form factor would allow for a much larger and more elegant heatsink design than the M.2 form factor for sure, especially as SSDs become faster and warmer. I can't see it replacing the M.2 form factor, but possibly co-existing with it much as AICs do now.
 
NVMe doesn't have a PHY layer of its own; it is just a device protocol spec that can be encapsulated to run on any suitable L2-L3 protocol stack.

I doubt the SATA cables and connectors were ever intended for 16-32Gbps (look at USB3.2-gen2's shielded micro-headers just for 10Gbps lanes), and with 12VO signaling the beginning of the end for 3.3V and 5V from the PSU, a new connector or two would kill 2-3 birds with one stone. Imagine being able to have 4.0x4 NVMe SSDs in a 2.5" form factor: your SSD doesn't have to live buried under the GPU anymore, and you don't need a PSU cable plus a motherboard cable (or two cables from the motherboard) going to the SSD either.
I'm not suggesting running faster speeds over SATA cables. I'm only suggesting we could reuse existing SATA hardware to connect NVMe-based HDDs, and all it would take is a driver update.
 

InvalidError

Titan
Moderator
I'm not suggesting running faster speeds over SATA cables. I'm only suggesting we could reuse existing SATA hardware to connect NVMe-based HDDs, and all it would take is a driver update.
NVMe was designed to run natively over PCIe with full DMA capabilities. With SATA, the DMA functions are in the SATA host controller. You would need an NVMe extension to SATA along with modified hardware that understands the extension to make it work. At that point, you are much better off having IO lanes that can do native PCIe.
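To make that DMA difference concrete, here is a toy model (heavily simplified; the real spec uses 64-byte queue entries and doorbell registers in the controller's BAR): the host only rings a doorbell, and the drive itself pulls commands from queues sitting in host RAM, which is exactly what a SATA link cannot express.

```python
from collections import deque

class HostMemory:
    """In NVMe, submission/completion queues live in host RAM."""
    def __init__(self):
        self.sq = deque()  # submission queue
        self.cq = deque()  # completion queue

class NvmeDrive:
    def __init__(self, host_mem: HostMemory):
        self.mem = host_mem  # reachable from the device via PCIe DMA

    def ring_doorbell(self) -> None:
        cmd = self.mem.sq.popleft()              # device fetches the command itself
        self.mem.cq.append(f"completed: {cmd}")  # and posts the completion

host = HostMemory()
drive = NvmeDrive(host)
host.sq.append("READ lba=0 nblocks=8")
drive.ring_doorbell()   # the only host-visible action is the doorbell write
print(host.cq.popleft())
```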

If you look at modern chipsets, you have something like 32 HSIO lanes and most of those lanes are multi-mode, supporting two or more of PCIe, SATA and USB3.2-gen1/2/2x2/TB4. You could hypothetically already connect NVMe x1 over SATA ports with adapters as long as the BIOS allows the PCIe/SATA HSIO lanes to be set to PCIe despite terminating on a SATA port, possibly even x2 using two ports that are under the same PCIe host controller slice in the HSIO map.

There is no point in bloating the SATA spec when you can go straight PCIe with existing hardware; we just need to agree on a standard way to do it.
 

PiranhaTech

Reputable
Mar 20, 2021
I can see this. The answer as to "why" for me is simple: cache and large file access speed.

Large files actually have good performance on a magnetic drive. Try it! Take two magnetic drives and transfer a video file. It's not as fast as an SSD, but you can get 100 MB/sec on it. If they are doing multi-actuator drives, the speed would actually be pretty good for a lot of uses.
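Rough copy times for a 4GB video file at the speeds being discussed here (pure sequential transfers with no other bottleneck; the dual-actuator ~2x figure is an assumption):

```python
file_gb = 4
for name, mbps in [("HDD to HDD (~100 MB/s)", 100),
                   ("newer 7200 RPM HDD (~200 MB/s)", 200),
                   ("dual-actuator HDD (assumed ~2x, ~400 MB/s)", 400),
                   ("SATA SSD (~500 MB/s)", 500)]:
    print(f"{name}: ~{file_gb * 1000 / mbps:.0f} s")  # 40, 20, 10, 8 seconds
```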

Cache is simple. Slap in RAM or SSD chips, and do a read cache. Many people mostly read from their hard drives rather than write to them. Cache would be an option for premium drives like gamer drives and enterprise, though most gamers would probably opt for an SSD.
 
Cache is simple. Slap in RAM or SSD chips, and do a read cache. Many people mostly read from their hard drives rather than write to them. Cache would be an option for premium drives like gamer drives and enterprise, though most gamers would probably opt for an SSD.
There are already hybrid drives, but the issue with caching solutions is that they only work well for frequently accessed files.
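A toy illustration of that caveat: a small LRU read cache gets zero hits on a one-pass scan of cold data but near-100% on a small hot working set (the block counts here are arbitrary):

```python
from collections import OrderedDict

def hit_rate(accesses, capacity=4):
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(accesses)

print(hit_rate(range(100)))            # one-pass scan: 0.0
print(hit_rate([1, 2, 3] * 33 + [1]))  # hot working set: 0.97
```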
 

King_V

Illustrious
Ambassador
Try it! Take two magnetic drives and transfer a video file. It's not as fast as an SSD, but you can get 100 MB/sec on it.

Not quite the same scenario, but I just recently had to do something like this and can confirm.

I was backing up about 500GB of data for my son. There were a lot of small files, naturally, but there were also quite a number of large files.

I was copying from his PC's SSD, to a 2-drive NAS I have, set up as RAID 1.

Disk transfer speeds for the large files at times exceeded the 100MB/s mark a little, and might have gone a bit faster, but at that point it was saturating the gigabit Ethernet, if I'm not mistaken.

That could've happened even in the old PATA 133 days, if you had a drive that could max out the PATA interface.
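The gigabit ceiling checks out: 1Gb/s is 125MB/s on the wire, and Ethernet/IP/TCP framing eats a few percent of that, which lands right at "a little over 100MB/s":

```python
line_rate = 1000 / 8            # 1 Gb/s = 125 MB/s raw
payload_fraction = 1448 / 1538  # typical TCP payload per 1500-MTU frame on the wire
print(f"~{line_rate * payload_fraction:.0f} MB/s usable")  # ~118 MB/s
```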
 

USAFRet

Titan
Moderator
Large files actually have good performance on a magnetic drive. Try it! Take two magnetic drives and transfer a video file. It's not as fast as an SSD, but you can get 100 MB/sec on it. If they are doing multi-actuator drives, the speed would actually be pretty good for a lot of uses.
And copying between two SATA III SSDs is about 5x that fast.

This is copying a 51GB file from a Samsung 860 EVO to a SanDisk Ultra.
 

InvalidError

Titan
Moderator
That could've happened even in the old PATA 133 days, if you had a drive that could max out the PATA interface.
I don't believe any of my PATA drives broke 100MB/s. The biggest benefit of PATA133 (at least to me) was negligible perceivable performance loss from operating two PATA133 drives on a single cable.

And copying between two SATA III SSDs is about 5x that fast.
New-ish 7200RPM HDDs can usually pump around 200MB/s sequential, so that would be closer to 2-3x. I remember one of my HDDs doing over 200MB/s copying to one of my SSDs; not sure if that was my 8-year-old WD Black or another one.
 

emike09

Distinguished
Jun 8, 2011
Let's get to the point where somewhat common mid-to-high-end motherboards have 8+ NVMe ports, like we can easily find with SATA ports. I hate that the average motherboard only has 2 NVMe slots available.
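The lane budget is the blocker. A rough sketch using Z690-class numbers (CPU: x16 + x4; chipset: 12x Gen4 + 16x Gen3 behind a DMI 4.0 x8 uplink; treat them as approximate):

```python
cpu_lanes = 16 + 4       # x16 GPU slot + x4 direct-attached M.2
chipset_lanes = 12 + 16  # Z690-class: 12x PCIe 4.0 + 16x PCIe 3.0
gpu, m2_slots = 16, 8    # what an "8+ NVMe ports" board would need
needed = gpu + m2_slots * 4
print(f"lanes needed: {needed} of {cpu_lanes + chipset_lanes}")  # 48 of 48
# ...and the chipset lanes also have to feed SATA, USB and networking,
# which is why boards stop at 2-5 M.2 slots or drop some of them to x2.
```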
 

PapaCrazy

Distinguished
Dec 28, 2011
Let's get to the point where somewhat common mid-to-high-end motherboards have 8+ NVMe ports, like we can easily find with SATA ports. I hate that the average motherboard only has 2 NVMe slots available.

Many of the new Z690 mobos that advertise 5 M.2 slots don't even have all 5 on the board itself; many are using daughter cards. And beyond space limitations, the cooling is limited too, with minuscule Z-heights to leave room for fatter and fatter GPUs. NVMe is going to need cables eventually. And more ports.

I agree that capability for 8 drives should be the minimum. And even with NVMe's superiority, I still want SATA for disc drives and old HDDs sitting around. I feel like the industry is trying to push everyone into SFF. I see new ATX cases with room for only 2 HDDs and it makes me laugh. I love that my dinosaur of a case still has drive cages and 5.25" bays.
 

shadow.garret

Honorable
Jul 26, 2018
PCIe already has plenty of cable and connector types for connecting storage:
SFF-8611 (OCuLink) or SFF-8643 on the host side, SFF-8639 (U.2) on the client side (HDD/SSD).

U.2 also works with Tri-Mode, which is great for compatibility: you already have a single physical interface for the SATA, SAS, and NVMe protocols on the client side (HDD/SSD).
 

InvalidError

Titan
Moderator
U.2 also works with Tri-Mode, which is great for compatibility: you already have a single physical interface for the SATA, SAS, and NVMe protocols on the client side (HDD/SSD).
The problem with U.2 is that the connector is relatively large: it hacks the SATA data+power connector footprint, bodging all of the extra functions into previously unused space. Tidying up the connector to remove all of the dead-weight legacy stuff would cut the size in half and also eliminate any possible confusion about what sort of devices will work with the slot.