> It simplifies how many protocols we need in order to talk to storage (well, it says it in the article too, but I'm sure a lot of people will rush here seeing the headline)

This stuff will hit datacenters first, where its main benefit is going to be letting server gear manufacturers use a PCIe 4.0/5.0 x4/x8/x16 switch to fan out directly to 40-80 HDDs instead of having to use layers of fancy SAS or other hardware.
> NVMe and Thunderbolt/USB overlap quite a bit. I guess it wouldn't take too much effort to simplify and establish a unified standard.

The problem with unified standards is that they end up being extremely wasteful: you don't want to use a full 3.2-gen2x2/USB4/5.0x4/TB5/whatever port with a bunch of alt-mode support on a HDD that can barely make use of a 3.0x1-wide interface.
> The problem with unified standards is that they end up being extremely wasteful: you don't want to use a full 3.2-gen2x2/USB4/5.0x4/TB5/whatever port with a bunch of alt-mode support on a HDD that can barely make use of a 3.0x1-wide interface.

While NVMe is designed around PCIe-based SSDs, I also don't believe that NVMe actually requires a PCIe PHY layer, since NVMe is also meant to take over from the AHCI protocol. Especially considering that AHCI worked over PCIe via SATA Express.
IMO, there needs to be a SATA successor using PCIe 4.0+/NVMe with combined 12V power + data in x1 and x4 flavors. For backwards compatibility, the x1 ports could support SATA/SAS via adapters that pass data lines straight through and provide the 5VDC converter for legacy devices.
> While NVMe is designed around PCIe based SSDs, I also don't believe that NVMe actually requires a PCIe based PHY layer

NVMe doesn't have a PHY layer of its own; it is just a device protocol spec that can be encapsulated to run on any suitable L2-L3 protocol stack.
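To illustrate the "NVMe is a protocol spec, not a PHY" point: an NVMe submission queue entry is just a fixed 64-byte structure, and the transport (PCIe doorbells, RDMA, or TCP for NVMe-oF) only decides how those bytes reach the controller. A rough sketch of packing an NVM Read command (field layout per the NVMe base spec, heavily simplified; PRP/SGL data pointers are left zeroed here, which a real host would never do):

```python
import struct

def nvme_read_sqe(cid: int, nsid: int, slba: int, nlb: int) -> bytes:
    """Pack a 64-byte NVMe submission queue entry for an NVM Read (opcode 0x02).
    The same bytes could be carried over PCIe, RDMA, or TCP; the command
    format itself is transport-independent."""
    OPC_READ = 0x02
    dw0 = OPC_READ | (cid << 16)       # DW0: opcode in bits 7:0, command ID in 31:16
    sqe = struct.pack("<II", dw0, nsid)  # DW0, DW1 (namespace ID)
    sqe += bytes(8 * 4)                  # DW2-9: reserved/metadata/data pointers (zeroed)
    sqe += struct.pack("<Q", slba)       # DW10-11: starting LBA
    sqe += struct.pack("<I", nlb - 1)    # DW12: number of logical blocks (0-based)
    sqe += bytes(3 * 4)                  # DW13-15
    return sqe

entry = nvme_read_sqe(cid=1, nsid=1, slba=0, nlb=8)
assert len(entry) == 64                  # SQEs are always 64 bytes
```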
Maybe we could still use SATA ports, assuming the controllers are generic enough to only handle bit integrity before handing it off to the OS.
> NVMe doesn't have a PHY layer of its own; it is just a device protocol spec that can be encapsulated to run on any suitable L2-L3 protocol stack.

I'm not suggesting running higher speeds over SATA cables. I'm only suggesting we can reuse the existing SATA hardware to connect NVMe-based HDDs, and all it would take is a driver update.
I doubt the SATA cables and connectors were ever intended for 16-32 Gbps (look at the shielded micro-headers USB 3.2 Gen 2 needs for 10 Gbps lanes), and with 12VO sounding the beginning of the end for 3.3V and 5V from the PSU, a new connector or two would kill 2-3 birds with one stone.

Imagine being able to have 4.0x4 NVMe SSDs in the 2.5" form factor: your SSD doesn't have to live buried under the GPU anymore, and you don't need a PSU cable plus a motherboard cable (or two cables from the motherboard) going to the SSD either.
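Back-of-envelope line-rate math behind that comparison. The per-lane rates and line-coding overheads below are from the published specs (8b/10b for SATA, 128b/132b for USB 3.2 Gen 2, 128b/130b for PCIe 3.0+); real-world throughput is lower still:

```python
def effective_MBps(gbps: float, encoding_efficiency: float) -> float:
    """Convert a raw line rate in Gbit/s to usable MB/s after line coding."""
    return gbps * encoding_efficiency * 1000 / 8

sata3    = effective_MBps(6.0, 8 / 10)      # SATA III, 8b/10b coding
usb_gen2 = effective_MBps(10.0, 128 / 132)  # USB 3.2 Gen 2, 128b/132b
pcie4_x1 = effective_MBps(16.0, 128 / 130)  # PCIe 4.0, one lane, 128b/130b

print(f"SATA III:    {sata3:7.0f} MB/s")        # ~600 MB/s
print(f"USB 10G:     {usb_gen2:7.0f} MB/s")     # ~1212 MB/s
print(f"PCIe 4.0 x1: {pcie4_x1:7.0f} MB/s")     # ~1969 MB/s
print(f"PCIe 4.0 x4: {pcie4_x1 * 4:7.0f} MB/s") # ~7877 MB/s
```

So even a single PCIe 4.0 lane carries more than 3x what SATA III can, which is why x1 is plenty for an HDD while x4 suits SSDs.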
> I'm not suggesting running higher speeds over SATA cables. I'm only suggesting we can reuse the existing SATA hardware to connect NVMe-based HDDs, and all it would take is a driver update.

NVMe was designed to run natively over PCIe with full DMA capabilities. With SATA, the DMA functions live in the SATA host controller, so you would need an NVMe extension to SATA, along with modified hardware that understands that extension, to make it work. At that point, you are much better off having IO lanes that can do native PCIe.
> Cache is simple. Slap in RAM or SSD chips, and do a read cache. Many people mostly read from their hard drives and not write. Cache would be an option for premium drives like gamer drives and enterprise, though most gamers would probably opt for an SSD.

There are already hybrid drives, but the issue with caching solutions is that they only work well for files that are frequently accessed.
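The "only works for frequently accessed files" limitation is easy to see in a toy read cache. A minimal sketch (a hypothetical block-level LRU, not any actual drive's firmware): hot blocks hit the cache, while a one-shot scan of cold data misses every time.

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache in front of a slow backing store."""
    def __init__(self, capacity: int, backing: dict):
        self.capacity = capacity
        self.backing = backing              # stands in for the platters
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block: int):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # mark most-recently-used
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]          # slow path: seek + rotational delay
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least-recently-used
        return data

disk = {b: f"data{b}" for b in range(100)}
c = ReadCache(capacity=4, backing=disk)
for b in [1, 2, 1, 2, 1, 2]:    # hot working set: cache pays off
    c.read(b)
for b in range(10, 20):         # one-shot cold scan: every read misses
    c.read(b)
print(c.hits, c.misses)         # 4 hits from the hot loop, 12 misses total
```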
> Try it! Take two magnetic drives and transfer a video file. It's not as fast as an SSD, but you can get 100 MB/sec on it.

Not quite the same scenario, but I just recently had to do something like this and can confirm.
> Large files actually have good performance on a magnetic drive. Try it! Take two magnetic drives and transfer a video file. It's not as fast as an SSD, but you can get 100 MB/sec on it. If they are doing multi-actuator drives, the speed would actually be pretty good for a lot of uses.

And copying between 2x SATA III SSDs is 5x that fast.
> That could've happened even in the old PATA 133 days, if you had a drive that could max out the PATA interface.

I don't believe any of my PATA drives broke 100 MB/s. The biggest benefit of PATA133 (at least to me) was the negligible perceived performance loss from operating two drives on a single PATA133 cable.
> And copying between 2x SATA III SSDs is 5x that fast.

Newer 7200RPM HDDs can usually pump around 200 MB/s sequential, so that would be closer to 2-3x. I remember one of my HDDs doing over 200 MB/s copying to one of my SSDs; not sure if that was my 8-year-old WD Black or another drive.
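Quick arithmetic behind the multipliers in this subthread. The rates are the ones mentioned above plus an assumed ~550 MB/s for a SATA III SSD (typical for the class, not a measured figure), applied to a hypothetical 10 GB video file:

```python
# Best-case sequential copy times; seek overhead and caching ignored.
file_gb = 10
rates_MBps = {
    "older HDD ~100 MB/s":        100,
    "newer 7200RPM HDD ~200 MB/s": 200,
    "SATA III SSD ~550 MB/s":      550,
}
for name, rate in rates_MBps.items():
    seconds = file_gb * 1000 / rate
    print(f"{name:30s} {seconds:6.1f} s")
# 100 s vs 50 s vs ~18 s: roughly the 5x and 2-3x ratios quoted above
```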
> Let's get to the point that we can find somewhat common mid-to-high-end motherboards with 8+ NVMe ports, like we can easily find with SATA ports. I hate that the average motherboard only has 2 NVMe slots available.

Many of the new Z690 mobos that advertise 5 M.2 slots don't even have all 5 on the board itself; many use daughter cards. And beyond space limitations, cooling is limited too, with minuscule z-heights to leave room for fatter and fatter GPUs. NVMe is going to need cables eventually. And more ports.
> U.2 also works with Tri-Mode, which is great for compatibility, so you already have a single physical interface for the SATA, SAS and NVMe protocols on the client side (HDD/SSD).

The problem with U.2 is that the connector is relatively large: it hacked the SATA data+power connector footprint and bodged all of the extra functions into previously unused space. Tidying up the connector to remove the dead-weight legacy stuff would cut the size in half and also eliminate any possible confusion about what sort of devices will work with the slot.