The Week In Storage: 3D XPoint Optane And Lenovo SSDs Spotted, Google's SSDs Get Pwned, Toshiba Rising

Status
Not open for further replies.

MRFS

Distinguished
Dec 13, 2008
The next SATA and SAS standards should support variable clock speeds, perhaps with presets such as 6G, 8G, 12G, and 16G. A second option should support the 128b/130b jumbo frames now in the PCIe 3.0 spec. By the time PCIe 4.0 becomes the current standard, "syncing" data channels with chipsets will have much to recommend it. We should not be forced to buy new motherboards when our immediate need is faster storage.
 

jasonf2

Honorable
Oct 11, 2015
SATA and SAS were never really suitable bus types for the newer, ultra-low-latency storage media. While they will be necessary for spinning drives for a long time, I don't see SATA or SAS being anything but a man in the middle that bottlenecks solid state. We will be seeing spinning drives for years to come, but only because of cost per gig and for mainstream/big-data use. Mainstream spinning drives aren't even coming close to maxing out the bus as is, and probably never will, so what is the point of continuing to evolve SATA? SAS I can see for large data-center drive deployments. The two need to merge into a single standard for mechanical drives, since they are already quasi-interoperable.

PCIe, on the other hand, while one step closer to the CPU, is ultimately still too far from the CPU to address the latency of the emerging non-volatile RAM types. That is why we are seeing these parts put on the RAM bus, which has been playing the ultra-low-latency game for decades. The challenge there is that the addressable RAM space on most machines is far too small for an enthusiast/consumer terabyte drive. With all of this in mind, I fully expect that motherboard replacement will be necessary for faster storage until at least three players are competing in the NVRAM space (I count Intel/Micron as one at this time) and real standards for NVRAM on the RAM bus come back into play. What we have coming is another RAMBUS RIMM moment.
NVMe is only a temporary stopgap. The only reason SATA was ever used for SSDs in the first place is that manufacturers needed somewhere to plug them in, and almost all spinning drives at the time were SATA. NVMe on the PCIe bus was an easily implementable standard that didn't require dreaming up a whole new RAM system, for a technology that didn't yet need one. This is going to kick AMD around pretty well for a while, because it won't be cross-platform on the RAM side, giving Intel a performance edge from a chipset feature they have already told everyone will be proprietary. AMD machines, on the other hand, will get NVMe implementations so that Intel/Micron can still sell drives into that market segment. I give Intel kudos on this one: with CPU performance gains flagging, they are going to have a pretty compelling performance advantage until the standards catch back up, and that will probably take a while.
 

MRFS

Distinguished
Dec 13, 2008
Very good answer: thanks.

There is a feasible solution, particularly for workstation users (my focus):
a PCIe 3.0 NVMe RAID controller with an x16 edge connector, 4 x U.2 ports,
and support for all modern RAID modes, e.g.:

http://supremelaw.org/systems/nvme/want.ad.htm

4 @ x4 = x16

This, of course, could also be implemented with 4 x U.2 ports
integrated onto future motherboards, with real estate made
available by eliminating SATA-Express ports e.g.:

http://supremelaw.org/systems/nvme/4xU.2.and.SATA-E.jpg


This next photo shows 3 banks of 4 such U.2 ports,
built by SerialCables.com :

http://supremelaw.org/systems/nvme/A-Serial-Cables-Avago-PCIe-switch-board-for-NVMe-SSDs.jpg


Dell and HP have announced a similar topology
with x16 edge connector and 4 x M.2 drives:

http://supremelaw.org/systems/nvme/Dell.4x.M.2.PCIe.x16.version.jpg


http://supremelaw.org/systems/nvme/HP.Z.Turbo.x16.version.jpg


Kingston also announced a similar add-in card, but I could not find
any good photos of it.

And Highpoint teased this announcement, but I contacted
one of the engineers on that project and she was unable to
disclose any more details:

http://www.highpoint-tech.com/USA_new/nabshow2016.htm

RocketStor 3830A – 3x PCIe 3.0 x4 NVMe and 8x SAS/SATA PCIe 3.0 x16-lane controller;
supports NVMe RAID solution packages for Windows and Linux storage platforms.

Thanks again for your excellent response!

MRFS

 

MRFS

Distinguished
Dec 13, 2008
p.s. Another way of illustrating the probable need for new motherboards
is the ceiling already imposed by Intel's DMI 3.0 link. Its upstream bandwidth
is exactly the same as a single M.2 NVMe "gum stick":
x4 lanes @ 8 GT/s / 8.125 bits per byte ≈ 3.94 GB/s
(i.e. 128b/130b jumbo frame = 130 bits per 16 bytes)
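The arithmetic is easy to check; here is a quick Python sketch (the function name is mine, for illustration only):

```python
def effective_gbps(lanes, gt_per_s):
    """Effective PCIe bandwidth in GB/s with 128b/130b encoding.

    130 wire bits carry 16 payload bytes, so each payload byte
    costs 130 / 16 = 8.125 bits on the wire.
    """
    bits_per_byte = 130 / 16  # 8.125
    return lanes * gt_per_s / bits_per_byte

# DMI 3.0 upstream: x4 lanes at 8 GT/s -- same as one M.2 NVMe drive
print(round(effective_gbps(4, 8), 2))  # ~3.94 GB/s
```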
 

MRFS

Distinguished
Dec 13, 2008
> I wish intel stops the DMI links rubbish and rely on PCIE lanes for storage ...

Indeed. We've been proposing that future DMI links
have at least x16 lanes, to match PCIe expansion slots.

That way, 4 x U.2 ports can run full-speed downstream
of that expanded DMI link.

4 @ x4 = x16

There's an elegant simplicity to that equivalence.

Plus, at PCIe 4.0, the clock increases to 16G:

x16 @ 16 GT/s / 8.125 ≈ 31.5 GB/second

That should be enough bandwidth for a while.

 

PaulAlcorn

Senior Editor
Editor
Feb 24, 2015
We have, in fact, seen the last generation of SATA. The committee in charge of the spec has indicated that the increase in power required for a faster interface is simply not tenable, and that there is no real way around it short of creating a new spec. SATA is done; it will not move forward with new revisions, which will push us to PCIe.

I agree that the end game is the memory bus, but industry support is, and will continue to be, slow. At least until there are non-proprietary standards.

 

PaulAlcorn

Senior Editor
Editor
Feb 24, 2015
Those aren't technically U.2 ports; they are MiniSAS-HD connectors re-purposed to carry PCIe. There's no problem with that, but it is noteworthy because they can also carry SAS or SATA, making a nice triple play for a single port.

Most of the x4 M.2 carrier cards, which aggregate multiple M.2 drives into a single device, are designed by Liqid, which is partially owned by Phison.

The Highpoint solution isn't unique, and it is probably built using either a PMC-Sierra (now Microsemi) or an LSI (now Broadcom) ROC. Both companies have already announced their hardware-RAID-capable NVMe/SAS/SATA ROCs and adapters.

http://www.tomsitpro.com/articles/broadcom-nvme-raid-hba-ssd,1-3165.html

http://www.tomsitpro.com/articles/pmc-flashtec-nvme2016-nvme2032-ssd,1-2798.html


 

jasonf2

Honorable
Oct 11, 2015
My overall point has very little to do with bandwidth itself; it is an issue of latency. More PCIe lanes won't fix this and will actually make it worse. Conventional RAM isn't just able to move large blocks of data; it can access and write them in a fraction of the time flash needs, and much faster than the PCIe bus. The quasi-NVRAM that Intel/Micron is bringing forward promises considerably faster access times. The further a device gets from the CPU, the more latency comes into play, so putting the PCIe bus in as an intermediary negates the low-latency memory's ability to do its thing. This is why you would never see the RAM system operating on top of the PCIe bus: while technically possible, it would significantly bottleneck the computer. Putting NVRAM on the PCIe bus is pretty much the same thing, in my opinion.

Data-center and business-class work has a distinct need for data safeguarding and redundancy (i.e. RAID and crypto) and will be at least partially stuck on NVMe for a while. First-gen enthusiast-class parts probably won't have RAID in the controller, but they will be very fast on the RAM bus. On the other hand, I would not be at all surprised if later Xeon implementations with an enhanced memory controller end up with at least some RAID-like capability, possibly with crypto. (I don't think "RAID" will quite apply here, as Redundant Array of Inexpensive Drives will become more like Redundant Array of Very Expensive Memory Modules.) Any additional controllers and buses put between this and RAM itself that aren't strictly necessary are just bottlenecks: anything PCIe throws an interrupt/polling round-trip into the mix, along with a memory-to-PCIe interface controller, to slow things down. Think homogeneous RAM/storage hybrids that have no need to move large blocks from slow storage into RAM cache, because storage is directly addressed and processed from the CPU.

This has some caveats with XPoint, since it still has a limited write-cycle count, but mixing static RAM and XPoint in a hybrid arrangement is going to really change things. No matter what, though, motherboard and memory replacement will not be optional for future upgrades at this level, just as with RAM today. Again, kudos to Intel.
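The argument can be put in rough numbers. All figures below are order-of-magnitude assumptions for illustration, not measurements: the point is that a fixed bus/software overhead that is negligible for slow media starts to dominate once the media itself gets fast.

```python
# Rough, illustrative latencies in microseconds (assumptions, not benchmarks)
media_us = {"NAND flash read": 50.0, "next-gen NVRAM read": 1.0, "DRAM read": 0.1}
bus_us = {"SATA/AHCI path": 25.0, "NVMe/PCIe path": 10.0, "memory bus": 0.1}

for media, m in media_us.items():
    for bus, b in bus_us.items():
        total = m + b
        # Fraction of each access spent outside the media itself
        print(f"{media} over {bus}: {total:6.1f} us "
              f"({b / total:4.0%} spent on the interconnect)")
```

With these assumed numbers, slow NAND hides most of the NVMe overhead, while fast NVRAM spends the vast majority of each access waiting on the PCIe path — which is the case for moving it to the memory bus.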
 

MRFS

Distinguished
Dec 13, 2008
> the increase in power for a faster interface is simply not tenable

In my professional opinion, that's BS.

Here's why:

We now have 12G SAS and 10G USB 3.1.

The latter also implements a 128b/132b jumbo frame.

The PCIe 4.0 spec calls for a 16G clock.

Well, if a motherboard trace can oscillate at 16G,
a flexible wire can also oscillate at 16G. DUUH!!

I think that "committee" is full of themselves.

They crowded behind SATA-Express and
now they refuse to admit it's DOA.



 

TJ Hooker

Champion
Ambassador
"Well, if a motherboard trace can oscillate at 16G, a flexible wire can also oscillate at 16G. DUUH!!"

Umm, well, no, not necessarily. First off, motherboard traces are likely shorter than a typical cable, which tends to allow tighter timings and faster transmission rates. Also, a flat trace on a dielectric substrate has different parasitic inductance and capacitance, and can behave quite differently as a transmission line compared to a typical wire.
 

MRFS

Distinguished
Dec 13, 2008
But, your original point was to quote the SATA standards group
who claimed that increasing the 6G SATA clock would
"require too much power."

So, why isn't that a problem for 12G SAS?

So, why isn't that a problem for 10G USB 3.1?

Looks to me as if that SATA standards group has
mud all over its face right now.

I honestly do not believe that the status quo needs defending.

I smell an oligopoly, and I don't like it.

And, I suspect you're trying to argue that a 16G cable
is NOT POSSIBLE, but you haven't even tried it yet.

Working backwards from a foregone conclusion
is not good science, and it's not good engineering either.


"Hey! Orville and Wilbur! That thing will never get off the ground!!"

 

jasonf2

Honorable
Oct 11, 2015
There are physical limits due not only to basic capacitance, inductance, and resistance; in high-frequency circuits the reactances become a major issue, and the effect worsens the longer the cable gets. That is why, when we overclock, we have to increase core voltage so the transistors can maintain a clean on/off state. I don't think the claim is that it isn't technically possible, but a new SATA/SAS standard, probably with higher voltages, would be necessary to make the jump, and backwards compatibility would be an issue. There are competing technologies, Fibre Channel being one, that don't hit cable-length restrictions in the data center because of the intrinsic nature of light. SATA, on the other hand, has no need to be updated in the mainstream space, primarily because spinning drives are the only thing that still needs SATA, and they cannot max out the bus. If they could, you would be seeing NVMe spinning drives.
So if there is no profit in it, the standards organizations (which are actually consortiums of companies) aren't going to put money behind it.
USB has a future because there is money in it. But if you look carefully, even USB/Thunderbolt (via USB-C) will have to move onto fiber to keep getting faster.
Long-cable technologies also require better error checking than is typically present in short-cable buses. 10G copper Ethernet exists (with much longer cables than USB, to boot), but I guarantee end-to-end throughput isn't close to 10 gigabits per second once you figure in the error-handling overhead of the network stack.
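The protocol-framing part of that overhead is easy to estimate. This sketch counts only the standard Ethernet/IP/TCP header and framing bytes; it ignores retransmissions and host-side stack costs, which are where most of the additional loss described above would come from:

```python
LINE_RATE_BPS = 10e9     # 10GbE line rate, bits per second
MTU = 1500               # standard Ethernet payload size, bytes
IP_TCP_HEADERS = 20 + 20 # IPv4 + TCP headers, no options
FRAMING = 18 + 8 + 12    # Ethernet header+FCS, preamble, interframe gap

# Application payload per frame divided by total bytes on the wire
goodput_bps = LINE_RATE_BPS * (MTU - IP_TCP_HEADERS) / (MTU + FRAMING)
print(round(goodput_bps / 1e9, 2))  # ~9.49 Gb/s before any stack or retransmit costs
```

So even the best case with full-size frames tops out around 9.5 Gb/s of application data, before any of the software overhead is counted.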
 