NVMe SSD RAID pair on X370 AM4 native M.2 ports?

msroadkill612

Distinguished
Jan 31, 2009
204
30
18,710
Some AM4 X370 motherboards have 2 x M.2 slots, but only 4 PCIe 3.0 lanes assigned to them. I cannot make sense of the documentation on running 2 x 4-lane PCIe 3.0 NVMe SSDs on the integrated sockets.

I would simply like to know if there is any reason I can't run 2 such drives, each with 2 PCIe 3.0 lanes of bandwidth?

I know it's suboptimal, but it's my best option for RAID 0.

I think they may be incoherently saying you can't split the 4 PCIe 3.0 lanes: you have to use either a single 4-lane PCIe 3.0 SSD, or 2 x SATA M.2 SSDs - yuck.
 
Solution
That would likely depend on the motherboard and manufacturer.
What make & model is your motherboard?

I just looked at an X370 motherboard, and (for that one) as long as you have a Ryzen CPU you'll be able to run 2 M.2 drives - both physically x4, but one runs at PCIe 3.0 and the other at PCIe 2.0 (effectively PCIe 3.0 x4 and PCIe 2.0 x4).
The PCIe 2.0 x4 slot will have roughly half the bandwidth of the other slot, but both will work at the same time and you don't have to use the SATA bus.
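
For a rough sense of the raw numbers behind that - a back-of-envelope sketch using theoretical per-lane rates, ignoring protocol and controller overhead:

```python
# Back-of-envelope PCIe link bandwidth (per direction), ignoring protocol overhead.
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding   -> ~500 MB/s per lane
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~985 MB/s per lane
MB_PER_LANE = {"PCIe 2.0": 5e9 * 8 / 10 / 8 / 1e6,     # ~500 MB/s
               "PCIe 3.0": 8e9 * 128 / 130 / 8 / 1e6}  # ~985 MB/s

for gen, per_lane in MB_PER_LANE.items():
    print(f"{gen} x4: ~{4 * per_lane:.0f} MB/s")
# PCIe 2.0 x4: ~2000 MB/s  -- enough to cap a fast NVMe drive's reads
# PCIe 3.0 x4: ~3938 MB/s
```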

USAFRet is right... Take a look at this quote from that link:
"In a normal desktop environment, you simply won't see the benefit. In fact, at low queue depths, where most workloads can be characterized, the RAID array is actually a little slower than a single drive."
 

msroadkill612

Distinguished
Jan 31, 2009
204
30
18,710


Yes, sorry not to have been clearer, but I was hoping it was a generic question: can I split the 4 lanes into 2 x 2-lane PCIe 3.0 links for 2 SSDs?

You have helped. It finally dawns on me how it works (the NVMe M.2 SSD is still 4-lane, just on shared PCIe 2.0 lanes (which it's backwards compatible with, naturally) from the chipset (which, by the way, uses half the chipset's bandwidth on its own - as much as 4 SATA SSDs taxed concurrently)).

The pair set up as you describe is useless for striped/RAID 0, as the drives get such differing bandwidths and latencies, and even worse, the second drive sits behind the chipset's shared and laggy uplink (itself only 4 PCIe 3.0 lanes), not the direct link to the CPU.

I am pretty sure the answer is that you cannot split the 4 direct-to-CPU PCIe 3.0 lanes into 2 x 2-lane links, but it's a great pity. It doesn't seem hard, and a "theoretical 4 GB/s" NVMe RAID pair is an exciting resource for a Ryzen HEDT, as very fast media for video editing or swap files.
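
To put a number on that "theoretical 4 GB/s" - a sketch using raw PCIe 3.0 line rates only; real drives and controllers will fall short of this:

```python
# Hypothetical split: two NVMe drives, each on a PCIe 3.0 x2 link, striped in RAID 0.
pcie3_lane_MBps = 8e9 * 128 / 130 / 8 / 1e6    # ~985 MB/s per lane, per direction
per_drive = 2 * pcie3_lane_MBps                # x2 link     -> ~1970 MB/s per drive
pair = 2 * per_drive                           # RAID 0 pair -> ~3940 MB/s total
print(f"per drive ~{per_drive:.0f} MB/s, striped pair ~{pair:.0f} MB/s")
# i.e. the same ~4 GB/s ceiling as a single PCIe 3.0 x4 link, but potentially
# reachable on writes as well as reads if each drive can saturate its two lanes.
```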

I don't know how to paste images, sorry, so:

e.g.

https://www.msi.com/Motherboard/support/X370-GAMING-PRO-CARBON.html#down-manual

(I hate that they are so vague about PCIe 2.0 vs PCIe 3.0.)

Go to page 32 - drive options are shown pictorially (which gives the illusion that it's doable).
(Note: we lose the PCIe 2.0 slot on the motherboard - its lanes are used by the second M.2 SSD.)

Go to page 19 for a sadly confusing block diagram.

The graphics outputs on the motherboard are interesting - they seem destined for use with APUs (which would mean stacks of spare PCIe 3.0 lanes, as an APU forgoes the 16 lanes for a discrete GPU?).

So for a normal AM4 Ryzen, NVMe RAID means a fancy controller card and an 8-lane GPU interface to free up some lanes.

I imagine some AM4 motherboards forgo the NVMe port and simply add a 4-lane PCIe slot to the board instead?

Maybe such a card would allow 2 x 2-lane PCIe 3.0 NVMe drives?

Perhaps the key to my misunderstanding is that it's an end-device limitation, i.e. the SSD cannot operate on 2 lanes. It must use 4 lanes, but they can be of the older, slower PCIe 2.0 type, which are plentiful, because the chipset acts as a switch that can multiply its allocated 4 PCIe 3.0 lanes into many lower-bandwidth links like SATA and PCIe 2.0.

It's not hard to see the chipset bandwidth getting saturated at times.

Which makes the native Ryzen 4 x USB 3 ports interesting. That's about 2 GB/s of direct link to the CPU, isn't it?
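
A quick sanity check on that 2 GB/s figure - assuming the four native ports are USB 3.1 Gen 1 at 5 Gb/s each, and ignoring protocol overhead:

```python
# Four native USB 3.1 Gen 1 ports, 5 Gb/s each, 8b/10b encoding.
usb3_gen1_MBps = 5e9 * 8 / 10 / 8 / 1e6     # ~500 MB/s usable per port
total = 4 * usb3_gen1_MBps
print(f"~{total:.0f} MB/s aggregate")       # ~2000 MB/s, i.e. roughly 2 GB/s
```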

Sorry, I have raved a bit. Thanks again - you have been a catalyst in me finally understanding.

I can live with those limits. I just wanted to know the limits.
 

msroadkill612

Distinguished
Jan 31, 2009
204
30
18,710


re:

http://www.tomshardware.com/reviews/samsung-950-pro-256gb-raid-report,4449-2.html

With respect to my moderator overlords :), those numbers DO impress me.

Why?

Mostly, a ~60% increase in write speed: with the 512 GB Samsung 950 Pro (the speed sweet spot), writes go from ~1900 to ~3100 MB/s, and importantly, that puts write speed roughly on par with read speed (which gains a handy ~25%, going from ~2500 to ~3200 MB/s) - a more balanced resource.
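
As a rough check on how the figures quoted above scale from one drive to the RAID 0 pair (all numbers approximate, taken from the review):

```python
# Approximate 950 Pro 512GB figures quoted above: single drive vs RAID 0 pair.
single = {"read": 2500, "write": 1900}   # MB/s
raid0  = {"read": 3200, "write": 3100}   # MB/s

for op in single:
    gain = raid0[op] / single[op] - 1
    print(f"{op}: {single[op]} -> {raid0[op]} MB/s (+{gain:.0%})")
# read:  +28% -- reads were already near the platform's ceiling
# write: +63% -- writes scale much better, closing the read/write gap
```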

It's no good having fast reads if they are waiting for a write to happen.

In other departments, there are a damn sight more wins (some astonishing) than losses, and the losses can often be remedied by adding more drives to the array (which helps a lot with queue-depth issues too), lanes allowing.

By the way, striping spreads writes across two drives, roughly doubling the endurance of the NVMe storage, and it helps with the heat problems they reportedly have.

Some say NVMe is very reliable, and it does seem to do a lot of native error/fault management, but in the virtualising scenario I have portrayed below, how much more vulnerable to failure/outages would we be, compared to using volatile RAM as we do now?

I have my suspicions that with Threadripper and Epyc motherboards - with multiple NVMe ports on board, dedicated lanes galore (unlike Intel, I suspect) for the onboard M.2 NVMe SSDs, and their direct link to the Ryzen CPU - the Intel speeds can be bettered.

The Vega GPU has a 256 TB address space, i.e. ~unlimited GPU workspace memory, which can include fast storage like this as well as system memory. A 4 GB/s read AND write RAID pair seems very possible on Threadripper motherboards.

Virtualising NVMe arrays into huge pseudo-memory pools is already a big deal in the server world. If the tools arrive on the desktop, desktop apps will find uses for big memory, just as VM servers and render farms do.

I forget the apps, but I have seen many posters grumbling that they have to segment their work into batches that will fit in GPU memory, which can't be big enough for them.

Vega (and Ryzen?) has a dedicated and sophisticated high-bandwidth cache controller (HBCC) to handle all that management independently of the application code and main processor resources. It can recognise usage patterns, for example, and pre-emptively place data where it can rapidly be fetched from the fastest resource in the managed memory pool.

When AMD gets around to super APUs - with multiple Vega GPUs, on-die HBM and NVMe, all on an Epyc-like die (or a pair of them for a 2P motherboard) - then the motherboard and its bus will have little to do: all the main resources bar system memory are linked via the huge fabric links.

As I said, looking forward, such a resource seems a good asset for an HEDT.

I know it's a contrary view, but I think it has a big future. Our minds are still fixed in the HDD era of 100-150 MB/s media with glacial access times. Storage now IS fast enough to act as the bottom tier of a vast and fast-enough pool of pseudo memory - not for all jobs, but for some, depending on AMD's HBCC execution.
 

USAFRet

Titan
Moderator
I don't know about you, but benchmark numbers don't impress me.
Real world performance does.

And it looks like real-world performance is, at best, equal to a single drive. In some use cases, slower.

If a RAID 0 with those gave a realworld benefit...I'd jump on it as well. But I'm not seeing it.
 

Seanie280672

Estimable
Mar 19, 2017
1,958
1
2,960


I have the MSI X370 Gaming Pro Carbon, running 2 x M.2 NVMe drives and zero hard drives in my computer - just 2 Samsung 960 EVOs and an external USB drive. The full-speed M.2 has Windows and program files on it, and the other just has games on it.

The first M.2 runs at full speed, PCIe 3.0 x4: I get 3200 MB/s read and 1500 MB/s write. It gets its lanes direct from the CPU. You also have the 16 lanes for the graphics card, which, if you stick a 2nd graphics card (or anything else) into the 2nd PCIe slot, get split in half to x8 - hence CrossFire/SLI runs at x8/x8.

There is no way to split those 4 PCIe 3.0 lanes into 2+2, and if you could get a card that did it, it would also have to be an independent RAID card, as the board itself doesn't support it. And where would you plug it in? If you put it into the 2nd PCIe x16 slot, your graphics card will drop to x8, so you may as well use the full x8 supplied by the 2nd slot and have 2 M.2 drives connected to it by some sort of card, both running at full speed - you've got 8 lanes to use, after all.

The limiting factor here is the lack of PCIe lanes from the CPU. I studied this quite a lot before buying my 2nd M.2 drive, and it all boiled down to the fact that you literally only need something like an extra 4 PCIe lanes from the CPU to run that 2nd M.2 drive at full speed and potentially RAID them together. I'm sure board manufacturers will address this later on with third-party solutions like a PLX chip to provide extra PCIe lanes, but for now Ryzen is still a baby - not even all the CPUs have been released yet.

From the CPU you have 24 PCIe lanes: 16 for the graphics card (or split in two if you're using 2 graphics cards, etc., x8/x8 for each of the shielded slots), 4 for the M.2 slot, and the other 4 dedicated to the chipset and unusable for anything else. 16 + 4 + 4 = 24, but only 20 are usable.
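
That lane budget, written out as a quick tally (a sketch of the breakdown described above):

```python
# AM4 Ryzen CPU PCIe 3.0 lane budget, as laid out above.
lanes = {
    "graphics (x16, or x8+x8 with two cards)": 16,
    "M.2 / NVMe slot": 4,
    "chipset link (reserved, not user-assignable)": 4,
}
print(sum(lanes.values()), "total")          # 24
print(sum(v for k, v in lanes.items()
          if "chipset" not in k), "usable")  # 20
```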

Then I can't remember if it's 8 or 10 lanes from the chipset. A shared 4 of those go to either the bottom PCIe slot or the second M.2 drive; since the chipset's downstream lanes run at PCIe 2.0 rather than 3.0, that's where the slower speeds come from. The rest are for SATA, PCIe x1 slots, LAN, USB, etc. (the non-shielded slots).

One set of lanes (for one of the M.2s) comes from the CPU and the other comes from the chipset, and they run at different speeds, so it's impossible to RAID them together. Intel can do this because all the M.2 slots on Z270 etc. run through the chipset.

The 2nd M.2 slot gets its PCIe lanes from the chipset and runs at PCIe 2.0 x4. I get approximately half the read speed but the full write speed - roughly 1600 MB/s read and 1500 MB/s write. However, if you use this slot, you lose the very bottom PCIe x16 slot (which is actually an x4 slot), as the M.2 drive basically steals its lanes.
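
That asymmetry falls straight out of the link budget: a PCIe 2.0 x4 link tops out around 2 GB/s, which is below the drive's read speed but above its write speed. A rough sketch, using the approximate figures above:

```python
# Why the chipset M.2 slot halves reads but not writes (approximate figures).
pcie2_x4_MBps = 4 * 500                  # ~2000 MB/s link ceiling
drive_read, drive_write = 3200, 1500     # 960 EVO-class figures quoted above

print("read :", min(drive_read,  pcie2_x4_MBps))   # ~2000 (link-limited; ~1600 measured)
print("write:", min(drive_write, pcie2_x4_MBps))   # 1500 (drive-limited, unaffected)
```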

Anyway, here's my numbers:

http://imgur.com/adXkLjI

http://imgur.com/3xzdkOm

Here's my video of my boot times with the NVMe drive. Sorry it's a bit blurry - it was filmed with a mobile phone, and it took me a while to find the power button, lol, just feeling for it: https://youtu.be/V4azWiBh1cc
 

msroadkill612

Distinguished
Jan 31, 2009
204
30
18,710


Folks, just order what he is having.

You sound like a kindred spirit. Must be dull now, having spoiled the fun and actually bought something :)

Sounds a lovely system, and you deserve it. The Samsungs are not an extravagance - they are just so much better and worth paying for. Interesting that you got the ~small 256 GB 960s; I thought ~500 GB was the smallest.

Yeah, screw RAID - your rig is great.

CPU? I imagine a toss-up between the 1600 and the 1700.

Memory?

I would have liked to stall on the GPU and wait for an affordable Vega. I think there will be many synergies with Ryzen.

I rather fancy the APU, and I don't think the GPU uses any PCIe lanes in them.

Whimsically, you could argue the second SSD is just as fast when reads have to wait on writes anyway.

You should also write manuals :). Thanks
 

Seanie280672

Estimable
Mar 19, 2017
1,958
1
2,960


I couldn't afford a 500 GB at the time, and even if I could have, I would have just ended up splitting it into 2 partitions, 1 for Windows and 1 for games, so I figured I may as well just grab 2 half-sized ones, which was more affordable to me at the time.

The rest of my specs: obviously the MSI X370 Gaming Pro Carbon, a Ryzen 7 1700 @ 3.9 GHz, and as of yesterday 32 GB (4 x 8 GB) of G.Skill Trident Z RGB 3200 MHz (currently running at 2933 MHz, but I was pretty impressed 4 sticks would do that speed), the 2 Sammy drives, and a Sapphire RX 480 Nitro+ 8 GB graphics card, which I've just this second ordered a full-cover water block for, as I'm custom water cooling the lot.

I got very lucky with the new kit of RAM. Really, if you intend to run 4 sticks you should buy a quad-channel kit (or for 2 sticks, a dual-channel kit), but I purchased a dual-channel kit a few months ago when Ryzen was first released and then bought a 2nd kit last week. The 2nd kit arrived yesterday; I removed my old kit, stuck the new ones in, and checked them out with Thaiphoon Burner - it turns out they have the same Hynix chips as my original kit. I flashed them anyway with the SPD data from my older sticks just to be sure, then put my older ones back in, and they are all running quite happily together now.

Here's my full rig - obviously you can't see the 2 960 EVOs hidden away on the board: http://imgur.com/rSBIIjs

And this is just in case anyone doesn't believe me: http://imgur.com/NbbYxHZ
 
Solution

msroadkill612

Distinguished
Jan 31, 2009
204
30
18,710


The numbers from Tom's are manifestly ~meaningless - the Intel M.2 ports hang off the chipset, which in its entirety has only 4 PCIe 3.0 lanes' worth of bandwidth to the CPU. Each NVMe drive uses 4 lanes, and each one alone nearly maxes out the chipset's uplink.

Those RAID numbers simply reflect the limits of the chipset bandwidth, which is why the faster 500 GB NVMe Samsungs are no faster than the slower 256 GB ones in RAID 0.
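
A sketch of that bottleneck, assuming Intel's DMI 3.0 uplink is roughly equivalent to PCIe 3.0 x4 (about 3.9 GB/s) and using approximate 950 Pro-class read figures:

```python
# Two NVMe drives behind the Intel chipset share one DMI 3.0 uplink (~PCIe 3.0 x4).
dmi3_MBps = 4 * 985                  # ~3940 MB/s shared uplink
drive_read_MBps = 2500               # single 950 Pro-class sequential read
ideal_raid0 = 2 * drive_read_MBps    # ~5000 MB/s if lanes were unlimited

print("RAID 0 reads capped at ~", min(ideal_raid0, dmi3_MBps), "MB/s")
# The uplink, not the drives, sets the ceiling - so a faster 512GB model
# posts roughly the same RAID 0 numbers as the slower 256GB model.
```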