Slow Raid 0 performance with Samsung 960 Pro

noxendine28

Jan 17, 2018
I bought 2x Samsung 960 Pro 512GB SSDs on Amazon to run in RAID 0. Sadly, I don't think I'm getting the best performance. Windows 10 boot time is only a few seconds faster than with a Samsung 850 Evo 2.5-inch SSD. I ran ATTO Disk Benchmark and CrystalDiskMark; are these really the expected speeds for RAID 0, or should I be getting more? The rest of my specs are:
CPU: i7 8700K @ 5GHz
Motherboard: ASUS MAXIMUS X APEX
RAM: 16GB DDR4 @ 4133Mhz
GPU: GTX 1050 Ti
PSU: EVGA Super Nova 750G2
OS: Windows 10 Pro x64 16299.125
HDD: Western Digital Black 3TB
HDD: Western Digital Black 6TB
FYI, I have the best motherboard for overclocking; I'm using the ASUS DIMM.2 module for the M.2 SSDs.
Capture.png (benchmark screenshots)
 
Solution
Here's the question: how are those M.2 SSDs connected to the CPU? If they're on direct PCI-Express 3.0 x4 lanes, then this performance is low. However, if they're going through the PCH (they likely are), which is a single PCI-Express 3.0 x4 connection that provides bandwidth for USB, SATA, Ethernet, and probably your M.2 SSDs, then that's the best you will get.
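For a rough sense of that ceiling, here's a minimal back-of-the-envelope sketch in Python (not from the original post), assuming roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane, a DMI 3.0 link equivalent to PCIe 3.0 x4, and the 960 Pro's rated 3,500 MB/s sequential read:

```python
# Back-of-the-envelope numbers (approximate spec values, not measurements)
PCIE3_PER_LANE_MBPS = 985      # usable bandwidth per PCIe 3.0 lane after encoding
DMI_LANES = 4                  # DMI 3.0 is roughly equivalent to PCIe 3.0 x4
SSD_SEQ_READ_MBPS = 3500       # Samsung 960 Pro rated sequential read

dmi_ceiling = PCIE3_PER_LANE_MBPS * DMI_LANES   # ~3,940 MB/s, shared with USB/SATA/LAN
raid0_theoretical = 2 * SSD_SEQ_READ_MBPS       # ~7,000 MB/s if nothing else limited it

print(f"PCH/DMI ceiling:         ~{dmi_ceiling} MB/s")
print(f"RAID 0 theoretical read: ~{raid0_theoretical} MB/s")
print(f"Capped by the PCH link:  {raid0_theoretical > dmi_ceiling}")
```

So even in the best case, two 960 Pros behind the PCH can't deliver much more sequential throughput than a single drive on CPU lanes.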
 


The only way to make Windows 10 bootable with RAID 0 on my motherboard is to use Intel Rapid Storage Technology in the BIOS. I don't know of any other way to create a RAID 0 configuration.
 
Windows boot times are not exactly a good measure, and it really depends on how you do it.

Are those speeds a problem? They're very quick indeed, but I can't imagine what you need that level of performance for.

As you can see only high queue depths can really make the drives work to their potential. So for there to be any benefit you would have to be moving around some pretty huge files.

For your typical gamer, a SATA SSD is perfectly adequate.

I enjoy a good M.2 SSD because it doesn't require cables, but RAID is not needed. Better to get the larger drive, or keep the two drives as separate disks.
 


They are connected to the PCIe lanes on my motherboard using the DIMM.2 slot.
 

Because I want the ultimate performance. Isn't that what PC enthusiasts do?
 

I'm saying there's virtually no real world performance gains to be had by putting your SSDs in RAID0 in most applications. The primary advantage to doing that would be increased sequential read/write speeds (especially at high queue depths), which look good in benchmarks but aren't really relevant to typical consumer workloads. Most consumer use is low queue depth random IO, which may actually be worse due to RAID overhead.

In fact, you probably won't even really get a huge increase in sequential performance, because both M.2 slots go through the PCH, which only has equivalent to PCIe 3.0 x4 bandwidth to the CPU.

You also now have double the chance of losing all the data you've stored on the SSDs (if either SSD fails), unless you have them backed up.
 

First, because you have software RAID rather than hardware RAID, which is slower. Second, the maximum speeds quoted on spec sheets are usually measured at high queue depths, which produce the most impressive numbers. Your test is only at QD = 4.
 

You see that bottom line in your CrystalDiskMark scores? That's why there isn't much speed difference. The bottleneck in most PC operations is small file (4k) reads/writes, and booting the computer is mostly going to be constrained by the 4k read speeds. The 960 and 850 are virtually identical at 4k reads. The 960 and your RAID 0 get you a slight advantage due to queuing, but it only makes a small difference.

The way people perceive disk speed (how long you have to wait for an operation to complete) is the inverse of MB/s. So the bigger number in MB/s (the sequential speeds) ends up mattering the least. The sequential speeds pretty much only matter for specialized tasks like real-time video editing and copying large files (movies, databases) from one SSD to another. Once the sequential speeds hit about 500 MB/s you can pretty much ignore them, because beyond that speed every time you double it only gets you a few milliseconds in time savings. If you need to read 100 MB of sequential data, going from 500 MB/s to 1000 MB/s only saves you 0.1 seconds (from 0.2 sec to 0.1 sec) - shorter than the blink of an eye. Going from 1000 MB/s to 2000 MB/s only saves you an extra 0.05 seconds. And going from 2000 MB/s to 4000 MB/s with RAID 0 only saves you an extra 0.025 seconds.
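To make the diminishing returns concrete, here's a quick Python sketch of the arithmetic above (the 100 MB workload size is just the example figure from this post):

```python
# Time to read 100 MB of sequential data at increasing throughputs
data_mb = 100

for speed_mbps in (500, 1000, 2000, 4000):
    seconds = data_mb / speed_mbps
    print(f"{speed_mbps:>4} MB/s -> {seconds:.3f} s")

# 500 MB/s -> 0.200 s, 1000 -> 0.100 s, 2000 -> 0.050 s, 4000 -> 0.025 s
# Each doubling saves half as much wall-clock time as the previous one.
```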

For the vast majority of real-world tasks, if you want faster storage, you need to get devices with faster 4k speeds. If you need to read 100 MB of 4k data, a 60 MB/s drive will save you 1.67 seconds over a 30 MB/s drive. *That* is a noticeable difference in speed. Not the 0.1 or 0.025 seconds you save by aiming for the fastest sequential speeds. (Also note that RAID 0 can decrease 4k speeds due to RAID overhead.)
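And the same arithmetic for the 4k case, again using the example numbers above (30 vs 60 MB/s):

```python
# Time to read 100 MB worth of 4k data at typical 4k throughputs
data_mb = 100
slow = data_mb / 30   # ~3.33 s
fast = data_mb / 60   # ~1.67 s
print(f"30 MB/s: {slow:.2f} s, 60 MB/s: {fast:.2f} s, saved: {slow - fast:.2f} s")
```

Those seconds are the kind of difference you actually notice; the sequential milliseconds are not.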

Edit: It should also be mentioned that real hardware RAID is a card that costs a few hundred dollars (and you'll need at least two so you can recover your data if the first one ever fails). The RAID built into motherboards is not hardware RAID; it's software RAID, aka FakeRAID. It's just a bootstrap in the BIOS that ties in with software RAID drivers installed on your OS. Unless you're doing something that involves very intensive sequential reads/writes, RAID 0 is a lot of cost and trouble for very little benefit.
 
That's the difference between a hardware RAID with caching and a software RAID with no caching. The caching is most likely why you see almost double the read speed in that one image (6000).

As for doing this for a gaming PC build: for one, I wouldn't RAID my OS. I would buy a separate SSD for the OS and then put the remaining SSDs in a RAID for your content, such as games or whatever else. If you decide to stick with software RAID, it also makes things easier to manage, since you don't need to include the OS disk in the RAID volume.

Also, this way, if your RAID breaks, you don't lose your OS.

 

Can you direct me on how to do a hardware RAID, please? Intel RST is currently the only way I know how.