[SOLVED] x2 NVME Samsung 970 EVO Plus - Raid0 - Not benefitting from Read/Write speed increase.

Nov 20, 2020
Currently have two Samsung 970 EVO Plus NVMe 500GB SSDs. When running either Samsung's own Magician benchmark or CrystalDiskMark, both drives individually return the advertised 3500 MB/s read and 3300 MB/s write speeds. However, after creating a RAID 0 volume in the BIOS and re-running the benchmarks, there does not seem to be any benefit to the read/write speeds, where I would expect them to roughly double.

Any suggestions on how to proceed?

System Specs:
CPU: Intel Core i7-10700K
Motherboard: Asus ROG STRIX Z490-F
RAM: Corsair Vengeance 16GB DDR4 3000MHz
PSU: Corsair RM750, RM Series, 80 Plus Gold Certified
GPU: N/A
 

prince_xaine

Respectable
Feb 3, 2018
This is because you are already hitting the maximum bandwidth of the PCIe lanes, so running two of them in RAID 0 simply runs each drive at the maximum speed its PCIe slot can achieve.
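The lane math behind this can be sketched quickly. Below is a minimal Python calculation (the per-lane transfer rate and 128b/130b encoding factor are standard PCIe 3.0 spec figures, not numbers from this thread) showing why a PCIe 3.0 x4 M.2 slot tops out close to the 970 EVO Plus's rated 3500 MB/s:

```python
# Rough theoretical one-direction PCIe bandwidth: raw transfer rate
# (GT/s per lane) times encoding efficiency, converted to MB/s.

def pcie_bandwidth_mb_s(gt_per_s, lanes, encoding=128 / 130):
    """Approximate one-direction PCIe link bandwidth in MB/s."""
    bits_per_s = gt_per_s * 1e9 * encoding * lanes
    return bits_per_s / 8 / 1e6

# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding.
slot_x4 = pcie_bandwidth_mb_s(8, 4)
print(f"PCIe 3.0 x4 ceiling: {slot_x4:.0f} MB/s")  # ~3938 MB/s
```

A single 970 EVO Plus at 3500 MB/s is already close to that ~3940 MB/s per-slot ceiling, so neither drive has much headroom left in its own slot.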
 

This is because you are already hitting the maximum bandwidth of the PCIe lanes. So running two of them simply runs them at the maximum speed achievable by your PCIe slots.
Hmm, okay, I think I get it, but for my own benefit, to make sure I'm understanding correctly:

I have NVMe drives plugged into the M.2_1/M.2_2 slots, which support PCIe 3.0 x4 (a maximum bandwidth of about 3940 MB/s per slot). So while configured as a RAID 0 volume, to reach the hoped-for read/write speeds of around 7000 MB/s / 6600 MB/s, both drives would need to be in PCIe 3.0 x8 slots (about 7880 MB/s per slot)?

EDIT**
To add: most likely just wishful thinking, but since both NVMe drives are in slots that can fully support their read/write speeds individually, I would have thought they would work in conjunction, as RAID 0 essentially writes half the data to each drive, rather than being limited.
 

prince_xaine

Respectable
Feb 3, 2018
Even if you plugged them into x8 slots, you would get the same speed. Each drive would still negotiate a x4 link, just like a x4 card in a x16 slot.
 
With an X570 mobo, or when Intel launches its next CPUs with PCIe 4.0 support, you may be able to use RAID 0 and get better results. But keep in mind that, as others have explained here, the average speed during normal work will be about the same as with a single drive.

If you run a benchmark you may see some "awesome" peak speeds, but that's it.
 

Colif

Win 10 Master
Moderator
With an X570 mobo, or when Intel launches its next CPUs with PCIe 4.0 support, you may be able to use RAID 0 and get better results,
I don't think that would give more speed, though, as PCIe 4.0 doesn't make PCIe 3.0 drives any faster; they still top out at 3500 MB/s. I am using an X570 board right now.

Sure, buying two PCIe 4.0 drives would help, but the fastest PCIe 4.0 NVMe I've seen so far only hits around 5000 MB/s, not yet the maximum PCIe 4.0 can do. And if anyone tells you they can tell the difference between 3500 and 5000 MB/s, they might be dreaming. Benchmarks show it, but actually noticing it in applications may depend on file sizes.
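For comparison, the same back-of-envelope link math (standard PCIe spec figures for transfer rate and encoding, not numbers from this thread) shows how much headroom a PCIe 4.0 x4 link leaves over the drive speeds mentioned above:

```python
# PCIe 4.0 doubles the per-lane rate to 16 GT/s, keeping 128b/130b encoding.
def pcie_bandwidth_mb_s(gt_per_s, lanes, encoding=128 / 130):
    """Approximate one-direction PCIe link bandwidth in MB/s."""
    return gt_per_s * 1e9 * encoding * lanes / 8 / 1e6

gen4_x4 = pcie_bandwidth_mb_s(16, 4)  # ~7877 MB/s link ceiling
for drive_speed in (3500, 5000):
    pct = 100 * drive_speed / gen4_x4
    print(f"{drive_speed} MB/s drive uses ~{pct:.0f}% of a PCIe 4.0 x4 link")
```

So even a 5000 MB/s gen-4 drive uses only about two-thirds of a PCIe 4.0 x4 link, consistent with the point that current drives don't saturate gen 4 yet.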
 
