[SOLVED] Getting weird NVMe speeds in RAID 0

Nov 8, 2019
[Image: 1TWD7GI.jpg - UserBenchmark result after setting up RAID 0]
[Image: yFJ6AO2.jpg - benchmark of the two drives before RAID]

The first picture is my UserBenchmark run after setting up RAID 0. The second picture is a bench from when my two NVMe sticks were not in RAID.
I am using two 1TB Gen4 NVMe sticks called CSSD-M2B1TPG3VNF 1TB. Yes, weird name, not a well-known brand; as far as I know they are mostly sold in Japan only, but they are good and reliable at a fair price.

My hope was that going RAID 0 would increase speeds, though I wasn't expecting much. However, looking at the numbers, it seems it might actually be slower; would I be correct in assuming this? Yes, sequential speeds are far faster, but that's mostly the write speed, and random 4K speeds are quite a bit lower. Since NVMe drives already seem somewhat "bottlenecked" by their random 4K speeds, this would mean a slight decrease in real-world performance.
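To put rough numbers on that intuition, here is a quick back-of-envelope sketch (the throughput figures below are made-up placeholders, not my actual bench results):

```python
KB = 1024
MB = 1024 * KB

seq_read = 5000 * MB   # hypothetical sequential read speed, bytes/s
rand_4k  = 50 * MB     # hypothetical random 4K QD1 read speed, bytes/s

# Loading 10,000 scattered 4 KiB files (think game assets or OS boot):
payload = 10_000 * 4 * KB

t_seq  = payload / seq_read   # if it could all be streamed sequentially
t_rand = payload / rand_4k    # what scattered 4K reads actually cost

print("sequential: %.1f ms, random 4K: %.0f ms" % (t_seq * 1000, t_rand * 1000))
# ~8 ms vs ~780 ms: the random 4K number dominates how fast the drive feels.
```

So even a big sequential gain barely moves real-world load times if the 4K numbers drop.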

My question would be why? Am I lacking in drivers somehow? I haven't installed any particular NVMe drivers, just the ones needed for RAID capability. I believe the Samsung 970 Evo series has a driver that increases performance quite a bit, and I wonder if there is a generic version that applies to other NVMe sticks. If my bench results seem off, does anyone have recommendations on how to improve them? I have an MSI X570 Gaming Edge WiFi motherboard, which is supposed to have a "storage boost" function, but after downloading their latest Dragon Center application it is nowhere to be found.

Anyway, I want to be clear that I am somewhat OK with it if this turns out to be a small decrease in overall speed. For me, having my two main drives combined into one adds way too much convenience to give up: I never have to worry about which drive something is stored or installed on, and I don't have to keep track of separating different types of files across drives. I'll get a SATA HDD later for pure storage.
What I'm mostly curious about is: does this look like a significant drop in speed? And if so, is there anything I could do to improve it? Or do you think you should never RAID 0 two NVMe drives? Let me hear your thoughts.
Thanks.
 

popatim

Titan
Moderator
If you prefer the convenience then go for it; the performance difference won't be noticeable from your perspective, just in benchmarks.

As always, be sure to make backups of important things, especially when in RAID 0. One drive hiccup and all data could be gone.
 
Solution
Nov 8, 2019
If you prefer the convenience then go for it; the performance difference won't be noticeable from your perspective, just in benchmarks.

As always, be sure to make backups of important things, especially when in RAID 0. One drive hiccup and all data could be gone.

Thanks,
You are probably right that I won't see much of a real-world difference.
The problem now is that I was still expecting better results from going RAID 0. Yes, even with slightly slower speeds I prefer having them in a RAID 0 array for the convenience, but if I just wanted that 2TB gathered in one place, I should have bought a single 2TB drive for a similar price to what I paid for the two 1TB drives. It would have been slightly faster than what I have right now, AND there would be another slot left on the motherboard for future expansion.
Kinda bummed out..
 

popatim

Titan
Moderator
In RAID 0, reads slow down because the drives have to fetch the data pieces and then the RAID software (iRST for Intel) has to put them back together, verify they aren't corrupt (do a checksum), build the whole file piece by piece, and then release it to the OS.

Writes increase because the OS dumps the file into cache and moves on.
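Here's a minimal sketch of how RAID 0 striping maps addresses, which shows why a single 4K read never touches more than one drive. The 64 KiB stripe size and the simple alternating layout are assumptions; real controllers and iRST may differ.

```python
STRIPE_SIZE = 64 * 1024   # bytes per stripe, a common default
N_DRIVES = 2

def raid0_map(offset):
    """Map a logical byte offset to (drive index, byte offset on that drive)."""
    stripe = offset // STRIPE_SIZE        # which stripe the byte falls in
    drive = stripe % N_DRIVES             # stripes alternate across the drives
    local_stripe = stripe // N_DRIVES     # stripe index within that drive
    return drive, local_stripe * STRIPE_SIZE + offset % STRIPE_SIZE

# A 4K request fits inside a single stripe, so it is served by one drive
# only: no striping speedup, but still the software RAID overhead.
print(raid0_map(0))           # (0, 0)
print(raid0_map(4096))        # (0, 4096): same drive as offset 0
print(raid0_map(64 * 1024))   # (1, 0): next stripe lands on drive 1
```

Sequential transfers span many stripes and hit both drives at once, which is why the large-block numbers go up while 4K does not.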
 
In RAID 0, reads slow down because the drives have to fetch the data pieces and then the RAID software (iRST for Intel) has to put them back together, verify they aren't corrupt (do a checksum), build the whole file piece by piece, and then release it to the OS.
Where is the checksum stored? I thought the only error checking was in truly redundant RAID levels, which are designed to continue operating in the event of a drive failure. RAID 0 has no redundancy or fault tolerance.
 

popatim

Titan
Moderator
On the HDD end, it's built into the data encoding. On the PC side, the CPU calculates it via the RAID driver.
This is why chipset/software RAID is slower than a dedicated RAID card: there's less overhead with a card because the checksum calculation is done in hardware and is much faster. With RAID 0 this checksum is very simple, or possibly non-existent these days, which is why software RAID 0 is mostly just as fast as hardware. The hard drives have done the brunt of the work to ensure they're delivering the correct data bits by applying their own error-correcting coding to the data, which they verify during the read.
 
The point of a checksum is that it is calculated when the data are written to the drive and then stored somewhere. When those same data are read back, a new checksum is calculated and then compared against the stored one.

So where would the checksum be stored? BTW, that's one checksum byte/word/dword for each sector, and these checksums would need to be nonvolatile. At the very minimum, a 1TB drive would require 2GB of storage to store one checksum byte per sector.
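To make that concrete, here is a minimal sketch of the write-then-verify cycle, with zlib.crc32 per 512-byte sector standing in for whatever a real driver might use. Purely illustrative, not any actual RAID driver's scheme:

```python
import zlib

SECTOR = 512
disk = {}       # lba -> 512-byte sector contents (stands in for the drive)
checksums = {}  # lba -> stored CRC; this is the part that must be nonvolatile

def write_sector(lba, data):
    assert len(data) == SECTOR
    disk[lba] = data
    checksums[lba] = zlib.crc32(data)       # calculated on write, then stored

def read_sector(lba):
    data = disk[lba]
    if zlib.crc32(data) != checksums[lba]:  # recomputed on read and compared
        raise IOError("checksum mismatch on sector %d" % lba)
    return data

write_sector(0, b"\x00" * SECTOR)
read_sector(0)  # passes; corrupting disk[0] first would raise instead

# The storage cost from above, with a 4-byte CRC instead of 1 byte:
sectors = 10**12 // SECTOR
print("%d sectors -> %.1f GB of CRCs for a 1TB drive" % (sectors, sectors * 4 / 1e9))
```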

Here is an interesting discussion:
https://www.enterprisestorageforum.com/storage-networking/sas-vs-sata.html

SAS drives can add CRCs to the data by using a 520-byte sector size. This allows data integrity to be verified along the entire data path from drive to HBA. The author of the article is not aware of any similar integrity checking for SATA or PCIe.
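As a rough illustration of that 520-byte layout, here is a sketch with 512 data bytes plus an 8-byte protection field in the T10 DIF style (guard CRC, application tag, reference tag). A truncated CRC-32 stands in for the CRC-16 the standard actually specifies:

```python
import struct
import zlib

def make_520(data, app_tag, ref_tag):
    """Append an 8-byte protection field to a 512-byte data block."""
    assert len(data) == 512
    guard = zlib.crc32(data) & 0xFFFF         # stand-in for the T10 CRC-16
    return data + struct.pack(">HHI", guard, app_tag, ref_tag)

def verify_520(sector, ref_tag):
    """Re-check the protection field; any hop on the path can do this."""
    data, pi = sector[:512], sector[512:]
    guard, _app, ref = struct.unpack(">HHI", pi)
    if guard != (zlib.crc32(data) & 0xFFFF) or ref != ref_tag:
        raise IOError("protection info mismatch")
    return data

sector = make_520(b"\xAB" * 512, app_tag=0, ref_tag=42)
verify_520(sector, ref_tag=42)  # drive, expander, and HBA could each re-verify
```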

The article also talks about a file system called ZFS, which adds its own checksums and stores them on the drive.
 