Question: RAID 10 proper performance?!

Status
Not open for further replies.

FreeBee101

Reputable
Jul 20, 2020
16
2
4,515
I have a RAID 10 with 6x WD RE4 HDDs. Individually they get 95 MB/s write and 113 MB/s read in GNOME Disk Utility. In a RAID setup they get between 70-110 MB/s write and 200-250 MB/s read. This is lower than it should be. https://hdd.userbenchmark.com/SpeedTest/5792/WDC-WD2503ABYX-01WERA0 I've tried a manual RAID 1+0, and mdadm's built-in RAID 10 in both the near=2 and offset layouts with various chunk sizes. Nothing's working.
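
For reference, the create commands I've been trying look roughly like this (device names are just examples):

# 6-disk mdadm RAID 10, near=2 layout, 512K chunk
mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=512 --raid-devices=6 /dev/sd[b-g]

# same array with the offset layout instead
mdadm --create /dev/md0 --level=10 --layout=o2 --chunk=512 --raid-devices=6 /dev/sd[b-g]

# manual 1+0: three RAID 1 pairs striped together
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd /dev/sde
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdf /dev/sdg
mdadm --create /dev/md0 --level=0 --chunk=512 --raid-devices=3 /dev/md1 /dev/md2 /dev/md3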

How do you get RAIDs to their full performance? Do you need to do alignment or something? Why is it getting the write performance of a single disk (or less) instead of 2-3 disks?
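
If alignment is the concern, I think checking and creating an aligned partition goes something like this (assuming /dev/sdb):

# an aligned partition starts on a 1 MiB boundary, i.e. a start sector divisible by 2048
parted /dev/sdb unit s print

# create a single partition aligned to 1 MiB
parted -s -a optimal /dev/sdb mklabel gpt mkpart primary 1MiB 100%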

Does anyone know how to set up RAIDs correctly? I've seen things where people set more detailed options, but I can't find the info. I'm not sure if it's available for this type of RAID. I have the write cache enabled on all drives.

My boot drive is an 850 Pro 256 GB that can do 540 MB/s read/write. I wanted to get the read speed as close as possible to that and the write speed at or above half of that, possibly with an offset RAID layout. Given I'm doing a 6-disk RAID 10, that should be realistic. The problem is that all RAID setups end up with nearly identical real-world performance and benchmarks of 90-120 MB/s. It's not using the disks fully. It's literally worse in real applications than if I used a single drive. I don't remember how to get RAIDs working correctly anymore.
 
Last edited:

RealBeast

Titan
Moderator
What controller are you using? If you say motherboard, you are getting about what I would expect.
 

FreeBee101

Reputable
Jul 20, 2020
16
2
4,515
It's a software RAID with mdadm. Why would that limit the speed? Is there some way to configure things to get full speed? I can sacrifice CPU power if needed. Is there a way to set it up so it doesn't affect the actual max transfer rate?
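
In case it matters, this is roughly what I'd check on the md side (array name is an example):

# confirm the layout and chunk size, and make sure no resync is still running
cat /proc/mdstat
mdadm --detail /dev/md0

# read-ahead on the md device affects sequential reads;
# the value is in 512-byte sectors, so 65536 = 32 MiB
blockdev --getra /dev/md0
blockdev --setra 65536 /dev/md0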

I was running things off of a Fedora 25 stick and in general it was getting about 3 times the speed. I'm wondering if Fedora 32 has software that doesn't work well with file transfers. No idea which piece of software, though. Fedora 25 might have been on GNOME. Fedora 32 is on MATE 1.24.

Could settings like these be adjusted to increase performance:

vm.dirty_background_ratio=10
vm.dirty_ratio=20
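
If so, I guess trying them would look something like this (values are just guesses):

# apply temporarily (lost on reboot)
sysctl -w vm.dirty_background_ratio=10
sysctl -w vm.dirty_ratio=20

# to persist, put the same two lines in /etc/sysctl.d/90-raid.conf and run:
sysctl --system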

I'm not familiar enough with what can change RAID performance.

Does the fact that I don't have swap on my SSD boot drive affect performance? Would a RAM drive help or be usable to maximize performance? I would assume RAM would be fine without a RAM drive, though. This is a home computer for games. I just need max file transfer speeds, not security or anything.

Can something like this be done with RAM, or with software settings, with or without ZFS?


I'm not sure what options exist.
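
One option I do know of is a plain tmpfs ramdisk, something like this (size and mount point are examples):

# 8 GiB RAM-backed scratch space; contents are gone on reboot
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk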

I have no problem using RAM or CPU and leaving the data vulnerable to a power outage in exchange for maximum performance. I'm just moving game files around; I can redownload them and reverify their data.

Do I need to do a full re-zeroing of the drives to get better performance?
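
If it matters, I assume the wipe would be something like this (device name is an example), though as far as I know zeroing shouldn't change a spinning disk's steady-state speed:

# zero the whole drive (WIPES ALL DATA on /dev/sdb)
dd if=/dev/zero of=/dev/sdb bs=1M status=progress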

I'm not sure what needs to be done or what performance can be gained overall from a software RAID. I'm not finding any proper underlying information to begin figuring this out correctly. It seems like complete information has vanished everywhere, or is swamped by so many personal websites that it's impossible to find. I haven't dealt with RAID in about a decade, so I don't remember the basics anymore (as in fundamental hardware details, what options exist to change performance, and why the performance happens the way it does under the hood).

I'm actually trying to get GNOME installed on my system to see if something makes a difference. But like everything in the Linux world, all the info on how to install it is incorrect...

Edit: I set vm.dirty_ratio and vm.dirty_background_ratio to 50 and 25... It's making one CPU core max out and causing hiccups periodically. I wonder if the problem is that Caja or MATE is only allowing it to use one core, restricting the number of threads or something. Maybe that is the difference between the two versions I was running.

It's running at about 44.7 MB/s and the core is maxed out.

Edit 2: I did a RAID 0 with all 6 disks and it starts at 300 MB/s but keeps dropping; it ended at around 138.4 MB/s. And the RAID to the SSD is only doing about 250 MB/s and dropping faster.
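
To rule out the page cache inflating the start of the transfer, I'm going to try a direct-I/O test, something like this (mount point is an example):

# sequential write straight to the array, bypassing the page cache
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=8192 oflag=direct status=progress

# sequential read back, also bypassing the cache
dd if=/mnt/raid/testfile of=/dev/null bs=1M iflag=direct status=progress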

I'm using Dolphin instead of the Caja file manager. I'm getting up to 300+, almost 400 MB/s at points, but it's stalling a lot (it actually says "stalled"). The first thing I found said it could be a file size limit. I wonder if they are running 32-bit over 64-bit or something; the limit is around 2-4 GB. I'm transferring a copy of War Thunder at around 33 GB. Some of this seems to be Linux related... (joke: can we blame Stallman for this?)


My drives are mounted in folders nested inside each other. I wonder if that causes problems. My main drive has a folder called Storage that the RAID is mounted in. The second drive, a backup drive, is mounted in Backup inside Storage. They can both be mounted without each other, as the folders are already there or get created. But I have no idea of any underlying logic that could affect things. I'll need to test it with a traditional /mnt/* mount. This is going from my RAID to my SSD, btw.
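
Something like this for that test (device names are examples):

mkdir -p /mnt/raid /mnt/backup
mount /dev/md0 /mnt/raid
mount /dev/sdh1 /mnt/backup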

If it wasn't for this "stalled" state, this thing would be blazing fast, at least going to my SSD. This was not happening before. I wonder if Caja is hiding the stalls and averaging the speed. Dolphin is still transferring files while it says stalled sometimes, though, so I'm not sure what is happening.

Does having drives mounted inside folders on other drives cause inode problems? I'm still learning how they work.

I installed the GNOME desktop and it's running better with a 1024K chunk and 4 disks. It's using 3-4 cores at max, though. I wonder if this is because I have no swap. Should I make swap on the RAID, or on a second RAID 0 of 2 striped disks at 500 GB, or on a ramdisk of about 8 GB, half my RAM?! Maybe that would help the CPU and transfer rates.

If I made both an in-RAM ramdisk and a tmpfs, could I get maximum performance? A tmpfs for RAM, plus a ramdisk or another tmpfs with swap on a 2-disk RAID 0? Or an 8 GB ramdisk with tmpfs and a full 6-disk RAID 0 or 10?
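
If I try the swap idea, I assume it would look something like this (device name is an example):

# swap on a 2-disk RAID 0
mkswap /dev/md4
swapon /dev/md4

# or just a swap file on the SSD instead
fallocate -l 8G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile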

I think the problem may be becoming CPU-bound. Would a ramdisk or anything else lower CPU usage overall and get full transfer speeds without hiccups?! Could it act as an enhanced cache for the file transfers and the RAID?

There is something about async also, but I can't find how to use it or where. If that drops CPU usage and increases performance, that would be nice.
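
If the async I keep seeing is the mount option, I believe it's already the default for normal mounts; in /etc/fstab it would be spelled like this (paths are examples):

# "defaults" already implies async; noatime cuts some metadata writes
/dev/md0  /mnt/raid  ext4  defaults,noatime  0  2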
 
Last edited: