Highpoint RAID5 performance poor

lifespeed

Distinguished
Jun 14, 2008
I have a HighPoint RocketRAID 3520 PCIe x4 RAID card with an Intel IOP341 I/O processor and 256MB of cache RAM, running on Vista x64 on a new Intel X48 chipset motherboard. The array is a 4-disk RAID5 built from four 1TB Seagates. I am using write-through cache, since I don't have a battery backup unit for the card.

It gets 18-23 MB/sec write performance, which seems far too slow. I had been telling myself, "yeah, it's RAID5, writes are slow, but you're just using it for media, so no big deal."

But this lack of speed isn't right! Could it really be this slow because I let it select a small stripe size instead of 64K or 128K at installation? I'm pretty sure it defaulted to a 4K block size. The RAID management software doesn't seem to report the block size, but I think I may have left it small during install, unaware of the penalty. Can the array be rebuilt with a larger block size without losing data? How would I go about doing this?

Or could I just not be feeding it data fast enough? The sources so far have been a Panasonic SW-5583 BD drive and a single external hard drive connected over motherboard SATA. Wouldn't a single hard drive on motherboard SATA be a fast enough source to rule out the array as the bottleneck?
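For reference, a single SATA drive of this era should sustain well over 60 MB/sec sequential reads, so it ought to easily saturate an array writing at 18-23 MB/sec. To take the source drive out of the equation entirely, something like this quick Python script (a rough sketch, not a real benchmark tool; the drive letter is a placeholder) writes synthetic data straight to the array and reports throughput. Size it well past the controller's 256MB cache:

```python
import os, time

TARGET = r"E:\bench.tmp"        # hypothetical path on the RAID5 volume
CHUNK = 4 * 1024 * 1024         # 4MB per write call
TOTAL = 4 * 1024 * 1024 * 1024  # 4GB total, well past the 256MB cache

buf = os.urandom(CHUNK)
start = time.time()
with open(TARGET, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += CHUNK
    f.flush()
    os.fsync(f.fileno())        # force everything out of the OS cache
elapsed = time.time() - start
print("%.1f MB/sec" % (TOTAL / (1024.0 * 1024.0) / elapsed))
os.remove(TARGET)
```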

Any suggestions?
 

pjay

Distinguished
Jan 23, 2008
Just saw lifespeed's post - I've been messing with a couple of these controllers. I have 2 x 80GB drives and 6 x 1TB drives (all Seagate SATA II) on a RocketRaid 3520 controller, Asus P5BV-C mobo, E6850 CPU, 2GB RAM.

The 80GB drives are my boot volume, mirrored (RAID 1) and the rest are in a RAID 5 config. The controller seems to work much better if the cache is left at "write-back", even though with large files (typically 3-4GB) the cache would fill up and you wouldn't think it would make much difference.

***************************
UPDATE UPDATE UPDATE

I have done a lot more monkeying with the settings, including turning write caching off on the drives and controller and resetting the block size (sector size always left at the default). The best RAID5 performance appears to come from leaving everything at the default settings, and the MOST IMPORTANT factor is that Write-Back caching be left ON at all times on the controller (the default setting). If the caching is changed to Write-Through, performance becomes very sluggish, far more so than I expected. My tests involved writing 9+ GB as a few large files (1-3GB each).

Again, I made sure I wrote enough data that the caches certainly filled up: on the OS, the drives, and the controller.

So you can disregard everything else in this posting about alternative settings. I am still curious, however, to see if anyone has different/better results.
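To illustrate how punishing synchronous writes can be, here's a rough Python analogy (an OS-level sketch, not the controller's actual firmware behavior): syncing after every write approximates write-through, while buffering and flushing once at the end approximates write-back. On RAID5 the real gap is wider still, since each small synchronous write can force a parity read-modify-write cycle.

```python
import os, time

def timed_write(path, sync_every_write, chunk=1024 * 1024, count=256):
    """Write 256MB; optionally fsync after every chunk."""
    buf = b"\0" * chunk
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(buf)
            if sync_every_write:      # "write-through": ack only after media
                os.fsync(f.fileno())
        os.fsync(f.fileno())          # "write-back": one deferred flush
    elapsed = time.time() - start
    os.remove(path)
    return (chunk * count) / (1024.0 * 1024.0) / elapsed

print("write-through-ish: %.1f MB/sec" % timed_write("wt.tmp", True))
print("write-back-ish:    %.1f MB/sec" % timed_write("wb.tmp", False))
```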

***************************

I left the sector size at the default of 512 bytes and set the block size to the maximum setting; that appears to give the best large-file write performance. Under Windows Server 2003, I had to convert the 5TB "drive" to GPT to get a volume larger than 2TB. I am now getting a bit over 40MB/sec, and the block size seems to be the biggest factor in the speed increase over what lifespeed is getting.
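The 2TB ceiling comes from MBR's 32-bit sector addressing, which is why the 5TB array has to be GPT. A quick back-of-the-envelope check:

```python
# MBR stores partition extents as 32-bit counts of 512-byte sectors,
# so the largest addressable volume is 2^32 * 512 bytes = 2 TiB.
print("%.1f TiB MBR limit" % (2**32 * 512 / float(2**40)))   # -> 2.0

# The array here: 6 x 1TB in RAID5 loses one drive's worth to parity.
drives, size_tb = 6, 1.0
print("%.1f TB usable" % ((drives - 1) * size_tb))           # -> 5.0
```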

I have also set up a RocketRAID 3522 on a Mac Pro with 6 x 1TB drives in an external enclosure, and it's working pretty well.

Added notes - I will be doing further testing and posting more here if I can. I am going to try disabling the on-drive caching and then tinker again with the write-back/write-through settings on the controller.

(A client is paying for all of the above stuff.)

Feel free to post if anyone wants to further compare notes here.

-PJ
 

lifespeed

Distinguished
Jun 14, 2008
I figured out what was causing the slow performance: I had the Vista x64 boot partition on the RAID5 volume. Now that I have a separate OS drive, speeds are up to 330/290 MB/sec read/write. Performance was degraded by the write heads bouncing back and forth between the OS partition swap file and the data partition.

I am using the Seagate 1TB 'NS' series enterprise drives, which are firmware upgradeable. There was a compatibility issue between the SN04 firmware and the Intel IOP341 I/O processor on the RAID card. After upgrading the drives to the AN05 firmware, write speeds improved by 80 MB/sec to the numbers listed above.
 

Hovaucf

Distinguished
May 6, 2008
Yes, always leave the cache on, especially for RAID5 and RAID6, even without a battery. If the data you're transferring isn't mission-critical, there's no reason to have it off without a BBU.

Also, if you're mostly transferring large files, keep your chunk/stripe size around 256KB instead of the 64KB (or whatever) some vendors use as the default; see the sketch below.
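To put rough numbers on that: on RAID5 a full stripe carries (drives - 1) x chunk size of data, and anything smaller than a full stripe forces a parity read-modify-write. A back-of-the-envelope sketch (assumes perfectly aligned full-stripe writes):

```python
# How many full-stripe operations a large sequential write needs on a
# 4-drive RAID5 at different chunk (stripe-unit) sizes. Bigger chunks
# mean fewer, larger stripe operations for the same file.
def full_stripes(file_mb, drives, chunk_kb):
    data_per_stripe_kb = (drives - 1) * chunk_kb   # parity takes one chunk
    return (file_mb * 1024) // data_per_stripe_kb

for chunk_kb in (64, 256):
    ops = full_stripes(file_mb=4096, drives=4, chunk_kb=chunk_kb)
    print("%dKB chunks: %d full-stripe writes for a 4GB file" % (chunk_kb, ops))
```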