Server Raid 5 not fast enough

justin1989

Distinguished
Mar 2, 2011
24
0
18,510
Hello everyone,

I am doing some work for a company and I'd like to get everyone's opinion from Tom's; most of the advice given here is great. I have two Server 2008 boxes, both with RAID 5 across 4 drives. One system is the SQL box; the other is simply a file storage system. The file storage system is, quite frankly, getting its ass kicked on hard drive performance.

To put it simply, there are only around 30 users in this company. However, they are working with CAD files anywhere from 10-500MB in size: accessing them constantly, updating them, creating new ones, etc. That is on top of the usual files used by the other users. The entire network is gigabit, with fiber lines between the 2 main buildings. The NICs on the server aren't maxed out, and neither is CPU or memory usage. But looking at the hardware monitoring for the hard drives, the queue lengths are high. Simply put, the hard drives aren't keeping up.

This is where I need a better solution. There are a few things I can think of. One is simply adding another box to spread out the file server load. Two is enterprise storage with much higher RPM drives, and/or enterprise SSDs. Third is a nested RAID 5, if I am correct also known as RAID 50.
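A rough back-of-the-envelope sketch of that mismatch, with assumed numbers (the burst size, acceptable open time, and array ceiling below are all illustrative, not measurements from this server):

```python
# Back-of-the-envelope sketch -- every workload number here is an
# assumption for illustration, not a measurement from this setup.

USERS_IN_BURST = 5       # assumed: 5 of the 30 users open files at once
FILE_MB = 500            # worst-case CAD file size from the thread
TARGET_OPEN_TIME_S = 10  # assumed acceptable wait to open a file

# Throughput the array must sustain during such a burst
burst_demand_mb_s = USERS_IN_BURST * FILE_MB / TARGET_OPEN_TIME_S

# Assumed ceiling for a 4-drive RAID 5 of 7200 RPM SATA disks doing
# mostly sequential reads; concurrent access from many users turns
# the workload random and pushes real throughput well below this.
ARRAY_CEILING_MB_S = 70

print(f"burst demand:  {burst_demand_mb_s:.0f} MB/s")   # 250 MB/s
print(f"array ceiling: {ARRAY_CEILING_MB_S} MB/s")
```

Even with generous assumptions the demand during a burst is several times what a 4-spindle RAID 5 can deliver, which is consistent with the high queue lengths.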

Advice is always welcome :)
 

fancarolina

Distinguished
Jan 3, 2009
234
1
18,715
Are your current cards hardware or software RAID? This will make a huge difference, as will 10-15K RPM drives.

If your cards do not list a separate coprocessor or their own dedicated memory, they are probably software RAID.

I currently have a High Point 2640x1 soft RAID card running with 4 Samsung Green 1.5TB drives and get a max read speed of 70 MB/s out of my RAID 5 array under Server 2008. What are your speeds, for reference?
 

justin1989

Yes, you are correct, 15K drives are not being used. I will also have to talk to my boss, who is primarily the one asking the question and having me be his research guru, I suppose you could say. Unfortunately this is simply my lack of experience, even with my two degrees :( They are hardware RAIDs, however I am unsure of the specifics as far as their own co-processors or dedicated memory; that is something I will also ask him about.
 

fancarolina

If you are correct about the controllers, then 15K RPM SAS drives would be a nice option, or SSDs, depending upon cost and capacity considerations. Those are the only real options I can think of to get higher throughput out of a RAID 5 array. The only other option is to split the data up into more arrays of different types to get both your protection and speed. For example, I'm thinking two 4-drive 0+1 arrays. This would offer up to two-drive failure protection in each array while maintaining RAID 0 speed as far as performance is concerned.
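The reason splitting into mirrored/striped arrays helps is the RAID 5 write penalty: every random write costs four disk operations (read data, read parity, write data, write parity), versus two for a mirror. A minimal sketch of the standard arithmetic, with assumed per-drive IOPS figures:

```python
def effective_write_iops(drives, per_drive_iops, write_penalty):
    """Raw IOPS of all spindles divided by the RAID write penalty."""
    return drives * per_drive_iops / write_penalty

# Standard write penalties: RAID 0 = 1, RAID 1/10/0+1 = 2, RAID 5 = 4.
PER_DRIVE_IOPS_7200 = 75   # assumed typical for a 7200 RPM SATA drive
PER_DRIVE_IOPS_15K = 175   # assumed typical for a 15K RPM SAS drive

raid5_sata = effective_write_iops(4, PER_DRIVE_IOPS_7200, 4)
raid10_sata = effective_write_iops(4, PER_DRIVE_IOPS_7200, 2)
raid5_sas = effective_write_iops(4, PER_DRIVE_IOPS_15K, 4)

print(raid5_sata, raid10_sata, raid5_sas)  # 75.0 150.0 175.0
```

RAID 0+1 and RAID 10 share the mirror's penalty of 2, which is why the split buys random-write speed without giving up redundancy.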
 

justin1989

I will definitely double check. I was looking into Seagate Savvio 10K drives, unless there is a better option. I looked into the ExpressSAS R608 RAID card. The Seagate Cheetahs may also be an option; however, the price seemed to drop for the same size, cache, etc.
 

curtis_87

Distinguished
Sep 16, 2009
98
0
18,660
I think it's overkill to do this, but there is one other suggestion; I don't know what anyone else thinks. If the total space taken by the files is no more than, say, a few GB, and the data they are accessing is mainly read ops, why not install more RAM and add a RAM drive? You could set the RAM drive to copy any changes in data every XX minutes to the hard drive. The only thing you should then be limited by is the network.
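The copy-back step that idea depends on could be sketched like this (a purely hypothetical helper, assuming the RAM drive is mounted as an ordinary directory; the paths and interval are illustrative):

```python
import os
import shutil
import time

def sync_changed_files(src, dst, last_sync):
    """Copy files under src modified after last_sync into dst,
    preserving the relative directory layout."""
    for root, _dirs, files in os.walk(src):
        for name in files:
            src_path = os.path.join(root, name)
            if os.path.getmtime(src_path) > last_sync:
                rel = os.path.relpath(src_path, src)
                dst_path = os.path.join(dst, rel)
                parent = os.path.dirname(dst_path)
                if parent:
                    os.makedirs(parent, exist_ok=True)
                shutil.copy2(src_path, dst_path)

def run_sync_loop(ram_drive, disk_backup, interval_s=15 * 60):
    """Flush RAM-drive changes to disk every interval_s seconds."""
    last_sync = 0.0
    while True:
        sync_started = time.time()
        sync_changed_files(ram_drive, disk_backup, last_sync)
        last_sync = sync_started
        time.sleep(interval_s)
```

The obvious catch is durability: anything written in the last XX minutes is lost on a power failure, which matters a lot for CAD work in progress.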
 

justin1989




Yes, this is understandable and was to be expected. However, currently the RAID 5 that the company has is a much bigger bottleneck than the gigabit LAN will be. There are two NICs which are load balanced, in theory giving 2 gigabit throughput to the switch; however, it is still bottlenecked by the other hookups.

The total space of all the files is around 800 GB+, although a single file is most likely never above 1 GB. I have never heard of this solution, though, and to be honest I would be lost. Memory usage is currently not an issue, but as I stated, all advice is welcome and I will consider it.
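Quick arithmetic on why the RAM drive is a non-starter at this scale (the dataset size is from this thread; the RAM figure is an assumed, generous value for a server of that era):

```python
DATASET_GB = 800            # total file store, per the thread
ASSUMED_SERVER_RAM_GB = 32  # assumed; generous for a 2008-era box

print(f"dataset fits in RAM: {DATASET_GB <= ASSUMED_SERVER_RAM_GB}")

# The dual load-balanced gigabit NICs cap the wire at roughly:
wire_ceiling_mb_s = 2 * 1000 / 8   # 2 Gbit/s ~= 250 MB/s
print(f"network ceiling: {wire_ceiling_mb_s:.0f} MB/s")
```

So even a much bigger RAM cache could only hold a small slice of the working set, while the 250 MB/s wire ceiling is still well above what the array currently delivers.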
 

justin1989

To my understanding, the queue length is due to the drives not being able to keep up with the number of files being accessed. A lack of read/write speed increases the queue because the drives physically can't keep up. Again, though, everything I am forwarding is not my own findings; I am simply passing along what I was told by my boss, who does most of the work for this company.
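That symptom, a queue that keeps growing because service cannot keep up, falls out of basic queueing arithmetic: once the request rate times the per-request service time exceeds the number of spindles, utilization passes 1.0 and the queue grows without bound. A sketch with assumed figures:

```python
def spindle_utilization(requests_per_s, service_time_s, spindles):
    """Fraction of time the spindles are busy.  Above roughly 0.7
    the request queue grows noticeably; above 1.0 it grows without
    bound -- the 'drives physically not keeping up' symptom."""
    return requests_per_s * service_time_s / spindles

# Assumed figures: ~13 ms per random I/O on a 7200 RPM drive,
# 4 spindles, 350 random requests/s from 30 CAD users.
u = spindle_utilization(350, 0.013, 4)
print(f"utilization: {u:.2f}")  # 1.14 -> saturated, queue keeps growing
```

Faster spindles (shorter service time) or more of them (RAID 10, more arrays) both pull that number back under 1.0, which is exactly what the suggestions above amount to.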
 

You need to find out what RAID controller, including options, is installed and what hard disks are used. 7200 RPM drives with no write caching are definitely a bottleneck for two load-balanced gigabit LAN controllers.