I know, I know, the age-old dilemma....
So I've used both and frankly I like them both. I've also expressed my opinions and recommendations about both in other posts. To be completely honest, I'm having a hard time committing to one over the other, and I don't know if I should. I build a lot of custom workstations for a certain niche market, all of which have a RAID-5 array consisting of 3-5 2TB (or 3TB) disks. This logical volume is purely used for storage. No DBs or applications installed.
So, we all know the arguments for both sides:
Chipset
pro - cost, obviously
con - uses host CPU (I don't buy this one for a second as I barely see any CPU usage...)
pro/con - uses system RAM for write caching. I call this one a pro: I've literally performed a file transfer of an 80GB zip file from an SM951 to a RAID-5 of 4 x 2TB HDDs and it sustained (again, literally) 2GB/s WRITING to the array. Now....it did get up to 10GB of RAM usage, but I put 64GB in these systems, which covers things like this. That would not be possible with a RAID card; I'd expect that transfer to level off around 500-550MB/s after a few seconds. (Rough math on the burst behavior is sketched right after this list.)
con - if you lose your mobo, you lose your array. Granted, if you lose a RAID card you lose your array too, so.....
con - Intel's chipset RAID can be "flaky." I haven't seen this, so I'm interested to hear people's stories.
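
Since the RAM-cache point above is really about how long the cache can outrun the disks, here's a back-of-the-envelope sketch. All the numbers (2GB/s source read, ~525MB/s sustained array write, ~10GB of dirty cache) are assumptions based on my transfer, not benchmarks, and it ignores whatever the OS flushing policy is actually doing:

# Rough model: the OS write-back cache absorbs a burst at source speed,
# then the array becomes the bottleneck. Figures are assumptions from
# the 80GB transfer described above, not measured benchmarks.

transfer_gb = 80.0    # size of the zip file
source_rate = 2.0     # GB/s read from the SM951 (assumed)
array_rate  = 0.525   # GB/s sustained write for 4 x 2TB RAID-5 (assumed)
cache_gb    = 10.0    # host RAM the OS will hold as dirty pages (assumed)

# The cache fills at (incoming - draining) while there's room.
burst_secs = cache_gb / (source_rate - array_rate)   # ~6.8s of "2GB/s"
burst_gb   = source_rate * burst_secs                # data accepted during the burst

# The rest of the file goes at array speed.
total_secs = burst_secs + (transfer_gb - burst_gb) / array_rate
print(f"burst lasts ~{burst_secs:.1f}s, whole 80GB averages "
      f"~{transfer_gb / total_secs:.2f} GB/s")

Swap cache_gb for 1.0 (a typical card cache) and the burst is gone in well under a second, which is why the cache-size con at the bottom of the dedicated-card list below matters for this kind of workload.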
Dedicated Card - note that I'm an LSI guy so that may be reflected here
pro - takes load off host resources (CPU & mem)
pro - allegedly better performance than chipset RAID. In some ways I agree, but I'd like everyone's real-life feedback on this one.
pro - can forklift your array to another machine
pro - more configuration flexibility. I agree with this for sure
pro - more drives supported (this is the one thing where I can draw a definitive line in favor of going dedicated - rough scaling numbers after this list)
pro - battery-backed cache (BBU) to protect in-flight writes
con - cost
con - even more cost for the battery backup (there's always a UPS as a sort-of alternative)
con - eats a PCIe slot & lanes
con - typically only 1GB of cache (unless you wanna get real pricey with Areca)
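
And here's the rough scaling math behind the "more drives" pro and my 500-550MB/s guess above. Full-stripe sequential writes in RAID-5 get you roughly (n - 1) disks' worth of throughput, since one disk's worth goes to parity; the per-disk figure is an assumption for 7200rpm 2TB drives, not something I measured:

# Rough RAID-5 sustained sequential write estimate: (n - 1) * per-disk rate
# for full-stripe writes (ignores controller overhead and small random I/O,
# where the read-modify-write penalty changes the picture entirely).

per_disk_mb_s = 175   # assumed sequential write for a 2TB 7200rpm drive

for n in (3, 4, 5, 8):
    print(f"{n} drives: ~{(n - 1) * per_disk_mb_s} MB/s sequential write")

# 4 drives lands around 525 MB/s, which is where my 500-550MB/s
# level-off estimate comes from.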
So give it to me, gurus. I'm beginning to feel like this is one of those things that comes down to each builder's preference when it comes to arrays of 4-5 drives max, and that there's no real "right" answer. Typically I can find the one thing that really seals the deal for one side or the other with stuff like this, but I'm having a heck of a time with this one.