Intel chipset RAID vs dedicated RAID controller

marko55

I know, I know, the age-old dilemma...

So I've used both and frankly I like them both. I've also expressed my opinions and recommendations in other posts about both. To be completely honest I'm having a hard time committing to one over the other, and frankly don't know if I should. I build a lot of custom workstations for a certain niche market, all of which have a RAID-5 array consisting of 3-5 2TB (or 3TB) disks. This logical volume is purely used for storage. No DBs or applications installed.

So, we all know the arguments for both sides:

Chipset
pro - cost, obviously
con - uses host CPU (I don't buy this one for a second as I barely see any CPU usage...)
pro/con - uses RAM for caching. I call this one a pro, as I've literally performed a file transfer of an 80GB zip file from an SM951 to a RAID-5 of 4 x 2TB HDDs and it sustained (again, literally) 2GB/s transfer speed WRITING to the array. Now... it did get up to 10GB of RAM usage, though I put 64GB in these systems, which covers things like this. This would not be possible with a RAID card; there I'd expect to see that transfer level off around 500-550MB/s after a few seconds.
con - if you lose your mobo, you lose your array. Granted, if you lose a RAID card you lose your array too, so...
con - Intel's chipset RAID can be "flaky." I haven't seen this, so I'm interested to hear people's stories.

Dedicated Card - note that I'm an LSI guy so that may be reflected here
pro - takes load off host resources (CPU & mem)
pro - allegedly better performance than chipset RAID. In some ways I agree but would like everyone's real-life feedback to this one.
pro - can forklift your array to another machine
pro - more configuration flexibility. I agree with this for sure
pro - more drives supported (this is the one thing I can draw a definitive line on going dedicated)
pro - battery backups
con - cost
con - even more cost for battery backup (there's always UPS as a sorta alternative)
con - eats a PCIe slot & lanes
con - only 1GB of cache (unless you wanna get real pricey with Areca)

So give it to me gurus. I'm beginning to feel like this is one of those things that varies by each builder's preference when it comes to arrays with 4-5 drives max, and that there's no real "right" answer. Typically I can find that one thing that really seals the deal for one side or the other with things like this but I'm having a heck of a time with this one.
 
Well, for the most part you have it down right! Except for one thing. I mainly deal with Dell servers, which means I deal with Dell-branded PERC cards, which are really LSIs. Let's say you lose your RAID card: you can toss another LSI card in there, it'll pick up the foreign config, and you'll have your RAID back just like that, pretty much. This should work with most high-end RAID cards as well. I haven't tried this with anything but PERC cards, since they're all I have access to.

The great thing about a RAID card, if it's for storage, is that moving it to a new system is a snap, that's for sure. I use a Dell SAS HBA 5 for my drives. I have 2 x 2TB in a RAID 0 (plus a 4TB backup drive that I back up to whenever I've updated a lot), and the card also has my other 2TB that's on its own. I do plan on getting a better card when I have the money, because I need to expand my storage options. The board I use for my server is an embedded board with a dual-core Celeron, 8GB of RAM, and an SSD, and I can't tell the difference in everyday usage between that and my 8-core OC'ed AMD.

If you're going to actively use the PC while it has a RAID 5 or 6 on it, yeah, it can bog things down. RAID 0, 1, 10, or 01 doesn't matter, because there's little to no overhead like there is with RAID 5 and 6, which have to compute parity on every write.
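
If you've never looked at where that overhead comes from, the parity RAID 5 keeps is just an XOR across the data blocks in each stripe, so every write means recomputing and rewriting a parity block too. A toy Python sketch, purely illustrative and nothing like how a real controller actually lays data out:

# Toy RAID-5 parity illustration only -- real controllers do this in firmware,
# with rotating parity and much larger blocks.
def parity(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"  # data blocks on three member drives
p = parity(d1, d2, d3)                  # parity block stored on the fourth drive
# Any single lost block can be rebuilt by XOR-ing the parity with the survivors:
assert parity(p, d2, d3) == d1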

In MY opinion, it depends on the situation. If it's mission-critical data on a RAID 5/6 with many, many drives, I say RAID card. If it's just a quick RAID 1 or RAID 0, etc., to boost speed or get some extra protection, just use onboard.
 
Thanks for the feedback, drtweak. So I'm taking this through some real-world testing as we speak, and so far here's what I've got:

I just built a workstation with a 5960X and 4 x 3TB Toshiba 7200RPM SATA drives.

RAID-5 on the Intel chipset:
- Initialization time: about 55 hours (OUCH). The last one I built with 3 x 2TBs took about 40. One before that with 4 x 2TBs took about 48.
- Sequential read/write, about what you'd expect, around 550MB/s for both once cache is maxed.
- I think it's using read-ahead by default, based on behavior. Unfortunately there's no configurability for this (or other things, but we knew that)

86GB zip file transfer testing (to/from a 950 Pro)
- Read/Write: around 550MB/s
- Memory spiked about 8GB on write, as expected, for caching
- CPU spiked two cores to between 12-17% on write. A couple of others sat around 2-3%, but no telling if that was for the RAID
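
If anyone wants to roughly reproduce this kind of transfer test, here's a minimal Python sketch of how you could time a big copy and peek at RAM/CPU around it. The paths are placeholders and psutil is a separate install, so treat it as a starting point rather than exactly what I ran:

# Rough sketch only -- not the exact procedure used above. Paths are hypothetical
# and psutil is a third-party package (pip install psutil).
import os
import shutil
import time

import psutil

SRC = r"C:\bench\big_archive.zip"   # hypothetical file sitting on the NVMe SSD
DST = r"E:\bench\big_archive.zip"   # hypothetical path on the RAID-5 volume

size_bytes = os.path.getsize(SRC)
ram_before = psutil.virtual_memory().used
psutil.cpu_percent(percpu=True)     # prime the counter so the next reading covers the copy

start = time.perf_counter()
shutil.copyfile(SRC, DST)           # note: OS write-back caching can flatter this number
elapsed = time.perf_counter() - start

print(f"Average write speed: {size_bytes / elapsed / 1e6:.0f} MB/s")
print(f"RAM delta:           {(psutil.virtual_memory().used - ram_before) / 2**30:.1f} GiB")
print(f"Per-core CPU use:    {psutil.cpu_percent(percpu=True)} %")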

Crystal Disk bench:
1GB, QD32
Read: 551, Write: 412
4K Q32T1: 6.17 (1500 IOPS), 1.777 (441 IOPS)
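
(For anyone wondering how the MB/s and IOPS columns relate, they're just two views of the same 4K result. A quick sanity check in Python, using the figures above:)

# MB/s and IOPS for a fixed block size are interchangeable: MB/s = IOPS * block size.
BLOCK_BYTES = 4 * 1024                   # CrystalDiskMark's 4K test size
for mb_per_s in (6.17, 1.777):           # the read/write numbers reported above
    iops = mb_per_s * 1_000_000 / BLOCK_BYTES
    print(f"{mb_per_s:>6} MB/s at 4K -> ~{iops:.0f} IOPS")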

Testing a rebuild.
Started 50 minutes ago, still at 0%...
Tested the zip file transfer to the array during the rebuild: write bogged all the way down to 7MB/s (OUCH)

Now, at the same time, in my home workstation on an LSI 9266-8i, I threw in 4 x 2TB Toshiba 7200s. I know it's not exactly apples to apples since they're not 3TB-ers, but see below.

- Initialization time (not background initialization): about 4 hours (WIN, big win)
- Sequential read/write, around 575MB/s for both once cache is maxed.
- Note that on both arrays I'm using a 64K strip size, with write-back and disk cache enabled. On the LSI I also enabled read-ahead

86GB zip file transfer testing (to/from a 950 Pro)
- Read/Write: around 550-575MB/s

Crystal Disk bench:
1GB, QD32
Read: 590, Write: 605
4K Q32T1: 23 (5700 IOPS), 27 (4192 IOPS) - Another win, and an important one for me

Testing a rebuild.
- Started 60 minutes ago, already 25% complete and says just over 2 hours remaining (though this can change). If I had to guess it will be around 4 hours.
- Tested the zip file transfer to the array again during the rebuild: sustained transfer only got down to 380MB/s about halfway through, then only dropped as low as 360. Again, HUGE win.

This is what I figured may happen. During normal operating conditions they'll perform about the same (for sequential performance anyway). However, you do need to have some memory & a little CPU available if using the on-board. That being said, if you need IOPS, and that depends on your applications, you want a dedicated controller.

So it especially comes down to contingency: if a drive fails, how quickly can you recover? I include a spare drive with all my workstations so a rebuild can kick off as soon as the user notices a failed drive. As we know with RAID 5, if a second drive dies during that rebuild, you're done, so rebuild time can be important. I know a guy who has 3 x 2TB 7200s in a RAID-5 on the Intel chipset and had to rebuild once. It took over 24 hours.
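
Just to put a rough number on that risk, here's a quick back-of-the-envelope Python snippet. The 3% annualized failure rate is purely an assumed figure (not measured data), and it ignores things like correlated failures and read errors, but it shows why a 4-hour rebuild window beats a 55-hour one:

# Back-of-the-envelope only: every number here is an assumption, not a measurement.
AFR = 0.03                 # assumed 3% annualized failure rate per drive
SURVIVORS = 3              # remaining members of a 4-drive RAID-5 during the rebuild
HOURS_PER_YEAR = 365 * 24

for rebuild_hours in (4, 8, 24, 55):
    p_single = AFR * rebuild_hours / HOURS_PER_YEAR   # one given drive failing in the window
    p_any = 1 - (1 - p_single) ** SURVIVORS           # any surviving drive failing
    print(f"{rebuild_hours:>2}h rebuild -> ~{p_any:.4%} chance of a second failure")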

I have another 9266-8i on order and will be putting it in this workstation to drive the 4 x 3TB drives and I'll be performing the same tests as I have above so I have an apples-to-apples with the same drives. This should be interesting...

At this point I think I've found my answer and most builds will be going out with dedicated cards.
 
Nice work there! Yeah, I wasn't sure what the performance on them would be, but now I know! Call me old-fashioned, but I'm just an old-school, give-me-a-RAID-card kinda guy. I get about 200MB/s R/W on my RAID 0 of 2TB Reds from my Samsung SSD. The SSD is on SATA 6Gbps and the Dell RAID card is SATA 3Gbps. I'm happy with it for my needs; it's just storage, for the most part. Sometimes I stream my videos to my phone or work PC, but other than that there isn't any real reading and writing going on to those drives.

I have a client that does CCTV, and they order big old servers with Dell MD Vaults maxed out with 12 x 6TB drives, and we set them up as RAID 6 for them. (We never used to set up the RAID or anything for these guys, until they found out their old IT guy was setting them up as RAID 0 and a client of theirs, a police station might I add, had a drive fail RIGHT when they needed some footage, lol.) But yeah, they get insane read and write out of a RAID 6 with those, for 7.2K RPM drives.

But yeah, again, this was some good info.
 
Yeah I'm interested to see what the LSI card does apples-to-apples.

Teaser: I'm awaiting approval on a proposal I have out to a client for a storage server & ground-up LAN rebuild with 24 x 6TB 7200RPM Hitachis in a RAID-60 (driven by a 9361-4i & an Intel 12Gb/s SAS expander), plus 10Gbps LAN (lots o' port channels) to workstations that will be moving data between their local PCIe SSDs and the server. I'll probably throw the benchmarks (synthetic and real) up here once complete.
 



This is my face when I read that.



I have clients who order Dell MD Vaults for their security system storage, so they have space like that, well, more like 12 x 6TB in RAID 60, but I would love to test that out. I don't get to fire the server all the way up to test it, though (well, now that I have a portable version of Windows 10, I might just boot off of that and see). I have seen systems with 24 SSDs in RAID 5 put out 5GB/s read and write.

But yes PLEASE report back your findings!
 
So I got the 4 x 3TB drives connected to the LSI 9266-8i. Here's the apples-to-apples vs. the Intel chipset RAID on an ASRock X99 OC Formula board (which, by the way, is going back due to crazy USB 3.0 issues...).

After finishing writing this up, I wanted to come back to the beginning (here...) and precede my upcoming remarks with a note: this is based on RAID-5. I did not do full like-for-like comparisons with RAID 10 or even RAID 0. I have compared RAID-0 with SSDs on an LSI card vs. Intel's chipset, and from what I saw, the synthetic benchmark data actually "appeared" to favor the Intel chipset. I did build a RAID-10 on the Intel chipset with these same 4 x 3TB drives, but oddly I was only getting about 350MB/s read and write speeds, when I would have expected much more read performance (again, CrystalDiskMark synthetic benching). I didn't troubleshoot that, though, nor did I fully initialize that array.

So, without further ado:

RAID-5 Initialization:
Initialization time using Intel chipset: about 55 hours
Initialization time on LSI card: 5.5 hours (WIN!)

Sequential read/write performance copying an 82GB zip file. No clear winner here.
Intel RAID controller (copied from a Samsung 950 Pro PCIe SSD): around 500-550MB/s at the end of the transfer
LSI card (copied from a RAID-0 of 4 x 128GB 850 EVOs on the LSI card): about the same. It ripped through the first 10GB in 10 seconds, then SLOWLY worked its way down, finally finishing off around 500MB/s at the end.

It's worth noting that the Intel chipset will give you more cache, simply because it appears it will use up to 10GB of RAM for caching, which can be nice. You wanna make sure that data can get out of RAM and onto disk at a reasonable rate though... at least if you're doing a lot of writes. All my rigs have UPSs, which gives at least a little peace of mind.

I have now actually been able to notice the CPU (a 5960X in this case) being utilized while moving large files to/from the Intel chipset-controlled RAID array. The CPU spiked two cores to between 12-17% on write, and a couple of others sat around 2-3%, but no telling if that was for the RAID. So not huge, but it's there.

Crystal Disk benches:

Intel Chipset RAID: 1GB, QD32
Read: 551, Write: 412
4K Q32T1: 6.17 (1500 IOPS), 1.777 (441 IOPS)

LSI: 4GB QD32 (to rule out any caching anywhere).
Read: 548, Write 546 (WIN)
4K Q32T1: 6.46 (1576 IOPS), 8.85 (2161 IOPS) (WIN)

It appears that benchmark tools (or at least Crystal) can't utilize the RAM cache on the Intel chipset RAID, which is interesting. This makes me worry about other apps that may have the same issue. So I turned off cached I/O (disabling the controller cache) on the LSI when testing, and also turned off read-ahead to keep it fair. I still had to bench using 4GB on the LSI because doing it at 1GB was BLOWING the Intel away. The 1GB results are below (again, cached I/O and read-ahead OFF; disk cache is enabled, just like on the Intel controller).

LSI: 1GB QD32
Read: 1529, Write 655 (Another HUGE win)
4K Q32T1: 31.6 (7600 IOPS), 18.6 (4550 IOPS) (WIN - not even close)

So yeah, for the majority of writes, especially in a business-type scenario, LSI is going to just destroy Intel here...

Testing some rebuilds:

Intel chipset RAID:
- Rebuild took about 8 hours
- Tested the 82GB zip file transfer TO the array during the rebuild: write bogged all the way down to 2MB/s (OUCH) after about 10 minutes, with only a few GBs copied at that point, so I finally cancelled it as it was just not going to finish

LSI: (winner, big time)
- Rebuild took a little under 6 hours
- Transferring the 82GB zip file to the array during the rebuild STILL ran at around 500MB/s and finished NO problem. The file was also coming from the RAID-0 of 4 x 120GB 850 EVO SSDs on the same controller, which puts additional load on the controller for the read. Pretty impressive.

So there ya have it.
1) The dedicated controller smokes Intel on writes, especially your smaller files and transfers
2) Overall performance is absolutely better with a dedicated card, in literally every aspect except huge sequential transfers, where it's about the same
3) The load on the host system did become apparent using the Intel chipset RAID, albeit minimal (CPU anyway). Granted, I had 64GB of RAM so RAM caching wasn't an issue, but not all machines are going to have that much to spare.
4) In the event of a rebuild, the Intel-driven RAID can become unusable! Not good, especially in a professional workstation where time = money.

Where does the chipset RAID win? Cost. So basically, if you can afford it, get a card. And yes, this is what most people have been saying all along! ;-) I just had to see the real numbers for myself & thought it might help some others as well.
 
Dude, I am SOO bookmarking this thread just to post it to people who ask for real-world performance, haha. I don't have anything "new" to play with. I have 6-7 year old Dell PERC 6 cards I can use, but nothing newer to really play with. Thanks for the info, man!
 
Hi guys, great thread! Maybe you can help me with a recommendation for an entry-level RAID 5 card up to $200 (video editing purposes, redundancy and read performance). I've been searching for a couple of days but didn't find anything. Besides, as I don't have much experience with RAID cards, will a PCIe 2.0 x2 card work in one of my X99 SLI PCIe 3.0 x16 slots? Also, are these LSI cards able to put some of the HDDs in RAID 5, for example, while the others are in passthrough/non-RAID or a different RAID level, 1 or 10 for example? Many thanks, and sorry for the noob questions
and A HAPPY NEW YEAR!
 
Hi Lutz. Happy new year!

First, yes, a PCIe 2.0 RAID card will work just fine in one of your PCIe 3.0 slots, so no worries there. More importantly, though, you need to make sure enough PCIe lanes are available to that slot to give your RAID array enough bandwidth to the bus, and that depends on your mobo model and what else you have in the other slots (video cards, etc.). If you provide those two bits of info here, I can let you know what you're up against.
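
To give you an idea of the math involved, here's a tiny Python sketch with assumed round numbers (roughly 500MB/s usable per PCIe 2.0 lane and ~180MB/s sequential per 7200RPM drive; your real figures will differ):

# Rough lane math with assumed round numbers -- adjust for your actual hardware.
PCIE2_MBPS_PER_LANE = 500      # approximate usable throughput per PCIe 2.0 lane
lanes_to_slot = 4              # whatever your board actually wires to that slot
drives = 4
seq_mbps_per_drive = 180       # assumed 7200RPM SATA sequential rate

slot_bw = PCIE2_MBPS_PER_LANE * lanes_to_slot
array_bw = drives * seq_mbps_per_drive
verdict = "the slot has headroom" if slot_bw > array_bw else "the slot is the bottleneck"
print(f"Slot: ~{slot_bw} MB/s vs. array streaming: ~{array_bw} MB/s -> {verdict}")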

In regards to "passthrough/non-RAID" for some drives, this is technically called "JBOD" when you're researching card specs. The LSI cards don't officially support JBOD, BUT you can use drives individually by creating a RAID-0 array on the card with just one drive, and you've basically got the same thing. The difference between doing this and true JBOD (passthrough) is that the RAID card will write some RAID metadata to the single drive, so you can't pull it off the card and pop it into a computer on a native SATA port; the OS won't be able to read the drive. That should be taken into consideration.

Other quality RAID cards in this space are the Adaptec and Areca cards. Both very nice cards that I'd install any day. I only look to move away from LSI myself if I'm building a RAID array that I think will benefit from a LOT of on-card cache, in which case I'm looking at the high end Areca cards that you can put an 8GB RAM stick on for cache. Again, that's getting way up there in need and certainly cost (over $1k for the card & cache). The Adaptec and Areca cards officially support JBOD too though, if you're not comfortable with the "workaround" on the LSI cards, and all three manufacturers have very similar cards in the same price ranges.

As far as pricing goes, I typically buy my LSI cards for workstation builds from the Chinese resellers on eBay, like here (http://www.ebay.com/itm/New-LSI-MegaRAID-9266-8i-8-port-1GB-SATA-SAS-Controller-Card-LSI00295-/121816149169?hash=item1c5cceecb1:g:3fUAAOSwsFpWSWJ~). I've done this several times, never had a single issue with a card, and they're half the cost of buying through normal retail channels (Newegg, etc.). For my part, I keep a cold spare on hand in case one ever dies in one of my workstations. The drawback is that you've got no warranty and no tech support on these cards, so if you don't fully know what you're doing, or aren't comfortable coming to places like here for configuration support, well.... That being said, if I'm doing a server build with a higher-end card, then I'll build in the cost for a supported card and buy from an authorized reseller to get the 3-year warranty and support.

If you wanna save even more and can give up 512MB of cache, the 9260 is the same as the 9266, just with half the cache, which many home workstation users won't feel anyway. You can get those used on eBay for <$150. As far as the "8i" versus "4i" versions go, for the sake of simplicity, look at that as the number of hard drives you can connect directly to the card (using fanout cables). So with the 8i cards, you'll be able to connect 8 hard drives directly to the card.

Either way, you're not getting into one of these good cards for $200 outside of eBay.

Just to throw a wrench in for ya: Storage Spaces in Win8 and Win10 is pretty interesting for software RAID, though I have no experience with its read/write/rebuild performance.
 
Many thanks, Marko.
Meanwhile I've read a little more and found an Adaptec controller which seems interesting for my purposes (eBay, used): http://www.adaptec.com/en-us/support/raid/sas_raid/sas-71605/
Still, I didn't find any clear confirmation that such a controller can create multiple arrays (the most important issue for me at this moment).
What I mean, especially if I get a controller with 4x4 direct connectors, is whether I'll be able to set up multiple logical drives with different RAID levels (0, 1, 5, 6 or 10) on the same controller. Different drives, of course. For example, 4 x 3TB WD Red in RAID 5 + 2 x SSDs in RAID 0 + 2 x 4TB WD Red in RAID 1. If that works, such a card will be of great value for future storage upgrades as well. If not, if I can only use one type of RAID array, well, not so much.
Thanks again
 
Great! That's actually the first confirmation I've gotten so far regarding multiple arrays on the same controller :) (I wrote to the eBay seller, searched the Adaptec forums, and wrote to Adaptec tech support).
Many Thanks!
 
Yeah, you can create multiple arrays on all these cards. You'll notice you can connect over 100 drives to these cards, which is accomplished by daisy-chaining expanders or chassis backplanes. They'd be kinda crazy to only allow one array across 100 drives. ;-)
 
Just a small update. I bought the card, at a decent eBay US price, and it works like a charm: decent W/R speeds, various RAID setups, backed configuration. Just as a warning though, this board gets really hot without proper cooling/airflow, 80-90°C with a 100°C shutdown threshold. So I mounted a Noctua 40mm, screwed it into the heatsink with ease (though not into the original threads for the original Adaptec fan), and temps dropped to 50°C at 50-60% fan RPM. So far, I'm a happy bunny :) Also, check your supported CPU/MB PCIe lanes: I mounted it in the 3rd PCIe slot, as it was too tight next to my first-slot-mounted GTX 970, so the graphics card is now running at x8 instead of x16. But that's a limitation of my CPU/MB combo and has nothing to do with the Adaptec board, I just wanted to share (I'm fine so far with x8 for the GPU, not a big gamer lately anyway).