Four SAS 6 Gb/s RAID Controllers, Benchmarked And Reviewed


americanherosandwich

Distinguished
Aug 24, 2010
40
0
18,530
Great review! Though I would have liked to see some RAID 1 and RAID 10 benchmarks. You don't usually see RAID 0 on expensive SAS RAID controllers, and RAID 10 configurations are more common than RAID 5.
 

purrcatian

Distinguished
Aug 7, 2010
101
0
18,710
I just sold my HighPoint RocketRAID 2720 because of the terrible drivers. Not only do the drivers add about 60 seconds to the Windows boot time, they also cause random BSODs. The support was a joke, and the driver that came on the disc caused the Windows 7 x64 setup to BSOD instantly, even though the box had a Windows 7-compatible logo on it. I even RMAed the card, and the new one was exactly the same.
 

dealcorn

Distinguished
Jun 12, 2008
73
0
18,630
Very cool, but fast and expensive means it isn't home server stuff. For that, try the IBM BR10i, an 8-port PCIe SAS/SATA RAID controller that is generally available on eBay for $40 with no bracket (I live for danger). You are stuck with 3 Gb/s per port, but if you add $34 for a pair of forward breakout cables you have eight SATA ports at a cost of under $10 per port. The card requires a PCIe x8 slot, but if you only give it four lanes (the number offered by our Atom's NM10), it will give each port 1.5 Gb/s. Cheap SAS makes software RAID 6 prudent in a home storage server.
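For illustration, here is a rough Python sketch of that cost-per-port and lane-sharing arithmetic. The prices are the ones quoted above; the PCIe lane rate and encoding efficiency are assumptions, so the exact per-port figure shifts with them.

```python
# Back-of-the-envelope math for a cheap 8-port SAS card in a lane-starved slot.
# Prices come from the comment above; the PCIe lane rate and encoding
# efficiency are assumptions (Gen 1 lanes, 2.5 Gb/s raw, 8b/10b encoded).

CARD_PRICE = 40.0          # used IBM BR10i off eBay
CABLE_PRICE = 34.0         # a pair of forward breakout cables
PORTS = 8

LANE_RAW_GBPS = 2.5        # assumed raw rate of one PCIe Gen 1 lane
ENCODING_EFFICIENCY = 0.8  # 8b/10b line coding overhead

def per_port_share(lanes: int, ports: int = PORTS) -> float:
    """Effective slot bandwidth split evenly across all ports, in Gb/s."""
    return lanes * LANE_RAW_GBPS * ENCODING_EFFICIENCY / ports

print(f"cost per port: ${(CARD_PRICE + CABLE_PRICE) / PORTS:.2f}")
for lanes in (4, 8):
    print(f"{lanes} lanes -> ~{per_port_share(lanes):.2f} Gb/s per port")
```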
 

slicedtoad

Distinguished
Mar 29, 2011
1,034
0
19,360
I have pretty much no use for anything other than RAID 0, but it was still an interesting read. I think I prefer this type of article over the longer type, with actual benchmarks thrown in (not for GPU or CPU reviews, though).
 

pxl9190

Distinguished
Jan 27, 2011
21
0
18,510
I only wish this review had come earlier!

I had a hard time deciding between the 9265-8i, the 1880, and the 6805 a month ago. I bought the 6805 and always wondered why RAID 10 was not as fast as I thought it should be. This review confirmed my worries.

I eventually went to RAID 6 with six Constellation ES 1 TB disks. Here's where the Adaptec really shines. This is for a photo/video storage and editing disk array.

Admittedly, if I had the choice again I would have picked the Areca after seeing the numbers. The Adaptec was the cheapest of them all, so it's not too much of a regret.
 
G

Guest

Guest
Great review! As I am in the process of building a new home file server and always have a habit of going overboard in such situations, I will be referring back to this article many more times before purchasing.

That said, can you please talk more about the performance differences between SATA and SAS? I understand the reliability argument; however, I wonder whether, for my purposes, I wouldn't be better served by using cheaper SATA disks instead of SAS disks.

I would also love some direction on good enclosures and power supplies for a hard-drive-only enclosure. I realize I am quickly priced out of an enterprise solution in this arena, but I have seen at least a couple of cheaper options online, such as the Sans Digital TR8M+B. (This enclosure is normally bundled with some RocketRAID controller, which I would probably discard in favor of either the Adaptec or LSI solution.)
 
G

Guest

Guest
You are missing a huge competitor in this space. ATTO RAID adapters are on par, and I think they're the only other ones out there. Why are they not compared in this review?
 


stuckintexas

Distinguished
Jan 13, 2009
27
0
18,530
Sorry for the double post; the comment system doesn't like the less-than character.

I evaluated all but the HighPoint for work. What isn't shown, and would be unrealistic for a home user, is that the LSI destroys the competition when you throw on a SAS expander. With 24 15,000 RPM SAS drives, the LSI card tops out at 3,500 MB/s in RAID 0 sequential writes, while the Areca is less than 2,500 and the Adaptec is less than 1,800. The Areca also has a lot of issues with stuttering during writes; your average may be fine, but the throughput has some significant dropouts.
 

fenwickc

Distinguished
Oct 6, 2011
2
0
18,510
How do these cards compare with using the six SATA 2 connections on my motherboard, a couple of cheap $30 two-port SATA cards (e.g., the StarTech PEXSAT32 2-Port PCI Express SATA 6 Gb/s), and software RAID 6?

I have more CPU than I can use (Core i5) and want to use cheap 2 or 3 TB 7,200 RPM SATA drives, because I want lots of storage rather than maximum speed.
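As a rough sizing sketch for that kind of build (the drive counts, sizes, and sustained rate below are illustrative assumptions, not measurements):

```python
# Rough software RAID 6 sizing: usable capacity and a naive lower bound on
# rebuild time. All inputs below are assumed for illustration.

def raid6_usable_tb(drives: int, drive_tb: float) -> float:
    """RAID 6 spends two drives' worth of space on parity: usable = (n - 2) * size."""
    return (drives - 2) * drive_tb

def naive_rebuild_hours(drive_tb: float, mb_per_s: float = 100.0) -> float:
    """Lower bound on a rebuild: one full drive rewritten at a sustained rate."""
    return drive_tb * 1e6 / mb_per_s / 3600

for drives, drive_tb in [(6, 2.0), (6, 3.0)]:
    usable = raid6_usable_tb(drives, drive_tb)
    hours = naive_rebuild_hours(drive_tb)
    print(f"{drives} x {drive_tb:.0f} TB -> {usable:.0f} TB usable, rebuild >= {hours:.1f} h")
```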
 

marraco

Distinguished
Jan 6, 2007
671
0
18,990
When will we have asymmetric RAID 0? That is, a RAID controller capable of splitting data into different-sized parts, so the larger parts go to the faster drives and the smaller ones to the slower drives.
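As a toy sketch of what that splitting policy could look like (the drive speeds and stripe size are made up, and no real controller API is involved):

```python
# Toy model of "asymmetric RAID 0": split each stripe into chunks sized in
# proportion to each drive's throughput, so faster drives absorb more data.

from typing import List

def split_stripe(stripe: bytes, drive_speeds_mbps: List[float]) -> List[bytes]:
    """Return one chunk per drive, sized proportionally to that drive's speed."""
    total_speed = sum(drive_speeds_mbps)
    chunks, offset = [], 0
    for i, speed in enumerate(drive_speeds_mbps):
        if i == len(drive_speeds_mbps) - 1:
            size = len(stripe) - offset            # last drive takes the remainder
        else:
            size = int(len(stripe) * speed / total_speed)
        chunks.append(stripe[offset:offset + size])
        offset += size
    return chunks

stripe = bytes(1024 * 1024)                        # one 1 MiB stripe
speeds = [150.0, 150.0, 90.0]                      # two fast drives, one slow one
for speed, chunk in zip(speeds, split_stripe(stripe, speeds)):
    print(f"{speed:.0f} MB/s drive gets {len(chunk)} bytes")
```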
 
G

Guest

Guest
Great review; however, I would have liked to see more details about RAID configuration for each card. Things such as:
1. Supported RAID features
2. RAID rebuild rates, notification features, etc.
3. Gotchas with each card, e.g., are JBOD disks interchangeable between different RAID cards?

I am not surprised to see HighPoint's card at the bottom of the list. You really get what you pay for with these cards: poor performance and even poorer support. I have a RocketRAID 2320, which has horrible drivers and sucks in every category. I will never use another HighPoint card due to the mounting issues I have encountered.
 
A slight correction to the article about FC controllers: you don't use FC for raw speed, you use it for redundancy and multipathing. A single FC drive will connect to two different channels; each channel can go back to the same HBA on two different ports or to two completely separate HBAs. This way, each drive has at least two channels to reach the host system. FC also comes in 2, 4, 8, and 10 Gb/s flavors, which kind of crushes 6 Gb/s SAS in raw bandwidth, although honestly you won't see faster than 4 or 8 inside a system; 10 is usually reserved for links between SAN drive arrays and SAN fabric switches. With multipathing, not only do you get redundant connections, you can multiplex the two paths to combine their bandwidth. A system sporting two dual-port 8 Gb/s HBAs would be communicating with the SAN at 32 Gb/s across four connections to two different switches.

Which brings up the last point: FC's expandability is beyond SAS and FIS-based port multipliers/expanders. Those were designed for connections where you have a single channel to a backplane with four to eight hot-swap SAS connectors. And while they left room to implement 255 IDs per channel, there isn't a single vendor who provides that solution. FC, on the other hand, is as expandable as Ethernet: you can just keep adding more drive arrays, as many as you want. Each storage processor has its own limit, usually around 255 disks, but you can just add more storage processors.

That all being said, FC is for enterprise-class storage networks. It's the absolute best protocol for that due to its expandability and scalability (disks plus bandwidth). SAS is for local system disks on small-to-medium-business servers. Any enterprise worth its salt will be using VM technology, with the VMs stored on the SAN for availability and redundancy purposes.
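For what it's worth, the multipathing arithmetic in that example is simple multiplication (the numbers are taken from the post above; the helper itself is just an illustration):

```python
# Aggregate SAN bandwidth if I/O is multiplexed across every available path.

def aggregate_gbps(hbas: int, ports_per_hba: int, gbps_per_port: float) -> float:
    """Total bandwidth across all HBA ports, assuming all paths are used at once."""
    return hbas * ports_per_hba * gbps_per_port

paths = 2 * 2                                   # two dual-port HBAs -> four paths
print(f"{paths} paths, {aggregate_gbps(2, 2, 8.0):.0f} Gb/s aggregate")   # 32 Gb/s
```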
 

mras

Distinguished
Oct 7, 2011
11
0
18,510
"Aside from their performance characteristics, they stand apart by offering handy features like mixed-environment SAS and SATA support, along with scalability via SAS expanders."
Can't you test those statements in an upcoming article?
My personal experience says that the HP SAS expander works flawlessly with the LSI and Areca cards you tested, with both single and dual linking.
However, the Adaptec only seems to understand single linking, while the HighPoint doesn't work with it at all.
 

g00ey

Distinguished
Aug 15, 2009
470
0
18,790
Having lost a lot of data to failing hard drives, for me one motive to build a storage cluster on a dedicated controller is reliability.

I myself have built a storage pool using ZFS, operating the SAS controller in IT mode (initiator-target mode, which means that all RAID functionality is turned off, as it should be when using ZFS). So you don't buy an expensive hardware RAID card for that; instead, you buy a cheaper card with less RAID functionality. The RAID is taken care of by the software, which has shown itself to be a lot more reliable than hardware RAID solutions. The IT people at CERN, who process petabytes of data every day, can testify to that from operating a huge storage cluster built on Areca cards; in short, the hardware RAID wasn't as reliable as promised, whereas the ZFS software RAID solution was.

When using an operating system such as Solaris or OpenIndiana, one really important property of the controller is platform compatibility. There are currently only two brands that hold up on compatibility, and those are LSI and Intel. LSI is known to be especially reliable and most thoroughly tested, as most operating system vendors provide native drivers for LSI hardware in server environments, and it has been used in such environments for years now.

Brands such as 3ware and OEMs such as Dell, IBM, Intel, HP, Fujitsu-Siemens, Cisco, et al. build SAS cards that are mostly based on LSI chips (look for MegaRAID 1068e/1078e or 2008e/2108e chips in the specs).

From a compatibility standpoint, HighPoint is the last brand I would recommend, and from a reliability standpoint, I would certainly recommend people stay away from anything that comes from JMicron.
 

scrianinoff

Distinguished
Oct 7, 2011
2
0
18,510
Now with working links, sorry for the double post:

Interesting review! What I find really disturbing is that the results others have obtained with the HighPoint 2720 are much better, and consistent with each other, such as here:

and here:

Are the others lying, have they done it all wrong, or was there something wrong with Tom's Hardware's setup or drivers?
 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
As long as we followed the directions in the readme.txt file that comes with the HighPoint drivers, we've had much success with one RR2720 wired to 4 x 15,000 RPM 2.5" Hitachi HDDs, and one RR2720SGL wired to 2 x 7,200 RPM 3.5" Western Digital WD2503ABYX HDDs. Some users have tripped on the factory default, which ENABLES INT13 to make those cards bootable. There is a way to disable INT13 using a Windows program: Newegg customers have reported success with it by disconnecting all drives until after INT13 is DISABLED. We were also successful with that Windows program, but it and the latest device driver must be downloaded TOGETHER from the HighPoint website. And with our RR2720SGL, we did a fresh install of Windows 7 Ultimate 64-bit and therefore had no need to disable INT13. So far, so good!
 
G

Guest

Guest
@palladin9479

FC is a terrible thing to use for backplanes and disks. No new tier-1 enterprise storage arrays use it (I think 3PAR is the only recent product release that still uses that crap). New SAS backplanes are 4 x 6 Gb/s, which destroys single-channel 8 Gb/s FC arbitrated loops (there is no 10 Gig FC; that is FCoE, which is a terrible idea, but that's another matter). SAS supports dual ports and two connections to two HBAs. EMC/HDS/NetApp have all made the switch, for performance, reliability, and cost.

FC is way more expensive than Ethernet (have you seen the cost of a director?). FC switching pricing scales exponentially the more ports you add, and FCoE is an even worse excuse.

-Disclaimer: I was fixing a NetApp this morning, so my storage rant mode is on high.
 
Umm no, seriously no.

http://en.wikipedia.org/wiki/Fibre_Channel

http://en.wikipedia.org/wiki/Serial_attached_SCSI

Right outside my office door I have several 10GFC connections. We use them as links between our storage processors and our Cisco FC switches.

Internally, most systems have 4~8GFC connectors, as that's been determined to be the safest speed over copper. You can do up to 16 provided it's a really short distance.

SAS is just the SCSI protocol serialized; it's nothing different from U320 or any previous implementation. It's used for internal storage because it's cheap and efficient. FC is expensive due to the complex circuitry you need at every point; it's highly redundant and can reach ridiculous speeds, but that is often overkill for a 6~8 disk backplane.

400~800 MB/s dual (so actually 800~1,600 MB/s aggregate) is overkill when your disks won't put out more than 50~60 MB/s each sustained.

FC-SW is just a transport protocol; you can technically put anything on it. I've seen SAS arrays with 4 x 8GFC connectors on their back that we loop into the SAN. The SPs treat them just like any other disk and export them out to the appropriate hosts/zones. PATA, SATA, U320 SCSI, SAS, and FC can all be connected via FC-SW.

You're horribly wrong about SAS and multipathing. SAS allows for multipathing by providing a mechanism to identify, via WWN, the same device being advertised through multiple initiators. SAS itself is a point-to-point protocol; there is no bus looping or aggregation going on. SAS disks themselves have only one data bus, not two (FC disks have two). What you can have is a non-local initiator connecting to a target disk via multiple SAS channels, as in one disk to a backplane, but the backplane has two connections to either the same controller or (preferably) two different controllers. Both controllers will advertise the disk to the host OS through different channels, so the OS will see two different disks. Having the same WWN on each instance of the disk is what allows the OS's storage layer to identify that the disk has multipathing. This is something that's been around since SCSI, just implemented in different flavors.

FC supports the disk being physically linked to two different channels, typically A and B. Either the channels will connect to the same HBA or they'll be on two different HBAs; in either case the result is the same: the OS sees two separate instances of the disk, and the storage subsystem is what links them together. Because of how the FC protocol works, you can send packets to different logical targets and they'll be treated the same when they arrive at the disk. You can send one half of your file down channel A and the other half down channel B at the same time, and the FC circuitry on the disk will know how to work with it. SAS won't let you do that, as it's only got a channel A and must receive all communication from the same initiator.
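A minimal sketch of that WWN-matching idea, assuming made-up device nodes and WWNs (this is not any particular OS's multipath code):

```python
# The OS sees one device node per path; a multipath layer groups nodes that
# report the same WWN into a single logical disk. All names here are invented.

from collections import defaultdict

paths = [
    ("sdb", "0x5000c50012345678"),
    ("sdc", "0x5000c50012345678"),   # same disk, advertised via a second initiator
    ("sdd", "0x5000c500aabbccdd"),   # a disk reachable over only one path
]

multipath_groups = defaultdict(list)
for node, wwn in paths:
    multipath_groups[wwn].append(node)

for wwn, nodes in multipath_groups.items():
    label = "multipathed" if len(nodes) > 1 else "single path"
    print(f"{wwn}: {nodes} ({label})")
```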

There is a reason FC-SW is king in the enterprise storage sector.

-=Edit=-

Also, we have EMC as our storage provider and we've seen their SAS offerings. It's a SAS disk array, that's all. It plugs into the storage processors using ... Fibre Channel. From that point on it's treated like an FC array by the SPs and zoned out appropriately.
 

MRFS

Distinguished
Dec 13, 2008
1,333
0
19,360
@lost_Signal

Thanks for the rant! I'm a big supporter of free speech, and I welcome and encourage such honesty, even when it's "on high".

I've been repeating the following high points of my own to quite a few memory and storage vendors in recent months.

Here goes:

The published specs for PCIe 3.0 call for an 8 GT/s transfer rate and a new 128b/130b "jumbo frame" encoding at the bus level.

This works out to a raw bandwidth of roughly 8 Gb/s / 8 = 1.0 GB/s for each x1 PCIe lane, in each direction.

It occurs to me that the entire IT industry now has a wonderful opportunity to extend that same logic out over SATA and SAS cables, directly to the storage devices themselves -- point-to-point.

Add-on RAID controllers, followed by chipsets in the foreseeable future, can be enhanced to drive each channel at 8 GT/s, with an option to upgrade the transmission protocol to that new 128b/130b "jumbo frame".

Assuming also (for the sake of this argument) that storage devices can take advantage of the increased bandwidth, it would be entirely feasible for a RAID 0 array to support a raw bandwidth of 4.0 GB/s with four compatible devices, 8.0 GB/s with eight compatible devices, and so on, at the other end of the SATA and SAS cables -- in a 1-to-1 relationship (1 GB/s for each x1 lane).
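Worked out in a few lines of Python (the encoding efficiency and the one-lane-per-device scaling are the assumptions stated above):

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding is roughly 1 GB/s of
# payload per x1 lane per direction; an idealised striped array scales with
# the number of devices if each device gets its own lane-equivalent link.

GT_PER_S = 8.0
ENCODING = 128 / 130              # 128b/130b "jumbo frame" efficiency

def lane_gbytes_per_s() -> float:
    """Payload bandwidth of one x1 PCIe 3.0 lane, one direction, in GB/s."""
    return GT_PER_S * ENCODING / 8    # 8 bits per byte

for devices in (1, 4, 8):
    print(f"{devices} device(s): ~{devices * lane_gbytes_per_s():.2f} GB/s striped, raw")
```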

What I see in these ideas are practical ways to extend the topology of the PCIe 3.0 bus logic out over storage cables, in a flexible and easily managed fashion that also accommodates all existing RAID levels with no other significant changes to existing software.

We can see this general concept being deployed in add-on PCIe RAID controllers with both x8 and x16 edge connectors and both SFF-8087 and SFF-8088 multi-lane cable connectors in varying numbers, e.g. Areca, Adaptec, HighPoint, and LSI (in the review above), and several others that were not reviewed, e.g. Intel RS2BL080, Promise, and Newer Tech (to name a few).

These controllers are showing up in Apple systems too!

Newegg, a popular online retailer, lists 365 different "Controllers / RAID Cards".


In closing, I believe we are now seeing the dawn of an explosion in much higher-speed storage architectures and creative solutions for eliminating that historically slow bottleneck, e.g.:

http://www.supremelaw.org/patents/overclocking.storage.subsystems.version.3.pdf


Your thoughts, observations and criticisms are always welcome!


MRFS

 