Four SAS 6 Gb/s RAID Controllers, Benchmarked And Reviewed

MRFS,

Are we talking local storage or enterprise wide storage? Really big difference between those two.

Local is just a system talking to an internal disk, or to a DAS array sitting nearby. This is an extremely simple setup: the host system has direct access to all disks present, without any arbitrators or fabric involved. It's severely limited in expandability, though.

Enterprise fabrics are a bit different. You have one to n+1 storage bays full of disks, all looped to a central Storage Processor (or several). The SP actually has its own set of CPUs and memory in the GBs. The SP will cut the disks up into LUNs and manage any / all RAID 0/1/5/10/50/60/51/61, or however you want to slice it up. From the SP you have multiple paths to your switches, preferably at least two fabric switches. From each host system you should have four connections to the switches: port 0 on HBA0 goes to Switch A, port 1 on HBA0 goes to Switch B, port 0 on HBA1 goes to Switch A, port 1 on HBA1 goes to Switch B. That provides four lanes of bandwidth and complete redundancy. Any switch / controller can be taken offline and you still have access to all your disks.

The SPs then do host LUN mapping to the HBAs of the systems you want to assign them to, and you zone the switches to segregate your disk traffic. This setup is incredibly important when you're doing virtualization using VMware (our vendor) or your vendor of choice. Each LUN mapping represents a single virtual machine's file system and can be on any combination of disks you want. Further, you can duplicate the contents in real time to a secondary set of disks, either at your facility or at a remote facility, using an RPA.
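
To make that dual-fabric wiring concrete, here is a minimal sketch (the HBA / switch names are just the hypothetical labels from the description above, not output from any vendor tool) that enumerates the four host-side paths and checks that no single HBA or switch failure cuts the host off from the SP:

```python
from itertools import product

# Hypothetical dual-fabric cabling: each HBA port lands on a different switch,
# and both switches have paths to the Storage Processor (SP).
paths = [
    ("HBA0", 0, "Switch A"),
    ("HBA0", 1, "Switch B"),
    ("HBA1", 0, "Switch A"),
    ("HBA1", 1, "Switch B"),
]

def surviving_paths(failed_hba=None, failed_switch=None):
    """Paths still usable after losing at most one HBA and one switch."""
    return [p for p in paths if p[0] != failed_hba and p[2] != failed_switch]

for hba, switch in product([None, "HBA0", "HBA1"], [None, "Switch A", "Switch B"]):
    alive = surviving_paths(hba, switch)
    print(f"fail HBA={hba}, switch={switch}: {len(alive)} path(s) left")
    assert alive, "host lost all paths to the SP"
```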

Now you have multiple physical systems all connected to each other as a single VM cluster. Because you have the host LUNs mapped to multiple systems, they can all communicate with the disks. This allows you to seamlessly move a virtual machine from system A to system B without shutting the virtual machine down. You can move all the VMs from one physical system to another without shutting them down, which allows you to perform maintenance on, or expansion of, that physical system. All with zero downtime in your enterprise.

None of that cool magic is possible with DAS; you need a shared storage fabric to do it.


Also remember, disks have a finite speed at which they communicate. It makes no sense having 1.0 Gbps (approx. 100 MBps) per disk when each 15K 2.5-inch HDD tops out at 80 MBps and sustains 40~60 MBps. So unless we're talking SSD, which in and of itself is a different beast, no amount of bandwidth will make an HDD faster. You could put a single HDD on a 10GFC optic connection and it would still be limited to its 40~60 MBps. Plus we already have ridiculous bandwidth now: 16GFC x 4 with encoding gets you about 7 ~ 7.8 GBps from SP to fabric switch, and 8GFC x 4 gets you half that from switch to host system. You'd need a mainframe or something like a Sun M9000 to consume that much storage bandwidth.
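
As a rough sanity check on those trunked FC numbers, here is a back-of-the-envelope sketch (the line rates and encodings are approximations of the published FC specs, so treat the exact figures as assumptions):

```python
def usable_gbytes_per_s(line_rate_gbaud, data_bits, coded_bits, lanes=1):
    """Approximate one-way payload bandwidth of a trunked serial link,
    given its line rate and line coding (e.g. 8b/10b or 64b/66b)."""
    efficiency = data_bits / coded_bits
    return lanes * line_rate_gbaud * efficiency / 8  # Gbit -> GByte

# Assumed figures: 8GFC ~8.5 Gbaud with 8b/10b, 16GFC ~14 Gbaud with 64b/66b.
print(f"4 x  8GFC: ~{usable_gbytes_per_s(8.5, 8, 10, lanes=4):.1f} GBps")   # ~3.4
print(f"4 x 16GFC: ~{usable_gbytes_per_s(14.0, 64, 66, lanes=4):.1f} GBps") # ~6.8
```

Either figure dwarfs what a shelf of 15K spindles can actually deliver, which is the point.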
 
These Comments are being posted in response to a review entitled "SAS 6 Gb/s RAID Controllers, Benchmarked And Reviewed."

Perhaps you should ask the Reviewer to add some comparative FC measurements, or write an entirely new article targeting enterprise users.

I hear echoes of a long-past and short-lived debate between IBM's mainframe division and IBM's brand-new PC division.

PCs won that one, remember? :)

There is a form of corporate fascism emerging, in which only large corporations with huge IT budgets will be permitted access to very high-speed storage devices.

Some well-informed journalists are openly discussing the emerging police state in America and the national security infrastructure it complements and requires (cf. "smart meters").

The rest of the world -- small and medium sized businesses and individuals -- will have to be satisfied limping along with much slower alternatives, and learn to love it.

Just check the prices on available PCIe SSDs (e.g. OCZ, Fusion-io, etc.)!

I regard this situation as an artificial one that is politically imposed, not technologically imposed.

I will end this Comment with two, arguably microscopic examples:

(1) I was recently participating in a very lengthy and quite verbose thread actively publishing empirical measurements of SSD Write Endurance.

At one point, I just happened to add a comment about the added Write Endurance that could be realized by configuring multiple SSDs in RAID 0 arrays.

One other participant took note, and added a clarifying comment, so I thought we were off and running.

Within a matter of days, however, ALL of my posts on that particular point were completely erased, with no explanation -- to me or to anyone else -- by someone who never identified himself/herself.

GEE, did I touch a nerve? Was that thread secretly sponsored by some SSD vendor, who wanted the world to believe those measurements were all being done by totally objective and unbiased reviewers?

Who knows?

All I know is that, whenever I've tried to challenge the urban legend about RAID 0 causing higher failure rates, I am met with silence -- or more urban legends.

I maintain that each member of a 2-member RAID 0 array will experience 50% of the wear that a single JBOD device will experience, given the same workload; each member of a 4-member RAID 0 array will experience 25% of the wear of a single JBOD -- and so on -- over time.
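
Here is the arithmetic behind that claim as a minimal sketch (idealized: it assumes host writes stripe evenly across members and ignores write amplification and metadata overhead):

```python
def per_member_writes_tb(total_host_writes_tb, members):
    """Idealized wear per SSD in an N-member RAID 0: each member absorbs
    1/N of the host writes (even striping, no write amplification)."""
    return total_host_writes_tb / members

workload_tb = 100  # hypothetical total host writes over some period
for n in (1, 2, 4):
    print(f"{n}-member RAID 0: {per_member_writes_tb(workload_tb, n):.0f} TB per drive")
# 1 -> 100, 2 -> 50, 4 -> 25: the 50% / 25% wear figures above.
```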

I've tested this hypothesis in my own private office, and I have yet to refute it.

And, I believe this hypothesis -- and that's what it is -- should be tested rigorously and the results fed into a GO/NO GO decision to add TRIM and other SSD-specific features to RAID arrays, across the board.


(2) Similarly, when I asked Intel if TRIM still works with Windows OS software RAID arrays, they conveniently neglected to answer my question, notably even after they had requested clarification.


Then, there is the possibility that price fixing is now occurring with SSDs, but that topic is also beyond the scope of the review we are discussing here (NOT FC technology, BTW).


MRFS
 
p.s.:
http://goldsilver.com/video/smart-meters/
http://www.youtube.com/watch?v=8JNFr_j6kdI&feature=youtu.be
http://www.youtube.com/watch?v=kdxSKv2vm_0&feature=player_embedded


On the merits of the review above, I would also like to see comparative measurements of OS software RAID arrays.

No new controller card is required at all, as long as unused SAS or SATA ports are available, e.g. on AMD's 990FX chipset.

I know this can be done with Highpoint's 2720 cards: each member drive must be initialized and configured as a JBOD device e.g. by invoking the Option ROM during boot-up (Ctrl-H).

Then, using Windows Disk Management, that drive must be initialized as "dynamic" instead of "basic".

With those two settings done correctly, it's very easy to combine 2 or more drives into an OS software RAID 0 array, or a RAID 1 array: I've built a RAID 0 in this manner, and it works fine under XP/Pro SP3.
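
For the record, the same Disk Management steps can also be scripted with diskpart; here is a minimal sketch (the disk numbers 1 and 2 and the drive letter R are placeholders for illustration, and the member drives must already be presented as JBOD by the controller):

```python
import subprocess
import tempfile

# Placeholder disk numbers, as shown by Disk Management or diskpart's "list disk".
DISKPART_SCRIPT = """\
select disk 1
convert dynamic
select disk 2
convert dynamic
create volume stripe disk=1,2
assign letter=R
"""

def create_striped_volume():
    """Scripted equivalent of converting the members to dynamic disks and
    creating a 'New Striped Volume' (software RAID 0) in Disk Management.
    Destroys data on the selected disks; format the new volume afterwards."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(DISKPART_SCRIPT)
        script = f.name
    subprocess.run(["diskpart", "/s", script], check=True)

if __name__ == "__main__":
    create_striped_volume()  # run from an elevated (Administrator) prompt
```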

Windows cannot boot from such a software RAID 0, however.

Since the 2720 does not host a high-powered IOP, and as such shifts the computational load back to the CPU, a good comparison can be had by running the same battery of tests on a software RAID 0 array, using all the same hardware.

I don't know if JBOD is an option with the other cards reviewed, but I'd be surprised if they did NOT support JBOD mode.

Again, this option would be useful to SOHO and SMB settings (NOT wasteful government enterprises that go out of their way to create problems so they can rush in with more solutions).

This other review of the 2720SGL could have done the same measurements of an OS software RAID:

http://www.tweaktown.com/reviews/4306/highpoint_rocketraid_2720sgl_sata_6g_raid_controller_review/index1.html


MRFS

 
@palladin9479 Because some marketing dept. decided that it is enterprise-class equipment, or because there is some measurable difference between enterprise-class and consumer-class equipment?

The fact is I don't know, and what is really great about Tom's Hardware is that they independently put it to the test. So it would be nice to compare cheap motherboard ports and 2-port PCIe cards managed by software RAID against high-end cards.

Umm... you do realise that we're talking about consumer-level and small-business equipment here. Take your overpriced marketing solution and go elsewhere.

(On second thoughts, who am I to judge?)
 
@MRFS I hope you do realize that we still have yet to see a software (or even hardware) RAID solution that is as reliable as ZFS.

The filesystem btrfs is a promising alternative but is many years behind in development.

Here is a whole PhD dissertation showing that normal file systems are unreliable:

http://www.zdnet.com/blog/storage/how-microsoft-puts-your-data-at-risk/169

Dr. Prabhakaran found that ALL the file systems shared

...ad hoc failure handling and a great deal of illogical inconsistency in failure policy...such inconsistency leads to substantially different detection and recovery strategies under similar fault scenarios, resulting in unpredictable and often undesirable fault-handling strategies.

We observe little tolerance to transient failures;...none of the file systems can recover from partial disk failures, due to a lack of in-disk redundancy.


Regarding shortcomings in hardware RAID:

http://www.cs.wisc.edu/adsl/Publications/corruption-fast08.pdf

Detecting and recovering from data corruption requires protection techniques beyond those provided by the disk drive. In fact, basic protection schemes such as RAID [13] may also be unable to detect these problems.
...
As we discuss later, checksums do not protect against all forms of corruption


http://www.cs.wisc.edu/adsl/Publications/corrupt-mysql-icde10.pdf

Recent work has shown that even with sophisticated RAID protection strategies, the "right" combination of a single fault and certain repair activities (e.g., a parity scrub) can still lead to data loss [19].


CERN discusses how their data was corrupted in spite of hardware RAID:

http://storagemojo.com/2007/09/19/cerns-data-corruption-research/

Here is a whole site devoted solely to the shortcomings of RAID-5:

http://www.baarf.com/

Shortcomings of RAID-6:

http://kernel.org/pub/linux/kernel/people/hpa/raid6.pdf

The paper explains that the best RAID-6 can do is use probabilistic methods to distinguish between single- and dual-disk corruption, e.g. "there is a 95% chance it is single-disk corruption, so I am going to fix it assuming that, but there is a 5% chance I am actually going to corrupt more data; I just can't tell". I wouldn't want to rely on a RAID controller that takes gambles 🙂
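
For the curious, here is a sketch of the syndrome math behind that gamble, following the approach in the paper above (GF(2^8) with the 0x11d polynomial and generator g = 2, as used by Linux md; the data values are made up). With one corrupted data disk, the Q syndrome equals g^z times the P syndrome, so disk z can be located and "repaired"; with two corrupted disks, the same test can occasionally still point at a single, wrong disk:

```python
import random

def gf_mul(a, b):
    """Multiply in GF(2^8) with reduction polynomial 0x11d."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
    return p

def gf_pow(base, exp):
    r = 1
    for _ in range(exp):
        r = gf_mul(r, base)
    return r

G = 2  # generator

def pq(data):
    """RAID-6 parity blocks: P = XOR of data, Q = sum of g^i * d_i."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(gf_pow(G, i), d)
    return p, q

def locate_single_corruption(data, p, q):
    """Assuming exactly ONE data disk is corrupted, its index z satisfies
    Sq = g^z * Sp, where Sp and Sq are the P/Q syndromes. Returns z or None.
    If two disks are actually corrupted, this can still return a plausible
    (and wrong) z -- the gamble described above."""
    sp, sq = p, q
    for i, d in enumerate(data):
        sp ^= d
        sq ^= gf_mul(gf_pow(G, i), d)
    if sp == 0:
        return None
    for z in range(len(data)):
        if gf_mul(gf_pow(G, z), sp) == sq:
            return z
    return None

data = [0x11, 0x22, 0x33, 0x44]
p, q = pq(data)

bad = data[:]
bad[2] ^= 0x5A
print("single corruption located at disk", locate_single_corruption(bad, p, q))  # 2

random.seed(0)
misreads = 0
for _ in range(1000):
    bad = data[:]
    bad[0] ^= random.randrange(1, 256)
    bad[3] ^= random.randrange(1, 256)
    if locate_single_corruption(bad, p, q) is not None:
        misreads += 1
print(f"{misreads} of 1000 double corruptions still look like a single-disk error")
```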


In other words, RAID-5 and RAID-6 are not safe at all, and if you care about your data you should migrate to other solutions. In the past, disks were small and you were much less likely to run into problems. Today, when hard drives are big and RAID clusters are even bigger, you are much more likely to run into problems. Even if there is only a 0.00001% chance of hitting a problem on any given operation, drives that are large and fast enough will run into problems quite frequently.
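
To put a rough number on that scaling, here is a small sketch (the per-bit error rate of 1e-14 and the capacities are illustrative assumptions) of how the chance of hitting at least one unrecoverable read error while reading an entire array (which is exactly what a rebuild does) grows with size:

```python
import math

def p_at_least_one_ure(capacity_tb, ure_per_bit=1e-14):
    """P(at least one unrecoverable read error) while reading the whole
    array once, assuming independent errors; 1e-14 per bit read is a
    commonly quoted consumer-drive spec figure (assumption)."""
    bits = capacity_tb * 1e12 * 8
    # 1 - (1 - p)^n, computed stably for tiny p
    return -math.expm1(bits * math.log1p(-ure_per_bit))

for tb in (1, 10, 50, 100):
    print(f"{tb:>3} TB read: P(>=1 URE) = {p_at_least_one_ure(tb):.1%}")
```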
 
If PCIe 3.0 will use 128b/130b encoding ("jumbo frames") and 8 GHz bus lanes,
how hard would it be to do the same with modified SATA and SAS protocols?

For the end user, I would predict 3 simple changes:

(1) set the channel speed to 8 GHz in the Option ROM
(or a physical jumper on add-on cards);

(2) set the protocol to transmit jumbo frames in the Option ROM
(or a physical jumper on add-on cards); and,

(3) install a compatible storage device, which may
need jumpers to sync it with such a host controller.
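
The payoff of changes (1) plus (2) is easy to quantify; here is a quick sketch of the line-coding arithmetic (simple efficiency math only; it ignores framing and protocol overhead):

```python
def payload_mb_per_s(line_rate_gbit, data_bits, coded_bits):
    """Usable payload rate of a serial link, given line rate and line coding."""
    return line_rate_gbit * 1e3 / 8 * (data_bits / coded_bits)

# Today's 6 Gb/s SATA/SAS links use 8b/10b; PCIe 3.0 moves to 8 GT/s with
# 128b/130b. The last line is the hypothetical "overclocked" storage link.
print(f"6 Gb/s, 8b/10b    : ~{payload_mb_per_s(6, 8, 10):.0f} MB/s")
print(f"8 Gb/s, 8b/10b    : ~{payload_mb_per_s(8, 8, 10):.0f} MB/s")
print(f"8 Gb/s, 128b/130b : ~{payload_mb_per_s(8, 128, 130):.0f} MB/s")
```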


We've been overclocking CPUs and RAM for many years now;
we should also be enabling OC for storage subsystems
that need extra speed across the board.


P.S. Is an OS software RAID, managed by Windows, a "fake RAID"?

Why use the term "fake", which is a pejorative loaded with
negative implications, probably designed to steer SOHO and SMB
users into buying more hardware than they really need?

"RAID" = Redundant Array of Independent (or Inexpensive) Disks

The acronym says nothing about how the array is actually controlled,
or by what.

So, I conclude that the term "fake" is just not appropriate here,
particularly in an era of CPUs with multiple cores, none of which is
"fake" either :)

Try telling Intel that their quad-core CPUs have "fake" cores!

Moreover, the propriety of using that term in this discussion
is a matter I choose not to debate any further.

I think it's a stupid word, frankly!

I have a Windows XP/Pro software RAID running just fine
with 2 x Western Digital 6G HDDs wired to a RR2720
and initialized as JBOD by that controller's Option ROM.


MRFS
 
FYI: re: 2720SGL

We recently built a budget storage server with a very inexpensive
ASRock G41M-S3 motherboard (~$50) and the 2720SGL installed
in the single x16 PCIe 1.0 slot. We then installed Windows 7
Ultimate 64-bit version on a RAID 0 with 2 x WD2503ABYX.

For purposes of a storage server, the integrated graphics are quite adequate,
and I added a Gigabit NIC in one of the legacy PCI slots.

We then documented various measurements at WD's website --
to support our proposal that WD consider a "RAID Edition 5"
series of mechanical HDDs:

http://community.wdc.com/t5/Internal-Drive-Ideas/RAID-Edition-5-quot-RE5-quot-series-of-rotating-hard-drives/idi-p/250006
(6G interface, large 64-128MB cache, time-limited error recovery, PMR, 7,200 rpm, full range of capacities)


None of WD's "RAID Edition" HDDs currently supports a 6G interface, however: hence, our proposal.


With the default 2720SGL driver and Windows 7 Ultimate 64-bit, stock settings,
we exceeded 300 MBps, measured with ATTO:

2xWD2503ABYX.RAID-0.RR2720SGL.G41M-S3.Win7.x64.JPG



That storage server still has 6 more 6G ports available on the 2720SGL,
and 4 more 3G SATA ports on the motherboard (albeit no AHCI on those ports).

And, with 2.5" HDDs maturing nicely, a single 5.25" drive bay can accommodate
4 such 2.5" HDDs or SSDs in a 4-in-1 enclosure like the Enhance Tech X14:
http://www.enhance-tech.com/press/press_082509_QuadraPack_X14.html
There are other vendors now too, e.g. Icy Dock, Thermaltake, and Addonics.

Also, 3.5"-to-2.5" adapters typically accommodate 2 x 2.5" drives
in a single 3.5" bay e.g. Silverstone makes a nice one.


This experiment was intended for SOHO and SMB settings, however,
NOT large enterprises who like to scoff at such "puny" PCs.


MRFS
 



Read again: that was in a specific sidebar about FC vs SAS, specifically in an enterprise setup. Small / medium businesses have absolutely no need for a SAN or what it provides.

SAN-level storage is neither overpriced nor a marketing gimmick. It exists to separate your storage from the systems that depend on it. It allows you to maintain higher uptime and lower your time until service is restored. When an hour could mean millions of USD lost, it becomes cost-effective to spend serious cash on ensuring that your service is restored in 5~10 minutes, not 30~180. With a SAN environment implementing copy-on-write, should a VM crash and become unresponsive, you can restore it to its operating state of 10 minutes before the crash. You can also start a canned "known good" clone of the VM and have the service back up instantly. You move the crashed VM into its own container and go about dissecting it to discover why it crashed, so as to prevent it from happening again. Businesses pay millions of USD for this level of guarantee that their operations do not stop, not even when a server PSU smokes.
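
As a toy illustration of why copy-on-write makes that "restore to 10 minutes before the crash" so cheap, here is a minimal sketch (pure Python, nothing vendor-specific): a snapshot only copies the block map, so unchanged blocks are shared and a rollback is just re-pointing at the old map.

```python
class CowVolume:
    """Toy copy-on-write volume: snapshots freeze the block map, not the data,
    so they are cheap to take and a rollback is nearly instant."""

    def __init__(self):
        self.blocks = {}      # live map: block number -> data
        self.snapshots = {}   # snapshot name -> frozen block map

    def write(self, block, data):
        self.blocks[block] = data                 # live map diverges from snapshots

    def snapshot(self, name):
        self.snapshots[name] = dict(self.blocks)  # copies references only

    def rollback(self, name):
        self.blocks = dict(self.snapshots[name])

vol = CowVolume()
vol.write(0, "VM boot volume")
vol.write(1, "VM filesystem, healthy")
vol.snapshot("10-minutes-ago")
vol.write(1, "VM filesystem, wedged after the crash")
vol.rollback("10-minutes-ago")
print(vol.blocks[1])   # -> "VM filesystem, healthy"
```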

@gooey

Yes, ZFS is amazing. We have several enclaves running Solaris 10 on T2, T3, T6 and M4 series systems. Some are cabled into the SAN for hosting their zones; others have a directly attached SE35xx / SE36xx SP device with its own disks. The only issue with ZFS is that it's not friendly with SAN environments: ZFS wants to control the disks directly, but our SPs need that control for data duplication and cloning to work properly. What we found works best is to use the HW RAID / SP functionality to create the LUNs, then use ZFS as the file storage system. This gives you the flexibility and security of ZFS while working inside an enterprise world.
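
A minimal sketch of that layering, assuming the SP has already carved out and mapped two hypothetical LUNs to the Solaris host as c2t0d0 and c2t1d0 (device and pool names are placeholders):

```python
import subprocess

# Hypothetical LUNs already created and host-mapped by the storage processor.
SAN_LUNS = ["c2t0d0", "c2t1d0"]

def build_zfs_on_san_luns(pool="tank"):
    """Let the SP / HW RAID own redundancy and replication of the LUNs, and
    layer ZFS on top purely as the file storage system (no mirror/raidz at
    the ZFS level, since the SP already provides the protection)."""
    subprocess.run(["zpool", "create", pool, *SAN_LUNS], check=True)
    subprocess.run(["zfs", "create", f"{pool}/zones"], check=True)
    subprocess.run(["zfs", "set", "compression=on", f"{pool}/zones"], check=True)

if __name__ == "__main__":
    build_zfs_on_san_luns()
```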
 
You're testing server-class hardware on a desktop-class system. I'd like to see how these cards perform on multi-CPU systems, any compatibility issues, etc. I doubt if even 1% of the readers will put a SAS card in a desktop system.
 
I'd like to add a few comments concerning what I've read, and experienced myself, about when the 2720SGL's default of INT13 ENABLED becomes a problem.

The easiest way to DISABLE INT13 on that controller is to flash the latest controller BIOS with a handy Windows program: that program has an option to DISABLE INT13 (aka Interrupt 13).

This requires that the Windows program AND the latest device driver be downloaded and installed together.

Clearly, then, Windows must already be installed on some OTHER storage devices wired to some OTHER controller, before this Windows program can execute.

And, users can be faulted for not reading and heeding the readme.txt file, which contains important details relevant to the INT13 problem.

At Newegg, I've read a few customer comments that spell out the sequence that must be followed if multiple 2720SGL controllers are installed in the same motherboard.

In that situation, the key is to understand that one of the two controllers must have INT13 DISABLED before it will interoperate with the other controller and allow that other controller to boot the OS with INT13 ENABLED.

Happily, for systems that need only one 2720SGL installed, and that one 2720SGL is intended to host the boot drive(s), following the factory directions is enough to succeed with a fresh install of Windows 7.

Just be aware of the potential for conflicts when INT13 is ENABLED and other storage controllers are already installed in the same system, particularly when those other controllers have storage devices attached to them.

If you want to add a 2720SGL to an existing system which already has Windows installed and running AOK on other devices wired to a different controller, e.g. the chipset, the BEST WAY is to install the 2720SGL withOUT cabling any drives to it. Then, if you need to disable INT13, that is the ideal point in time to make that change -- by flashing the latest controller BIOS.

Once you've succeeded in flashing the latest controller BIOS and disabling INT13, you can shut down, cable drives to that controller, and then reboot into the motherboard BIOS to confirm that the list of available Boot Devices is correct.

Always remember that INT13 is ENABLED by default, at the factory.

You can expect problems if you install the 2720SGL and cable drives to it, and then expect your system to boot normally from some other boot drive(s) already installed in your system: under these circumstances, Highpoint's controllers have been known to KNOCK OUT all other controllers from the list of available Boot Devices, as listed in the motherboard's BIOS.

Then, the system is unable to locate the normal Boot Device simply because the boot drive no longer shows up in that list!


RTFM (Read The Fine Manual = readme.txt !!)


MRFS
 
> I doubt if even 1% of the readers will put a SAS card in a desktop system.

Tell that to Highpoint: I believe the 2720SGL is selling like hot cakes!

Also, it works with SAS and SATA drives: we've installed Windows 7
to a RAID 0 array with 2 x WD2503ABYX wired to a 2720SGL.
Those are 3G SATA drives.


MRFS
 
Can we get some timed 'array rebuilding' benchmarks, by swapping a blank drive in for a used drive in RAID 10, 01, or 5, versus an Intel on-board controller, for example? This would be super helpful for me.
 
Your article starts by raising the question of whether RAID controllers are required, but sadly doesn't answer that question, e.g. "no if the requirement is…, yes if …".

Also doesn’t deal with the central issue for having RAID at all, Failures! The obvious one being a drive failure, how easy is it to resolve? No test! More complex, what if the MB blows up – 2 scenarios, identical replacement available, and completely different MB required, in either case can you just plug in the card and off you go? What about O/S change or upgrade, eg, Windows 7 to 9, or from MS to Unix, etc? Lastly what happens if you want to change the controller, eg, Start with Rocket and subsequently upgrade to Mega, or whatever Tom tells us is the best in a couple of years time?
 
HPT controllers are great for the price. I have several now, and although they're slower than my Areca and LSI controllers, they have given me far fewer problems in terms of HDD compatibility.
 
In the article comparing RAID 6 controllers, including the Highpoint 2720SGL RAID controller (see http://www.tomshardware.com/reviews/sas-6gb-raid-controller,3028-15.html), a benchmark shows RAID 6 performance for this card, including with 1 or 2 drives disabled. Based on this I bought the card, because I was looking for a low-cost RAID 6 controller.

I got the card, but the setup GUI, the box documentation, the internal documentation, and the Highpoint website do not show RAID 6 as supported. I've downloaded the latest drivers, manual, etc., and it's the same thing: RAID 5 support but no RAID 6.

What gives?!?

How did THW pull this off? Am I missing something?
 
Can we please have benchmarks of these cards with SSDs in RAID 5 or 6? Then we can get a better idea of how quickly these cards can do parity calculations.
 
Dear Tom's Hardware,
What stripe size was used for the RAID 5 and RAID 6 benchmarks?
 
Hi, are there significant differences when the RAID level is 50 instead of RAID 5 on all the controllers? Are there any results?
 
Bought a Highpoint RAID controller a couple of years back. Last time I ever do that. Three failed drives in a year (that weren't actually dead), one replacement card that took a year to arrive (lousy support from the company, the worst I've ever experienced), and now I'm looking to replace it because my spare disk has suddenly come up as failed just when another disk has died (yeah, right). The controller also can't rebuild, so if you have a RAID 5 and are depending on being able to rebuild after replacing a failed drive, buy something else, because the Highpoint stuff can't do it.
 