Looking for LGA1155 motherboard with more than 2 SATA3 RAID Ports

obsidianaura

Honorable
Jul 2, 2012
8
0
10,510
I've got 3 Vertex SSDs and I'd like to run them at their full bandwidth on SATA III (6 Gb/s) in RAID 0.

I can't find any LGA1155 motherboards that can do it. Does one even exist, or is that a limit of the platform?

I know there are some in the LGA2011 line, but I'd rather not fork out for that.

I can't use a PCIe card because it'd bottleneck the system, wouldn't it?


Hope someone can help.

Thanks
 

ilikegirls

Distinguished
Jan 26, 2009
702
0
19,010


I think you missed what he's looking for as well. He wants to run RAID 0 on 3 SSDs. The motherboard you picked can only support 2 drives in RAID on SATA3.

I made the same mistake though :/
 
What do you wish to accomplish with RAID 0 of three SSDs?

RAID 0 has been overhyped as a performance enhancer.
Sequential benchmarks do look wonderful, but the real world does not seem to deliver the indicated benefits for most
desktop users. The reason is that sequential benchmarks are coded for maximum overlapped I/O rates:
they depend on reading a stripe of data simultaneously from each RAID 0 member, and that is rarely what we do.
The OS mostly does small random reads and writes, so RAID 0 is of little use there.
Some apps will benefit; they are characterized by reading large files in a sequential, overlapped manner.

And... the SSDs in a RAID array will lose the "TRIM" capability. That can seriously compromise your update performance.
 
On ANY Intel LGA1155 chipset you won't find more than (2) Intel SATA3 ports; instead you'll find 2 x SATA 6Gb/s ports + 4~6 x SATA 3Gb/s ports. A few ASRock boards offer multiple SATA3 ports, BUT IF you read the fine print they are either non-bootable (data only) and/or SHARED, i.e. NOT 4 * 600 MB/s but more like a fraction of that bandwidth; interface vs bandwidth.

Many LGA 1155 MOBO's offer an x4 PCIe 2.0 slot, which is more than sufficient: x4 PCIe 2.0 = 500 MB/s * 4 = 2,000 MB/s. I'd look at an LSI RAID card, for example the $360 LSI MegaRAID SATA/SAS 9260-4i or the $665 LSI MegaRAID Internal SAS 9265-8i, and for optimal performance the LSI00289 / FastPath SW & Key.

Otherwise use either 3x Intel SATA2 ports (3 * 300 MB/s = 900 MB/s) or 2x Intel SATA3 ports (2 * 600 MB/s = 1,200 MB/s).
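As a sanity check, the interface arithmetic above can be written out in a few lines of Python. Note these are theoretical interface maxima, not measured throughput:

```python
# Back-of-the-envelope interface bandwidth for the options discussed above.
# All figures are theoretical SATA / PCIe interface maxima in MB/s,
# not real-world measured throughput.
SATA2_MBPS = 300        # SATA 3Gb/s port, effective max per drive
SATA3_MBPS = 600        # SATA 6Gb/s port, effective max per drive
PCIE2_LANE_MBPS = 500   # per PCIe 2.0 lane

options = {
    "3x SATA2": 3 * SATA2_MBPS,
    "2x SATA3": 2 * SATA3_MBPS,
    "x4 PCIe 2.0 RAID card": 4 * PCIE2_LANE_MBPS,
}

for name, mbps in options.items():
    print(f"{name}: {mbps} MB/s")
```

The x4 PCIe 2.0 slot's 2,000 MB/s comfortably exceeds either onboard option, which is why the add-in card isn't the bottleneck.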

Keep in mind the VAST MAJORITY of SSD 'speed' on a typical Windows system is all about 4KB Random R/W, which is barely as fast as SATA2 speeds (most drives are slower). What makes SSDs 'fast' is NOT the max ATTO R/W 'peak' speed but their ultra-low latency; for example a typical HDD is 12ms~15+ms versus an SSD at 0.10ms~0.15+ms, a factor of roughly 100X.

Also, for every drive attached to a RAID 0 the failure rate compounds: in RAID 0, if ANY (1) one SSD fails then POOF, all that data is toast. Also, as mentioned, TRIM is not enabled in RAID.

IF you want 'speed' then IMO look at a RAM drive. Otherwise, nothing onboard compares to an LSI RAID card with 512MB or 1GB of cache, especially with FastPath, versus ANY onboard RAID. Further, it's best to add a battery backup unit to a RAID card; the risk of data corruption drops significantly.

As far as 'bandwidth' goes, using an x4 PCIe RAID card won't have any negative effect in gaming or any real-world scenario.

Example, my RAM Drive benchmark (attached image):
CDM-RamDisk-Plus-48GHz.jpg
 
A MOBO that I 'may' consider is the ASRock X79 Extreme11 with its integrated LSI SAS 2308 PCIe 3.0 controller (product brief PDF). It's a decent option, and while I'm sure it shares a lot of bandwidth, the SB-E/LGA 2011 platform has plenty of bandwidth; the only problem case I can imagine is 4-WAY SLI + a large array.

The only possible issue is IF it's bootable or simply for data only... suggestion: contact ASRock pre-sales or read the manual.

/Edit - Just looked at the manual (page 85); it shows the LSI is bootable, and similarly in its LSI supplemental manual (page 29).
 

obsidianaura

Honorable
Jul 2, 2012
8
0
10,510
Thanks for the info from all of you.

Looks like I'll stick with the SATA2 ports for the time being.

The reason I'm striping them is not just for the performance increase but because they're only 120GB each (cheaper than buying 1 big drive) and I need them all in a single partition. I'm not concerned about the 3 points of failure TBH; I've got a decent enough backup anyway.

I've been using RAID 0 on various breeds of Raptor drives for nearly 10 years and have never run into problems.

Jaquith, when you say the risk of failure is exponential in RAID 0, I don't know what would cause that. I thought having 3 drives would just triple the risk?

As for the TRIM function not working in RAID 0, the garbage collection should still work, so when the computer's idle it should still do the task. Is that right?

Are people saying there's no performance increase from putting the SSDs in RAID0?


 

I understand - I was just leading you to post the info for others to learn, as I'm sure most have no idea that's possible.
;)
 

1−(1−r)^n ; r = failure rate, n = number of drives. It's an exponential expression (^n). RAID 0 has no fault tolerance.
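To illustrate the formula numerically, here's a short Python sketch; the 3% annual failure rate is purely hypothetical, chosen just to show the trend:

```python
# Probability that a RAID 0 array loses data: any single member failing
# kills the whole array, so P(array fails) = 1 - (1 - r)**n,
# where r = per-drive failure rate and n = number of drives.
def raid0_failure_probability(r: float, n: int) -> float:
    return 1 - (1 - r) ** n

# Hypothetical example: assume each drive has a 3% annual failure rate.
r = 0.03
for n in (1, 2, 3):
    print(f"{n} drive(s): P(failure) = {raid0_failure_probability(r, n):.4f}")
```

For small r, 1 − (1 − r)^n ≈ n·r, so "triple the risk" is a decent approximation for three drives (here 0.0873 vs 0.09), even though the exact expression is exponential in n.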

Nope, TRIM is disabled. At one time Intel was working on a solution, but I've never seen it implemented.

SSDs in RAID 0 scale pretty well, so I'm not saying there's no performance increase. Also, with (2) drives I'd use the SATA3 ports.
 


Some time back, I had an Intel X25-M 80GB SSD. I needed more space for my C: drive, so I bought a second and used RAID 0 to get a larger single volume.
It worked well enough, but I could detect NO better performance. Yes, the sequential benchmarks were better. Then I had a better use for those two 80GB drives, and I replaced them with a single 160GB drive. If anything, it felt a bit faster.

Larger drives have more NAND chips, so they can access more chips in parallel; sort of an internal RAID 0, if you will.
But the value in an SSD is its random access times, and random access is what we mostly do, not sequential.

To solve your single-volume problem, I think you can use software to aggregate a number of drives into a single volume.
I have not done this, so I can't comment on any issues.

Or just buy a single larger SSD. The larger the SSD, the longer it will last (not really an issue) and the better it will handle updates. Regardless, you do not want any SSD to get past 80% full; once you do, there will be extra overhead in finding free NAND blocks for updates and deletions.

Perhaps it might be better in this case to buy a 240GB SSD and a second 120GB SSD, if you can allocate your files reasonably between them. If you have 300GB of data to store, you may really need a 500GB SSD. Remember that a 120GB SSD will have a usable capacity that is less, perhaps 110GB.
 
IMO use 1 or 2 of the SSDs as a 'Steam' drive if you're a gamer, and get a single large SSD as a boot drive.

The 'speed', as I've said, depends on the data type & size, and as stated most OS & app I/O is 4KB Random R/W.
 

obsidianaura

Honorable
Jul 2, 2012
8
0
10,510
Again, thanks for everyone's input.

Thing is, if I just span the partition over 3 drives, isn't that nearly as likely to lose data as RAID 0 anyway?

Having 3 SSDs in RAID 0 means they don't get filled as fully, since the data is split 3 ways, so I don't approach their individual maximum size as quickly.

So far as I can see the only downsides are: the failure of a drive losing my data, which I can live with, and the loss of TRIM, which is a real pain. I was sure the Garbage Collection built into the Vertex drives would work by itself. Are you saying this isn't the case, jaquith?

I already have the 3 drives and I'm not in a position to change them and I don't want to buy more drives right now.
 
IF I wanted everything fast + reliable then I'd RAID 0 two SSDs as the boot drive, 1 SSD as a 'Steam' drive (games only), and 2 HDDs in RAID 1 as a data drive. On most of my personal rigs:
1 SSD (OS + Apps + Working Data) + 2 HDD's in RAID 1 + optional: RAM Drive (Working Data) + SSD (Steam Drive)
or
2 SSD RAID 0 (OS + Apps + Working Data) + 2 HDD's in RAID 1 + optional: RAM Drive (Working Data) + SSD (Steam Drive)
and
Backup: NAS (Office) or Windows Home Server (Home) or External Storage (eSATA or USB 3.0 or Thunderbolt)
and
Off Premises Critical Backup: NAS @ third-party or somewhere else -- including e.g. SkyDrive, Google Drive, etc
and (Gamer's)
Steam Drive (SSD or HDD); the data is portable, recoverable, and backed up through Steam. Even data corruption can be fixed. It's a great service.

---

To answer your questions:
1. Partitions on any drive, or on a RAID 0 array, won't help reliability.
2. RAID 0 data striping - the data gets split into 3 equal segments, read/written simultaneously (concurrently) and evenly over n = number of drives; but individually, the data on each drive is segmented 'garbage', useless on its own.
3. RAID 0 + TRIM = no joy. Maybe 'someday', but keep in mind the data (garbage) is segmented over n = number of drives. I've been hearing it's coming for over two years...
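Point 2 can be shown with a toy byte-level striping sketch. This is purely illustrative: real RAID 0 stripes are much larger (commonly 64KB+), and controller metadata is ignored here:

```python
# Toy illustration of RAID 0 striping: data is split round-robin across
# n drives in fixed-size chunks, so no single drive holds a usable copy
# of the original data (its chunks are non-contiguous fragments).
def stripe(data: bytes, n_drives: int, stripe_size: int = 4):
    drives = [bytearray() for _ in range(n_drives)]
    for i in range(0, len(data), stripe_size):
        drives[(i // stripe_size) % n_drives] += data[i:i + stripe_size]
    return drives

data = b"ABCDEFGHIJKLMNOPQRSTUVWX"
for d, chunk in enumerate(stripe(data, 3)):
    print(f"drive {d}: {bytes(chunk)}")
```

Drive 0 ends up holding `ABCDMNOP`, drive 1 `EFGHQRST`, drive 2 `IJKLUVWX`: each is an interleaved fragment, which is why losing any one member destroys the whole array.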

Now in my examples above, worst case if the RAID 0 goes poof, all I'm doing is re-installing the OS and apps. If I add 'Working' data then yep, I lose whatever changed since the last daily backup. Knowing that possibility, I'm in the habit of manually backing up working data once a job is finished. The RAM Drive creates a backup image on my SSD at shutdown, or anytime I click backup.
 
You may wish to recheck prices; SSD prices continue to drop.
I did some checking, and it seems that for Intel drives, at least, it costs less per GB as you buy larger drives.
For instance, on Newegg, I see the following:
330 series:
120GB = $0.92 per GB
180GB = $0.88
240GB = $0.82
520 series:
120GB = $1.16 per GB
180GB = $1.05
240GB = $1.04
480GB = $1.02
If you want 360gb, then perhaps a pair of 180gb drives would be better.
If you think you will fill 360GB past the 320GB mark, then go ahead and buy a larger 480GB SSD.
Why am I only listing Intel SSDs?
Read this article:
http://www.anandtech.com/show/5817/the-intel-ssd-330-review-60gb-120gb-180gb
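The per-GB figures in the list above are just list price divided by capacity, which makes comparison shopping easy to script. The prices in this sketch are hypothetical placeholders, not the actual Newegg listings:

```python
# Cost per GB is simply list price / capacity; comparing it across sizes
# shows whether larger drives are the better deal.
# The listings below are hypothetical examples, not real 2012 prices.
def price_per_gb(price_usd: float, capacity_gb: int) -> float:
    return price_usd / capacity_gb

listings = [("120GB drive", 110.0, 120), ("240GB drive", 200.0, 240)]
for name, price, cap in listings:
    print(f"{name}: ${price_per_gb(price, cap):.2f}/GB")
```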
 
Assuming 'home' use, NAND lifespan (P/E cycles) is a non-issue. In our office enterprise SQL work, yep, and there we use enterprise-grade Intel SSDs or SAS drives, e.g. on image servers. Stick with Intel, Samsung (the only mfg of both NAND + controller), or Crucial. Other companies are fine for home: OCZ, Corsair, Mushkin, etc. Take reviews with a grain of salt unless other (multiple) sources confirm an issue.
 

obsidianaura

Honorable
Jul 2, 2012
8
0
10,510
geofelt - I already have the drives now, and I got them at a lower price than 1 larger drive of equivalent storage. They were also in damaged packaging, so it was the best option at the time.

jaquith - Thanks for all the suggestions. Sounds like RAID 5 might be worth looking at too, maybe? Not too bothered about failure though, with the 3-year warranty. My data is synced to both a NAS and an HDD in the case.

I'm still confused by what you're saying about TRIM though. I understand TRIM doesn't work in RAID, but the built-in Garbage Collection on the Vertex drives is a different system, independent of the TRIM function as I understand it, and will do the job as well.
 


After rereading your initial post, I realized that you had already bought the drives.

Now, the question is, how to best deploy them.

Option 1)
Use them in RAID 0, even on SATA2 ports. From a sequential performance point of view, there is no real negative to this.
The random performance, which should be 90% of what you do, will not be impacted. Sequential performance will still be plenty fast, and if your apps are constructed appropriately, you might even get better sequential transfer rates. The biggie for you is that you will have a single 360GB (perhaps 330GB usable) volume. But when updating, if the drives near full capacity, things can get very slow without TRIM enabled.

Option 2) Exactly why is it so important to have a single large volume? I know it is easier to manage.
You could attach the three drives individually to separate ports. It should not be hard to load apps and files into libraries where you want them located.

Option 3) Can you accomplish what you want using Windows software that allows spanned drives or JBOD?
Someone more knowledgeable on this might know if it is feasible, and what the issues might be.