Noob questions about SATA, RAID, and PCIe lanes

marshedpotato

Sep 30, 2014
Hi everyone,
I have two 960GB SanDisk Ultra II SSDs and I'm interested in running them in a RAID 0 configuration. I would love to know:
What sort of performance difference can I expect between the motherboard's "fake RAID" and a dedicated hardware RAID controller?
If a dedicated RAID controller is the way to go, would it compromise my GPU performance? I ask because my i7 4770K has 16 PCIe lanes and my GPU sits in an x16 slot. If I were to add a RAID controller, it would surely need some of those lanes and therefore drop my GPU down to x8?
Maybe I'm being stupid, or maybe I'm overthinking things... Either way, any answers to my questions would be very much appreciated.
 
Nope, not stupid... very well thought out.

However, I would advise against using the on-board RAID controller for anything holding data that would be difficult or painful to lose. On-board arrays are inherently less stable and can be prone to array failure. RAID 1 is a bit less tricky, and the Windows disk utility does a decent job of mirroring too (for data security).

With SSDs in RAID, though, you're unlikely to see much of a real-world, day-to-day performance improvement. When HDDs were the norm, doubling or tripling their speed was meaningful; with SSDs, going from super fast to really super fast isn't as noticeable (unless you move around or manipulate very large files or large amounts of data). Game loading 'could' be almost twice as fast, but the extra seconds are not worth the instability risks of on-board solutions, in my opinion.

For the PCIe card route on Haswell, it will probably drop your GPU down to x8, though this can vary between motherboards. That said, the difference between x8 and x16 for almost all modern GPUs is negligible (a 2-3% performance hit, tops), but even that seems unnecessary to give up for a few seconds of loading time.
 


Thank you for your response, it is very helpful.

So just to confirm I understood correctly, it's best to avoid using motherboard RAID for a RAID 0 configuration, and GPU performance is likely to take a small hit if I go down the route of a dedicated RAID controller? If this is the case, I think you're right, the small speed increase I would see isn't really worth the tiny hit to GPU performance.
 
I'll add a question here.

The i7 4770 has 16 PCI Express 3.0 lanes, but the motherboard chipset has 8 additional PCIe 2.0 lanes, which are used for expansion cards other than the GPU, or am I mistaken?
 


My knowledge on the subject is very limited, but I've never heard of this before. It has me very interested though, as in theory I could use these chipset lanes for a RAID controller and implement a RAID 0 configuration with a dedicated PCIe card without lowering the bandwidth of my GPU.

Anyone know if this is possible and if so how I would go about doing this?
 


This is what my "vary with certain motherboards" was alluding to. Sometimes PCIe lanes are routed through the chipset rather than directly to the CPU. To know what options you have, consult your motherboard's user manual. Often, though, this is limited to PCIe 2.0 x1 slots/lanes, so running a high-performance RAID controller through them doesn't accomplish much.

Additionally, for performance, RAID controllers will often use x2 or x4 configurations, which makes it even harder to find a way to route one through the chipset (except on very high-end boards). PCI or PCIe x1 cards can still be used effectively for data security/redundancy in these slots... but that's different from a performance setup.
 
Regarding the original question: you're just not going to feel any difference in day-to-day performance by putting the two SSDs in a RAID 0 as your only (OS) storage drive. Large file transfers (64K and up) to and from an SSD RAID 0 array, fed by another storage channel of equal or faster speed in the same box, would benefit from this, and in some workstations that is necessary.
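
Just to put rough numbers on that, here's a back-of-the-envelope sketch only; the ~500 MB/s per-drive and ~2 GB/s DMI 2.0 uplink figures are ballpark assumptions, not measurements:

```python
# Rough estimate for two SATA SSDs in RAID 0 behind the chipset.
# Assumed numbers (ballpark, not measured): ~500 MB/s sequential per SATA SSD,
# ~2000 MB/s usable bandwidth on Haswell's shared DMI 2.0 chipset uplink.
PER_DRIVE_SEQ_MBPS = 500   # assumed sequential speed of one SATA SSD
DMI2_UPLINK_MBPS = 2000    # assumed usable DMI 2.0 bandwidth (shared by everything on the chipset)

def raid0_sequential_estimate(drives: int) -> int:
    """Best-case sequential throughput, capped by the chipset uplink."""
    return min(drives * PER_DRIVE_SEQ_MBPS, DMI2_UPLINK_MBPS)

print(raid0_sequential_estimate(1))  # ~500 MB/s  - single drive
print(raid0_sequential_estimate(2))  # ~1000 MB/s - RAID 0, sequential only
# Random 4K latency, which is what day-to-day use actually feels like, barely changes.
```
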

Personally, I've put a lot of high-end workstations into the wild with RAID 5 arrays consisting of 3 or 4 x 2TB+ drives using the Intel chipset's RAID and RST, and frankly it's very impressive. These arrays are used for storage and constantly move data to and from PCIe SSDs for very long periods of time. They've all remained stable, and RST has come a long way: you've got write-back capability, SSD caching now, and the use of your system's RAM as cache is great. I've transferred an 80GB zip file from a Samsung SM951 to a RAID 5 of 4 x 2TB Toshiba spinning disks and, no joke, it sustained around 2GB/s. Mind you, RST used about 10GB of RAM as cache during the transfer, but that's pretty awesome, and you don't get that kind of cache on dedicated RAID cards (without getting into the REAL high end).

To Geekwad's (great name btw lol) point, cutting a video card's PCIe lanes from 16 to 8 will not be noticeable in your in-game FPS. It "may" cost 1-2 FPS here and there, but nothing impactful. There are Tom's Hardware studies showing this; in fact, even taking a PCIe slot down to x4 only cost a few frames per second. Very interesting study, actually.

The PCIe 2.0 slots that get their lanes from the chipset are free to use and won't take away from the CPU's 16 lanes. Most (real) RAID cards people buy for home workstations/servers (LSI, Adaptec, Areca, etc.) are PCIe 2.0 x8 cards anyway, so these slots are perfect if you have one wired as x16 or x8. PCIe 2.0 at 8 lanes gives you about 4GB/s (roughly 500MB/s per lane).
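
Here's the lane math behind that 4GB/s figure, using the usual rounded per-lane approximations (one direction only, ignoring protocol overhead beyond line encoding):

```python
# Approximate usable PCIe bandwidth per lane:
# PCIe 2.0: 5 GT/s with 8b/10b encoding    -> ~500 MB/s per lane
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~985 MB/s per lane
PER_LANE_MBPS = {"2.0": 500, "3.0": 985}

def pcie_bandwidth_mbps(gen: str, lanes: int) -> int:
    """Rough one-direction bandwidth for a given PCIe generation and lane count."""
    return PER_LANE_MBPS[gen] * lanes

print(pcie_bandwidth_mbps("2.0", 8))   # ~4000 MB/s  - chipset slot for a RAID card
print(pcie_bandwidth_mbps("3.0", 8))   # ~7880 MB/s  - GPU running at x8
print(pcie_bandwidth_mbps("3.0", 16))  # ~15760 MB/s - GPU running at x16
```
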

Google "SSD in RAID 0" to see tons of articles about pros & cons, like losing TRIM for instance.
 
RAID 0, even chipset RAID 0, is fine as long as you know and prepare for the downsides: no easy bootable restore option, double the chance of array failure, and for normal users it can be slower than a single drive. Backups of important data remain just as critical as without RAID.
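
To spell out the "double the failure rate" bit, a quick sketch with a made-up per-drive failure probability (the 3% figure is purely illustrative):

```python
# RAID 0 dies if ANY member drive dies, so the array's failure odds grow with drive count.
def raid0_failure_probability(p_drive: float, drives: int = 2) -> float:
    """Probability the striped array loses data, given a per-drive failure probability."""
    return 1 - (1 - p_drive) ** drives

print(raid0_failure_probability(0.03, 1))  # 0.03    -> single drive
print(raid0_failure_probability(0.03, 2))  # ~0.0591 -> roughly double with two drives
```
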

I've been running motherboard RAID on my personal system for about 10 years without an issue beyond a drive failure, which you would see even with a dedicated RAID card. I used to run a RAID card, but once I got my own home server the need for redundancy on my desktop diminished greatly; I still needed faster sequential performance than a single HDD could provide, though. I have three RAID 0 arrays in my current system, but then I like to edit videos and actually need the extra speed.

Motherboard RAID 0 is fine as long as you accept that if a drive fails you will need to reinstall the OS and everything else all over again.

Intel drivers pass TRIM even in RAID, so that's not an issue on their boards. AMD's don't, but the drives' native garbage collection handles that; just let the system sit idle once in a while so it can kick in and do its job.
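
If you want to sanity-check the OS side on Windows, here's a minimal sketch using the built-in "fsutil behavior query DisableDeleteNotify" command. Note it only shows whether Windows itself is issuing TRIM (delete notifications), not whether the RAID driver actually passes it down to the drives, which is the Intel-vs-AMD point above:

```python
# Minimal Windows-only sketch: ask the OS whether TRIM (delete notification) is enabled.
# A reported value of 0 means Windows is issuing TRIM commands.
import subprocess

def windows_trim_enabled() -> bool:
    out = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Look for "DisableDeleteNotify = 0" in the output (0 = TRIM enabled).
    return "= 0" in out

if __name__ == "__main__":
    print("TRIM enabled:", windows_trim_enabled())
```
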