The LSI 9361 will scream for RAID-ing even a LOT of SSDs, but I can tell you from personal experience that a 9266-8i (or -4i) won't restrict the bandwidth of SSDs in RAID sets at all. Funny thing is, the 9361s are about the same price these days as the 9266s if bought brand new from normal retail channels (Newegg, etc.). The difference is that the 9361 gives you SAS3 (12Gb/s), a faster on-card CPU, and a PCIe 3.0 connector vs. the 9266's SAS2 (6Gb/s) and PCIe 2.0. That said, you can pick up 9266-8is off eBay (sometimes straight from China), as I have 4-5 times, and they're 100% legit; you can get them for around $225, which is a steal.
The LSI cards have included FastPath by default for the last several firmware versions, which boosts the performance of SSD logical volumes; a couple of years ago you had to buy it separately. To enable FastPath you just need to configure your logical volume with certain settings (64KB strip, write-through, no read-ahead, direct I/O). Note that if you use write-back on a RAID-5 of SSDs, you will see faster write performance because the card's cache gets used, but that disables FastPath. To me it's worth it for parity RAID sets, since FastPath only adds about a 10% boost and SSDs already scream, so... For RAID-0 or RAID-10, I'd stick with write-through.
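If it helps to see those knobs side by side, here's a quick sketch in Python of the virtual-drive settings I'd use and whether FastPath stays active. The property names are my own shorthand, not storcli/MegaCLI syntax, so treat it as a cheat-sheet rather than anything you'd feed to the card:

```python
# Cheat-sheet of the virtual-drive settings discussed above.
# Property names are my own shorthand, not actual storcli/MegaCLI flags.
# FastPath generally requires: 64 KB strip, write-through, no read-ahead, direct I/O.

RECOMMENDED_VD_SETTINGS = {
    # RAID-0 / RAID-10 on SSDs: keep FastPath active
    "raid0":  {"strip_kb": 64, "write_policy": "write-through", "read_ahead": False, "io": "direct"},
    "raid10": {"strip_kb": 64, "write_policy": "write-through", "read_ahead": False, "io": "direct"},
    # Parity RAID on SSDs: I take write-back (uses the card cache, disables FastPath)
    "raid5":  {"strip_kb": 64, "write_policy": "write-back",    "read_ahead": False, "io": "direct"},
}

def fastpath_eligible(settings: dict) -> bool:
    """FastPath stays active only with 64 KB strip, write-through, no read-ahead, direct I/O."""
    return (settings["strip_kb"] == 64
            and settings["write_policy"] == "write-through"
            and not settings["read_ahead"]
            and settings["io"] == "direct")

if __name__ == "__main__":
    for level, s in RECOMMENDED_VD_SETTINGS.items():
        print(f"{level}: {s} -> FastPath active: {fastpath_eligible(s)}")
```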
I've got four 120GB Samsung 850s on my 9266-8i and the speed scales as you'd expect for every RAID level. I've got them in a RAID-0 that I run a bunch of VMs off of (VMware Workstation), all backed up to a large spinning disk, so losing the array to a dead drive wouldn't hurt me. I've set those four drives up in RAID-5 and RAID-10 before to test, and again, they perform as expected for each RAID set.
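For a rough idea of what "as you'd expect" means, here's a back-of-envelope sketch. The per-drive speed and the simple multipliers are my own ballpark assumptions, not benchmarks, and they ignore controller overhead and the parity penalty on small random writes:

```python
# Back-of-envelope theoretical sequential scaling for N identical drives.
# Ignores controller overhead, cache effects, and random-write parity penalties.

PER_DRIVE_MBPS = 520  # assumption: rough sequential speed of one SATA SSD

def sequential_estimate(level: str, n_drives: int, per_drive: int = PER_DRIVE_MBPS):
    """Very rough (read MB/s, write MB/s) estimates per RAID level."""
    if level == "raid0":
        return n_drives * per_drive, n_drives * per_drive
    if level == "raid10":
        return n_drives * per_drive, (n_drives // 2) * per_drive
    if level == "raid5":  # ideal full-stripe writes
        return (n_drives - 1) * per_drive, (n_drives - 1) * per_drive
    if level == "raid6":
        return (n_drives - 2) * per_drive, (n_drives - 2) * per_drive
    raise ValueError(level)

for level in ("raid0", "raid10", "raid5", "raid6"):
    read, write = sequential_estimate(level, 4)
    print(f"4 drives, {level}: ~{read} MB/s read, ~{write} MB/s write")
```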
One important thing to consider when RAID-ing SSDs: overprovisioning. You lose TRIM when putting them in RAID, so the OS can't run TRIM on the logical volume to clean up previously used NAND so it's ready for new writes. This is sort of a hot debate around the web, as many people say the SSDs' native garbage collection does a good enough job, but that's not as efficient as TRIM. Granted, this only really matters once you've completely filled your logical volume and are continually writing new data to the array. Basically, to ensure good performance when the volume is full or close to full, you provision the logical volume to about 80-90% (again, you'll hear 50 different opinions on how much) of the total disk space available. So if you put 4 x 120GB drives into a RAID-0, when you specify how big to make the logical volume in your RAID config, make it 400GB instead of the full 480GB available. Your OS will see a 400GB drive, and your RAID card's config utility will show an extra 80GB unused, which you just leave untouched.
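To make the arithmetic concrete, here's a small Python sketch. The capacity formulas are the standard ones per RAID level, and the 83%/90% fractions are just the range I mentioned above, not a hard rule:

```python
# Quick overprovisioning math: how big to make the logical volume if you want
# to leave roughly 10-20% of the raw capacity untouched for the SSDs to work with.

def provisioned_size_gb(drive_count: int, drive_size_gb: float,
                        raid_level: str, keep_fraction: float = 0.83) -> float:
    """Usable array size times the fraction you actually provision."""
    usable = {
        "raid0":  drive_count * drive_size_gb,
        "raid10": drive_count * drive_size_gb / 2,
        "raid5":  (drive_count - 1) * drive_size_gb,
        "raid6":  (drive_count - 2) * drive_size_gb,
    }[raid_level]
    return usable * keep_fraction

# 4 x 120 GB in RAID-0, provisioning ~83% -> roughly a 400 GB logical volume,
# leaving ~80 GB of raw NAND that you never touch.
print(provisioned_size_gb(4, 120, "raid0"))        # ~398 GB
print(provisioned_size_gb(4, 120, "raid5", 0.9))   # same drives in RAID-5 at 90%
```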
If using SSDs, I wouldn't sweat RAID-5 vs RAID-6. The biggest concern with parity RAID is losing another drive DURING the rebuild, after a drive has already died in the array. Rebuilding an array once a dead drive has been replaced puts a lot of load on the remaining drives, and if those drives are large (over 1TB) the rebuild can take a long time (days in many cases); if a second drive dies during that window in a RAID-5, you're screwed. RAID-6 adds one more drive's worth of protection against this, but you pay with even slower write speeds under normal conditions. Write-back and RAID cache may keep you from ever noticing how "slow" your write speed is with RAID-6, though, especially if you don't do a lot of writing, and even more so with SSDs, which get the data to disk much faster anyway. The thing about SSDs is that rebuilds are SUPER fast, like a couple of hours even with 1TB SSDs in your array, so I'd stick to RAID-5 with SSDs. Either way, in any critical-data environment you always want the data on your RAID set backed up to some other drive or media, which takes most of the fear out of this scenario.
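For a sense of scale, here's a very rough rebuild-time estimate. The sustained rates are assumptions for illustration only, and real rebuilds run slower than this once the array is also serving host I/O:

```python
# Very rough rebuild-time estimate: a rebuild has to write the full capacity of
# the replacement drive, so time ~ drive size / sustained rebuild rate.
# The MB/s figures are assumptions, not measurements, and give a best-case floor.

def rebuild_hours(drive_size_tb: float, rebuild_mb_per_s: float) -> float:
    size_mb = drive_size_tb * 1_000_000  # 1 TB ~ 1,000,000 MB (decimal)
    return size_mb / rebuild_mb_per_s / 3600

print(f"1 TB SSD @ ~400 MB/s: {rebuild_hours(1, 400):.1f} h")  # ~0.7 h best case
print(f"4 TB HDD @ ~100 MB/s: {rebuild_hours(4, 100):.1f} h")  # ~11 h best case; much longer under load
```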
If you buy enough drives you can play with RAID-50 and RAID-60, but unless you have a LOT of disks I wouldn't bother. They're typically used once you're over, say, 12 drives, and the choice of these RAID types and how many drives go into each span depends on a LOT of factors, including the total number of drives, their size, speed, reliability (URE ratings and MTBF), and what the server is used for. Again, not really what you're messing with here. For instance, I'm working on a file server build that will have over 50 hard drives, and we're looking at multiple RAID-60 sets across multiple cards using enterprise hard drives. A LOT of design considerations went into that.
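If you're curious about the capacity math for the nested levels, here's a quick sketch; the 48-drive layout below is hypothetical, not the actual build I mentioned:

```python
# Usable capacity for nested parity sets: each RAID-5 span loses 1 drive to
# parity, each RAID-6 span loses 2, and the spans are then striped together.

def usable_drives(level: str, spans: int, drives_per_span: int) -> int:
    parity_per_span = {"raid50": 1, "raid60": 2}[level]
    return spans * (drives_per_span - parity_per_span)

# e.g. a hypothetical 48-drive chassis as 4 spans of 12 in RAID-60:
print(usable_drives("raid60", spans=4, drives_per_span=12))  # 40 drives' worth of usable space
# vs. the same 48 drives as one big RAID-6: more capacity, but only 2-drive tolerance total
print(48 - 2)  # 46
```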
Note that the LSI cards don't do true JBOD. If you want a single drive presented to your OS, you actually have to configure that drive as its own single-drive RAID-0, and that logical volume is what the OS sees. The main caveat is that the card writes RAID metadata to the drive, so if you later pull that drive and connect it to a plain SATA port or another computer, that machine's OS won't be able to read the data you put on it. Other cards (Areca, Adaptec) have native JBOD support on most of their models. Note that Areca cards priced comparably to the LSIs you've looked at come with 2GB of on-card RAM and are also very nice; I wouldn't hesitate to give them a try.
So, if you're going to stick with SSDs, I like RAID-5 with write-back. If you want that second layer of protection, you could do RAID-6. Or, if you can afford it, RAID-10 is always great to have. If you're going to use spinning disks, the choice can change, and of course it will take 4-5 HDDs in a RAID-5 (for instance) to equal the sequential performance of just one SSD, and the IOPS still won't be anywhere close for random reads/writes. If you're on a newer version of Windows with Storage Spaces, you can get even more creative with SSD caching at the OS level in front of your hardware RAID set, if you really want to mess around (which I have yet to try).
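Here's the rough math behind that 4-5 HDD comparison; all the per-drive numbers are ballpark assumptions for typical 7200 RPM SATA HDDs and SATA SSDs, not measurements:

```python
# Ballpark comparison behind the "4-5 HDDs to match one SSD sequentially" point.
# Per-drive figures below are rough assumptions for illustration only.

HDD_SEQ_MBPS, HDD_RAND_IOPS = 150, 150      # assumed 7200 RPM SATA HDD
SSD_SEQ_MBPS, SSD_RAND_IOPS = 550, 90_000   # assumed single SATA SSD

n_hdds = 5
raid5_seq = (n_hdds - 1) * HDD_SEQ_MBPS     # ideal full-stripe sequential throughput
raid5_read_iops = n_hdds * HDD_RAND_IOPS    # random reads spread across all spindles

print(f"RAID-5 of {n_hdds} HDDs: ~{raid5_seq} MB/s seq, ~{raid5_read_iops} random read IOPS")
print(f"One SATA SSD:        ~{SSD_SEQ_MBPS} MB/s seq, ~{SSD_RAND_IOPS} random read IOPS")
```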
All this being said, if you just want some redundancy and this is just a gaming drive, grab a couple of 500GB (or larger) SSDs, put them in a RAID-1, and call it a day! haha! It will probably feel about the same for load times as 4 x SSDs in a RAID-0. Or scrap the RAID idea, grab a PCIe SSD (950 Pro), put it on a PCIe adapter card, and drop that in a PCIe slot instead. Now you've got an SSD that's faster than 4 x SATA SSDs in RAID-0 anyway, with a quarter of the failure points.
Ok I'm done...