SSDs In RAID: A Performance Scaling Analysis


larkspur

Distinguished
[citation][nom]emperornicon[/nom]no mission critical data on my own pc[/citation]

OK, good - I think I just misunderstood you. The best overall balance of value (between economy and performance) in today's market comes from mixing spinning disks and SSDs. A single ~80GB SSD that supports TRIM works great as a system volume; two (or more) ~120GB SSDs in RAID 0 work as a volume for demanding applications (as this article shows, they scale nearly linearly); and two ~1TB spinning disks in RAID 1 serve as a data/all-purpose volume. You routinely back up the system drive to the RAID 1, and you only install high-demand, replaceable data on the RAID 0. I've found onboard RAID controllers adequate for this task as long as you use a good UPS. This arrangement gives the home PC the best of both worlds.
 

MRFS

Distinguished
Dec 13, 2008
Really good points, larkspur!

On all of our workstations, we have shrunk C: to 30-50GB and moved all non-system data files to redundant HDDs. This also results in "short-stroking" all of our C: partitions.

We also do routine drive images of C:, usually after "Update Tuesday", and then copy the latest drive image to every other drive letter, also for redundancy: we wrote a simple batch file to do this copying.
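The copying step described above can be sketched in Python; this is a rough stand-in for the poster's batch file, and the image name and target paths are made-up examples, not the actual script:

```python
import shutil
from pathlib import Path

def replicate_image(image_path, target_roots):
    """Copy one drive image to every target root, returning the
    destination paths written. A stand-in for the simple batch
    file described above."""
    src = Path(image_path)
    written = []
    for root in target_roots:
        dest = Path(root) / src.name
        shutil.copy2(src, dest)  # copy2 also preserves timestamps
        written.append(dest)
    return written

# Hypothetical usage (drive letters are placeholders):
# replicate_image(r"D:\Images\c_drive.gho", ["E:/", "F:/"])
```

A robocopy or xcopy loop in a batch file does the same job; the point is simply that each extra drive letter gets its own copy of the latest image.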

Now, we're ready for a total failure of the primary HDD hosting C: because it is easily replaced, and the entire set of system files is easily restored e.g. with the GHOST restore task.

Also, if/when a primary HDD exceeds its factory warranty period, it's just as easy to replace, e.g. with the free Acronis True Image WD Edition software (at least one WD drive must be present for this freeware to work, however).

For Users who are interested in shrinking C:, there is another excellent freeware package in Partition Wizard: www.partitionwizard.com

Both programs also defragment when shrinking or copying a system partition.

As far as NAND flash SSDs are concerned, we are waiting for the industry to mature and catch up with the current SATA 6Gb/s standard: the SandForce SF-2000 series SSDs are starting to look very attractive for that reason, e.g. the OCZ Vertex 3.

And, if you know where to look, Intel's P67 chipset appears to drive SATA 6Gb/s SSDs a lot better than the retrofits that various Tier 1 motherboard manufacturers did with the X58 and P55 chipsets.

But, as far as I know, TRIM is still not available when SSDs are assembled into a RAID. One experiment that could be done is to configure multiple SSDs as JBOD / Dynamic, then enable an OS software RAID 0. We've done the latter with 2 x WD SATA/6G HDDs, and it appears to work fine. It would be nice to know if TRIM works with the latter setup.


MRFS
 

emperornicon

Distinguished
May 11, 2009
Thanks - I've known how fragile RAID 0 is. Thank goodness I bought a few spare replacements long ago and routinely duplicate my volumes. I was under the impression that SSDs wear out compared to legacy storage.
I've lost one drive to a 2-year-old jumping up and down on the floor next to the desk that tower was sitting on. My little one caused an hour of downtime - wasn't bad.
 

MRFS

Distinguished
Dec 13, 2008
> i was under the impression that ssd drives wear out compared to legacy storage

Yes, and as a matter of fact at least one report has stated that 34nm MLC NAND flash typically supports 5,000 write cycles, whereas the latest 25nm MLC NAND flash only supports 3,000 write cycles:

http://www.storagereview.com/ssds_shifting_25nm_nand_what_you_need_know

Quoting: "34nm MLC NAND is good for 5,000 write cycles, while 25nm MLC NAND lasts for only 3,000 write cycles"

And, OCZ got into a lot of trouble with their customers when they released 25nm SSDs but did not change the packaging labels: the 25nm SSDs performed significantly slower than the 34nm SSDs!


MRFS
 

slothy89

Distinguished
Jan 9, 2011
[citation][nom]ssdlkje[/nom]...words...[/citation]Have you not heard of TRIM? Yes, normally an SSD would lose performance over time as its cells become "dirty," but TRIM has been around for a while and cleans up those dirty cells, helping to maintain high performance.
Tom's did an article demonstrating this a little while back and showed only about a 1-2% loss of performance with TRIM enabled, compared to a brand-new drive.

TRIM is not new tech, so I'm concerned that you were unaware. It makes me doubt everything else you said.
 

alidan

Splendid
Aug 5, 2009
@emperornicon

Look at RAM and the speeds it can achieve, then look at SSDs... yeah, it's in its infancy, but it's not in the "only a moron would buy this" infancy.

1. Will you really use the performance gain to its potential?
2. See #1.
3. With mission-critical things, you SHOULD have some sort of backup; the frequency is up to you. An HDD can fail on read, on write, and in many other ways, while an SSD really only fails on writes, and that is an expected failure. Where an HDD fails catastrophically, an SSD tends to be more reliable.
4. Depends.
5. See before the #'s.
6. Hell yeah. Look at it performance-wise: you pay for it right now. In the future, an HDD will be up to 10TB and you will have a 1TB SSD for about the same price, with the HDD used for storage and the SSD used for frequently accessed data. But we are looking at 5-10 years down the line here, possibly more.

They will never be price-competitive at the same capacity, but they will be competitive.
 

alidan

Splendid
Aug 5, 2009
[citation][nom]MRFS[/nom]> i was under the impression that ssd drives wear out compared to legacy storageYes and, as a matter of fact, at least one report has stated that 34nm MLC Nand Flash typically supports 5,000 write cycles, whereas the latest 25nm MLC Nand Flash only supports 3,000 write cycles:http://www.storagereview.com/ssds_ [...] _need_knowQuoting: "34nm MLC NAND is good for 5,000 write cycles, while 25nm MLC NAND lasts for only 3,000 write cycles"And, OCZ got into a lot of trouble with their customers when they released 25nm SSDs but did not change the packaging labels: the 25nm SSDs performed significantly slower than the 34nm SSDs!MRFS[/citation]

In terms that are easier to understand:

3,000 write cycles, assuming a whole drive's worth of writes a day (80-120GB), would take almost 10 years to kill the drive. How many pieces of digital tech equipment have you not upgraded in 10 years - or at least could have, but don't want to?
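The back-of-the-envelope endurance math above can be written out explicitly. This is a rough sketch that ignores write amplification by default; real controllers write more to flash than the host requests, which shortens the estimate:

```python
def years_of_life(capacity_gb, write_cycles, gb_written_per_day,
                  write_amplification=1.0):
    """Total rated drive writes divided by daily host writes, in years.
    write_amplification > 1 models controller overhead (assumed 1.0 here,
    the idealized case used in the post above)."""
    total_writes_gb = capacity_gb * write_cycles / write_amplification
    return total_writes_gb / gb_written_per_day / 365

# A 120GB drive rated for 3,000 cycles, rewritten in full every day:
print(round(years_of_life(120, 3000, 120), 1))  # prints 8.2
```

At one full drive-write per day, 3,000 cycles works out to roughly 8 years, which matches the "almost 10 years" figure above; halving the daily writes doubles the estimate, and a write amplification of 2 halves it.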
 

susennath

Distinguished
Mar 28, 2011
Technical data should be attached to this so that we can check the logical deductions and interpolations ourselves; otherwise it just reads like historical material.
 

jauffma2984

Distinguished
Mar 28, 2011
My understanding is that if I take two or more SSD drives and put them into a RAID 0 array, Windows 7 will disable the TRIM command and I may see degraded performance over time. Is this true?
 

larkspur

Distinguished
[citation][nom]jauffma2984[/nom]My understanding is that if I take two or more SSD drives and put them into a RAID 0 array, Windows 7 will disable the TRIM command and I may see degraded performance over time. Is this true?[/citation]

Yes, basically. It won't disable the TRIM command completely, it just won't send TRIM commands to any drive that is part of a RAID volume. That means that other SSDs on the same system that are not configured in a RAID will still receive TRIM commands.

Also, the degraded performance is only with WRITE performance. If you notice degraded write performance on a RAID volume, you can still back up the data and use a utility (provided by the SSD manufacturer) to FULLY erase the SSD RAID volume. Once the volume has been fully erased (using the utility, NOT Windows), its write performance returns to brand-new speeds. Remember that erasing consumes the same limited program/erase cycles as writing, so this isn't something you want to do all the time. Only do it if you notice performance loss.

I've been running 2 x OCZ Vertex 2 (the 34nm version) in a RAID-0 for a year with moderate writing and have yet to see a performance loss in my benchmarks and have therefore had no need to do a full erase. For the typical computer, it's all about deciding what apps/data actually NEED the speed of the SSD RAID. Just use regular spinning disks for the rest.
 

larkspur

Distinguished
[citation][nom]larkspur[/nom]Also, the degraded performance is only with WRITE performance. [/citation]

I meant to say MOSTLY with WRITE performance... sorry to be confusing. READ performance is also affected, though usually not as much.

Tom's has some articles that attempt to examine performance degradation over time in the storage section. The rest of that post is still valid.
 

dgingeri

Distinguished
From direct experience, I can tell you two things:

1. For a home system, RAID0 on 2X Vertex 2 drives is absolutely awesome. 9 second boot times, 4 second load times for Photoshop CS4 64-bit, less than 1 second load times for Word and Excel 2007, and 5-6 second load times for WoW. I will never go back to a regular hard drive for my OS and programs.

2. In IT, this will have limited use. It would be great for a server OS on domain controllers and authentication servers, but it has limited use for storage or database duty. In every single situation I have seen in IT (I worked as a desktop support tech for 13 years and have been a server software test lab admin for a year), the big limitations are capacity and budget.

Sure, an array of 36 256GB SSDs would be great for an HR database, but really, who has that much rack space and budget, and needs that kind of performance, on an HR database? To get a usable file server, you'd have to put together 480 256GB SSDs, and that cost, in both money and rack space, would be prohibitive. A 120TB set of storage arrays is $144k and would require a year-long approval process at my company. My last company was totally out of storage (limiting users to 2GB mailboxes and 250MB on their user directories) for over 6 months before we got board approval for more storage. (I don't know how much was spent at that time, but it was before 1TB drives were available, and we got 30TB of effective storage.) The cost of SSD arrays to match that would be incredibly high and would probably not get approved.

There is only one situation where I can see this having a use in IT: a database server with very small storage needs but very high performance needs, such as a domain controller for a very large domain.

On the other hand, my company's products include a virtual tape library appliance with deduplicating functions that use 2 SSDs in RAID 1 for the OS and 2 SSDs in RAID0 for the deduplication key data.

That's my take on the matter.
 

dgingeri

Distinguished
[citation][nom]larkspur[/nom]Yes, basically. It won't disable the TRIM command completely, it just won't send TRIM commands to any drive that is part of a RAID volume. That means that other SSDs on the same system that are not configured in a RAID will still receive TRIM commands.Also, the degraded performance is only with WRITE performance. If you notice degraded write performance on a RAID volume, you can still back-up the data and use a utility (provided by the SSD manufacturer) to FULLY erase the SSD RAID volume. When the volume has been fully erased (using the utility NOT Windows), its write performance will be returned to brand-new speeds. Remember that erasing = writing, so this isn't something you want to do all the time since SSDs have a limited number of writes. Only do it if you notice performance loss.

I've been running 2 x OCZ Vertex 2 (the 34nm version) in a RAID-0 for a year with moderate writing and have yet to see a performance loss in my benchmarks and have therefore had no need to do a full erase. For the typical computer, it's all about deciding what apps/data actually NEED the speed of the SSD RAID. Just use regular spinning disks for the rest.[/citation]

If the system is set up with ICH10R RAID 0 or 1, and the drivers are up to date, the TRIM commands are enabled and pass through the RAID drivers. I have the same setup as you, 2X Vertex 2 (120GB, 34nm) drives, and my performance hasn't degraded at all either. They're hooked to the ICH10R with the latest drivers, so I believe they won't degrade as much over time.

I have a 3Ware (now LSI) 9650 SATA 8-port RAID controller in my server, and I can tell you performance is a lot lower on that than on the ICH10R for an SSD RAID 0. It works great for a 4X 750GB Caviar Black array in RAID 10 mode. :)
 

MRFS

Distinguished
Dec 13, 2008
> If the system is set up with ICH10R RAID0 or 1, and the drivers are up to date, the TRIM commands are enabled and pass through the RAID drivers


That's NOT what Intel says. Repeating from above:

http://www.intel.com/support/chipsets/imsm/sb/CS-031491.htm

Intel® Rapid Storage Technology (Intel® RST)

Is there TRIM support for RAID configurations?

Intel® Rapid Storage Technology 9.6 supports TRIM in AHCI mode and in RAID mode for drives that are not part of a RAID volume.

A defect was filed to correct the information in the Help file that states that TRIM is supported on RAID volumes.

[end quote]


MRFS
 

dgingeri

Distinguished
That is news to me. It looks like they tried to make it work, thought they had it, released the info, and then someone else found it didn't. So, they had to release a correction. I had never heard about this correction. Good to know. I won't be telling people it works anymore. :)
 

enforcer22

Distinguished
Sep 10, 2006
[citation][nom]oxxfatelostxxo[/nom]To ssdlkje 1: Money vs gb.. they arn't really that expensive anymore, after rebate i spent 180$ for 2 60gb ssds, and put them in raid 0 for my OS. [/citation]

That's exactly what he meant. You spent enough money to get almost 4TB but got only 120GB. Per gigabyte, SSDs are still absurdly expensive.
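Using the figures quoted above, the per-gigabyte gap works out like this (the ~4TB-for-$180 HDD figure is taken from the post, not a current price):

```python
def dollars_per_gb(price_usd, capacity_gb):
    """Simple unit-price comparison."""
    return price_usd / capacity_gb

ssd = dollars_per_gb(180, 2 * 60)   # two 60GB SSDs for $180, per the post above
hdd = dollars_per_gb(180, 4000)     # roughly 4TB of spinning disk for the same money
print(f"SSD: ${ssd:.2f}/GB, HDD: ${hdd:.3f}/GB, ratio: {ssd / hdd:.0f}x")
# prints SSD: $1.50/GB, HDD: $0.045/GB, ratio: 33x
```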

 
Guest
I am sick and tired of people talking up SSDs.

After about two years they will grind to a halt in RAID unless the RAID card supports TRIM. Without it, the drives slowly run out of clean blocks to write to.
 

toddbailey

Distinguished
Oct 21, 2006
While SSDs are certainly faster than mechanical hard drives, I'm still waiting for someone to post data on their lifespan: just how many write cycles can one expect from an SSD, and how does that relate to a real-life typical use case where a sole SSD is used as the system disk? My laptop would greatly benefit from the low power, reduced heat, and shock resistance of an SSD. But suppose you run a test where you continuously read from and write to an SSD and an HDD. Which one dies first?
 