Slow Performance: 4x OCZ SSDs and Adaptec RAID Controller

I'm guessing that Intel stuffed about 128MB of DRAM (volatile) write cache onto the X25-E, which would also explain the device's high power consumption.

http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=10

Take a CLOSE look at the Samsung DRAM part number for the X25-M.

http://www.bit-tech.net/news/2008/08/20/intel-x18-m-80gb-ssd-smiles-for-the-camera/1

16MB for the X25-M, which is what modern hard drives have.

http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=8

11K IOPS at 100% random write with a queue depth of 1.

"which we presume acts as a buffer and helps the Advanced Dynamic Write Levelling technology do its thing."

"Finding good data on the JMicron JMF602 controller is nearly impossible, but from what I've heard it's got 16KB of on-chip memory for read/write requests. By comparison, Intel's controller has a 256KB SRAM on-die."

And the X25-M is the MLC model, which is known to have more fundamental write problems.
 
Legacy OSes like Windows Vista and XP, and applications like Microsoft Office 2003 and 2007, have built-in, inherent flaws with regard to SSDs.

Specifically, the optimizations these OSes make for mechanical hard drives, such as SuperFetch and prefetch, tend to hurt rather than help performance. They are unnecessary for speeding up reads on an SSD, and they slow it down with needless small-file writes, which SSDs handle more slowly than a regular hard drive.

Things like Vista's automatic drive defragmentation do nothing for SSDs except slow them down.

Properly optimized, even low-cost 2007-generation SSDs test out as equivalent to a 7200 RPM consumer-grade drive, and typical SSDs made in 2008 or later tend to outperform mechanical hard drives.

See the thread below for a detailed discussion of SSD performance tweaks and what it takes to make them perform well with legacy OSes and applications.

http://www.ocztechnologyforum.com/forum...display.php?s=&daysprune=&f=88

 


What I think is that the term "legacy OS" in the Windows world means OSes before Win2K.

Like 9x and the NTs.

Other than that, I fully agree.

There are plenty of optimization tips for SSDs in the OCZ forums, and most of them also work on Intel SSDs, with some applying to spinning drives too.

Like disabling 8.3 names and indexing.

I am still trying to pin down one little fact: which cluster size is better for SSD drives, smaller or bigger?

Since they have vastly faster seek times, one might think smaller clusters. Somebody said it doesn't matter.. =/
 


I thought that SuperFetch meant using something like a USB memory stick as an alternative store for system files, because even USB is faster than a normal hard drive when booting the computer.

.j
 
That's ReadyBoost. SuperFetch pulls commonly used files into available RAM when the system is idle, so when you use them, the loading is practically instant. It's also why Vista appears to use so much RAM, although it will instantly free up all of the RAM used by SuperFetch if another program needs it.
 
Thanks for the info. I have not yet moved from XP, so it is a bit irrelevant to me. =)
But sooner or later...
 
I just yesterday ripped out my Adaptec 51645 SAS/SATA controller with the battery backup on the card. I had 8 OCZ SSDs, 4 WD VR drives, and 4 Savvio 10K.2 drives hooked up to it. I have tons of benches on the Savvios and the VRs and never really had problems. The Adaptec gets hot, though not to the point of overheating, without the SSD drives. Once I hooked up the SSD drives, my alarm suddenly kept going off. The card seems to get much hotter when trying to run these things, and I had to put a fan by it. That was not too bad, so I proceeded to test some more. I kept getting lockups, or drives would suddenly go missing from the array; after rebooting they were back, though. I was getting good performance when it ran, but I just could not seem to get past the problem. So I hooked 6 of them up to my motherboard's Intel controller running RAID 0 and have not had one problem at all. When I run benchmarks, though, I get low numbers, yet the drives seem to respond quickly, so I am not sure the benchmark is reporting real throughput.

I am kinda bummed about this card. I now have an Adaptec 31605, an Areca 1680ix-16 (with 4GB cache), and this Adaptec 51645, and none of them is really perfect. The Adaptecs are much more refined than the Areca but still have problems. I am wondering if maybe the simpler RAID cards are better for SSDs or something.

Sorry if there is nothing in here to help the original poster, but I just thought I would share my experience. I too was not happy. I was getting insane benchmarks but could not keep it stable.
 
I think that the "read/write" penalty comment on the Intel, although headed down a good path, is also very misinformed.

Many single-threaded workloads cannot benefit from the parallel operations of modern RAID arrays. An SSD that can perform 10,000 random mixed IOPS at 8K has a MAJOR advantage over 50 HDDs that can perform at 200 IOPS each. Sure, the 50 drives probably have an enormous sequential read/write capability, and they have incredible $/MB. But that's not why you buy SSD.

An Intel E drive is for a workload in which single- or low-threaded performance is necessary. Try running a 50/50 read/write, 8K random workload (OLTP) on your 50 drives, and you'll find that the latency isn't much better than it is on one drive: about 5 to 7 ms on an enterprise drive and >10 ms on a large SATA.

If possible, you should never place an SSD behind a RAID card; it actually reduces IOPS. Instead, put it as close to your system bus as possible. Sun is putting six SATA buses on their Intel-based (4150) servers, almost directly connected to the I/O chip.

On a single E series, we see almost 14,000 IOPS sustained random at 8K. We see 14,000 IOPS read; of course it's a little less if you mix in some streams. But the point is that we see <200 µs (microseconds). That's about 35 times better than a 10K RPM FC drive, and a heck of a lot better than a 7,200 RPM drive.

So the rhetoric works only when you're considering a certain workload. In situations where latency and single-threaded random performance are needed, such as OLTP, SSD is king.
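To put some rough numbers on the single-threaded point, here is a back-of-the-envelope sketch (Python; the latencies are the figures quoted in this post, and the model simply assumes one request in flight at a time):

```python
# Rough model: with a single-threaded (queue depth 1) workload, only one
# request is ever in flight, so throughput is bounded by 1 / latency
# no matter how many drives sit behind the controller.

def qd1_iops(latency_seconds):
    """Best-case IOPS a single thread can drive at the given per-op latency."""
    return 1.0 / latency_seconds

hdd_latency = 6e-3      # ~5-7 ms for an enterprise drive (figure from this post)
ssd_latency = 200e-6    # <200 us quoted for the Intel E series

print(f"50-HDD array, QD=1: ~{qd1_iops(hdd_latency):,.0f} IOPS (array width doesn't help)")
print(f"Single SSD,   QD=1: ~{qd1_iops(ssd_latency):,.0f} IOPS")
```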

Also, Intel SSDs have supercapacitors to retain the DRAM contents in case of a power outage (not sure for how long).

Finally, if you're interested in using this in your computer and you fear:
-- Power outages / data loss / untried technology...
-- Writes getting in the way of reads...
-- High cost per MB...
-- Sequential performance suffering / interfering...
You should be looking at Solaris ZFS with ZIL and L2ARC.
http://blogs.sun.com/brendan/entry/test
http://blogs.sun.com/perrin/entry/the_lumberjack

You can build a massive volume and get great read latency where it matters, and microsecond synchronous commits by adding a few cheap SSD's...

--Ken
 
This issue is obvious in the first post! The RAID card in question is an 8x PCIe card and is sitting in a 4x-only slot. You might as well have just used the onboard controller, because you are killing the RAID card's I/O performance.

I now have 4 Samsung SLC SSDs (the same ones that OCZ uses) in RAID, tested on both an Adaptec 5405 and a HighPoint 3510. My board is an ASUS P5Q Pro with a Q9300 and 2 GB of 1066 RAM. The first 2 drives started on the ICH10R in RAID 0, maxing out at 134 MB/s reads. I got the Adaptec 5405 and 2 more drives, and that instantly hit 435 MB/s reads in the 8x slot. I got really disappointed by the long boot time of the Adaptec controller and saw some reviews indicating the HighPoint did not have this issue. The HighPoint did not match the Adaptec with its shipping BIOS, only giving 380-ish reads. I flashed to the latest and now have the 430 MB/s reads back and a much shorter boot.

I will say I'm not knocking the Adaptec; it's a better card with way better management software! It just takes longer to get past its BIOS, and I don't see that changing with newer firmware unless Adaptec completely overhauls the whole software and firmware. The HighPoint shouldn't be downgraded either; it just does things differently. The HPT 3510/20 only lets you manage the drives and arrays in the management software, not the card settings! The card settings must be changed in the BIOS at boot time. Adaptec lets you do pretty much anything in the management console. Both cards are adequate in home or server environments. The Adaptec has the faster 1.2 GHz chip vs. the 800 MHz on the 3510/20, and both have 256MB of DDR cache. Both chips have similar power dissipation, 11 watts for the 800 MHz and 12 watts for the 1.2 GHz, and the Adaptec has a slightly larger heatsink. The Adaptec is ever so slightly faster and has better management software, but it is slower at boot and cost me $47 more. The 3510 is also sold on the Egg, whereas you have to go somewhere else to find the Adaptec.

All that being said, get a board that supports the RAID card and SSD investment. I have also tried both cards on an older AMD rig using an M2N32-SLI (Vista Premium, X2 6400+ at 3.2 GHz) and got the exact same speeds from both cards, while the onboard RAID (590 chipset) sucked at 130-ish reads. I have played with all the different stripe sizes, and 256 on either RAID card is the best with SSDs. I should also point out that I have a pair of Raptors (not Velocis) that do fine on the onboard chipsets in RAID. I can't explain why, but the SSDs MUST be on a true HW RAID card capable of full bandwidth. No PCIe 4x junk allowed; although theory says 4 lanes should support 400 MB/s, it doesn't work out that way! I have no way of testing a 4x bus with any of my rigs, but I am sure this is your issue.

I have also seen reports that MLC drives can hit their factory-advertised specs and beat my SLC drives if they are on one of these two RAID cards. $300 seems to be the entry price, plus an 8x PCIe slot! Looking back, 4 MLC drives and the RAID card would have been cheaper and faster (no write-stuttering issues on these cards), at the cost of reduced lifespan (MLC write amplification).
I should also note that I strongly recommend the battery backup module for the cache on both of these cards, especially if you use MLC drives, because we KNOW the cache will be holding writes waiting to go to the drives! A power loss at the wrong moment could pretty much guarantee data loss on a write!
 
This is my RAID setup; I think it's the fastest for the money, period. I use six OCZ Throttle eSATA 8GB drives in RAID 0. The drives are rated at 90 MB/s read and 30 MB/s write. I get one 45GB partition. I put Windows 7 build 7127 x64 and a couple of games on it, GTA4. Write cache is enabled in the ICH10R Intel storage manager. The onboard RAID is great. Doesn't use up my slots... The theoretical max speed is 540 MB/s, but I get 520 MB/s, which equals a 7.7 rating in disk performance. The write performance is about 180 MB/s, which doesn't seem to drop by 20 MB/s the way the read speed does. EP45-DS3L


http://www.ocztechnologyforum.com/forum/showthread.php?t=55433
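For reference, the theoretical figures in the post above are just the per-drive ratings multiplied by the number of RAID 0 members; a quick sketch of that arithmetic (Python, using the rated numbers quoted in the post and ignoring controller and bus overhead):

```python
# Aggregate RAID 0 throughput is roughly N x the per-drive rate, which is
# where the 540 MB/s read and 180 MB/s write ceilings come from.

drives = 6
read_rate_mb_s = 90    # per-drive rated sequential read (OCZ Throttle, per the post)
write_rate_mb_s = 30   # per-drive rated sequential write

print(f"Theoretical read:  {drives * read_rate_mb_s} MB/s (observed ~520 MB/s)")
print(f"Theoretical write: {drives * write_rate_mb_s} MB/s (observed ~180 MB/s)")
```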
 
The Intel X25-E uses most of the DRAM to cache the page map and only has a small amount for caching user data.

7000 IOPS with a 2:1 split works out to about 4668 read IOPS and 2334 write IOPS, counting IOPS rather than time. The reads would take ~0.133s (4668/35000) and the writes ~0.707s (2334/3300), for a total of 0.841s, leaving 0.159s of overhead based on the quoted stats (4668+2334=7002). The theoretical max would be 5552+2776=8328.
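Reproducing that arithmetic as a quick sketch (Python; the 35,000 read IOPS and 3,300 write IOPS figures are taken as given from the quoted stats, and rounding differs from the paragraph above by an IOP or two):

```python
# Mixed-workload model: with a 2:1 read:write split, each second of wall time
# is divided between reads (at the 100% read rate) and writes (at the 100% write rate).

read_iops_max = 35_000   # quoted 100% read rate
write_iops_max = 3_300   # quoted 100% write rate

# Observed mix: ~7,000 IOPS at 2:1
reads, writes = 4_668, 2_334
busy = reads / read_iops_max + writes / write_iops_max
print(f"time spent: {busy:.3f} s of 1 s -> {1 - busy:.3f} s unaccounted overhead")

# Theoretical max at 2:1: choose w writes and 2w reads so the second is exactly full.
w = 1 / (2 / read_iops_max + 1 / write_iops_max)
print(f"theoretical 2:1 max: {2*w:.0f} read + {w:.0f} write = {3*w:.0f} IOPS")
```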

SSDs may like a smaller stripe setting for best alignment and a smaller minimum write. Does a 4KB write to a 64KB stripe cause all 64KB to be read and re-written?
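As a general note on that question, here is a rough striping model (Python; this describes RAID semantics in general and is an assumption, not measured behavior of any particular controller): in plain RAID 0 a small write only touches the stripe unit it falls in, while parity RAID is where the small-write read-modify-write penalty comes in.

```python
# Generic striping model, not a claim about any specific Adaptec/Areca firmware.

def stripe_units_touched(offset, length, stripe_unit):
    """How many stripe units a write of `length` bytes at `offset` overlaps."""
    first = offset // stripe_unit
    last = (offset + length - 1) // stripe_unit
    return last - first + 1

# An aligned 4KB write to a RAID 0 array with a 64KB stripe unit touches
# exactly one unit and only 4KB of media -- no 64KB read/rewrite is needed.
print(stripe_units_touched(0, 4096, 65536))   # -> 1

# In parity RAID (RAID 5/6) the same small write still incurs a read-modify-write
# of the old data and parity chunks (~4 I/Os), which is where the real
# small-write pain comes from.
```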

The Intel SSD operates its 10 channels independently, so you need an I/O queue depth larger than 10 (or maybe several MB per I/O); otherwise performance is closer to that of 1 channel.
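A crude model of that queue-depth point (Python; the 10-channel figure is from this post, and the per-channel rate is an illustrative assumption chosen so that 10 channels roughly match the 35K read IOPS mentioned earlier):

```python
# Crude model: small random I/O only scales until every flash channel has work
# queued, i.e. up to a queue depth of roughly the channel count.

CHANNELS = 10              # from this post: the Intel SSD runs 10 channels independently
PER_CHANNEL_IOPS = 3_500   # illustrative assumption, not an Intel spec

def approx_read_iops(queue_depth):
    return PER_CHANNEL_IOPS * min(queue_depth, CHANNELS)

for qd in (1, 2, 4, 10, 32):
    print(f"QD {qd:>2}: ~{approx_read_iops(qd):,} IOPS")
```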