How to configure ASUS P6T with lone SSD + SATA RAID?

Status
Not open for further replies.

stockstradr

Distinguished
Aug 3, 2009
An obscure question, but the Tom's Hardware forums are obviously filled with some of the most experienced people I've ever seen post on these topics, so I appreciate your very valuable advice!

Two questions, both related to setting up storage on the ASUS P6T (standard, not the Deluxe version) motherboard in the homebuilt system I'm now building:

1) How do I best configure the combination of a single SSD (to hold the OS and most programs) plus a separate RAID array of SATA drives when using the ASUS P6T (CPU: i7 920) with Windows Vista?

Now the details. I bought the Intel X25-M 80GB Solid State Drive to hold Windows Vista Ultimate 64-bit and most of my programs. With the SSD I'm looking for SPEED, both in starting the OS and programs and in fast local read/write: I'll edit HD video locally on the SSD, and will move finished work onto the SATA RAID.

I only need 2 TB of SATA drive storage space. For fun, and for speed, I'd like to set up the SATA drives as a RAID array. The question is where to attach the SSD to the P6T.

Here's where it gets complicated:

The P6T has:
A) 6 x SATA 3 Gb/s ports on the ICH10R Southbridge, which (using Intel Matrix Storage) support SATA RAID 0, 1, 5, and 10.

B) Plus a JMicron JMB322 controller with 2 x SATA 3 Gb/s ports.

Obviously the SATA RAID has to go onto the ICH10R Southbridge ports. However, multiple online forums and test reports say or imply that the JMicron controller has issues that effectively limit the performance of SSDs. (Read http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=1)

(Yes, I bought the wrong board. I should have bought the P6T DELUXE which doesn't use the JMicron as the second SATA controller.)

NOTE: I do not want to buy a $300 RAID controller card. (The financial controller, also known as "wife," says the PC cost ceiling has been hit.)

QUESTION: So can and should I put the Intel X25-M SSD onto the first of the six ICH10R Southbridge ports, then put my four WD Caviar Black 1 TB drives onto the next four ICH10R ports and configure those SATA drives as a RAID array?

2) Second question is simply: assuming I only need 2 TB of RAID storage space, should I go RAID 0+1 (RAID 10), or should I go RAID 5 (which gives me 3 TB using the four drives)?

Why do I ask? I had thought of doing RAID 5 with 3 x 1 TB SATA = 2 TB of storage space. Then I read many online test reports indicating performance bottlenecks when trying to implement RAID 5 using onboard RAID controller chips. Those articles didn't specifically use the P6T, but generally concluded you need a dedicated RAID controller card (e.g., Adaptec) to do a RAID 5 that meets performance expectations. In any case, RAID 0+1 is faster than RAID 5 on essentially all parameters.

So I thought, "Buying a fourth 1 TB drive is less expensive than buying a RAID controller card. RAID 0 beats RAID 5 on performance, so I might as well do a four-drive RAID 0+1, to yield 2 TB of very fast AND mirrored drives."

Any holes in that theory?
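The capacity arithmetic in the question can be sketched in a few lines. This is my own illustration (not from any vendor tool), using the textbook usable-capacity formulas for the common RAID levels:

```python
# Usable-capacity trade-off for n identical drives, per the standard
# definitions of each RAID level. My own sketch, not a vendor calculator.
def usable_tb(level, n_drives, size_tb):
    """Usable capacity in TB for a few common RAID levels."""
    if level == "RAID0":
        return n_drives * size_tb        # pure striping, no redundancy
    if level == "RAID1":
        return size_tb                   # everything mirrored
    if level == "RAID5":
        return (n_drives - 1) * size_tb  # one drive's worth of parity
    if level == "RAID10":
        return (n_drives // 2) * size_tb # mirrored pairs, then striped
    raise ValueError(level)

# Four 1 TB WD Caviar Blacks, as in the post:
print(usable_tb("RAID10", 4, 1))  # 2 TB usable, mirrored
print(usable_tb("RAID5", 4, 1))   # 3 TB usable, survives any single failure
```

So the four-drive RAID 10 does land exactly on the 2 TB target, while RAID 5 with the same drives gives 3 TB, matching the numbers in the post.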

DETAILS ON MY SYSTEM:

CPU: i7 920
OS: Windows Vista Ultimate 64-bit
MOBO: P6T (standard not the Deluxe or the V2)
Memory: OCZ 12 GB Gold 1600 MHz DDR3
Case: Cooler Master 932 HAF
PSU: Ultra 750 Watt
Video Card: GTX260


My goal is a system that reads/writes HD video very fast, locally on the SSD. Then I move completed video files over to the SATA RAID, but I would also like that SATA RAID to be fast.
 
Solution
I just redid the storage on my desktop, and it ended up being similar to this discussion. I have the OS running on a 60 GB SSD, and two 500 GB drives set up in RAID 0 as secondary storage. The SATA option in the ASUS BIOS should be set to RAID; this will not affect the SSD, which stays non-RAID. The RAID array is built in the Intel BIOS RAID configuration tool (Ctrl-I during boot-up). When you set the members and RAID level, you should see all the drives currently connected to the SATA bus. Select the drives you want, and the RAID level. Once the array is defined, the regular BIOS will treat your hard drives as a single logical disk. Under "Hard Drives" make sure the SSD is selected first, and then back out to "Boot Order" and make...

octopus77

Distinguished
Jan 15, 2010
Hello:
I found this thread when facing the same problem as many of you: I want to set up a RAID 1 config with Windows 7 already installed, and I found the solution given here by "Ultimaswc3" to work like a charm. THANK YOU!!!
But now I'm concerned about the TLER issue you pointed out.
I have two WD Caviar Black 1 TB disks I just bought, so I tried the TLER utility, only to find out that it is disabled, as pointed out here :(.
So, do I just return the drives and buy another brand? I want to do a simple RAID 1 setup with two disks; would it be a problem if I don't have TLER enabled?
Thank you in advance
 

rob_v

Distinguished
Nov 23, 2009
First off, thanks to stockstradr for a very detailed post. I wish I had found this a bit sooner!!

I just picked up the same SSD drive you mentioned (Intel X25-M 80GB).
My current setup is one HD for my OS and apps, with a 500 GB RAID array for storage.
The plan is to use the new SSD for the OS and apps.

I have a Gigabyte EX58-UD5 mobo; it looks like it has the same SATA connectors as your board:
6 x SATA 3 Gb/s ICH10R Southbridge ports (using Intel Matrix Storage)
Plus the JMicron JMB322 2 x SATA 3 Gb/s
Also, I'm installing Windows 7 Professional.

Everything I've been reading says that the SSD shouldn't be installed with the controller set to RAID, but to AHCI.

On my first install, I put the SSD stand-alone on the ICH10R ports with the BIOS set to AHCI.
Added the OS using USB (instead of the disc).
Booted fine.

Then I switched the setting to RAID to add the array. Got to the point where the Windows logo came up, and it hung.
Removed the RAID HDs, put the setting back to AHCI, and it booted up.

OK, next I put the setting back to RAID and did a reinstall.
This time the OS installed fine.
Added the two drives for the RAID, hit Ctrl-I, created the array; no problems.

I installed my apps, but I'm noticing some serious lags and hangs. The system just seems very unstable.
Not really sure if it's because of the reformat on the new install or what, but I'm going to try one more time; first, though, I'll wipe everything clean with HDDErase and then follow your instructions.

My question is: you mention that during the OS install you paused it...

"
SOLUTION FOR SETUP:

NOTE: with Vista and Windows 7, the setup is easy because those OSes allow you to conveniently pause during the OS install and load the driver for the ICH10R Intel Matrix Storage Manager, AND don't force you to load only from the floppy A: drive.
"
.
.
.
5) Load the OS and pause (it will give you the option during install) so you can load the ICH10R driver from the ASUS disk that came with the MoBo. Then continue installing your OS onto the SSD.

NOTE: remember, you must NOT setup that SSD as a standard drive in the ASUS setup; it has to be listed under the RAID configuration as a "Non-RAID Disk" then you'll install the OS onto that SSD.
"

Not to sound like an idiot, but I've reinstalled this OS on this machine about 5 times now, and I'm just not seeing the option to pause.

Also, as a side note...

I'm assuming that since the SSD is controlled with the setting set to RAID, TRIM is not enabled? Or is this only the case when SSDs are part of a RAID array?
If that is the case, are you using the Intel SSD Toolbox and running the optimizer manually?

Thanks
Rob

 

Deviling Master

Distinguished
Feb 13, 2010

Hi, I have the same situation (SSD + 2 HDs in RAID 1), but I have the ASUS P6T Deluxe V2. Can I follow this guide?
 

exm

Distinguished
Nov 24, 2009



Update to everyone having this issue. This fix worked for me as well (GA-P55A-UD4P using the P55 controller; SSD with a WD 750 GB RAID 1 setup). Here's the copy-and-paste from the link above that fixed the issue for me:

Changed the BIOS back to IDE Enhanced so Windows could boot.

In the Windows\System32\Drivers folder is a file called iaStorV.sys, installed by the OS by default.

Into the registry we go.

Navigate to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iaStorV

Change REG_DWORD "Start" from 3 to 0

Reboot

Go into the BIOS and change the SATA configuration back to RAID
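For anyone who prefers the command line, the manual registry edit above can also be done with the stock `reg.exe` tool from an elevated Command Prompt. This is just an equivalent of the steps listed, assuming the default `iaStorV` service path:

```shell
:: Same change as the manual registry edit above: set the iaStorV
:: (Intel RAID/AHCI) driver to start at boot (Start = 0) before
:: switching the BIOS SATA mode to RAID.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\iaStorV" /v Start /t REG_DWORD /d 0 /f
```

Then reboot and flip the BIOS SATA setting to RAID, as described above.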
 

stockstradr

Distinguished
Aug 3, 2009
Yes, it's me, the guy who originally started this thread. I'm adding this final post:

HOW WOULD I CONFIGURE MY RAID KNOWING WHAT I KNOW NOW?

That's easy to answer because several months ago I did just that: I completely tore down my RAID and rebuilt it from scratch the right way.

(The recommendations below will be no surprise to IT professionals who configure RAIDs all the time.)

1) Go with a reputable hardware-based RAID card. I finally went with the LSI MegaRAID® SAS 9260-8i, with great results:
http://www.lsi.com/storage_home/products_home/internal_raid/megaraid_sas/6gb_s_value_line/sas9260-8i/

NOTE: It pays to get at least 6 channels (the 9260-8i has 8), because if you have the bucks (and overkill mentality) to spend THAT much on hardware RAID for a home PC, then you'll end up (now or eventually) with a RAID involving at least four SAS (or SATA) drives. Furthermore, anyone with that overkill mentality for a home PC will surely have put the OS (and a smattering of programs) on a good SSD for blazing-fast boot-up and random-access speeds.

So think ahead about how ALSO placing that SSD on a channel of your hardware RAID card will increase its performance significantly (compared to hooking it up to a mobo-based SATA port). That performance boost for SSDs (such as the Crucial C300) plugged into RAID cards has been confirmed in several online reports.

But the SSD takes up a channel.

And you may want a hot-spare drive online (hooked up to your MegaRAID card), because most good hardware RAID cards are smart enough to automatically rebuild to a hot spare if they detect a drive has dropped.

So right there: a channel for that hot spare, plus an additional channel for your SSD. Add those to your 4-drive array and you need at least SIX channels.
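The channel budget above is just addition, but writing it out makes the headroom argument concrete (my own back-of-envelope sketch, not anything from LSI's documentation):

```python
# Channel budget for the setup described in the post.
array_drives = 4   # four-drive RAID 10 array
hot_spare    = 1   # online spare the card can rebuild to automatically
os_ssd       = 1   # OS SSD hung off the RAID card for the extra speed

channels_needed = array_drives + hot_spare + os_ssd
print(channels_needed)  # 6 -> an 8-channel card like the 9260-8i leaves headroom
```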

2) As mentioned above, put your OS onto a great SSD (such as the Crucial C300) and hook it directly onto a RAID card channel.

3) Build your RAID using better desktop-series drives that have shown good success in RAID setups, such as the Hitachi Deskstar 7K3000 series, or for real reliability go with "enterprise"-tier drives from any reputable HDD manufacturer like WD or Hitachi.

I'm really happy with the speed I'm getting from the 2 TB version of the Hitachi Deskstar 7K3000 series: about 270 MB/s read and write speeds in a RAID 10 configuration. You can imagine what you might get in a pure striping configuration, such as four of these in RAID 0.

But of course, I discovered the MAIN benefit of doing (mirrored) RAID the right way is PEACE OF MIND with regard to keeping data safe from HDD failure.
 

gcheris

Distinguished
Jun 30, 2011
Having read all the above posts, albeit after I configured my RAID 10 with 4 x 1 TB Hitachi drives, I have a drive the P6T won't recognize. I also have two DVD drives, which have to run off the Intel connectors since the JMicron won't handle ATAPI. So that fills up the Intel connectors. My problem is I have another 1 TB Hitachi loaded with images that I'd like to keep inside the tower. I plugged it into the JMicron port and the system will not recognize the drive. It had worked just fine before I switched to RAID.

Any ideas?

Thanks!!
 

lrk322

Distinguished
Jan 3, 2012

I had a 36 GB 10,000 RPM drive in RAID 0, and when I went to RAID 10 with 4 of the same drives I got a better score on my PC: I went from 5.2 to 6.3 in performance, if that helps you any.
 

bookworm370

Distinguished
Jan 22, 2012
Yes, that's exactly what you will see if you go from a RAID 0 to a RAID 10 architecture with any kind of cache prefetching or write-through enabled.

Remember, for example, with a 2-drive RAID 0 versus a 4-drive RAID 10 configuration: with 2 drives the controller has 2 actuators but only 1 choice as to where to get the data from. As the data is striped, on reads or writes, if the sectors are anywhere near sequential, the controller seeks the arms to the first read, and then gives the command for the 'sister' arm to go to the next sector on the other drive. That way, when the data comes in for the first sector, the 2nd drive has already been given the command to start streaming the data from its sector. That's why RAID 0 is so fast. And if it's on the same track, the arms don't even have to move; by the time the rotational delay transfers the data from the 2nd drive, the 1st drive's 3rd sector is coming under the head.

Now, with RAID 10, the controller has 4 arms and 2 choices as to where to get the data from. As the data is mirrored, again in read situations, it really doesn't matter which of the 4 drives (or the 2 brother drives) it gets the sector from. In this case, the controller can get sector 1 from drive 1, sector 2 from drive 4 (1,3 and 2,4 mirrored, while 1,2 and 3,4 are striped), sector 3 from drive 2, sector 4 from drive 4, etc. I.e., for each sector, it can pick the drive whose arm is closest to the data.

Well, for a single app, maybe not all that much of a performance boost, but consider 2 apps that want data. If both are reading, then the controller has requests from multiple threads. It can position the arms on one striped mirror pair for optimum reads for thread 1, and use the arms of the other mirror to get the data for thread 2. Remember, we're talking reads here, but the same applies for writes. With write-through caching enabled on the controller and using the drive caches, RAID controllers write metadata to the tracks. That way they can check data integrity and whether the drives are synced. So a really smart controller like the 3ware/LSI can stagger the writes as well, giving priority to one of the writes and then caching the mirror write to the other drive. The data still remains in the cache, so if anyone wants to read it, it's not really a seek operation to get it; the controller just returns the fresh cached data.

So you can see that RAID 10, in an apples-to-apples comparison (2x2, 4x4), will outperform the same number of striped drives any day, given decent logic in the controller (e.g., I'm ignoring Marvell for this discussion).
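The "pick the closest arm" idea above can be sketched as a toy scheduler. This is my own model of the concept, not real controller firmware: in RAID 10 each striped sector lives on two mirrored drives, so a read can be served by whichever copy's head is nearer the target.

```python
# Toy model of RAID 10 read scheduling: each striped sector is mirrored
# on two drives, so the controller can pick the copy with the shorter seek.
def pick_mirror(head_positions, mirror_pair, target_lba):
    """Return the drive in `mirror_pair` whose head is nearest `target_lba`."""
    return min(mirror_pair, key=lambda d: abs(head_positions[d] - target_lba))

# Four drives: (0, 1) mirror one stripe half, (2, 3) mirror the other.
heads = {0: 100, 1: 900, 2: 500, 3: 50}
print(pick_mirror(heads, (0, 1), 120))  # drive 0: head at 100 is closer to 120
print(pick_mirror(heads, (2, 3), 40))   # drive 3: head at 50 is closer to 40
```

A plain RAID 0 has no such choice: each sector lives on exactly one drive, so the arm has to travel wherever the data is.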

Now, add to the above the ingenious methods used by WD and Hitachi in their higher-end drives (like the WD Black editions). Not only does WD double each drive's on-board cache, but in the WD case, each drive has 2 (TWO) processors on the backplane. One handles the electromechanics, while the other, absent any pending pre-seeks from the controller, tries to figure out the drive's own access pattern and proactively seeks the arm, or in the case of a sequential read, just sucks in an entire track of data, assuming the controller will ask for the next sector. That way, when the controller does ask for the next sector, bam, it's already in memory, and it's just a full-speed SATA-channel transfer rivaling SSD speeds instead of waiting for the disk to spin. If the next sector is not what the controller wants, no harm, no foul; it just does what any non-intelligent drive would do: discard the data and go on its way to get the new data.

So in almost every case, a RAID 10 will outperform a RAID 0 or RAID 5, period. It's the fastest you will get in arrayed technology. You pay, as in RAID 1, with half the raw capacity going to mirrored copies, but you also get the payoff of fully redundant drives, fault tolerance, and, as I described above, even better performance than an equivalent RAID 0 configuration.

You'll also notice a dramatic increase using something like VMware ESXi. As the hypervisor schedules the reads and writes, for the same amount of required data storage the performance factor is outstanding if you compare, say, 4 x 1 TB drives (2x2) in a RAID 10 (2 TB of useful payload) against 8 x 500 GB drives (4x4) for the same 2 TB useful payload. The same workload will take off and fly!

Now, add the battery-backed cache to the 3ware controller card, so the card knows that even if there is a power failure it can finish the delayed writes when the power comes back. In a home-grade system, the controller has no control over the power; the drives are going to spin down anyway, so it just saves the data changes until power is reapplied. In enterprise SANs like EMC, there are honking big batteries with the capacity to keep all the drives spinning for the 10-20 seconds that may be required to flush the caches to the disks, soft power them down by parking the heads, and then issue a spin-down command, after which the EMC can power down the SAN.

I hope the above rambling sheds some light on whether someone should install RAID 0, 1, and/or 10.
 