The Southbridge Battle: nForce 6 MCP vs. ICH7 vs. ICH8

Page 2 - Tom's Hardware community discussion
"I still don't quite understand why RAID 1 isn't faster than RAID 0. Or is it when it comes to read and not write. Say for gaming wise?"

Data is scattered on the drive - your system must constantly seek for data before it can read it, and since many operations go on at once, the drive is constantly alternating between them. RAID streamlines the read/seek process: RAID 0 writes blocks of data to different drives, so while one drive is seeking data, the other drive is reading data. The blocks can be made different sizes - a retail server that needs small chunks of data might use a 4-16K stripe, while a gamer with lots of music might use 64-128K chunks. The best size depends on the controller and the number of drives.
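A hypothetical sketch of that striping layout in Python (the stripe size and drive count here are illustrative, not tied to any particular controller):

```python
# Illustrative sketch of RAID 0 striping: logical data is cut into
# fixed-size stripes and dealt round-robin across the member drives.
STRIPE_SIZE = 64 * 1024  # 64 KB stripe, a common desktop choice
NUM_DRIVES = 2

def stripe_location(logical_offset):
    """Map a logical byte offset to (drive index, offset on that drive)."""
    stripe_index = logical_offset // STRIPE_SIZE
    drive = stripe_index % NUM_DRIVES
    # Each drive holds every NUM_DRIVES-th stripe, packed contiguously.
    offset_on_drive = (stripe_index // NUM_DRIVES) * STRIPE_SIZE \
        + (logical_offset % STRIPE_SIZE)
    return drive, offset_on_drive

# Consecutive stripes land on alternating drives, so a large
# sequential read keeps both spindles busy at once.
print(stripe_location(0))          # first stripe  -> drive 0
print(stripe_location(64 * 1024))  # second stripe -> drive 1
```

`stripe_location` is a made-up helper name for illustration; real controllers do this mapping in firmware.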

RAID 1 is one drive plus a second drive that does nothing but mirror the first - so if anything it's slower than a single drive, since it must also run the RAID overhead.

If you're looking for fast read/write, keep in mind:

If you want a really fast system you need to set up 3-4 drives in RAID 0; if those drives are 10K-RPM drives, speed increases even more since the seek time is lower. I used a 3-Raptor RAID 0 and it is very fast.

Two 7,200-RPM SATA drives in RAID 0 seem only slightly faster, but if you RAID three Raptors you notice a huge decrease in boot and load times.

If you use a dedicated RAID card you get better results regardless - search around; there is a really good article somewhere comparing RAID cards to chipset RAID. The cards are a lot faster.

warpedsystems "the need4speed"
 
No, not at all. If you run RAID 0 on SATA 1 and switch to RAID 1 on SATA 2, it will be slower. The bandwidth difference only counts if your PC can handle the throughput, and my current machine cannot, so I would not see a benefit. I went from IDE drives to SATA 1 and found no reason to go for RAID 0, as my load times were fine for me - I could get a drink while I waited. I will switch to SATA 2 shortly, but I still see no reason to switch to RAID 0. Then again, I am not fixated on load times. Also, as with all new things, it seems fast at first, but once you use it for a month or so you will still not be happy with load times, and the overall performance will no longer seem fast.
 
As I said above, I do a lot of database work. When I run queries that have to chew through a couple of million records, I need all the speed I can get. That is the only reason I use RAID 0. As far as load/boot times go, I couldn't care less. Trust me, that is the least part of my day.

I understand the security issue of RAID 0. I have only had an array crap out once (my stupid fault), but it was painful enough that I now back everything up on two different computers and a network drive. Reconstruct 50+ GB of data sometime and you learn real fast!

If the ICH8 really is 60%+ faster than the Nvidia offering, this becomes a no-brainer!
 
So are you saying that with SATA 2 and RAID 1, that is equivalent to running RAID 0 with SATA 1?
Just to be sure we're both on the same page, let me explain what RAID0 and RAID1 both are. Say you're downloading some pictures of... whatever you download pictures of. Let's use Ms. Alessandra Ambrosio as a nice example. You download a picture of her, and where does that data go? Why to your hard drive(s), of course! You already knew that, but now let's show how the different RAIDs store the picture of Alessandra. Let's assume that you download a pic of her that takes up four blocks of dataspace on your hard drive...

[Image: raidzeroff5.gif - RAID 0 striping the picture across two drives]


Here you see Alessandra is split across the two hard drives in RAID0. How is this fast? Well, you can see that each drive is only holding half of Alessandra's picture, which means both drives are writing the photo data at the same time. So your write speeds (theoretically) double because you have two drives to split the data across. Your read speeds will also increase because the computer can grab data from both drives at once. Now onto RAID1!

[Image: raidonekz3.gif - RAID 1 mirroring the picture on both drives]


Notice that there are two copies of the same picture - one on each drive. This is why RAID1 is more secure. If one hard drive fails, the other still contains Ms. Ambrosio's photo completely intact. However, this security comes at a cost. Even though you have two hard drives, you only get the capacity of one, and because you have to write the entire picture onto both drives, write speed has not increased at all. However, read speed can improve because you can, say, read her eyes from drive 0 and her hair from drive 1 at the same time.
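To make the contrast concrete, here's a toy Python model of the two levels. It's purely illustrative; `usable_capacity` and `write_targets` are made-up helper names, not any real controller API:

```python
# Toy model contrasting RAID 0 (striping) and RAID 1 (mirroring)
# for an array of `drives` disks, each holding `capacity` blocks.
def usable_capacity(level, drives, capacity):
    """Total unique-data blocks the array presents to the OS."""
    if level == 0:    # striping: every block holds unique data
        return drives * capacity
    if level == 1:    # mirroring: each block is stored on every drive
        return capacity
    raise ValueError("only RAID 0 and RAID 1 are modeled here")

def write_targets(level, drives, block):
    """Which physical drives must be written for one logical block."""
    if level == 0:
        return [block % drives]     # one drive per block: writes parallelize
    if level == 1:
        return list(range(drives))  # all drives: no write speedup
    raise ValueError("only RAID 0 and RAID 1 are modeled here")
```

For two 100-block drives, `usable_capacity(0, 2, 100)` gives 200 blocks while `usable_capacity(1, 2, 100)` gives only 100, which is the capacity cost of mirroring described above.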



And SATA 1 has roughly a 150MB/s interface speed (bandwidth) per drive, while SATA 2 has roughly 300MB/s. Since hard drives can't supply data at anywhere near those speeds anyway, it doesn't make much difference which is used.
 
I only run RAID 10/0 as a test rig - so I can build them for other people. The first array is RAID 10, the second is RAID 0. I do not think that is the best way to go.

RAID 10 arrays have crashed for other people even though mine is still up. Both Nvidia and Intel chipset RAID 10 have crashed - nForce5 and ICH7. Again, I think it's small memory errors - I was hoping someone would read my posts, explain why, and tell me.

I think the best system is a normal, non-RAID drive for your OS (even better with a Raptor), then a second RAID array for data. You back up your primary (single) drive onto the RAID, or better yet, onto an external drive.

I used to build gaming systems with an IDE OS drive and then a secondary RAID for gaming. That's three drives: one for the OS and two for gaming in RAID 0.

A new system I am going to test, as follows - I have not yet built it, only because I am lazy! I'd like to try a 5-drive system with three RAID 0 drives and two RAID 1 drives. On the new boards like the Asus P5W-DH, the primary SATA controller (ICH8) has a boot toggle on/off in the BIOS. The secondary JMicron (I think) RAID controller also has a boot on/off. You can set up a fast 3-drive RAID 0, turn off boot for the ICH8, and the JMicron controller should boot up fine. You can even set up a RAID 1 on the JMicron controller. Five low-cost Seagate SATA drives ($60-70 at Newegg for 160-250 GB) get you a 3-drive RAID 0 and a 2-drive RAID 1.

The problem with the RAID 10/0 setup is that the RAID 0 is the secondary array, so it sits on the inside of the drive platter. The outside is the fastest, since it covers more surface area per rotation, so your RAID 0 uses the slower part of the drive. If you reverse it, the RAID 10 uses the slower part instead. That's why I like the 5-drive setup above.

WarpedSystems "the need4speed"
 
So the question regarding RAID 0 on the 680i has not been answered yet.

How does this affect real-world performance? Considering its sequential read/write speeds, I would imagine any type of data transfer would suffer from this.

Is this a problem with NCQ on the motherboard conflicting with NCQ on the RaptorX hard drives?

Anybody? Bueller?
 
Thank you. I knew the difference between the RAIDs and the difference between SATA 1 and SATA 2. I was just wondering if you could take two hard drives, partition them so that you have two different partitions on each, and then make one partition RAID 0 and the other partition RAID 1.

As of now, one Seagate 160 is partitioned into 3 parts: Windows XP, Programs, and Movies. The second Seagate 160 is partitioned into 2 parts: Windows Vista 64-Bit and SUSE 10.1 64-Bit.
 
I was just wondering if you could take two hard drives, partition them so that you have two different partitions on each, and then make one partition RAID 0 and the other partition RAID 1.

As of now, one Seagate 160 is partitioned into 3 parts: Windows XP, Programs, and Movies. The second Seagate 160 is partitioned into 2 parts: Windows Vista 64-Bit and SUSE 10.1 64-Bit.
I'd reckon you could do this by following their "Intel Matrix RAID technology" plan on page six of this article. Here's another illustration (yay pictures!):

[Image: matrixstorageme1.jpg - two drives each split into partitions P1 (RAID 1) and P2 (RAID 0)]


You'll notice that you have two drives split into two partitions each. Drive 1 has partitions "P1" and "P2" (as does drive 2). Since the computer sees each partition as a different physical drive, I don't see any problem RAID1ing the P1 partitions together and RAID0ing the P2 partitions together.

What would happen if a hard drive fails, though? Here's an illustration of what you would have if, say, hard drive 2 fails:

[Image: matrixstoragebrokenmy8.jpg - the same layout with hard drive 2 failed]


Since RAID0 doesn't have any backup/redundancy, you'll end up losing 3/4 of your storage space if a hard drive dies. Which is fine, if that's what you have in mind. The stuff you keep in your RAID1 partition will still be safely stashed away. 😛

Now, seeing as you have five partitions across two drives, it would be a bigger juggling act for you, because it is best to keep all the partitions in the RAIDs the same size (otherwise you'll waste hard drive space). I don't know if you would benefit from this at all, seeing as you have three different OSes in different partitions - it might be more hassle than it's worth to set up your partitions so that some are RAID0 and some are RAID1. :|
 
Regarding the nVidia RAID bottleneck:

Two drives in simple RAID0 should be only slightly affected: the drives have a maximum output of about 50-70MB/s each, so two drives can only put out 100-140MB/s max or so, and the controller's 120MB/s wall sits right in that range.

The 4 drives used in the demo are capable of blowing past the 120MB/s wall that the nVidia setup showed, maxing out at close to 280MB/s. That setup would obviously be very limited by the nVidia wall.

Three things to consider:

Other users have posted that they did not encounter this 120MB/s wall. The out of date BIOS used in benchmarking could be a factor, as could incompatibilities with this specific HD.

Most users will not be putting 4 HD's into a RAID array for home use, so they will not encounter this bottleneck.

Even if you do have 2 Raptors in RAID0 benchmarking at 145MB/s and being bottlenecked to 120MB/s, very few tasks on a computer will cause a sustained read at max throughput for any amount of time. To sustain those throughputs you would have to have another RAID array in the same box and be copying large files between those two arrays, or be running very large queries that require tablescans in a db, or a few other specialized tasks, to notice the 15% increase in max throughput.
 
Sweet! Thanks for the great info *bookmarks page* I'll try it later when I'm not so nervous about it.

I think I'll need bigger hard drives to do what I want, though. Like a 500 gig. That way, I can have 3 partitions with an OS on each one, 1 partition for XP programs (probably not smart to store XP and Vista programs on the same partition :tongue:) and then one more partition for DVD encoding. *sighs* I wish I had another 250 GB hard drive right now so that I could go back to FAT32, the only reason being Linux use. Then again, I haven't read up on whether FAT32 is supported by Vista or not, but I will.

Once again, thanks for all the info =)
 
Regarding the nVidia RAID bottleneck:

Even if you do have 2 Raptors in RAID0 benchmarking at 145MB/s and being bottlenecked to 120MB/s, very few tasks on a computer will cause a sustained read at max throughput for any amount of time. To sustain those throughputs you would have to have another RAID array in the same box and be copying large files between those two arrays, or be running very large queries that require tablescans in a db, or a few other specialized tasks, to notice the 15% increase in max throughput.

The problem with this assertion is that even smaller reads of only a few MB will be slower on a 120MB/sec limited setup. Encoding a 20 gig HD movie or something similar will eventually have to read all 20 gigs of data, then write 4 or 5 gigs or more back to the drive. If it takes 100,000 reads to read that entire 20 gigs, even a minute time savings per read adds up.
 
The problem with this assertion is that even smaller reads of only a few MB will be slower on a 120MB/sec limited setup. Encoding a 20 gig HD movie or something similar will eventually have to read all 20 gigs of data, then write 4 or 5 gigs or more back to the drive. If it takes 100,000 reads to read that entire 20 gigs, even a minute time savings per read adds up.

True, but you now have additional factors which mean that you will not see 15% performance increase. With a seek time of 4.6ms (Raptor) factored in for the 100,000 reads, you're going to have:

ICH8:
20GB/145MB/s = 138 secs of reading
100,000*4.6ms = 460 secs of seeking
598 seconds total HD activity

nVidia:
20GB/115MB/s = 174 secs of reading
100,000*4.6ms = 460 secs of seeking
634 seconds total HD activity

That is 36 seconds, but only about 6% faster. If the data is in the middle of the platter, or in the inner area, then this 6% shrinks or disappears. And this isn't perfect math either, because well-implemented RAID systems can decrease perceived seek times and poorly implemented ones can increase them.
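Those back-of-the-envelope figures can be reproduced with a few lines of Python, using the same assumed inputs as above (20,000 MB of data, 100,000 reads, 4.6 ms average seek):

```python
# Re-running the read-time-plus-seek-time estimate from the post.
data_mb = 20_000   # ~20 GB to read
reads = 100_000    # assumed number of read operations
seek_s = 0.0046    # 4.6 ms average seek (Raptor)

def total_time(throughput_mb_s):
    """Seconds of HD activity: sequential reading plus seeking."""
    return data_mb / throughput_mb_s + reads * seek_s

ich8 = total_time(145)    # ~138 s reading + 460 s seeking ≈ 598 s
nvidia = total_time(115)  # ~174 s reading + 460 s seeking ≈ 634 s
print(round(ich8), round(nvidia), round(nvidia - ich8))
```

The difference comes out to about 36 seconds over roughly ten minutes of disk activity, because the fixed seek time dominates both totals.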

It's kinda like RAM timings. Some people brag about the extra $300 they spent getting timings that give them a few % increase in memory benchmarks, but I'd rather have that $300 for something more useful. If the nVidia chipset has features you want or need and you intend on getting two Raptors in RAID0, you should know that you might be losing a few percent on your HD throughput, but when you post a question about your system on the forums and some troll replies, "THAT NvIDIA BOARD IS KILLING YOUR HD PERF!!! I GOT A 975 WIT ICH8 AND TWO 60GB SAMSUNG 7200S THAT PWN IT!!!" you'll know the truth.
 
So would doing a RAID 0 with two raptors (SATA 1) be equivalent to RAID 0 with two SATA 2 hard drives since the raptors have lower average latency?

SATA vs. SATA 2 will only show an increase in performance when emptying the drive's cache, since no single drive can even sustain 100MB/s yet. Raptors have crazy-fast seek times and the highest throughput of any common drive, so they will definitely be faster on uncached large reads. On small, cached, and partially cached reads it is a lot harder to tell.

Of course, a 4x7200 RAID setup will generally beat a 2xRaptor RAID setup, so bang for the buck (not including electricity bills) that would probably be the best bet.
 
Sweet! Thanks for the great info *bookmarks page* I'll try it later when I'm not so nervous about it.

I think I'll need bigger hard drives to do what I want, though. Like a 500 gig. That way, I can have 3 partitions with an OS on each one, 1 partition for XP programs (probably not smart to store XP and Vista programs on the same partition :tongue:) and then one more partition for DVD encoding. *sighs* I wish I had another 250 GB hard drive right now so that I could go back to FAT32, the only reason being Linux use. Then again, I haven't read up on whether FAT32 is supported by Vista or not, but I will.

Once again, thanks for all the info =)
I'd say get more hard drives instead of bigger ones if you're gonna have various RAIDs. 😀 If you have 4 drives then you can have RAID1 on two of them and RAID0 on the other two, and then you won't have to worry about partitioning any of them (unless you really want to).

From what I've read, Vista isn't able to run on a FAT32 filesystem, but it has "support" for it, whatever that means. I'll have to read up on this subject as well. :wink:


(100th post FTW!!!!!!!!!!!!!)
 
I also noticed the use of the outdated BIOS used on the NVIDIA board. I'd really like to see some results with the current (P23) version. And soon...I'm getting down to decision time on my next purchase!
 
Thanks mr_fnord. I guess I'll just stick with my setup.

db101, I don't know if you know this, but the ASUS P5W DH Deluxe has an odd configuration when it comes to SATA ports. That's why I'll probably never use more than 5.

Possibilities in the future: 2 Hard Drives for XP/Vista in RAID 0, 2 Hard Drives for Data in RAID 1, and then 1 Hard Drive for Linux.

That's assuming that the two orange ports on the bottom will work right since they're using JMicron or whatever instead of the Intel Matrix Storage :?

NOTE: I know the RAIDs have to be on the same chip and can't be spread across chips - e.g., hard drive 1 on the Intel chipset RAIDed with hard drive 2 on the JMicron chipset will not work.

I've read some forums where people couldn't have two RAIDs because of issues between the JMicron and the Intel Matrix Storage. *shudders*

I guess I should just be glad that my computer runs right now without any problems *knocks on wood*
 
The use of an older BIOS for the testing has been mentioned previously. Does anyone have any good information on whether the "wall" for the Nvidia southbridge is a problem with the southbridge itself, or possibly a BIOS problem that "may" be resolved in future releases?
 
Am I reading something wrong?

The RAID 5 numbers are way better than the RAID 0+1 transfers on all the chipsets in the summaries.

I want Matrix RAID so I can do RAID 0+1 for the OS and swap, and RAID 5 on another partition.

probably 4x400 or 4x320GB, maybe even 4x500GB.

But if RAID 0+1 can't beat the RAID 5, I might as well do the whole thing in RAID 5.

I'd like to avoid RAID0 across four drives.

RAID 0+1 should be equal to RAID 0 on reads and half as fast on writes. There are no parity computations, so it should be faster than RAID 5.
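As a rough rule of thumb, and ignoring controller quality, caching, and the RAID 5 small-write (read-modify-write) penalty, the idealized scaling per level can be sketched like this (illustrative Python, not a benchmark; `ideal_scaling` is a made-up name):

```python
# Rough, idealized throughput multipliers relative to a single drive,
# for an array of n drives. Real arrays fall short of these ceilings.
def ideal_scaling(level, n):
    if level == "0":      # striping: reads and writes both scale
        return {"read": n, "write": n}
    if level == "1":      # mirror: reads can split, writes don't scale
        return {"read": n, "write": 1}
    if level == "0+1":    # stripe of mirrors: n/2 unique-data spindles
        return {"read": n, "write": n / 2}
    if level == "5":      # one drive's worth of capacity lost to parity;
                          # large sequential writes only - small writes
                          # pay a read-modify-write penalty on top
        return {"read": n - 1, "write": n - 1}
    raise ValueError(level)

print(ideal_scaling("0+1", 4))  # {'read': 4, 'write': 2.0}
print(ideal_scaling("5", 4))    # {'read': 3, 'write': 3}
```

Under this idealization a 4-drive RAID 0+1 matches RAID 0 on reads and halves writes, as the post says; whether it beats RAID 5 in practice then hinges on the parity overhead the sketch deliberately leaves out.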
 
So the question regarding RAID 0 on the 680i has not been answered yet.

How does this affect real-world performance? Considering its sequential read/write speeds, I would imagine any type of data transfer would suffer from this.

Is this a problem with NCQ on the motherboard conflicting with NCQ on the RaptorX hard drives?

Anybody? Bueller?

Hello, I'm French.
I have an EVGA 680i motherboard and three 74 GB Raptor disks in RAID 0.
I found the solution to get past the 110 MB/s limit:
you must disable "read caching" for every disk in your RAID 0 in the serial ATA controller settings.
Sorry for my limited English.
 
I'm currently looking into building a couple of new systems for Vista; any information on how well these chipset-based RAID systems work under it?