The Southbridge Battle: nForce 6 MCP vs. ICH7 vs. ICH8

I just rebuilt an array with 4 x 400 GB WD 4000 RE2s, and it took about 3-4 hours. Normally it takes longer - more like 10 hours - but I turned on the hard disks' memory caches this time and it snapped through. When one of them was a 500 GB WD 5000 RE?, it went even faster.

I don't think the Intel onboard RAID uses the CPU for parity calculation. CPU load stays very low. It's possible I'm wrong, but if it did, I'd expect the CPU to peg whenever I write anything to disk in large amounts, and that's just not the case at all.
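For what it's worth, RAID 5 parity is just a block-wise XOR across the data blocks of a stripe, so wherever it runs the arithmetic itself is cheap; here's a minimal Python sketch of the idea (the stripe contents are made up purely for illustration):

```python
# Minimal sketch of RAID 5 parity: the parity block is the byte-wise XOR
# of the corresponding data blocks on the other member drives.
# The stripe contents below are made up purely for illustration.

def parity_block(data_blocks):
    """XOR a list of equal-length byte blocks into one parity block."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

# Three data blocks from a hypothetical 4-drive RAID 5 stripe
stripe = [b"\x11" * 8, b"\x22" * 8, b"\x44" * 8]
p = parity_block(stripe)

# Losing any one block is recoverable: XOR of the survivors plus parity
# reproduces the missing block.
recovered = parity_block([stripe[0], stripe[2], p])
assert recovered == stripe[1]
print(p.hex(), recovered.hex())
```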
 
I don't think you can do that. When you create a disk array, that array is used either for one sole volume, or for two volumes if you matrix. The extra space won't show up in Windows Disk Management as unused space or anything.

But more to the point: By doing part RAID 0 and part JBOD, you're going to cause your seek heads to scan all over the place. Not knowing your intentions, it sounds like you want a fast partition that doesn't require reliability, and then you want two other partitions for data and O/S or something. Better to just split up your two disks so that O/S is on one, and data and page file are on the other.
 
Thanks,
I have a RAID 0 now with two 500 GB disks on an old motherboard; it works really fast for me, but I'm worried about the data I have on it. I'm planning to get a new PC, probably with an Intel motherboard.

What I want to do is create a RAID 0 for the OS and apps (I don't mind reinstalling if the RAID crashes) that doesn't occupy the whole pair of disks, and use the rest of the space to store files (media, music, work files, etc.). Then, if the RAID crashes or I want to change the OS, reinstall, or whatever, I can still access the files (even on a different computer - that's the point). That's why I want to leave the unused space as "normal" NTFS partitions.
Has anyone tried to do just one RAID like this?
If I do a JBOD (which I understand is not really a RAID) using the leftover space, could I unplug the hard drive and access that partition on another computer (losing, of course, the first RAID 0)?

Thanks!
 
From what I understand about Intel Matrix RAID, you can't have part of a disk in an array, and another part open to the O/S. Also, even if you matrixed part of both disks as RAID 1, you still wouldn't necessarily be able to open that on another computer, due to the need for the southbridge's controller, as I understand things. You would, however, be able to install an O/S on the current computer and get the data off the RAID 1 disks that way.

I'm curious to hear what solution you elect.
 
This is what I get when I used 4 Seagates in RAID 0, 64K stripe size, with command queuing turned off:

NVIDIA RAID on the nForce 590 SLI (AMD) chipset
Vista 64 (got the same results on Vista 32 and XP 32)

[Benchmark image: 4x80gbsegates72009raid0wu1.png]


The jumping up and down seems very strange, as I did not get this on the Maxtors as much.

I also have results for 2x Maxtors (need to find 2 more, as Maxtor is now rebranded as 7200.9 Seagates).

[Benchmark image: 2x80gbmaxtordimond10raimv7.png]


Seems to be a bug in NVIDIA's RAID drivers, as it also reports each drive as being in PIO mode, but it does not seem to actually be running in that mode, as the tests show.
Also, command queuing creates a lot of CPU load when it's turned on (which it is by default): it causes 15-30% kernel use on my setup when command queuing is on, and a mere 4-6% CPU when it's turned off.

M$ Vista: Device Manager > NVIDIA nForce Serial ATA Controller (on all of them) > right-click Properties > Port 0 and Port 1 > untick Command Queuing.

On XP it's listed under SCSI instead of Storage, and Primary/Secondary instead of Port 0 and 1. (Also, on XP, do not restart the PC until you've done all of them; it just wastes time rebooting when you can get it done in one go. Vista settings are applied straight away, which seems quite cool.)
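If you want to sanity-check the effect of toggling command queuing yourself, one rough approach is to time a large sequential read before and after the change. A minimal Python sketch, assuming a large existing file on the array (the path is a placeholder, and the OS cache can inflate the result, so use a file bigger than your RAM); this is no substitute for HD Tach, just a quick cross-check:

```python
# Rough sequential-read timer to compare throughput before/after toggling
# command queuing. The path and block size are placeholders -- point it at
# any large file on the RAID volume that won't fit in the OS cache.
import time

TEST_FILE = r"D:\bigfile.bin"   # hypothetical path on the RAID volume
BLOCK = 1024 * 1024             # read in 1 MB chunks

start = time.time()
total = 0
with open(TEST_FILE, "rb", buffering=0) as f:
    while True:
        chunk = f.read(BLOCK)
        if not chunk:
            break
        total += len(chunk)
elapsed = time.time() - start

print(f"Read {total / 2**20:.0f} MB in {elapsed:.1f} s "
      f"= {total / 2**20 / elapsed:.1f} MB/s")
```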
 
From Intel:

Intel® Matrix Storage Manager
Can I add an additional hard drive to a RAID array?

Adding an additional hard drive to a RAID array in order to increase the capacity is known as array expansion. One example of array expansion would be adding a fourth hard drive to a three-drive RAID 5 volume.

The Intel® Matrix Storage Manager does not support array expansion.


http://www.intel.com/support/chipsets/imsm/sb/CS-022321.htm

And you may wish to look at this URL also:

http://www.intel.com/support/chipsets/imsm/sb/CS-020785.htm


Tim

Ahhhh, I too have an Intel BOXDG965WHMKR motherboard (Intel G965 with the ICH8R), currently with 3 x 320 GB WD SATA II drives in a RAID 5 array. I've physically added a 4th 320 GB drive (and will later add a 5th/6th) to the system. Is there ANY way of increasing my array WITHOUT losing my data / rebuilding the whole array (3rd-party tools, hacks, etc.???)?

Thanks
Nathan

Not supporting array expansion seems like a major limitation to me!
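For anyone weighing that limitation, the capacity math behind expansion is straightforward: an n-drive RAID 5 gives (n - 1) drives' worth of usable space, so each added drive would contribute its full size. A quick sketch using the 320 GB drives mentioned above:

```python
# RAID 5 usable capacity: with n equal drives, one drive's worth of space
# goes to parity, leaving (n - 1) * drive_size usable.
def raid5_usable_gb(num_drives, drive_size_gb):
    return (num_drives - 1) * drive_size_gb

for n in (3, 4, 5, 6):
    print(f"{n} x 320 GB drives -> {raid5_usable_gb(n, 320)} GB usable")

# 3 x 320 GB -> 640 GB usable
# 4 x 320 GB -> 960 GB usable (what array expansion would have provided)
```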
 
I just found out that the 2TB limit has to do with LBA addressing, and it seems to be an “on silicon” (chip) issue that has been with us since the ICH7 family was introduced.
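If it really is a 32-bit LBA limit, the 2 TB figure falls straight out of the arithmetic (assuming traditional 512-byte sectors):

```python
# 32-bit LBA with 512-byte sectors caps addressable capacity at 2 TiB.
sectors = 2 ** 32          # maximum addressable sectors with 32-bit LBA
sector_size = 512          # bytes per sector (traditional)
limit_bytes = sectors * sector_size

print(limit_bytes)                     # 2199023255552
print(limit_bytes / 10**12, "TB")      # ~2.2 TB (decimal)
print(limit_bytes / 2**40, "TiB")      # 2.0 TiB (binary)
```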

Tim
Seems to me this issue has been resolved since you wrote this. One of my rigs currently uses 6 SATA II drives on the ICH8R controller: 2*400 GB + 4*500 GB. I've split the 2*400 GB RAID-0 array into two volumes (40 GB + 705 GB) and the 4*500 GB RAID-0 array into another two volumes (1200 GB + 663 GB). Running WinXP SP2 and the Intel Matrix Storage Console 6.2.1.1002.
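As an aside, those volume sizes line up once you account for drive makers quoting decimal gigabytes while the console reports binary ones; a rough check (exact figures vary slightly per drive model):

```python
# Drive vendors quote decimal GB (10^9 bytes); the OS / Matrix Storage
# Console reports binary "GB" (2^30 bytes). Rough conversion:
def decimal_gb_to_binary(gb):
    return gb * 10**9 / 2**30

print(decimal_gb_to_binary(2 * 400))   # ~745  -> roughly the 40 + 705 split
print(decimal_gb_to_binary(4 * 500))   # ~1863 -> roughly the 1200 + 663 split
```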


When I first built the 4*HD501LJ RAID-0 I went with different volume sizes (40 GB + 1823 GB). Anyway, since the topic discusses ICH8R performance, here are the benchmarks I got at that time:



(oh.. and yes, I do make backups of these RAID-0 arrays 😉)
 
So, we can RAID-0 two Raptor drives and not be able to access the full throughput via the nVidia 680i chipset.

May I say, "WTF?"

I understand that under most circumstances drive utilization will not hit these levels. However, I find it extremely disturbing that a chipset can’t keep up with the speed of hard drives, the historical bottleneck of all systems. HDs can now dish out some serious hurt (as per this article), and so when a bleeding-edge, US$300+ motherboard (680i) can’t provide the throughput, I have a serious problem.

I tend to do a lot of funkiness with my computer; I’m both a gamer and an M$ developer, so I equip for gaming and make liberal use of Virtual PC environments. I need this throughput, but at the same time I don’t want to sacrifice the ability to SLI nVidia cards, as I have the feeling that will become more and more prevalent in the future.

Frankly, I don’t understand how nVidia can justify charging what they do for their chipsets (I’ve been looking at the MSI P6N Diamond) and then totally flake on the biggest bottleneck of computers.

Moreover, I’m extremely irritated by the lack of coverage of this subject across the net. There’s been little to no coverage of this “bug”, whether it’s software on Tom’s side, or hardware, meaning nVidia has a serious problem with their MCP (storage is only going to get faster; it’s time to step up). But no one else has covered this. At all.

Therefore, I’d really like to see some concrete evidence as to whether this is true: run benchmarks using other software, contact nVidia, do *something*. But no one seems willing to cover this issue. This is something that will make or break my purchase of the 680i in my next computer, and I can vouch that I’m not the only one thinking the same after reading this and the previous article.

I’ve even posted this at the nVidia forums to try to get some sort of answer, to no avail.

I don't consider myself an Intel fan, as... their chipset selection pretty much sucks, particularly since I haven't seen one that explicitly supports SLI at this point. I will admit I have a bias against ATI, as I've had really bad luck with them in the past.

Thus, can anyone prove that this nVidia 680i MCP bug is real, fake, anything…?
 
So, we can RAID-0 two Raptor drives and not be able to access the full throughput via the nVidia 680i chipset. [...] Thus, can anyone prove that this nVidia 680i MCP bug is real, fake, anything…?

The nVidia MCP bug is real, and nVidia has shown no intention of fixing it yet. Intel makes pretty good chipsets, especially in the network and RAID controller fields, so how bad can they be? If you had bad luck with ATI in the past, you won't have better luck with nVidia now either. Now you can build another bias, against nVidia.
 
I found the article a little contradictory as well at times. Near the end they say the 680i loses in almost all cases, yet their charts show the 680i coming mostly in between the two other chipsets, not losing to both.

It's almost as if several people piecemealed the article together without discussing each other's results before jamming it all together. Shrug.

Are you kidding me? Which article is that? How is the 680i coming mostly in between the two other chipsets and not losing to both? It is the SLOWEST chipset compared to the ICH7/8. Its performance isn't even close to the ICH's, and IMO it performed ridiculously slowly.
 
I have 2 Seagate 7200.11s (500 GB) in nvidia RAID 0 (mobo: Asus P5N32-SLI), and yeah, I noticed shocking under-performance in HD Tach. I was hoping for something amazing, but this is not the case. I read a review of 2 Seagate 7200.11s in RAID 0 using an Intel chipset and was so impressed that I expected to get something similar or better from nvraid. I did recover some performance in HD Tach by disabling NCQ and cache reading, but I feel like I shouldn't really have to do this; I feel cheated in some respect.
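As a rough sanity check, ideal two-drive RAID 0 sequential throughput is roughly twice the single-drive sustained rate; the single-drive figure below is only a placeholder, not a measured 7200.11 number:

```python
# Rule-of-thumb check for RAID 0 sequential throughput.
# single_drive_mb_s is a hypothetical placeholder -- substitute the sustained
# rate you measure for one drive on its own.
single_drive_mb_s = 100      # placeholder, NOT a measured 7200.11 figure
drives = 2

ideal = drives * single_drive_mb_s
print(f"Ideal striped throughput: ~{ideal} MB/s; results far below "
      f"drive count times single-drive rate point at the controller.")
```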

It's been a few months now since anyone posted in this thread, and it seems like nvidia still hasn't addressed this issue; at least, to my knowledge they haven't.

Has anyone figured out how to fix this, like using an older version of nvraid.sys or something?

thanks and peace out ;-)