The Southbridge Battle: nForce 6 MCP vs. ICH7 vs. ICH8

One CRITICAL ITEM (defect?) was not mentioned in the published article regarding the RAID capabilities of the various ICH7/ICH8 RAID-capable chipsets, or of the Intel Matrix RAID technology: the 2TB "wall"! And RAID is supported on only four drives, not six!

With Seagate shortly releasing a 1TB drive, and six SATA300 connectors on the ICH8R (and other ICH8 RAID family members), one would think that a 6TB (5TB RAID5) volume was possible. Not so: the 2TB limit comes into play; not even six 500GB drives will work (four are OK, at 2TB total/1.5TB RAID5).
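For what it's worth, the 2TB figure lines up with 32-bit sector addressing (a 32-bit LBA count of 512-byte sectors is exactly 2 TiB); Intel's notes never state the cause, so treat that as an assumption. A minimal sketch of the arithmetic:

```python
# Where a 2TB "wall" plausibly comes from: a 32-bit sector count with
# 512-byte sectors (assumed cause; Intel's release notes don't say).
SECTOR_BYTES = 512
limit_bytes = 2**32 * SECTOR_BYTES            # largest 32-bit-addressable capacity
print(limit_bytes / 10**12, "TB")             # ~2.2 TB (decimal)

def raid5_capacity(n_drives, drive_bytes):
    """RAID5 usable space: one drive's worth of capacity goes to parity."""
    return (n_drives - 1) * drive_bytes

for n in (4, 6):
    cap = raid5_capacity(n, 500 * 10**9)      # 500GB drives, decimal GB
    verdict = "fits" if cap <= limit_bytes else "hits the wall"
    print(f"{n} x 500GB RAID5: {cap / 10**12} TB -> {verdict}")
```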

Most P965/G965 motherboards have only one PCIe x16 slot, usually occupied by the video card; thus even a PCIe x16 storage controller is not an option, unless you can find one that offers full performance in a PCIe x4 slot.

Intel has had this issue since day one of the ICH7R release; it has yet to be resolved, and on the ICH8R it is not even acknowledged as a "defect" to be "fixed" in a forthcoming hardware/software revision/release!

Please see the response below that I received from Intel:

This is in reference to case number 7176817 with Intel (R) Desktop Board DG965WH and Intel (R) Matrix Storage Manager.

Here is the response to your concerns about items that were not listed in the release notes for the Intel (R) Matrix Storage Manager.

As to Reference 2015920, "Volume size is limited to 2TB" in all OSs: the fact that this item appeared in the release notes for Intel (R) Matrix Storage Manager Production Version 6.1.0.1002 but was not mentioned in the later release notes for Production Version 6.2.0.2002 signifies that the issue was not resolved. This means the limitation is still there. At this time there is no expected time of resolution.

This limitation also pertains to the total RAID array size.

The Intel (R) Desktop Board DG965WH is limited to 4 hard drives. You may be able to add another controller card to support more than 4 hard drives; however, Intel cannot support this configuration.

Below are links to the last two release notes for the Intel (R) Matrix Storage Manager.

http://downloadmirror.intel.com/df-support/11309/ENG/release%20notes.html
http://downloadmirror.intel.com/df-support/12093/ENG/releasenotes.html



Tim
 
barturtle,
Aside from possibly having driver issues, Vista should not have an effect on the RAID you set up, and vice versa. Vista will see the RAID(s) as one logical disk, and you use it as normal. This is where the mobo-provided disk monitoring utilities come in handy: they will provide the health statistics of your physical disks, while Vista alone will not give you the full picture.
 
Are you seriously complaining that you can't have more than 2TB? What home user needs or even wants >2TB? What possible application could it be used for? The only thing I can think of is if you had your own particle accelerator in the basement... But then you'd probably have a server with a lot more than 2TB anyway... so what on earth is your problem with this?
 
Plankmeister - That's what people were saying ('round about) 10 years ago when 800MB disks were the norm. 2TB will be the norm before you know it. With the recent agreement for movie downloads, people will have their entire video libraries permanently stored on their PCs. Games are also getting bigger. The voices of experience will tell you that any size disk made will soon be filled.
 
Plankmeister,

Next month you will see retail 1TB drives from Seagate; several other manufacturers will also introduce drives larger than 500GB. All at the time of the MS Windows Vista launch . . .

You missed the second serious point: you have six drive connections (SATA300), but you are supposed to use only four of them for fixed disk drives. I gather Intel had optical drives in mind for the other two drive connectors.

But nowhere are these limitations clearly stated in any of the public sales documentation, or even on the retail box itself. Intel is pulling a "fast one" with its specifications.


Tim
 
Ok. So, let's say someone (legally) downloads 200 DVD9s... loads of films, TV show box-sets... etc etc... Plus they download... I dunno... let's say 5000 MP3s. That's about 2TB. There are VERY few people who have that many DVDs.

Seriously... how much do you need? I've got 160GB, and the stuff I download (legally, of course :) ) I back up onto DVD. After all, if I had 200 DVDs all sitting on my HDD and it died... well... that's an awful lot of time I'd need to waste downloading the same things again.

Where does it stop? Petabyte storage in your home PC? Exabyte?? Come on... 2TB is ridiculous for a home PC. If you really need that much storage, build a SAN. I work for one of the Big Five, and of all the clients we support here in Denmark, the largest SAN we provide and support is 2TB, and that's for a very large multinational corporation with about 25,000 employees worldwide.

Ok, maybe in 5-10 years this kind of storage space will be needed as new content licensing schemes allow the legal download of digital content, but right now... I really think all of the digital media that's available for purchase by download would only fill maybe 1TB.

(But I agree... Intel's spin on it is obviously an intentional non-disclosure)
 
Am I reading something wrong?

The RAID5 numbers are way better than the RAID 0+1 transfers on all the chips in the summaries.

[..]

RAID 0+1 should be == RAID 0 on reads, and half as fast on writes. No parity computation, so it should be faster than RAID5.

I was wondering about the same thing, especially with the ICH8R. The Intel ICH8R RAID10 write performance is a bit lower than I expected, but still seems OK-ish (averaging about double the performance of a single drive); read performance, however, is very mediocre at only about 78 MB/s, so I really wonder whether these results can be correct.
Also, it would be interesting to see how much CPU the Intel software RAID5 requires.
 
The ICH8R RAID5 is a "pig," as it is not fully buffered with its own cache the way most PCI-X/PCIe x4 storage controllers are. For some reason Adaptec stayed out of the SATA II RAID fray with a PCIe offering, leaving 3ware/AMCC and Promise.

I probably will use one, as the ICH8R is sounding more and more like a piece of crap.

I am a reseller, and in the US, two 750GB or 1TB drives in a basic Windows Vista Ultimate system (the specially configured, MPAA-et-al.-"approved" systems from Gateway, Dell, and HP, at first) will seem small in one year. As it stands now, the "special build" systems with the encryption hardware will only be offered as built and sealed systems by Gateway, Dell, and HP (at first); they will include CableCARD (CableLabs) capability.


Tim
 
RAID 0+1 should be == RAID 0 on reads, and half as fast on writes. No parity computation, so it should be faster than RAID5.
In a perfect world RAID 0+1 would be equal to RAID 0, and RAID 5 should lag it only on writes because of the parity penalty; but because RAID 5 data is striped across all four drives, while RAID 0+1 effectively stripes across two, RAID 5 performs better. It would have been interesting for them to include some of the recently reviewed RAID controllers for comparison, since I am thinking about building a RAID 5 array. How much of a performance increase would I see going from onboard RAID to an add-in card? Do onboard RAID controllers not make good arrays? Final question: if I were to use an add-in card, would a PCI version be bandwidth-limited, and what is a reliable, relatively inexpensive RAID card?
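To make the drive-count argument concrete, here is a minimal sketch of idealized throughput multipliers. The 60 MB/s per-drive rate is hypothetical, and real cacheless southbridge RAID falls well short of these ceilings:

```python
# Idealized, best-case throughput multipliers relative to one drive's
# streaming rate. Toy model only; it ignores controller, bus, and cache
# effects, and RAID5 small random writes are far worse (read-modify-write).
def raid_multipliers(n):
    return {
        "RAID0":   {"read": n,     "write": n},
        "RAID0+1": {"read": n,     "write": n / 2},   # every block is written twice
        "RAID5":   {"read": n - 1, "write": n - 1},   # full-stripe sequential writes
    }

PER_DRIVE_MBPS = 60.0   # hypothetical single-drive streaming rate
for level, m in raid_multipliers(4).items():
    print(f"{level}: read ~{m['read'] * PER_DRIVE_MBPS:.0f} MB/s, "
          f"write ~{m['write'] * PER_DRIVE_MBPS:.0f} MB/s")
```

One caveat on the RAID 0+1 read line: many simple controllers read from only one mirror half, which would halve that ceiling and could explain the mediocre RAID10 read numbers in the charts.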

The ICH8R RAID5 is a "pig," as it is not fully buffered with its own cache the way most PCI-X/PCIe x4 storage controllers are.
I realize that add-in RAID cards have an advantage because of no CPU overhead, but what is the advantage of being fully buffered with its own cache?

Plankmeister, you have to realize that it is completely legal to record television on your own with a TV tuner. A 40-minute TV episode, with commercials cut and after normal compression, will take up about 350 MB. In raw form, which is how average users will keep it, it takes up a couple of gigabytes.

On another note, when is CableCARD supposed to come out, and are we going to be able to build our own PCs with CableCARD?
 
AMD today unveiled the ATI TV Wonder Digital Cable Tuner - the industry's first and only device that enables users to watch and record premium HD digital cable content, such as HD ESPN and HD HBO, on their PCs. The ATI TV Wonder Digital Cable Tuner turns a PC into a Personal Video Recorder (PVR) with easy to use Microsoft Windows Vista Media Center menus and interfaces. It is scheduled to be available starting January 30, 2007 inside desktop and notebook PCs from the industry's top PC manufacturers.
 
That will definitely eat up some of that disk space. I'm sure it comes with some software that compresses it pretty tight but it will still use more than current formats.

Is it just me, or does it seem we are getting a tad off topic in this thread? It is all good info though. I'm sure there will be many announcements this week with CES ongoing. Thanks to TheGreatOne.
 
Back to topic please :)

My question as to why the RAID10 read speed is so low still remains unanswered... (*hint* *hint* 😉)
 
1TB drives are "on topic," and there is no consumer motherboard with an onboard controller that seems able to support more than two of them. Nor, at present, does there seem to be any consumer motherboard with an onboard controller that supports more than 2TB. Moreover, it seems that in Intel's eyes four drives are the supported limit, even though six might work, or might not, on a case-by-case basis.

The current Intel ICH8R/ICH7R seems to have the above-stated limitations, not only on Intel-branded motherboards but on other brands that use the ICH8R/ICH7R on their boards. Nor are any of these manufacturers clearly disclosing these limitations.

Most of what I have been stating has been "on topic," and my recent postings have been in rebuttal to the original posting by Plankmeister. At first the marketing for Windows Vista in the US will take a completely different approach and pricing strategy, with some offerings coinciding with the consumer release of Windows Vista and others available within a few months. Almost all of these offerings will consume generous amounts of space if the system is not well managed (and that is most consumer systems).

An area of concern with the arrival of fixed disk drives of 500GB and larger is data protection. RAID1 mirroring is an option but is expensive; RAID5 is pretty much the industry standard, requiring a minimum of three equal drives while offering a good level of protection. RAID6 is very good for data protection but is a fairly new standard and gets a tad expensive; it also requires a minimum of four drives to implement.

RAID10 and other "nested" RAID levels are usually not for consumer use; RAID10 in particular carries high overhead and very limited scalability at a high inherent cost (half the raw capacity goes to mirroring). RAID10 is usually recommended for database servers requiring both high performance and fault tolerance.
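A quick way to see the cost trade-off among these levels, using the minimum drive counts just mentioned (illustrative arithmetic only):

```python
# Usable-capacity fraction at each level's minimum drive count:
# RAID1 mirrors everything, RAID5 spends one drive on parity,
# RAID6 spends two, RAID10 mirrors a stripe set.
def usable_fraction(level, n):
    return {
        "RAID1":  1 / n,
        "RAID5":  (n - 1) / n,
        "RAID6":  (n - 2) / n,
        "RAID10": 1 / 2,
    }[level]

for level, n in [("RAID1", 2), ("RAID5", 3), ("RAID6", 4), ("RAID10", 4)]:
    print(f"{level} with {n} drives: {usable_fraction(level, n):.0%} usable")
# RAID5's 67% is why it is the cost/protection "industry standard."
```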

The RAID cache acts as a buffer, retaining records in cache for as long as possible until they are displaced by newer transactions. The cache keeps as many records as physically possible, evicting the oldest first; but each time a record is requested it is placed at the top of the queue, so it is retained longer. In effect this is a least-recently-used policy rather than strict first-in, first-out, and a frequently accessed record may never leave the cache. The amount of cache required is governed by the intensity and frequency of the read/write activity from the host application: the greater the access intensity, the more the environment benefits from a larger cache.
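A minimal sketch of that eviction behavior (a hypothetical block cache, not any vendor's actual firmware):

```python
# Entries are evicted oldest-first, but any access re-promotes an entry,
# so hot records can stay resident indefinitely: least-recently-used (LRU).
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # insertion order = eviction order

    def access(self, block, read_from_disk):
        if block in self.entries:
            self.entries.move_to_end(block)   # re-promote: "top of the queue"
            return self.entries[block]
        data = read_from_disk(block)
        self.entries[block] = data
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used
        return data

# A frequently requested block keeps getting re-promoted and never ages out.
cache = ReadCache(capacity=2)
fetch = lambda b: f"<data for block {b}>"
for b in [1, 2, 1, 3, 1, 4]:
    cache.access(b, fetch)
print(list(cache.entries))   # block 1 survives; stale blocks were evicted
```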


Tim
 
That will definitely eat up some of that disk space. I'm sure it comes with some software that compresses it pretty tight but it will still use more than current formats.
Windows Media Center currently doesn't compress the video at all, to my knowledge, but there are programs out there that will automatically cut the commercials and compress video from Media Center.

OK, back to the topic...
So what I've gathered is that more than four drives on an onboard drive controller doesn't work, and arrays won't go above 2TB. I know you said the 2TB wall was just Intel chipsets, but I believe you also said no onboard controller can do more than four drives. The NVIDIA 5-series chipsets have six onboard SATA connectors, not including those run by other drive controllers. You guys also make it seem like RAID with onboard controllers is out of the question. What are your exact reasons for this belief?
 
NVIDIA may have an edge over Intel's ICH7R/8R, along with allowing six fixed disk drives and an array greater than 2TB, but performance may be another question.

http://www.nvidia.com/page/nforce_600i_tech_specs.html

nForce 680i SLI

Combine up to six SATA drives into one volume for bigger, faster RAID using nForce 680i SLI MCPs.

Supports RAID 0, 1, 0+1 or 5

The ASUS P5N32-E SLI motherboard is one of the few nForce 680i SLI motherboards out there, and it is not cheap.


Tim
 
I had a RAID 0+1 and a RAID 5 at one point. The RAID 5 is fast if the write-back cache is enabled, but beware any system instability. If you hang, crash, or for any other reason have to shut down or reboot your PC using anything other than the standard Windows shutdown methods, the RAID 5 array will rebuild. For my 4 x 400 GB drives running on ICH7R (Intel D955XBK), a RAID 5 rebuild takes something more than 8 hours, and both reads and writes in the meantime are very slow on those drives. I eventually got tired of waiting for a rebuild and turned off the WBC. So writes are now pretty slow, but reads are plenty fast. Since I use the array mostly for write-once, read-many types of operations (CD library, video library, picture library), this works fine for me.
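For scale, a rough sanity check on that rebuild time; the effective rebuild rate below is an assumption chosen to match the reported experience, since the real rate depends on the controller and concurrent load:

```python
# A rebuild must regenerate one member drive's full contents while the
# array keeps serving I/O.
DRIVE_GB = 400
REBUILD_MBPS = 13            # assumed effective rate under load
hours = DRIVE_GB * 1000 / REBUILD_MBPS / 3600
print(f"~{hours:.1f} hours") # ~8.5 hours, in line with "more than 8 hours"
```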
 
Hi

I have the same 120MB/sec RAID 0 problem.

I tried three and four 80GB WD80JD drives on an Intel BadAxe (ICH7R) and an ASUS P5B Deluxe (ICH8).

In both cases I couldn't get above ~120MB/sec; a friend of mine on a P5B Deluxe couldn't either.

Since then, I've used 2x80GB WD80JD drives and get ~100MB/sec.

BTW, on my old NVIDIA nForce4 chipset I got three WD80JDs @ 150MB/sec!

Any idea?
 
INTEL!
INTEL!
INTEL!

nVidia seems to be particularly adept at making bad southbridges. Like they really want to specialize in doing it poorly. Like a misbehaving youth just trying to do wrong and get into detention, they revel in their bad engineering.

Their southbridges perform so poorly that one questions their very existence. Oh, sure, nVidia is good at making something that gets good benchmarks, so that it seems good on paper, but just wait until you use it. Ugh. It crawls like a worm. So in theory they're good. In practice they have negative worth because they suck the very life out of you when you use them.

Well, being a rebel just means immaturity, stupidity, psychosis, neurosis, selfishness, or other anti-social (i.e. against you and me) behavior.

Their self-indulgent rebelliousness shouldn't be condoned or celebrated; it shouldn't even be tolerated.

nVidia's southbridges are being squashed like a bug under Intel's shoe heel.

I've used a lot of different motherboards with a lot of different chipsets over the years, and that's my opinion.

The graphics cards are good though.
 
First of all, hi to all; I am new to the TH forums. Up to this very moment it has been a very good read, and I have learned some good things. Anyway, I am looking to build a new computer of my own, as my old one is 6, SIX! years old, and even six years ago its technology was not bleeding edge for the time, so really my computer is more than 6 years old in technology.

So I was guided by some friends, and by other means, toward Core 2 Duo technology, the NVIDIA nForce 6 (680i) chipset, and some other things. I read the first review of the three 680i motherboards and really fell in love with the ASUS Striker Extreme. But now, seeing this kick in the ass that the Intel ICH8 gives the NVIDIA 680i, I am beginning to ask myself some questions.

I want to play all the good games I could not play in the last 6-7 years at the best quality possible, but I can hardly imagine really having two 8800 video cards, so I am not sure I will ever take advantage of the SLI feature; my heart certainly wants it, but my mind tells me it is too much money to spend just for playing.

On the other hand, I am a programmer, and I like to program, mostly game editors or game engines (I don't have much time, but I like it). So compile times are very important to me, especially if I am tackling large projects with many files and classes, and perhaps heavy use of templates and big static libraries.

Plus I want to re-enter the world of Linux, without losing sight of Windows, because it has the most applications. So I have many questions I would like answered to get an idea of what I should finally head toward.

Thinking of gaming and programming, which chipset would be most ideal: NVIDIA nForce 6 or Intel ICH8?

From the point of view of multiple OSes, what RAID configuration would be best, taking into account that I want good OS startup and swap performance, and that all data I could lose without worry (MP3s, downloaded movies, or unimportant/already-backed-up data) would live on an independent hard disk?
Should I have a RAID0 for each OS? Can NVIDIA nForce imitate Intel Matrix RAID and make a RAID1 part and a RAID0 part using only two hard disks?
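For reference, this is what Intel Matrix RAID does with two disks (whether nForce can mirror this is exactly the question); a minimal capacity sketch with hypothetical sizes:

```python
# Intel Matrix RAID carves two volumes from the same pair of disks:
# a striped (RAID0) slice for speed and a mirrored (RAID1) slice for safety.
DISK_GB = 320          # hypothetical disk size
RAID0_SLICE_GB = 100   # portion of EACH disk given to the striped volume

raid0_volume = 2 * RAID0_SLICE_GB          # stripes add up across both disks
raid1_volume = DISK_GB - RAID0_SLICE_GB    # mirror holds one copy's worth
print(raid0_volume, "GB fast scratch /", raid1_volume, "GB protected data")
# -> 200 GB fast scratch / 220 GB protected data
```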

Returning to the topic: what about what the French user said about disabling the cache to remove the "wall" of NVIDIA RAID? Has someone tested it? Could we please ask THG to redo the test with these settings, or the latest firmware, to see if the "wall" really exists?

In another post someone said that NVIDIA does not have the 2TB limit, which makes me lean more toward NVIDIA than Intel, but the spec link he gave didn't show that. Also, being able to use all six SATA ports instead of four is a point for NVIDIA, but I still have a cloud in my mind.

Thanks in advance to all.
 
If you could, in the future, limit the number of questions in a single post to an amount that can be addressed more clearly... that would be greeeaaaaatt. Uhmkay?

Personally I don't see much of a problem with 115MB/sec transfer rates... I would love more, but the 680i just has the features I want. It is not the only option, and among the choices you will do well either way you choose... 680i or 975/965.

6-7 year old games will look good on a GeForce Ti 4200 series card, so I wouldn't worry about that.

You do not need to go SLI... the 8800GTX or GTS is absolutely fantastic, and will handle anything you throw at it.

Can't help ya with the programming related questions.

As far as the Frenchie's statement: my IT guy at work does not see disabling the cache feature allowing much more than a 5% improvement (if that) on one of our workstations... so I would say forget it.
 
First of all, sorry for the multi-question post.

About gaming: when I said I would like to play games I could not play six years ago, I was not saying I don't want to play current games. If that were so, I would end up now as I was six years ago, with technology that could only play games of the past. That is not my idea, so of course I would need a good current GFX card.

I also think that for programming purposes (compile times) the amount of hard disk traffic won't be so high. What matters more here, I think, is processor speed and memory.
 
How accurate is HDTach?

I'm involved in a discussion about the southbridge RAID performance comparisons in the eVGA Motherboard forum. I mentioned that the HDTach read test showed a straight line at about 300 MB/s with no change in speed across an array of six Hitachi Ultrastar 15k 34-gig SAS hard drives in RAID 0 on an Adaptec 4805SAS PCIe 8x RAID controller. I'd assumed that the straight line was because of the memory on the controller smoothing out throughput, but someone else suggested that it was more likely to indicate a bottleneck somewhere limiting the maximum throughput.

After trying various benchmarks and getting average read speeds ranging from 220 MB/s for HDTune to 550 MB/s for SiSoft Sandra XI, I contacted Adaptec tech support. I was told that Adaptec tests using IOMeter and given instructions for running it with the operating system on a different hard drive and no partitions on the RAID 0 array.

When I did this, reads started out at 535 MB/s and dropped to 475 MB/s (these results were obtained using IOMeter version 2006.07.27, Copyright 1996-1999 by Intel Corporation; Intel does not endorse any IOMeter results). I believe that the lower value represents the average speed and the actual lowest speed is lower than this, but I could be mistaken.

I reran HDTach against the array immediately after running IOMeter and got the same straight line at about 300 MB/s across the graph, from one end of the array to the other. That means there is a difference of about 45 percent between the two benchmarks under identical conditions.

I noticed that the Tom's Hardware article reported I/O operations per second for the three southbridge chips using IOMeter, but did not report megabytes per second from those tests. I'd be interested in seeing what those figures looked like and how they compared to the HDTach results, especially since the 680i HDTach benchmark shows a similar line across the graph indicating the same speed from the beginning to the end of the array, while the ICH7 and ICH8 show the expected drop.
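The missing conversion is straightforward, for anyone who wants to compare; the IOPS and transfer-size figures below are hypothetical, not Tom's numbers:

```python
# IOMeter reports I/Os per second; MB/s follows from the transfer size
# configured in the test profile.
def iops_to_mbps(iops: float, transfer_bytes: int) -> float:
    return iops * transfer_bytes / 1e6

print(iops_to_mbps(4800, 64 * 1024))   # 4800 IOPS at 64 KB ~= 314.6 MB/s
```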

Mike

Lian LI V2000 case
PC Power & Cooling Turbo-Cool 1KW-SR
eVGA nForce 680i SLI, FSB 300 MHz
Intel Core 2 Quad QX6700 at 3 GHz
Zalman CNP9700LED
Thermaltake Extreme Spirit II on SPP
Thermalright HR-05 SLI on MCP
4 x Patriot 512 meg DDR2-8000 at 800 MHz, 3-3-3-8-1T
Adaptec RAID 4805SAS controller
RAID 0/6 x Hitachi Ultrastar 15k 34-gig hard drives
Seagate 500-gig eSATA hard drive
BFG Geforce 7900 GT
Dell 2001FP monitor
Soundblaster X-fi Xtreme Music
Plextor PX-716AS DVD-RW
 
Wow. What is the nVidia RAID bottleneck? Judging by their performance below the 120MB/s wall, if they could fix whatever is creating that wall they might outperform the Intel RAID setups.

The wall may be in HD Tach rather than the nVidia hardware or software. I ran into a similar bottleneck using HD Tach at about 300 MB/s with a RAID 0 array of six Hitachi Ultrastar 15k SAS drives on an Adaptec 4805SAS PCIe 8x controller.

When I contacted Adaptec, I was advised to use IOMeter, testing the array with no partitions and the operating system on another drive. I did this, and got more plausible results, but that still didn't explain the difference between results in HD Tach, IOMeter, and various other disk benchmarks.

However, while I was going back over the instructions from Adaptec, I noticed the following:

For # of outstanding I/Os, use a value above 1. We recommend 16 or 32. Generally, better performance is realized by increasing this value. Many benchmark utilities limit this value to 10 or less, which is one of the reasons we recommend using IOMeter.

I'd been testing using 32 outstanding I/Os. Increasing the number to 64 did not produce any significant difference, so my assumption is that at 32 or some number below it, the number of outstanding I/Os stops having an effect on the results.

The performance stayed the same until the number of outstanding I/Os was below six, the number of drives in the array. At five, performance declined a little; at one, the test showed the same relatively low rate at all points on the drive. I haven't tested the remaining numbers between one and five yet.

I still have no idea why the nVidia southbridge would suffer more from a lower number of outstanding I/Os than the ICH7 and ICH8 do. It may be that HD Tach adjusts the number of outstanding I/Os for some controllers but not others. I'd be very interested in seeing how the southbridge chips compare in IOMeter tests using several different values for the number of outstanding I/Os.
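A toy model of the effect being described, under the assumption that each outstanding I/O keeps roughly one spindle busy (per-drive rate hypothetical):

```python
# Why queue depth matters on a striped array: with fewer outstanding
# requests than member drives, some spindles sit idle. Illustrative only;
# real controllers also prefetch and coalesce requests.
def striped_throughput(n_drives, queue_depth, per_drive_mbps):
    busy = min(n_drives, queue_depth)   # ~1 drive kept busy per outstanding I/O
    return busy * per_drive_mbps

for qd in (1, 2, 5, 6, 32):
    print(f"QD={qd:>2}: ~{striped_throughput(6, qd, 90):.0f} MB/s")
# Flat above QD=6, dropping below it, worst at QD=1 -- the pattern observed above.
```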

Mike
 
UPDATE:

Just found out that for the Intel BOXDG965WHMKR boxed retail consumer product, BIOS Update 1612 [MQ96510J.86A] [December 27, 2006] is documented in MQ_1612_ReleaseNotes.pdf as clearly stating, "Reverted back to RAID option ROM v6.1.0.1002". Intel RAID for SATA v6.2.0.2002 had been with us since BIOS Update 1545 [MQ96510J.86A] [November 02, 2006]. The question is: what was the reason and/or defect (again, none was stated or listed in the documentation for the update) behind this reversal to RAID option ROM v6.1.0.1002, which was first released in BIOS Update 0784 [MQ96510J.86A] [July 07, 2006]?

This reversal would affect all Intel ICH8R/ICH8DH/ICH8DO-based motherboards.

I wonder what implications the reversal to RAID option ROM v6.1.0.1002 has for using the Intel BOXDG965WHMKR boxed retail consumer product, and other Intel ICH8R/ICH8DH/ICH8DO-based motherboards, with the Microsoft Windows Vista OS or a 64-bit OS.

Therefore, it seems that nVidia is not alone with RAID firmware and software issues . . .


Tim
 
