RAID and GIGABYTE GA-EP45-UD3L

dnsdaemon

Distinguished
Jan 23, 2010
Does the GIGABYTE GA-EP45-UD3L have RAID? If so, how does one install XP (or Ubuntu) without a floppy drive and without "slipstreaming" the OS?

dnsdaemon

Distinguished
Jan 23, 2010
Now that is what confuses me: “RAID”, as offered by Intel, is software based. The various ICHx southbridges (where x is a number, as in ICH7, ICH9, ICH10; I have never heard of an ICH8) offer more SATA ports as x increases. That is the reason why we need a “driver” to even boot off of the SATA drives, at least up through Windows XP (I have no Vista/7 experience). Windows offers its own RAID experience using Disk Management: you can Ctrl-click on drives to select them into the pool you want to RAID and, depending upon the number of disks chosen, a right-click on any one of the highlighted drives gives you various options. Intel’s Matrix manager is mostly a SATA interface driver plus application software, part of which can also be accessed during POST (via Ctrl+I).
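
(To anchor what I mean by “software based”: here is a minimal sketch, in Python, of the striping arithmetic such a driver has to do on the host CPU. The chunk size and names are illustrative assumptions on my part, not Intel’s actual code.)

[code]
# Illustrative RAID0 striping arithmetic - the kind of work a software
# ("Fake") RAID driver performs on the CPU for every I/O. The 64 KiB
# chunk size and all names here are assumptions for the example only.

CHUNK_SECTORS = 128          # 64 KiB chunks at 512-byte sectors (assumed)

def map_raid0(lba: int, n_disks: int) -> tuple[int, int]:
    """Map a logical array sector to (member disk index, sector on that disk)."""
    chunk, offset = divmod(lba, CHUNK_SECTORS)
    disk = chunk % n_disks                      # chunks rotate round-robin
    sector = (chunk // n_disks) * CHUNK_SECTORS + offset
    return disk, sector

# Example: logical sector 300 on a 2-disk stripe -> disk 0, sector 172.
print(map_raid0(300, 2))
[/code]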

This is not the case for a pure, say, Silicon Image or Promise RAID controller, where the OS can be installed without going through the “F6” and floppy drive/CD-ROM routine. So, to paraphrase and re-position my question a bit: are the 8 or so SATA headers on the GIGABYTE GA-EP45-UD3L hard-wired by the BIOS as IDE, and is the Matrix manager incapable of loading because of this? Does that mean that, since Windows has default IDE drivers, it does not need an “F6” or equivalent exercise while installing onto one of the disks attached to a SATA header?

Finally, how does Linux/Solaris see the RAID? Do they have the drivers built into the OS loader?

I shall be obliged if someone enlightens me. I am told intelligent and knowledgeable folks live here. I thank you, and am sorry to impose my (probably inane) lack of understanding of such things on you.

bilbat

Splendid
GB Intel boards that have more than four ports on an ICH7, or more than six on an ICH9 or 10, use a jMicron controller (and, for the new 'A' suffix setups, a Marvell SATA3 controller) to 'get' the extra ports. As they each support only two drives, if they are 'RAIDable', you can only get either RAID0 or RAID1 on the pairs... If you have the latest service pack, even XP will 'see' any of these drives when they are configured as 'just' SATAs - you only need 'F6' drivers for RAID or AHCI configurations... In Seven, even the AHCIs are 'native' (at least the Intels - I'm not so sure about the jMicrons [GSATAs]) - you only need drivers if RAIDing.
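
To make the 'pairs only' limit concrete, a quick sketch of the capacity/redundancy arithmetic (the helper is mine, purely illustrative) - RAID5/6 need at least three/four members, which is why a two-port controller can't offer them:

[code]
# Illustrative arithmetic for two-drive arrays (helper name is my own).
# RAID0 stripes for capacity; RAID1 mirrors for redundancy.

def usable_gb(level: str, drive_gb: int, n_drives: int = 2) -> int:
    if level == "RAID0":
        return n_drives * drive_gb   # full capacity, no drive may fail
    if level == "RAID1":
        return drive_gb              # half capacity, survives one failure
    raise ValueError("a 2-port controller pairs drives: RAID0 or RAID1 only")

print(usable_gb("RAID0", 500))       # 1000 GB usable, zero fault tolerance
print(usable_gb("RAID1", 500))       # 500 GB usable, one-drive fault tolerance
[/code]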

All this stuff, being, well, [strike]cheap[/strike] less expensive, runs what the linux community has so aptly named 'FakeRAID': what's happening is that the CPU has to do all the work of both running the SATA protocol stack and, in the case of RAID5 or 6, the parity calculations - that's why the ICHs are notorious for being really slooowww doing RAID5. Cheap (say, less than $400?) RAID cards do the same thing, and that's why drives on them can generally be 'discovered' by windoze without a driver, unless they are RAIDed - then they, too, require the 'F6' driver procedure. On pricier RAID cards, a dedicated processor (an SiS or Intel IOP341) runs the protocol stack and the parity calcs, which 'frees' the main CPU from the burden. When you start getting way up there in price, you might even get an Intel IOP342, with dual cores - one running the SAS stack, and the other dedicated to the parity calcs - and the throughput goes through the roof!
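
To see where those CPU cycles go, here's a minimal sketch of the RAID5 parity math (illustrative Python; the function is mine, not any driver's actual code) - every full-stripe write means XORing the data blocks to regenerate parity, and on FakeRAID that XOR runs on the host CPU:

[code]
# Minimal sketch of RAID5 parity (illustrative only). Parity is the
# byte-wise XOR of the data blocks in a stripe; it is recomputed on
# writes and used again to rebuild a lost block after a disk failure.

def xor_blocks(blocks: list[bytes]) -> bytes:
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

data = [b"\x0f" * 4, b"\xf0" * 4, b"\xaa" * 4]   # 3 data blocks in a stripe
parity = xor_blocks(data)

# Lose any one block, and XORing the survivors with parity restores it.
assert xor_blocks([data[1], data[2], parity]) == data[0]
[/code]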

I have never run across a 'realRAID' card that doesn't require drivers to run RAID, but my experience there is limited, as I pretty much 'stick to' Areca cards - I've found they seem to have a wider range of drivers, and better support, than the others in that market...

With the ICHxRs, there is a 'chunk' of their own BIOS added to the system BIOS, which is only activated when you have enabled RAID for the ICH in the main BIOS, and the southbridge detects at least two drives (I believe); this RAID BIOS is where you 'build' the RAIDs - while with the run-time 'manager' stuff (at least the latest 'leaked' RST managers) it appears you can alter the 'build' characteristics, I'd never try it! The manager's main function is to 'watch' the RAIDs, detect any inconsistencies or problems, and repair them (if possible) 'on-the-fly'...
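
As a rough picture of what that 'watching' amounts to, here's a toy sketch of a RAID1 scrub pass (purely illustrative - the real RST manager's logic isn't public, and these names are mine):

[code]
# Toy RAID1 scrub: compare the two mirror halves block by block and
# re-copy from the side presumed good on a mismatch - conceptually what
# an 'on-the-fly' repair does. Illustrative only, not Intel's logic.

def scrub_mirror(disk_a: bytearray, disk_b: bytearray, block: int = 512) -> int:
    repaired = 0
    for off in range(0, len(disk_a), block):
        if disk_a[off:off + block] != disk_b[off:off + block]:
            # Real firmware picks the authoritative copy via checksums or
            # the freshest metadata; here we simply trust disk A (assumed).
            disk_b[off:off + block] = disk_a[off:off + block]
            repaired += 1
    return repaired
[/code]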

As for linux, FakeRAID has supposedly been supported by the kernel for quite some time (at least, the Debian kernel, which is the only one I have experience with), but results vary - until the recent 9.10 release of Ubuntu, I'd had serious problems getting linux to 'live' on my RAIDs - it seems to want to corrupt the drive descriptors on the first RAID, and once corrupted they become unrepairable (at least, through any means I've been able to find...), and there seem to be problems with it 'following instructions' when told where to place the GRUB loader on a FakeRAIDed drive (because of my multi-boot situation, I cannot have it default to the MBR!). Oddly enough, I've never seen a problem with the subsequent RAIDed volumes (I have two RAID0 pairs of VelociRaptors, followed by a RAID1 pair of RE3s on an ICH9R) - it always seems to 'pick on' the first one...
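
For what it's worth, the way dmraid/mdadm 'see' these arrays at all is by probing for vendor metadata that the ICH's option ROM writes near the end of each member disk. A hedged sketch - the offset and signature follow my reading of mdadm's IMSM (Intel Matrix) support, so treat both as assumptions and keep it strictly read-only:

[code]
# Hedged sketch: detect an Intel Matrix (IMSM) FakeRAID member by its
# metadata anchor near the end of the disk. The signature string and the
# 'next-to-last sector' offset are assumptions from my reading of mdadm;
# read-only probe, needs root on a real block device.

IMSM_SIGNATURE = b"Intel Raid ISM Cfg Sig. "     # assumed anchor signature

def is_imsm_member(device: str, sector: int = 512) -> bool:
    with open(device, "rb") as disk:
        disk.seek(0, 2)                          # seek to end to get size
        size = disk.tell()
        disk.seek(size - 2 * sector)             # anchor sector (assumed)
        return disk.read(len(IMSM_SIGNATURE)) == IMSM_SIGNATURE

# e.g. is_imsm_member("/dev/sda")
[/code]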