Unable to access RAID controller BIOS

Commonbear

Nov 20, 2015

Hello all,

I am in the process of building a home VM lab and have run across an issue. While the system is booting, if I try to access the RAID controller's BIOS I just get a blank (black) screen and the whole system hangs. The motherboard and RAID controller I am using are:

SUPERMICRO MBD-X9SCM-F-O LGA 1155 Intel C204 Micro ATX Intel Xeon E3 Server Motherboard
IBM LSI ServeRAID M1015 SAS SATA PCIe RAID Controller 46M0861 SAS9220-8i

I did a bit of Googling about this (not the best at that, I will admit) and tried changing the "Boot option filter" in the motherboard BIOS to "Legacy only", but that did not help.

Might anyone have a suggestion of something else I could try? I want the LSI card to put my two 3TB WD Red drives into a RAID 1 configuration, to be used as the datastore for the ESXi setup I am building.
 
The motherboard is new; the RAID controller is a "sold as-is, pulled after successful testing" unit from a reasonably reputable eBay seller.

I have attempted a BIOS reset on the motherboard, and this did not help. I'm not sure how (or even if) this can be done with the RAID controller's BIOS.
 
Thank you for the firmware link, drtweak. I pulled the card from the box and put it into a different system (running Windows 7) in order to run the update, but the application informed me that it wasn't compatible with my system. I'm hanging onto a copy of it in case it is needed later though.

A very good question, popatim, and something I hadn't contemplated before. Going into the motherboard BIOS and then IDE/SATA Configuration, I see where I can change the SATA mode selection as well as Hot Plug and Staggered Spin-up per drive, but I am not finding a master "disable on-board SATA/drive controller" option. Hmm...

Update: Since I had already pulled the card and plugged it into a different machine in an attempt to update the firmware, I went ahead and pulled the hard drives as well, plugged it all into the second machine, and tried to access the RAID WebBIOS. Success. I went ahead and configured a RAID 1 setup, making sure that all cables were labeled so everything is exactly the same when I transplant it back into the original server.

My one concern: I'm setting up two Western Digital 3TB Red drives in RAID 1. The RAID controller claimed that they were 2TB drives, and thus I have a single virtual 2TB drive for ESXi. Am I missing a step somewhere?
 
You may want to look up the maximum drive size on the controller. I'm more of a Dell PERC guy, and the Dell PERC 5/6 cards can only do 2TB max, whereas the Dell SAS HBA 5/6 can do up to 4TB confirmed, maybe even more. So I would look it up, as I don't know much about the IBM cards and don't have time to check it out right now.
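For reference, the classic 2TB ceiling on older controllers comes from 32-bit LBA addressing with 512-byte sectors. Here's a quick back-of-the-envelope check of where that number comes from (general arithmetic only, not something I've confirmed about the M1015 specifically):

```python
# Where the classic "2TB" controller limit comes from:
# a 32-bit logical block address can index at most 2**32 sectors,
# and legacy drives report 512-byte sectors.
SECTOR_SIZE = 512          # bytes per sector (legacy 512-byte sectoring)
MAX_LBA_SECTORS = 2 ** 32  # largest sector count addressable with 32 bits

max_bytes = SECTOR_SIZE * MAX_LBA_SECTORS
print(f"{max_bytes} bytes = {max_bytes / 1024**4:.2f} TiB")
# -> 2199023255552 bytes = 2.00 TiB, which is why a 3TB drive can show
#    up as roughly 2TB on anything stuck with 32-bit addressing.
```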
 
Yeah, it should. It is a 6Gbps card, and I've never heard of a 6Gbps card having a 2TB max (at least not yet); most of the ones I know of that have a 2TB limit are still 3Gbps cards.

Also, since it said that it is an unsupported OS, maybe that's because it is Windows 7 and not Windows Server? If you have a spare drive somewhere, just download a trial ISO of Server 2012 and try that. If you don't have a spare hard drive but do have, say, a USB drive that is at least 32GB, you can use WinToUSB to install a portable (Windows To Go) version of Server and boot from the USB drive. If it is a flash drive it will boot slower than a normal hard drive, even over USB 3.0, because flash drives are poor at the random reads and writes an OS does a lot of. But you could try that and then flash the card?
 
Latest update: I performed a flash update on the M1015 card, and when attempting to access the WebBIOS (configuration utility) on boot I now get an error message stating that the adapter failed to load. When I plugged the card into the alternative system (the one that was able to access it earlier), I get the same error there. At this point the card isn't working, and I've been banging my head against this for over two weeks.

Looking at alternatives: is there a software application that could be run within a VM to perform live, or nightly scheduled, drive mirroring? My thought is to set up the two drives as plain JBOD, let ESXi use one as a datastore, and have software mirror it to the second drive for data redundancy.
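Something like the sketch below is what I have in mind for the nightly option. It's a rough illustration only; the /mnt/data and /mnt/mirror paths are placeholders for wherever the two virtual disks would end up mounted inside a Linux guest, and a real setup might just as easily use the guest OS's own software mirroring instead.

```python
# Minimal sketch of a nightly one-way mirror job, run inside the VM
# (e.g. kicked off by cron). Paths are placeholders: SOURCE is the disk
# used as the working datastore/data disk, DEST is the second (mirror) disk.
import filecmp
import shutil
from pathlib import Path

SOURCE = Path("/mnt/data")    # assumed mount point of the primary disk
DEST = Path("/mnt/mirror")    # assumed mount point of the mirror disk

def mirror(src: Path, dst: Path) -> None:
    dst.mkdir(parents=True, exist_ok=True)
    src_names = {p.name for p in src.iterdir()}

    # Remove anything on the mirror that no longer exists on the source.
    for entry in dst.iterdir():
        if entry.name not in src_names:
            if entry.is_dir() and not entry.is_symlink():
                shutil.rmtree(entry)
            else:
                entry.unlink()

    # Copy new or changed files; recurse into directories.
    for entry in src.iterdir():
        target = dst / entry.name
        if entry.is_dir() and not entry.is_symlink():
            mirror(entry, target)
        elif not target.exists() or not filecmp.cmp(entry, target, shallow=True):
            shutil.copy2(entry, target)

if __name__ == "__main__":
    mirror(SOURCE, DEST)
```

The obvious trade-off versus live mirroring is that anything written between runs is only as safe as the last nightly pass.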