Question: Why will my Asus ROG Strix B450-F not boot (no POST, blank screen) when 3 or more GPUs are installed?

egonline

I recently purchased a mining rig, built on a bare-bones rig frame with plenty of space, with an Asus ROG Strix B450-F motherboard installed. It has six PCI-e slots and one each of a Corsair 850W and a Corsair 1000W power supply. Windows is installed on a 256GB NVMe drive in one of the two M.2 NVMe slots the mobo provides. I have PCI-e extenders for each of the five GPUs:
  • Two (2x) EVGA GeForce RTX 3080s FTW3 FHR (10G-P5-3897-KR)
  • 1x EVGA GeForce RTX 3060 Ti FTW3 FHR (08G-P5-3667-KR)
  • 1x MSI GeForce GTX 1070 Ti "Duke"
  • 1x EVGA GeForce GTX 1060 Ti
I am running the latest UEFI BIOS (February 2022) for the ROG Strix B450-F motherboard.


When two or fewer cards are installed, Windows (installed on the NVMe drive) boots up fine. After installing a third GPU, the computer refuses to boot. Once the third GPU is in and the computer fails to boot, I am forced to clear the CMOS (short the two motherboard pins). I sometimes also remove the power cable, pull the CMOS battery, and do a pin-short / boot / power-off / battery re-install to make sure everything is cleared. I am then able to get up to two GPUs booting, along with the integrated graphics (IGFX via the mobo's HDMI port) set as the default display, and all of them are recognized by GPU-Z. Installing the third GPU, however, puts me right back in the no-boot situation.

Here is a brief summary of what I've tried over numerous hours of irritating failure:


  • Enable IGFX as the default video output; install one card or even two cards at a time, e.g. 2x 3080s, 1x 3080 + 1x 3060, 1x 3080 + 1x 1070, whatever; boots fine. Installing a third fails.
  • Enable IGFX as default video; BIOS CPU options -> unlock all PCI-e lanes; install two cards; same result.
  • Enable IGFX as default video; BIOS CPU options -> unlock all PCI-e lanes, enable "Above 4G Decoding"; still fails above two cards.
  • Disable IGFX and use PCI-e as the output; install one card, then one more; the third card again fails.
  • Disable everything not being used, including HD audio, the serial port, various USB and M.2 ports, etc.; again fails to boot once a third card is added.
I've tried countless configurations - as this motherboard provides plenty of options - to no avail.

Any ideas what I am doing wrong or has anyone had a similar experience?

Thank you!
 

egonline

I assume the NVMe slot shares PCI-e lanes with a PCI-e slot. You can use one or the other, not both.

Just an assumption. Wait for others to chime in.
I will remove the NVMe drive and try to just get a non-blank-screen boot. I think I tried this before and had problems. I also had problems with the system not POSTing and getting a blank screen / no keyboard Num Lock light when I had removed the NVMe and hooked up a SATA drive instead.
 

egonline

A QUICK RUN-DOWN / SOLUTION FOR RUNNING FOUR (or 5!) GPUs WITHOUT TOO MUCH FUSS!
NOTE 1: This was tested on Devuan Linux 3 (based on Debian 10) and a Windows 10 Pro install.

UPDATE: BIOS files, for direct import or for copying the options manually, are now linked.

The text file containing the key/value pairs for the options: https://pastebin.com/raw/d24U64Vu


The .CMO file that can be loaded directly (stored as a base64 copy; go to base64decode.org, upload a saved copy/paste for decode and download - the second option on the page): https://pastebin.com/19RwZakg
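If you'd rather decode the .CMO locally instead of going through the website, something like the following works on Linux (the filenames are just my examples; it assumes curl and the standard base64(1) tool are available, and that you've saved the base64 text from the second paste into a local file):

    # Fetch the key/value list for entering the BIOS options manually
    curl -s -o b450f-bios-options.txt https://pastebin.com/raw/d24U64Vu

    # After saving the base64 text from the second paste into b450f-cmo.b64,
    # decode it back into a .CMO file the BIOS can import (e.g. from a USB stick)
    base64 -d b450f-cmo.b64 > B450F.CMO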

For those having problems with either the system display and keyboard Num Lock not initializing, POST failing, etc. when installing more than 2 or 3 GPUs, or with any GPUs beyond the first two or three not being recognized by the operating system (tested on both Windows and Linux), here is what I did. My non-GPU hardware is an AM4 AMD Athlon 200GE processor, three SATA hard drives, 8GB of DDR4 RAM (much too low for hashcat and some other programs that utilize all the GPUs; see note below), an 850W Corsair modular PSU and a 1000W Corsair modular PSU.

To get 5 GPUs working successfully on a ROG Strix B450-F motherboard with the AMD Athlon 200GE, I did the following.

BIOS: Simply use the attached files to configure the mobo BIOS options. This can be done in either of two ways: load the BIOS configuration "save" file directly through the load-saved-parameters option, OR configure the options manually with the keyboard (and optional mouse) by referencing the key/value pairs in the attached text file.

Power: You will want plenty of PCI-e/"GPU"-labeled PSU module cables (6+2-pin or 8-pin); as many as possible are needed, because the six (6) pins per GPU riser provide the electricity that its respective PCI-e slot would normally provide to the card when plugged in directly. The power connectors on the GPUs themselves, from a single 6-pin for a GTX 1060 up to 3x 8-pin (24 pins!) for a 3080 FTW3, are quite pin hungry and consume, I think, up to 350W per device when running hot!

Risers: Each of the five (5) PCI-e x1-to-x16 risers (most from GPURisers.com, some generic ones from Amazon) requires 6 pins because it needs to provide the same power a PCI-e slot would. I also installed an "ADT Link"-brand M.2 NVMe-to-PCI-e adapter from Amazon. Starting from the PCI-e slot furthest up towards the processor and I/O shield:
  • GPU #1 (EVGA GeForce RTX 3080) on PCIe_x16_1;
  • GPU #2 (EVGA GeForce RTX 3080) on PCIe_x1_1 (this GPU was chosen as the sole provider of output to the monitor for boot, YMMV);
  • GPU #3 (EVGA GeForce RTX 3060 Ti) on PCIe_x1_2;
  • GPU #4 (MSI GeForce GTX 1070 Ti "Duke") on PCIe_x1_3;
  • GPU #5 (MSI GeForce GTX 1060) on the #1 NVMe M.2 slot with an "ADT Link"-brand NVMe-to-PCIe adapter.
All five should be recognized by the "nvidia-smi" program on the Linux distribution mentioned above, as well as seen by hashcat 6.x with CUDA, using the latest official Nvidia drivers (5.x) installed through the "<current version>.run" installer from their website.
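For reference, a rough sketch of how I sanity-check that (the .run filename is a placeholder for whatever version you download, as above; hashcat's -I flag simply lists the compute devices it can see):

    # Install the official driver from Nvidia's .run installer
    # (run as root, with X / the display manager stopped; replace the filename)
    sudo sh ./NVIDIA-Linux-x86_64-<version>.run

    # List every GPU the driver has picked up - all five should appear
    nvidia-smi -L

    # Ask hashcat which CUDA backend devices it can use
    hashcat -I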


THE LATEST UPDATE ON THE STORY UP UNTIL THE SOLUTION / FIX ABOVE:

Running the latest Nvidia CUDA 5.x on Devuan Linux (Devuan, not Debian, though it is a fork of Debian) resulted in four out of the five GPUs working. Cards #1 and #2 are the twin EVGA GeForce RTX 3080 FTW3 10GB (FHR) cards, in the PCI-e x16 #1 (PCIe_X16_1) and PCI-e x1 #1 (PCIe_X1_1) slots respectively; card #3 is the EVGA GeForce RTX 3060 Ti FTW3 (FHR) in PCI-e x1 #2 (PCIe_X1_2); and card #4 is the MSI GeForce GTX 1070 Ti "Duke" in PCI-e x1 #3 (PCIe_X1_3). (Note: I skipped PCI-e x16 #2, aka PCIe_X16_2; see below.) "nvidia-smi -L" shows just those four cards, excluding the GTX 1060, and all four show up in /dev/dri.

SIDE NOTE: nvidia-smi -L has to be executed on a pseudo-tty (pts/X) over SSH, because running it on a local agetty(8) physical terminal causes the display to stop refreshing, and I can no longer switch getty terminals via ALT+Fx afterwards either. This might be the result of a NoMachine (NX) setup that I need to reconfigure on these terminals.
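For what it's worth, this is roughly what that looks like from another machine (the user and hostname are placeholders for your own):

    # Query the cards over SSH instead of a local getty
    ssh user@mining-rig 'nvidia-smi -L'

    # The DRM nodes should also be present, one card*/renderD* pair per detected GPU
    ssh user@mining-rig 'ls -l /dev/dri/'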

The fifth card, an MSI Nvidia GTX 1060 Ti 4GB, did not show up in "nvidia-smi -L"; instead it appeared in the kernel message buffer as a card that, in rather hilarious wording, "has fallen off the bus". I first tried it in the second x16 slot (PCIe_X16_2), but according to the motherboard manual that slot appears to be disabled when PCIe_X16_1 is in use with an x16 card, and the card did not show up at all in "lspci -vv" nor in the kernel message ring buffer (dmesg). It did appear in lspci and in /dev when plugged into PCIe_X16_3 (the #3 PCI-e x16, aka the last expansion slot on the board), but then "[fell] off the bus". I tried numerous BIOS settings and searched Google, these forums, and others like them to no avail, other than finding some guesses that eventually led me to the idea that provided the solution.
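In case it helps anyone else hit the same wall, these are roughly the commands I was checking with (nothing exotic, just dmesg and lspci):

    # Look for the driver's "fallen off the bus" / NVRM messages in the kernel log
    sudo dmesg | grep -iE 'fallen off the bus|NVRM'

    # See whether the card is enumerated on the PCI bus at all
    lspci -nn | grep -i nvidia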

It turns out that those NVMe slots are indeed taking up precious PCI-e lanes. I found that an "ADT Link"-brand NVMe M.2-to-PCIe adapter (on Amazon at https://www.amazon.com/ADT-Link-Extender-Graphics-Adapter-PCI-Express/dp/B07YDGK7K7) solved the problem. As a side note, coming from the server world I'd call this a "break-out cable" of sorts. The motherboard has two built-in M.2 NVMe slots. The GPU did not appear at all - not even getting the chance to "[fall] off the bus" - when installed on the M.2 slot furthest from the processor (NVMe_2?), but it did appear on the slot closest to the processor (NVMe_1?).
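If you want to see where the M.2 slots hang off the PCI-e topology, and confirm the adapter-mounted card actually enumerates, lspci's tree view is handy; a rough sketch (the 0a:00.0 address is only an example, substitute whatever the tree shows for your card):

    # Show the PCI-e device tree so you can see which bridge / root port
    # the M.2-adapted GPU sits behind compared to the regular slots
    lspci -tv

    # Once you know its bus address from the tree, check its link status
    sudo lspci -vv -s 0a:00.0 | grep -i lnksta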

The adapter slides into the M.2 slot and a wide black ribbon cable runs from it to an x16 connector, with 4-pin 12V power provided by a SATA power cable from the power supply. You don't plug the card directly into this; you plug a GPU riser's x1 adapter (e.g. one found at https://www.gpurisers.com/) into the provided x16 connector, attach an additional 6-pin PCI-e power cable to the riser, and then plug your GPU into the riser along with any additional PCI-e power it needs (e.g. 6-pin for my 1060 up to 16-pin for the big ones). I did not test without the 12V plug on the PCI-e M.2 adapter, but I assume it's required. (I did not find any documentation about this and went in blind.)

Attached are the key/value pairs to set your B450-F BIOS manually and also a directly importable .CMO file.
 