Question (Supermicro MBD-X11SPA-T-O): PCIe lanes removed when plugging an M.2 into C03 or C04?

Feb 6, 2021

I have a Supermicro MBD-X11SPA-T-O. MOBO MANUAL: HERE

I'm using it to connect 7 RTX 3060s for mining, with the 470.05 driver and dummy plugs. Everything works fine until I connect an M.2 drive in the C03 or C04 slot (as the manual says, with C01 or C02 the PCIe Slot 1 is disabled, which I don't want, so I prefer to keep PCIe x8 instead):

So, theoretically:

Slot 1 and 4x M.2 share the 16 lanes.
Slot 6 and slot 7 share 16 lanes.

When I plug the M.2 into the C03 or C04 slot, one of my GPUs drops to half its hashrate. Without any M.2 installed, all GPUs work at max hashrate. Mining should work fine over a PCIe x8 link anyway (check this), so I don't know how to fix it. I also modified the BIOS to force Gen3 speed on all slots, without results. My CPU is an Intel Xeon W-3235 @ 3.30 GHz.
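To confirm which GPU actually lost lanes (rather than just hashrate), it helps to read the negotiated PCIe link width straight from the driver. A minimal sketch, assuming `nvidia-smi` is available on the rig; the `check_links` helper and the x8 threshold are my own additions:

```shell
# Flag any GPU negotiating a narrower PCIe link than expected.
# check_links reads nvidia-smi CSV lines "index, width" and prints
# every GPU whose current link width is below the given threshold.
check_links() {
    awk -F', *' -v want="$1" 'NR > 1 && $2 + 0 < want {
        printf "GPU %s: x%s link (expected x%s)\n", $1, $2, want
    }'
}

# On the mining rig (x8 is what the shared slots should give you):
# nvidia-smi --query-gpu=index,pcie.link.width.current --format=csv | check_links 8
```

If one card reports x4 (or less) only while the M.2 is installed, the board really is taking lanes away from that slot rather than just capping the Gen speed.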

I have Above 4G Decoding enabled too, with priority set to Offboard.

All GPUs are ZOTAC RTX 3060 Edge cards, bought at the same time (no LHR worries). I tried changing risers too (as I said, everything works fine if I remove the M.2, so that's not the problem).

So, with the M.2 plugged into C03 or C04:

Without the M.2:

What can I do to use the M.2 and all 7 RTX 3060 GPUs at max hashrate? Thanks.


From what I've gathered, it's a matter of using EITHER of the following two configurations (Slot 1 = S1, Slot 2 = S2, etc.) to leverage the maximum available 64 PCIe lanes:
  • S1: x16
  • S2: Unavailable
  • S3: x16
  • S4: Unavailable
  • S5: x16
  • S6: Unavailable
  • S7: x16
or:
  • S1: x16
  • S2: x8
  • S3: x8
  • S4: x8
  • S5: x8
  • S6: x8
  • S7: x8
Only Slot 1 is shared / split for use by C1 - C4 at x4 each, so you would assume that if you're only using 2 NVMe drives, you'd only be using 8 of the 16 lanes from Slot 1, right? Not sure if that works on this platform, though.

The remainder of the Slot sharing is:
  • Slot 2 & Slot 3
    • x16 (Slot 2) / Unavailable (Slot 3)
    • x8 (Slot 2) / x8 (Slot 3)
  • Slot 4 & Slot 5
    • x16 (Slot 4) / Unavailable (Slot 5)
    • x8 (Slot 4) / x8 (Slot 5)
  • Slot 6 & Slot 7
    • x16 (Slot 6) / Unavailable (Slot 7)
    • x8 (Slot 6) / x8 (Slot 7)
It shouldn't disable any of the slots whether you put the NVMe drives in C1 + C2 or C3 + C4; it's just sharing the bandwidth of the single x16 slot. Have you tried moving the NVMe drives to C1 + C2 to see if the results are the same?
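One way to see whether a slot actually renegotiated to a narrower link after plugging the NVMe drives is to compare each GPU's capable vs. currently negotiated width from `lspci -vv` (the LnkCap vs. LnkSta lines). A sketch, assuming a Linux host; the `widths` helper and the `01:00.0` bus address are illustrative:

```shell
# Extract "Width xN" from the LnkCap (capable) and LnkSta (negotiated)
# lines of lspci -vv output and print them side by side.
widths() {
    awk '/LnkCap:|LnkSta:/ {
            if (match($0, /Width x[0-9]+/))
                w = substr($0, RSTART + 7, RLENGTH - 7)
            if ($0 ~ /LnkCap:/) cap = w; else sta = w
        }
        END { printf "cap=x%s sta=x%s\n", cap, sta }'
}

# On the rig, per GPU (01:00.0 is an example bus address):
# sudo lspci -vv -s 01:00.0 | widths
```

A card that shows `cap=x16 sta=x4` only when the NVMe drive sits in C03/C04 would pin the problem on lane sharing rather than on the riser or driver.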

Also, please forgive me for this being a completely STUPID question (heh...), but you're referencing the slots starting from the far left (outside) inward, right? Not the other way around (typically the first slot is closest to the CPU; it's the opposite in this case)?

What about temporarily setting everything to "Auto" to see if it properly distributes the lane sharing how you want it?

I feel ya on this one. PCIe lane <Mod Edit> always irks me, especially when it's something like this where you're like, "WTF?" :p