Question: AMD RX 6800 runs only at PCIe x1 in VM (GPU passthrough)?

Sep 5, 2023
Hello.

I cannot figure out why my AMD RX 6800 runs at only PCIe x1 in a VM (GPU passthrough), while a GTX 960 runs at x8 in the same two-GPU setup.

My setup:
  • CPU Ryzen 9 5900X
  • Motherboard Gigabyte x570 Aorus Master (updated to the latest firmware)
  • Host GPU Nvidia Quadro P2200
  • PSU Corsair HXi Series HX1000i - 1000W
  • VM (passthrough) GPU:
    • AMD RX6800 (new)
    • Nvidia GTX 960 (old)

The Linux host GPU is always the Nvidia Quadro P2200. I also have a dual-boot Windows 10 install, so I can test the hardware natively as well. Both cards were tested in both main PCIe slots, with the following findings:

Nvidia GTX 960 as the 2nd GPU:
  • BIOS shows both cards running at PCIe x8
  • in the VM, GPU-Z also reports PCIe x8 for the GTX 960
  • in the VM, 3DMark scores about 98% of what the card achieves as a single GPU under dual-boot Windows 10
AMD RX 6800 as the 2nd GPU:
  • BIOS shows both cards running at PCIe x8
  • in the VM, GPU-Z reports PCIe x1, and the AMD Adrenalin software also reports PCIe x1
  • in the VM, 3DMark scores only about 7% of what the card achieves as a single GPU under dual-boot Windows 10
    • tried swapping PCIe slots with the Nvidia Quadro P2200, with the same results
    • tried extracting the VBIOS from the GPU and referencing it directly in the VM config
    • tried every related BIOS setting
    • tried every related KVM setting (VBIOS extract, CPU pinning, various CPU policies, ...)
    • in the VM the card is always dropped to x1 speed when used alongside another GPU
      • it is also dropped to x1 speed when used standalone (headless host)
  • when tested directly under dual-boot Windows 10, GPU-Z reports PCIe x8
    • this rules out a low-power state
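For context, the VBIOS-extract attempt above was wired in through the hostdev entry in the domain XML. A sketch of that configuration (the ROM path is a placeholder; the bus address matches the card's 0f:00.0 shown below):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0f' slot='0x00' function='0x0'/>
  </source>
  <!-- point libvirt at the extracted VBIOS file (example path) -->
  <rom file='/var/lib/libvirt/vbios/rx6800.rom'/>
</hostdev>
```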
Host lspci -vvv for the AMD RX 6800, where LnkSta shows Speed 16GT/s (ok), Width x16 (ok):
Code:
0f:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21 [Radeon RX 6800/6800 XT / 6900 XT] [1002:73bf] (rev c3) (prog-if 00 [VGA controller])
Subsystem: Sapphire Technology Limited Navi 21 [Radeon RX 6800/6800 XT / 6900 XT] [1da2:e437]
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin A routed to IRQ 156
IOMMU group: 34
Region 0: Memory at d0000000 (64-bit, prefetchable) [size=256M]
Region 2: Memory at e0000000 (64-bit, prefetchable) [size=2M]
Region 4: I/O ports at f000 [size=256]
Region 5: Memory at fbf00000 (32-bit, non-prefetchable) [size=1M]
Expansion ROM at fc000000 [disabled] [size=128K]
Capabilities: [48] Vendor Specific Information: Len=08 <?>
Capabilities: [50] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1+,D2+,D3hot+,D3cold+)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [64] Express (v2) Legacy Endpoint, MSI 00
DevCap:MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
DevCtl:CorrErr+ NonFatalErr+ FatalErr+ UnsupReq-
RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
MaxPayload 256 bytes, MaxReadReq 512 bytes
DevSta:CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
LnkCap:Port #0, Speed 16GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl:ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta:Speed 16GT/s (ok), Width x16 (ok)
TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
 10BitTagComp+ 10BitTagReq+ OBFF Not Supported, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
 EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
 FRS-
 AtomicOpsCap: 32bit+ 64bit+ 128bitCAS-
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
 AtomicOpsCtl: ReqEn-
LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
 Compliance De-emphasis: -6dB
LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete+ EqualizationPhase1+
 EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
 Retimer- 2Retimers- CrosslinkRes: unsupported
Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
Address: 00000000fee00000  Data: 0000
Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
Capabilities: [150 v2] Advanced Error Reporting
UESta:DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UEMsk:DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UESvrt:DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
CESta:RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
CEMsk:RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
AERCap:First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
HeaderLog: 00000000 00000000 00000000 00000000
Capabilities: [200 v1] Physical Resizable BAR
BAR 0: current size: 256MB, supported: 256MB 512MB 1GB 2GB 4GB 8GB 16GB
BAR 2: current size: 2MB, supported: 2MB 4MB 8MB 16MB 32MB 64MB 128MB 256MB
Capabilities: [240 v1] Power Budgeting <?>
Capabilities: [270 v1] Secondary PCI Express
LnkCtl3: LnkEquIntrruptEn- PerformEqu-
LaneErrStat: 0
Capabilities: [2a0 v1] Access Control Services
ACSCap:SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
ACSCtl:SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-
Capabilities: [2d0 v1] Process Address Space ID (PASID)
PASIDCap: Exec+ Priv+, Max PASID Width: 10
PASIDCtl: Enable- Exec- Priv-
Capabilities: [320 v1] Latency Tolerance Reporting
Max snoop latency: 1048576ns
Max no snoop latency: 1048576ns
Capabilities: [410 v1] Physical Layer 16.0 GT/s <?>
Capabilities: [440 v1] Lane Margining at the Receiver <?>
Kernel driver in use: vfio-pci
Kernel modules: amdgpu
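To compare the negotiated link on host vs. guest without reading the whole dump, the LnkSta values can be filtered out with grep. Shown here on a saved sample line; on the host you would pipe `sudo lspci -vvv -s 0f:00.0` instead (the same values are also exported in sysfs as `/sys/bus/pci/devices/0000:0f:00.0/current_link_speed` and `current_link_width`):

```shell
# Sample LnkSta line as produced by lspci -vvv (see dump above):
dump='LnkSta: Speed 16GT/s (ok), Width x16 (ok)'
# Extract just the negotiated speed and width:
echo "$dump" | grep -Eo 'Speed [0-9.]+GT/s|Width x[0-9]+'
# prints:
#   Speed 16GT/s
#   Width x16
```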

Does anybody have any clue what the problem could be? I cannot think of anything else to try.

Sep 5, 2023
Actually, it was your Reddit post that helped me in the end. While researching this problem I stumbled upon three possible solutions, of which the last one worked for me.

I edited the VM's XML, altering the <domain> tag:
Code:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
and added this configuration at the end of the XML file (before the closing </domain> tag):
Code:
<qemu:commandline>
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.x-speed=16'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.x-width=32'/>
</qemu:commandline>
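One thing worth double-checking after editing: the <qemu:commandline> block only takes effect when the qemu XML namespace is declared on <domain>, which is what the first edit above adds. A quick sanity check on the active config (the VM name "win10" is a placeholder): run `virsh dumpxml win10 | grep xmlns:qemu`. The check itself, demonstrated on a sample line:

```shell
# The <domain> tag must carry the qemu namespace declaration, otherwise
# libvirt rejects (or silently ignores) the qemu:commandline block:
line='<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">'
echo "$line" | grep -q 'xmlns:qemu=' && echo "namespace present"
# prints: namespace present
```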

Now the GPU is recognized as PCIe x16 @ Gen4, even though in the BIOS it runs at PCIe x8 @ Gen4. The x16 reading is probably not physically true, but it gives me almost full performance compared to running natively on the host.