Question Mellanox ConnectX-2 Performance Problems

jtyubv

Prominent
Aug 6, 2022
I'm currently using two Mellanox ConnectX-2 cards to connect two computers with a 10 gigabit link. However, I recently moved one of the cards from an Asus X99 Deluxe with an i7-5960X to an MSI X670 Pro WiFi with an AMD Ryzen 7950X (the other card remained in an Intel NUC9i7QNX Ghost Canyon with an i7-9750H CPU), and for some reason I cannot get the full 10 gigabit performance no matter what configuration I use. I tried different PCIe slots on the board (the x2 and the x4) and there doesn't seem to be a difference. Changing the jumbo packet size to 9000 (from 1514) and disabling flow control made no difference either. I can't really figure out what's wrong. For example, here is some iperf output:

Code:
PS C:\Users\John> iperf3.exe -c 192.168.1.221 -p 577 -R
Connecting to host 192.168.1.221, port 577
Reverse mode, remote host 192.168.1.221 is sending
[  4] local 192.168.1.210 port 53369 connected to 192.168.1.221 port 577
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.01   sec   229 MBytes  1.91 Gbits/sec
[  4]   1.01-2.00   sec   304 MBytes  2.56 Gbits/sec
[  4]   2.00-3.01   sec   162 MBytes  1.36 Gbits/sec
[  4]   3.01-4.01   sec   150 MBytes  1.26 Gbits/sec
[  4]   4.01-5.01   sec   102 MBytes   855 Mbits/sec
[  4]   5.01-6.02   sec   165 MBytes  1.38 Gbits/sec
[  4]   6.02-7.00   sec   169 MBytes  1.43 Gbits/sec
[  4]   7.00-8.00   sec   202 MBytes  1.70 Gbits/sec
[  4]   8.00-9.01   sec   178 MBytes  1.48 Gbits/sec
[  4]   9.01-10.00  sec   226 MBytes  1.92 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.84 GBytes  1.58 Gbits/sec                  sender
[  4]   0.00-10.00  sec  1.84 GBytes  1.58 Gbits/sec                  receiver

iperf Done.
PS C:\Users\John> iperf3.exe -c 192.168.1.221 -p 577
Connecting to host 192.168.1.221, port 577
[  4] local 192.168.1.210 port 53395 connected to 192.168.1.221 port 577
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   601 MBytes  5.04 Gbits/sec
[  4]   1.00-2.00   sec   637 MBytes  5.35 Gbits/sec
[  4]   2.00-3.00   sec   616 MBytes  5.17 Gbits/sec
[  4]   3.00-4.00   sec   672 MBytes  5.61 Gbits/sec
[  4]   4.00-5.00   sec   597 MBytes  5.04 Gbits/sec
[  4]   5.00-6.00   sec   671 MBytes  5.63 Gbits/sec
[  4]   6.00-7.00   sec   644 MBytes  5.40 Gbits/sec
[  4]   7.00-8.00   sec   639 MBytes  5.36 Gbits/sec
[  4]   8.00-9.00   sec   630 MBytes  5.27 Gbits/sec
[  4]   9.00-10.00  sec   636 MBytes  5.35 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  6.20 GBytes  5.32 Gbits/sec                  sender
[  4]   0.00-10.00  sec  6.19 GBytes  5.32 Gbits/sec                  receiver
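
The jumbo packet and flow control settings mentioned above can also be double-checked from PowerShell instead of Device Manager. This is only a sketch: "Ethernet 2" is a placeholder adapter name, and the exact display names for the properties vary between drivers.

Code:
# Sketch only: "Ethernet 2" is a placeholder; list real adapter names with Get-NetAdapter.
# Filters the adapter's advanced properties down to the jumbo packet and flow control entries.
Get-NetAdapterAdvancedProperty -Name "Ethernet 2" |
    Where-Object { $_.DisplayName -match 'Jumbo|Flow' } |
    Format-Table DisplayName, DisplayValue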

I'm using the driver from here:
https://network.nvidia.com/products/adapter-software/ethernet/windows/winof-2/
(the WinClient 1909 package, MLNX_VPI_WinOF-5_50_53000_All_Win2019_x64.exe) on Windows 10 21H2 (build 19044).

Here is some additional driver information:

[Image: driver information screenshot]


I previously got 900-950 MB/s in both directions when the card was in the Asus motherboard with the old Intel 5960X CPU.

Any idea what could be causing this poor performance? If this isn't resolvable, are there other 10 gigabit NICs you would recommend trying instead? I'm happy to buy a new one if I need to; I just don't want to waste too much time figuring this out.
 

kanewolf

Titan
Moderator
That card is PCIe 2.0 and designed for an x8 slot. The x4 slot on that motherboard should still provide enough bandwidth (about 2 GB/s).
I am not sure how your driver information is showing PCIe 5.0; the specs for the motherboard (https://www.msi.com/Motherboard/PRO-X670-P-WIFI/Specification) show PCIe 4.0. You might want to start by forcing the slot to PCIe 3.0 in the BIOS, if that option is available.
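
One way to confirm what the card actually negotiated is to query the adapter's hardware info from PowerShell. This is only a sketch: "Ethernet 2" is a placeholder for whatever name the Mellanox adapter shows under Get-NetAdapter.

Code:
# Sketch only: "Ethernet 2" is a placeholder; substitute the Mellanox adapter's real name.
# The default output includes PcieLinkSpeed and PcieLinkWidth columns, which show the
# negotiated link and can be compared against the card's PCIe 2.0 x8 design.
Get-NetAdapterHardwareInfo -Name "Ethernet 2"

If the negotiated generation or width comes out lower than expected, that would line up with the throughput you are seeing.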
 
