Question: Intel XL710-QDA2 in PCIe 4.0 x2 Slot?

jtyubv

Prominent
Aug 6, 2022
I want to directly connect two computers using two Intel XL710-QDA2 40 Gbps PCIe 3.0 x8 cards. However, the only slot I have available on my MSI X670-P WiFi motherboard is a PCIe 4.0 x2 slot (the physical slot size is x16). See "PCI_E4" in the diagram below:

[Image: motherboard slot diagram showing PCI_E4]


If the card in the other computer is in a PCIe 3.0 x16 slot, does this mean I will get 10 Gbps speeds (since the Intel card is x8 but would be limited to x2 on the MSI motherboard)? Or would it not function at all, or run at some other speed?

I did a similar test on these computers using Mellanox ConnectX-2 10G SFP+ cards, and I was getting about 500 MB/sec in the PCIe 4.0 x2 slot, which was higher than the 250 MB/sec I expected, given those cards are PCIe 2.0 x8 (I think).
 
The only cards like that I have ever used went into server-class machines.

The theoretical limit of PCIe 4.0 x2 is about 4 GB/s, so you should see roughly 32 Gbit/s if there were no overhead.
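The arithmetic behind that figure can be sketched as follows. The transfer rates and line encodings per PCIe generation are standard; the helper function and its name are just illustrative, and this rough estimate ignores protocol overhead such as TLP headers and flow control:

```python
# Per-lane raw transfer rate (GT/s) and line-code efficiency per PCIe generation.
GT_PER_LANE = {2: 5.0, 3: 8.0, 4: 16.0}
ENCODING = {2: 8 / 10, 3: 128 / 130, 4: 128 / 130}

def pcie_gbytes_per_s(gen, lanes):
    """Effective one-direction bandwidth in GB/s (ignores protocol overhead)."""
    return GT_PER_LANE[gen] * ENCODING[gen] * lanes / 8  # 8 bits per byte

for gen, lanes in [(4, 2), (3, 2), (3, 8)]:
    gb = pcie_gbytes_per_s(gen, lanes)
    print(f"PCIe {gen}.0 x{lanes}: ~{gb:.2f} GB/s (~{gb * 8:.1f} Gbit/s)")
```

Note the ~32 Gbit/s figure assumes the link actually negotiates Gen 4 on those two lanes; a Gen 3 device on the same two lanes would top out around half that.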

I am unsure what realistic use case you have that can run that fast. Almost no standard disk system can sustain that rate for very long before you exhaust the write cache. If you see 10 Gbit transfers you are doing really well. There is some overhead in putting all the data into Ethernet frames and taking it back out.

I suspect the network will not be your limiting factor. Since you are going to run them back to back, I would use jumbo frames to cut the overhead a bit.
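On Linux, enabling jumbo frames for a back-to-back link is one command per side (the interface name `enp1s0` here is a placeholder; both ends must use the same MTU):

```shell
# Set a 9000-byte MTU on the 40G interface (run on both machines; enp1s0
# is an example name -- find yours with `ip link`).
sudo ip link set dev enp1s0 mtu 9000

# Verify the new MTU took effect.
ip link show dev enp1s0 | grep -o 'mtu [0-9]*'
```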

Note I am unsure whether there is some limitation in the card itself, since these are very uncommon in the consumer space. You would have to read the fine print, but the PCIe link itself is not the limit, other than that it can "only" do about 32 Gbit/s rather than the card's 40 Gbit/s.
 

jtyubv

Prominent
Aug 6, 2022
The Intel card is a PCIe 3.0 card though, so wouldn't it run slower than that?