10-Gigabit mixed with 1-Gigabit Network on Windows 10


Aug 6, 2022
I need a file server with a fast connection (random and sustained read/write speeds) to one workstation and a gigabit connection to several other machines. The other machines are render nodes for which a gigabit connection to the file server is sufficient. The workstation needs fast access to the file server for video editing/compositing, and a gigabit connection isn't cutting it. The workstation also needs to access individual render nodes, but a gigabit connection is fine for that.

The reason I'm not using the workstation as the file server is that the workstation has to be restarted frequently, which causes issues for the render nodes.

Currently, I'm using an HP Slimline 290-p0043w as the file server with a 2TB 970 Evo+ as the working drive being shared with the other machines. The setup looks like this (all connections are one gigabit):

Configuration 1:

This is functional, but as I mentioned, the connection between the workstation and the HP server is too slow sometimes. So I bought two Mellanox 10 gigabit SFP+ cards and put those into the workstation and the HP server. Now the configuration looks like this:

Configuration 2:

The problem with this approach was that I could not get the workstation to connect to the HP server at 10 gigabit; it always used the 1 gigabit connection instead. I am not sure whether it is even possible to force Windows to use the 10 gigabit connection in this setup. So then I tried this configuration:
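One thing worth checking in configuration 2 (a sketch, untested on this hardware, and assuming both NICs ended up on the same subnet) is the interface metric: Windows prefers the route with the lowest metric, so lowering the metric on the 10 gig adapter can make it the preferred path. In PowerShell, run as administrator; "Ethernet 2" is a placeholder for whatever alias your Mellanox card actually has:

```shell
# List IPv4 interfaces and their current metrics (lower metric = preferred)
Get-NetIPInterface -AddressFamily IPv4 |
    Sort-Object InterfaceMetric |
    Format-Table InterfaceAlias, InterfaceMetric, ConnectionState

# Give the Mellanox link a lower metric than the 1 gig NIC so Windows
# prefers it when both interfaces can reach the same destination.
# Substitute the real alias reported by the command above.
Set-NetIPInterface -InterfaceAlias "Ethernet 2" -InterfaceMetric 5
```

Note that this only helps if the name or address you connect to is actually reachable over the 10 gig link; if the server's name resolves to its 1 gig address, traffic will still go that way regardless of metric.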

Configuration 3:

In configuration 3, to let the workstation reach both the HP server and the render nodes, I had to "bridge" the Ethernet and Mellanox connections on the HP server, like this (otherwise, although the workstation could connect to the HP server at full speed, it had no way to reach the render nodes):

Unfortunately, creating a bridge resulted in significantly degraded performance. For example:

(These tests transferred a 10 gigabyte file from a WD SN750 NVMe SSD in the workstation to the 970 Evo+ in the HP server, or to a SATA3 SSD in a render node.) Additionally, the latency when connecting from the workstation to the render nodes is much worse: a remote desktop connection took less than 0.1 seconds to start in configuration 1, but under configuration 3 it can sometimes take 1 to 2 seconds.

My last thought would be to try something like this, but I do not have the hardware for it currently:

Configuration 4:

I would like to avoid this configuration if possible to minimize the amount of hardware necessary.

My guess is that there is something I'm doing wrong in configurations 2 and/or 3. There must be something I can do to get the full performance without needing a 10 gig switch. Either there is some way to improve the performance of the network bridge in configuration 3, or there is some way to tell Windows to use the 10 gigabit connection rather than the 1 gigabit connection in configuration 2.

Does anybody have any advice on what I should try here? I'm at a bit of a loss right now. I'd appreciate any pointers on how to proceed.

I did not completely read your post... tired today; I will do better tomorrow if this does not answer it.

In the diagram where you dual-connected the PC and the server: you can make this work if you use different IP blocks. Leave the gigabit connection as normal for your internet, and your router will provide the IP. On the 10 gig network you will have to manually assign the IPs, on a different subnet from the main network. So if your main network is 192.168.1.x, you could use 192.168.200.x, assigning (say) 192.168.200.1 to one machine and 192.168.200.2 to the other. To keep it simple, leave the mask at 255.255.255.0. Make sure you leave the DNS and gateway fields blank on the 10 gig adapters.
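The steps above can be sketched in PowerShell (run as administrator). The adapter alias, the addresses, and the share name are placeholders for illustration; a PrefixLength of 24 corresponds to the 255.255.255.0 mask, and omitting -DefaultGateway matches the advice to leave the gateway blank:

```shell
# On the workstation: give the 10 gig adapter a static address in its
# own subnet. "Mellanox" is a placeholder -- use the alias reported by
# Get-NetAdapter for your SFP+ card.
New-NetIPAddress -InterfaceAlias "Mellanox" -IPAddress 192.168.200.1 -PrefixLength 24

# On the HP server: same idea with the other address in that subnet.
New-NetIPAddress -InterfaceAlias "Mellanox" -IPAddress 192.168.200.2 -PrefixLength 24

# Then, from the workstation, map the share via the 10 gig address
# explicitly so traffic cannot fall back to the 1 gig path:
net use Z: \\192.168.200.2\share
```

Because 192.168.200.x only exists on the direct 10 gig link, any connection to that address is forced over the Mellanox cards, while everything addressed in 192.168.1.x still flows over the normal gigabit network.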