Question about Link Aggregation to 10GBASE-T Card

Jan 31, 2019
If I have a 2-gigabit link aggregation connection to the switch, will that traffic reach a 10-gigabit PCIe card in my computer at 2 gigabits per second through a Netgear GS110EMX switch? Or, if I increased the LAG to 4 gigabits, would it default to 2.5 gigabits per second?
 
Most switches don't support the bond type that makes that possible. It's called balance-rr: packets go out round-robin across the links, which has downsides. Packets can arrive out of order, which can cause problems and retransmissions.

Each time you open a connection, the bond assigns it to one of the links based on a hash. A file transfer is a single connection, so it would be stuck at 1 Gbps. If you ran 10 file transfers in parallel, the total could go up to 2 Gbps.
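Here is a rough sketch in Python of what that per-connection behavior looks like. The hash formula, IP addresses, and ports are made up for illustration; real bonding drivers use policies such as layer2 or layer3+4 hashing, not this function:

def pick_link(src_ip, dst_ip, src_port, dst_port, num_links):
    # One flow (connection) is hashed once, so every packet of that flow
    # keeps using the same member link of the bond.
    flow = (src_ip, dst_ip, src_port, dst_port)
    return hash(flow) % num_links

# A single file transfer is one flow, so it stays on one 1 Gbps link:
print("single transfer -> link", pick_link("192.168.1.10", "192.168.1.20", 50000, 445, 2))

# Ten parallel transfers are ten flows, so they can spread across both links:
for port in range(50000, 50010):
    print("port", port, "-> link", pick_link("192.168.1.10", "192.168.1.20", port, 445, 2))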

D-Link has a layer 2+ switch with 24 gigabit RJ45 ports and 4 SFP+ ports for about $350. MikroTik has a similar switch, but with only 2 SFP+ ports, for around $120 to $140.
Two SFP+ NICs and a DAC cable run about $80.

There are other options to connect your PC to a file server. You can keep your 1 Gbps connections for your main LAN, add an SFP+ link directly between the two machines, and create a static route on each computer. When you mount the SMB share, you enter the IP of the 10G NIC.
 
Jan 31, 2019


OK, so if the switch supported 2.5 gigabits per second, my NIC supported 2.5 gigabits per second, and I had a link aggregation at 4 gigabits per second... are you saying it would not default to 2.5 gigabits per second? So it would just default to 1 Gbps? Thanks for the quick response, by the way.
 
I have no idea where you are getting 2.5G.

Standard link aggregation is really just simple mathematical path selection. An overly simplistic example: let's say the link aggregation just adds the IP addresses of the source and destination machines together; if the result is odd it selects path 1, and if the result is even it selects path 2.

Since the IP addresses are the same for all the traffic between those two machines, it will always select the same path. You could have 10 paths and the math would still always pick the same one.

It is a little more complex than this in practice, but the point stands: a single session will only use a single link, no matter how many you have aggregated.
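A toy Python version of that parity example, just to make the point concrete. This is the oversimplified rule described above, not a real hashing policy, and the addresses are made up:

def pick_path(src_ip, dst_ip):
    # Add up every octet of both addresses; odd total -> path 1, even total -> path 2.
    total = sum(int(octet) for octet in src_ip.split(".") + dst_ip.split("."))
    return 1 if total % 2 else 2

# The addresses never change between these two machines, so the result never changes:
print(pick_path("192.168.1.10", "192.168.1.20"))  # same path every time, however many links exist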
 


A single connection would be limited to the speed of one member link, so 1 Gbps on the quad bond. If you open multiple connections, the aggregate could get up to 2.5 Gbps (the limit of the multi-gig NIC on the other end). If you already have a switch that can handle multi-gig, then getting another multi-gig NIC for the second PC would give you a fast link between the two.
 
Jan 31, 2019


I am getting 2.5G from multi-gig cards that support 1, 2.5, 5, and 10 gigabits per second. To restate the question: let's say I have a 5-gigabit link aggregation connection (five 1-gigabit Ethernet cables) to a switch, and a 5-gigabit connection from a multi-gig card (one Cat6 cable) to the same switch. Would the speed between the two devices be 5 gigabits per second if one device used link aggregation and the other had a multi-gig card? My original question was the same setup, except instead of a 5-gigabit link aggregation, what if I had a 4-gigabit link aggregation connection to the switch and a multi-gig card that supported 2.5 gigabits per second? I'm beginning to assume that I would need a multi-gig card on the other end instead of the link aggregation to achieve the 2.5 gigabits per second, and that a link aggregation connection wouldn't necessarily default to the lowest common denominator... Am I correct in assuming this? Thanks for your response.
 
I see. I pretty much only consider 1 gigabit and 10 gigabit. I did not pay much attention to the 2.5 and 5 gigabit stuff because it is a newer standard, and pretty much the only reason you would ever use it is if you could not run Cat6a cable... like if you had Cat5e in the wall. I see they finally certified those speeds. It is still very rare on commercial equipment, which is mostly what I worked on.

Once you decide you need more than 1g you might as well go all the way to 10g. There is not a huge difference in the cost of the switches or nic cards. The big cost is going to shift to your disk subsystems since that is where the bottleneck quickly moves to. Most non ssd drives can not do much over 1gbit without tricks like raid or very expensive high rpm devices.