[SOLVED] CAT7 10Gbps Cabling - Do I need a transceiver?

AJAshinoff

I'm poised to run several Cat7 cables to begin a process that will eventually see everything in my environment move to 10Gbps. In looking into network adapters, I've noticed two types: 1) cards with an RJ45 end, and 2) cards with a receptacle end that accepts a 10Gb transceiver.

My Question:
If my cabling is 10Gbps, the keystones are 10Gbps, and the switch is 10Gbps, do I need a network card that takes a 10Gbps transceiver, or can I use a 10Gbps network adapter with an RJ45 end?

Thanks in advance
 
Technically it doesn't matter, as you can get an SFP+ transceiver with an RJ45 interface. For ease of cabling I would keep the cable ends the same on each end. For 10GbE you can use Cat5e for distances under about 100 ft and Cat6 for distances under about 150 ft, but Cat6a is the lowest spec officially rated for the full 100 m (roughly 330 ft) run.
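Once a run is terminated, it's also worth checking what the link actually negotiated instead of trusting the cable rating. A minimal sketch, assuming a Linux host (wired NICs there expose the negotiated speed through sysfs; the interface name is a placeholder):

```python
# Sketch: report the speed a NIC actually negotiated, on Linux.
# Reads /sys/class/net/<iface>/speed, which standard wired drivers
# populate in Mb/s (it raises OSError if the link is down).
from pathlib import Path

def link_speed_mbps(iface: str) -> int:
    """Return the negotiated link speed in Mb/s, e.g. 1000 or 10000."""
    return int(Path(f"/sys/class/net/{iface}/speed").read_text().strip())

if __name__ == "__main__":
    iface = "eth0"  # placeholder; substitute your server's interface name
    speed = link_speed_mbps(iface)
    print(f"{iface} negotiated {speed} Mb/s ({speed / 1000:g} Gb/s)")
```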
 

AJAshinoff

@All I'm aware that the weakest link is the deciding factor in throughput. That said, all of my existing switches and network adapters are 1Gbps. In planning for 10Gbps, at least initially, I anticipate getting only 1Gbps until the cost of 10Gbps switching comes down. The reason for the question is that I'm building two servers and I want to add a 10Gbps adapter to each. Considering that a Cat7 patch cord will attach each NIC to a Cat7 keystone leading to a 1Gbps switch port, I anticipate 1Gb speeds.
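To put rough numbers on that gap, a back-of-the-envelope transfer-time calculation (a sketch using raw line rates only, ignoring protocol overhead and disk limits; the 100 GB payload is a placeholder figure):

```python
# Sketch: how long a payload takes at raw line rate. Ignores TCP/IP
# overhead and disk speed; it just sizes the difference between staying
# at 1 Gb/s and moving to 10 Gb/s later.
def transfer_seconds(size_gb: float, rate_gbps: float) -> float:
    """Seconds to move size_gb (decimal GB) at rate_gbps line rate."""
    return (size_gb * 8) / rate_gbps  # GB -> gigabits, then divide by rate

payload_gb = 100  # placeholder: e.g. one VM image
for rate_gbps in (1, 10):
    minutes = transfer_seconds(payload_gb, rate_gbps) / 60
    print(f"{payload_gb} GB at {rate_gbps} Gb/s: ~{minutes:.1f} min")
# -> ~13.3 min at 1 Gb/s vs ~1.3 min at 10 Gb/s
```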

Even so, to prepare for the future: do the NICs in the servers need transceiver ports (and transceivers) to run at 10Gbps, or can I buy 10Gb NICs with an RJ45 connection (considering the patch cord is Cat7 leading back to the switch)?

Perhaps pictures would help:
Transceiver 10Gbps NIC https://www.amazon.com/10Gtek-E10G4...2DQASGWEARC&psc=1&refRID=XWK25QKSH2DQASGWEARC

RJ45 10Gbps NIC https://www.amazon.com/10Gtek-X540-T2-Converged-Network-Adapter/dp/B01HMGWOU8/ref=sr_1_5?crid=16PGWA77OG7Y&keywords=10gb+network+card&qid=1566068263&s=electronics&sprefix=10GB,electronics,238&sr=1-5

I'm not sold on 10Gtek; it's just a ready example of each technology.
 

AJAshinoff

The only difficult part in "planning for 10gbps" is the cable runs.
All the other 'parts' are easily replaceable/upgradable.
Switch/NIC/whatever...

Agreed. I've done far too much cabling over the years to think it will be easy. Still, that doesn't help me decide whether to buy one type of adapter or the other. Frankly, I prefer fewer connections; if I can avoid a transceiver NIC, that's one less potential point of failure. Still, if I need a transceiver NIC in each server to obtain full 10Gbps at a later date, then that's what I need to get.
 
Transceiver NICs have lower latency than RJ45. That being said, you can get transceivers that are RJ45-ended: https://www.fs.com/products/66613.html However, if you are looking at doing 10GbE for your hosts at this point, just go 25GbE and use Direct Attach cables from the hosts to the switch. https://store.mellanox.com/products...25gbe-dual-port-sfp28-pcie3-0-x8-rohs-r6.html I have used these Mellanox cards and they are amazing. Two advantages: they are backwards compatible with 10GbE, and Mellanox doesn't vendor-lock their SFP+ ports.
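If you ever want to measure that latency difference yourself rather than take it on faith, a crude round-trip timer is enough to compare two links (say, a DAC run vs. an RJ45 run). A minimal sketch, assuming two hosts you control; the host and port are placeholders, and it times the whole network stack, not just the PHY, so treat the numbers as relative:

```python
# Sketch: crude TCP round-trip timer for comparing links. Run
# echo_server() on one host and measure_rtt("<server-ip>") on the
# other. Measures the full stack (NIC, driver, kernel), so use it to
# compare cabling options, not as an absolute PHY latency figure.
import socket
import statistics
import time

def echo_server(port: int = 5001) -> None:
    """Accept one connection and echo everything back."""
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(64):
                conn.sendall(data)

def measure_rtt(host: str, port: int = 5001, samples: int = 1000) -> None:
    """Print the median round trip for tiny payloads, in microseconds."""
    rtts = []
    with socket.create_connection((host, port)) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no batching
        for _ in range(samples):
            t0 = time.perf_counter()
            s.sendall(b"x")
            s.recv(64)
            rtts.append((time.perf_counter() - t0) * 1e6)
    print(f"median RTT over {samples} samples: {statistics.median(rtts):.1f} us")
```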
 

AJAshinoff


Intriguing. However, I do think these are overkill for my needs. Even so, they aren't that much more than dual 10G NICs. Definitely something to think about. Appreciated!

What I got out of the first half of your post is that I do need a transceiver if I want lower latency and flexibility (attaching an RJ45 cable). Still, there are NICs with RJ45 ports that say they are 10Gb and don't require a transceiver (I assume the card handles it).
 
There are NICs that are 10GbE and run off RJ45 cables. The biggest issue with RJ45 cabling is its higher latency compared to DAC (Direct Attach Copper) or fibre cables with transceivers. DAC has the best latency (but is limited to about 15 meters), fibre with transceivers is second, and RJ45 is a distant third. However, you will probably not notice the greater latency of RJ45 vs. DAC, since you probably won't be running heavy-I/O storage vSANs. For normal management network traffic the difference in latency is negligible.

You are correct that the dual 25GbE cards aren't much more than the dual 10GbE. The difference is that the dual 10GbE card you linked is from a no-name company running on an Intel chip. Outside of wireless and 1GbE, Intel chips aren't used very much for Ethernet, and Intel's 10GbE chipsets aren't highly regarded in the IT world. The Mellanox ConnectX-4 Lx is one of the best 10/25GbE cards out there right now. When the company I work for did a hardware upgrade last summer, we went with 25GbE over 10GbE, since the difference in cost was about 10% per port for 2.5x the bandwidth. We have not been disappointed going this route, and the lack of vendor locking (e.g., buying an HP NIC that requires HP transceivers) means that we can save money and time buying cables.
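That per-port math is easy to sanity-check. A minimal sketch using only the ratios quoted above (a ~10% price premium for 2.5x the bandwidth); the absolute dollar figure is a placeholder:

```python
# Sketch: cost per Gb/s of port capacity, using the rough ratios from
# the post above. The 10GbE port price is a placeholder; only the
# ~10% premium and the 2.5x bandwidth ratio come from the post.
port_10g = 100.0            # placeholder price per 10GbE port
port_25g = port_10g * 1.10  # ~10% premium per port

for label, price, gbps in (("10GbE", port_10g, 10), ("25GbE", port_25g, 25)):
    print(f"{label}: {price / gbps:.2f} per Gb/s")
# -> 10GbE: 10.00 per Gb/s, 25GbE: 4.40 per Gb/s -- less than half
#    the cost per unit of bandwidth at those ratios.
```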
 
Solution
Before you run too far down this path, be really sure you can actually use the bandwidth.

It takes a substantial disk subsystem to come anywhere close to 10G. You are likely going to have to RAID SSDs to get it. Your file systems also have to be the correct types. You can easily see this problem by, say, copying the "users" directory to a backup SSD. All the tiny files and empty files make that copy very slow, and there is no network overhead involved at all, just the internal buses of the computer.

The network part is actually the easy part. There is a reason storage network guys get paid big bucks; it takes a very broad understanding of many things to get the performance you need.
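A quick way to test that locally before buying anything is to stream a directory tree off disk and compare the rate against the roughly 1.25 GB/s a 10G link can carry. A minimal sketch; the path is a placeholder, and a cold cache gives more honest numbers:

```python
# Sketch: measure sequential read throughput over a directory tree.
# Lots of tiny files will drag the rate far below what the same disk
# manages on one large file, which is the effect described above.
import os
import time

def read_tree_throughput(root: str, chunk: int = 1 << 20) -> None:
    total, t0 = 0, time.perf_counter()
    for dirpath, _, files in os.walk(root):
        for name in files:
            try:
                with open(os.path.join(dirpath, name), "rb") as f:
                    while buf := f.read(chunk):
                        total += len(buf)
            except OSError:
                pass  # skip unreadable/special files
    elapsed = time.perf_counter() - t0
    print(f"{total / 1e9:.2f} GB in {elapsed:.1f} s = "
          f"{total / 1e9 / elapsed:.2f} GB/s "
          f"(a 10GbE link tops out near 1.25 GB/s)")

read_tree_throughput("/path/to/users")  # placeholder directory
```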
 

AJAshinoff


The 10Gb bandwidth will initially be excessive, without a doubt. But the baseline, once this is complete, should sit at 1Gb for the foreseeable future, with wiring capable of moving forward in conjunction with our fiber runs between buildings. While the media department works hard to make their productions smooth and efficient, the live streaming media (audio and visual) will make use of everything it can get, and whatever it doesn't use just assures it isn't inhibited in any way. Data-wise, under-utilizing the pipe would be a blessing, not a burden. I inherited a kludged mishmash of undocumented Cat3, Cat5, Cat5e, and Cat6 in the walls, overhead, and among the patch cords. Having a reliable baseline to work with, particularly when replicating VMs across the private cloud I'm building, will come in handy.
 

AJAshinoff

i.e., a NAS box with separate physical HDDs, each allocated to a particular task/department, can use the bandwidth easily.

Exactly. While we do a ton of media production and streaming, with live streaming a few times a week, the bandwidth needs to be consistent for my end users (on the corporate network, not part of the media production) no matter which campus they are on. While inter-campus traffic will slow considerably (100Mbps), once the data arrives at the remote site and resides on those servers, it needs to move fluidly throughout that campus when called on. Not fully using the pipe provided is preferable to worrying about whether we have enough pipe when we do too much. (Besides, the new encoders I built are sweet.)
 