Network Interface Cards (NICs) 101


freeskier93

Reputable
Dec 2, 2014
I'm honestly not sure what the point of this article is. I mean, for the 99.99% of users on less than a 100 meg connection, a separate NIC is never something they'll need to think about; it's really a niche need outside of business. If you guys want to start doing more network articles, you should start at the heart of the network: the router. A good NIC is going to be a waste of money if you're still rocking a "gigabit" consumer router.

Personally, we have a gigabit fiber connection and a Ubiquiti EdgeRouter. The Intel NIC on my Asus motherboard will pretty much hit gigabit speeds too. I've hit 950 Mbps up/down with Speedtest, and in the real world I've hit 800 Mbps with Steam. By comparison, our old consumer-grade router was choking our speeds down to 400 Mbps.
 
Is the USB2 an issue at 50 Mbps or at 50 MBps?

You said "50Mb" which is just over 6MB which seems pretty low to me since the USB2 controller can manage up to 60MBps so i assumed you meant 50MBps.

USB2 has a 480 megabit/second maximum theoretical rate, which works out to 60 megabytes/second before protocol overhead. In practice, it's closer to 40 megabytes/s.
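A quick back-of-the-envelope version of that math (the ~30% protocol-overhead factor is only a rough assumption for bulk transfers):

```c
#include <stdio.h>

int main(void) {
    const double usb2_mbit = 480.0;        /* USB 2.0 high-speed signaling rate */

    double raw_mbyte = usb2_mbit / 8.0;    /* 8 bits per byte -> 60 MB/s */

    /* Assume roughly 30% lost to protocol overhead; real-world bulk
     * transfers usually land somewhere in the 35-45 MB/s range. */
    double practical_mbyte = raw_mbyte * 0.7;

    printf("USB 2.0 raw:        %.0f MB/s\n", raw_mbyte);
    printf("USB 2.0 practical: ~%.0f MB/s\n", practical_mbyte);
    return 0;
}
```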

The usefulness of Gigabit Ethernet at home is easy to see when you have a media server: streaming HD content over 100 Mbit Ethernet allows one stream at a time, and the rest of the network (web browsing, etc.) slows down a lot. Switching to Gigabit allows for a couple of streams while normal traffic carries on as usual. Since a TV recorder is essentially a media server, it counts too.
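For a rough sense of the numbers, here's a small sketch assuming a full-bitrate Blu-ray-class HD stream of about 50 Mbit/s and roughly 80% of the link being usable once overhead and other traffic are accounted for (both figures are assumptions):

```c
#include <stdio.h>

int main(void) {
    const double stream_mbit = 50.0;    /* assumed bitrate of one high-quality HD stream */
    const double usable      = 0.80;    /* assumed usable fraction of the link */

    double fast_ethernet = 100.0  * usable;   /* ~80 Mbit/s usable  */
    double gigabit       = 1000.0 * usable;   /* ~800 Mbit/s usable */

    printf("100 Mbit link: %d concurrent HD streams\n", (int)(fast_ethernet / stream_mbit));
    printf("Gigabit link:  %d concurrent HD streams\n", (int)(gigabit / stream_mbit));
    return 0;
}
```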
 
I liked the article. It would have been interesting to see a chart of different common controllers listing characteristics such as latency against one another (e.g., PCI NIC vs. PCIe NIC vs. onboard NIC vs. wireless NIC), just to quantify exactly what differences we are dealing with.
 

g00ey

Distinguished
Aug 15, 2009
Not a very informative article at all. There are plenty of things to mention, just to give a few examples: One could discuss the different Cat ratings and how that spec affects the cabling and other hardware in detail. Shielded vs. unshielded gear. The different features you find among different NICs, in more technical detail, such as what differentiates a server-grade NIC from a desktop-grade one. How different brands of NICs handle more complex issues such as QoS or TSO, and which controllers are better than others in different workloads. What differentiates their signal strengths, how good an impedance match one can get from different ports on different brands of hardware, and how well this is resolved. How are packet collisions and losses handled? What latencies can we get? How well do these different NICs really offload the CPU?

There are many different features out there, such as iSCSI, PXE, VDEV, SerDes, IPMI, cable diagnostics, crossover detection, pair swap/polarity/skew correction, NDIS5 checksum offload (IP, TCP, UDP) and large-send offload support, IEEE 802.1Q VLAN tagging, Transmit/Receive FIFO (8K/64K) support, ... And there are concepts such as HMAC+, RMCP, RAKP, SOL Payload, Kg, SIK, BMC, MSI Writes, SKP Ordered Set resets and Training Sequences (TS), RDTR and RADV timer mechanisms, ICR reads, bit banging, ...

There are many different features and concepts, and differences in how different manufacturers handle them. Things could be measured and analyzed with spectrum analyzers and dedicated benchmark hardware, looking at both performance and CPU utilization. But none of it is discussed in this article, which unfortunately is quite sub-par.
 

Kewlx25

Distinguished
LSO/TSO/GSO are no longer needed for modern NICs.

A bigger issue is the quality of interrupt coalescing. A 100Mb network can receive roughly 148,800 packets per second, and cheap network cards issue an interrupt for every packet. Normal CPU time slices are on the order of milliseconds, and your CPU may be handling something like 10,000 context switches per second. When your cheap 100Mb NIC attempts to issue 148,800 interrupts per second, your CPU will crumble. 1Gb is 10x worse.

Most people doing file transfers will be sending large 1500-byte packets, and the maximum packets-per-second figure is for 64-byte packets. Most home users don't even notice the difference, other than that they're only getting 800Mb/s, which could very well just be their mechanical hard drive being the bottleneck; but if they see something like 20% CPU usage while copying, that's why.
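For reference, a minimal sketch of the arithmetic behind those packet-per-second figures, assuming standard Ethernet framing (18 bytes of MAC header plus FCS, an 8-byte preamble/SFD, and a 12-byte inter-frame gap):

```c
#include <stdio.h>

/* Maximum Ethernet frames per second for a given payload size and line rate. */
static double max_pps(double line_rate_bps, int payload_bytes) {
    int frame_bytes = payload_bytes + 18;      /* MAC header + FCS */
    if (frame_bytes < 64) frame_bytes = 64;    /* minimum Ethernet frame size */
    double wire_bits = (frame_bytes + 8 + 12) * 8.0;   /* + preamble/SFD + IFG */
    return line_rate_bps / wire_bits;
}

int main(void) {
    printf("100 Mbit, 64-byte frames:    %.0f pps\n", max_pps(100e6, 46));  /* ~148,809 */
    printf("1 Gbit,   64-byte frames:    %.0f pps\n", max_pps(1e9, 46));    /* ~1,488,095 */
    printf("1 Gbit,   1500-byte payload: %.0f pps\n", max_pps(1e9, 1500));  /* ~81,274 */
    return 0;
}
```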

High-end NICs have large onboard buffers and will interrupt the CPU on the first packet, but subsequent packets they will delay. Don't worry about the delays being very long; they're still less than a millisecond. Coupled with that, modern NICs support what is called MSI-X, which was introduced in the PCI 3.0 specification (conventional PCI, not PCI Express 3.0) and is supported natively by PCI Express devices.

This sweet feature allows for "soft interrupts". Once the first interrupt happens and the driver starts processing the packets, the driver can disable the interrupt. This way, new packets that come in do not interrupt the CPU and do not need to be delayed until the next interrupt. The NIC will instead flag a location in memory when any new packets come in. Once the driver is done processing the current batch of packets, it can check whether there were any "soft" interrupts during this time. If there were, it processes those packets.

At some point the driver will catch up and process all of the packets. Once this happens, the driver re-enables the interrupt and the CPU switches back to what it was doing before the original interrupt. This dramatically reduces CPU usage while still maintaining low latency and high throughput. The best of both worlds!
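Here's a rough user-space sketch of that interrupt-disable-then-poll pattern (essentially what Linux's NAPI-style drivers do). Everything here is hypothetical stand-in code; the "device" is just a counter playing the role of the NIC's receive ring:

```c
#include <stdbool.h>
#include <stdio.h>

static int  rx_ring_pending = 200;     /* packets waiting in the RX ring */
static bool rx_irq_enabled  = true;

static void nic_disable_rx_interrupt(void) { rx_irq_enabled = false; }
static void nic_enable_rx_interrupt(void)  { rx_irq_enabled = true;  }
static bool nic_has_pending_packets(void)  { return rx_ring_pending > 0; }

/* Pull up to 'budget' packets out of the ring; return how many we got. */
static int nic_fetch_packets(int budget) {
    int n = rx_ring_pending < budget ? rx_ring_pending : budget;
    rx_ring_pending -= n;
    return n;
}

/* Poll loop: runs after the one hardware interrupt, drains the ring in
 * batches, and only re-arms the interrupt once it has caught up. */
static void rx_poll(void) {
    const int budget = 64;             /* packets per batch, keeps latency bounded */
    for (;;) {
        int done = nic_fetch_packets(budget);
        printf("processed %d packets\n", done);
        if (done < budget && !nic_has_pending_packets()) {
            nic_enable_rx_interrupt();     /* caught up: go back to sleep */
            break;
        }
        /* New packets that arrive meanwhile only land in the ring (the
         * "flag in memory"); they do not interrupt the CPU again. */
    }
}

/* Interrupt handler: fires once for the first packet, masks further
 * interrupts, and defers the real work to rx_poll() (in a real driver
 * the poll runs later in softirq context, not inline like this). */
static void rx_interrupt_handler(void) {
    nic_disable_rx_interrupt();
    rx_poll();
}

int main(void) {
    rx_interrupt_handler();
    printf("interrupt re-enabled: %s\n", rx_irq_enabled ? "yes" : "no");
    return 0;
}
```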

Don't pay for a premium Killer NIC; you can get a high-end Intel i210 server NIC with all of these awesome features for $60, and it will work very well with great drivers on all operating systems. If you need something cheaper, there are plenty of Intel NICs in the $15-$30 range that may not have all of these features but will still be great for any home user.
 
quote: The major benefit to these cards is that they handle the buffer storage, encoding and decoding of data through the seven network layers.

Um. Network cards don't usually handle offload for anything above TCP (OSI layer 4)... and everything below that is either always done in hardware or adds negligible overhead.

No one in their right mind is going to offload something like SSH to the NIC.
 

evenstephen85

Honorable
Jun 6, 2012
I would have expected and/or liked to have seen some recommendations in this article. We come to Tom's to see results, and there weren't even any charts or data to show how/why this would be important. How many times did the article say "we all know this"? It's true, we do know this, so give us the details we've come to expect from your articles. How much latency difference can we expect between two cards, etc.? Disappointing read.
 

Vlad Rose

Reputable
Apr 7, 2014
Fairly generic article. It would be nice if they had talked about the advantages/disadvantages of different NIC brands for use in FreeNAS servers, since many people (myself included) either are building or are planning to build their own media server.
 

Kewlx25

Distinguished


Most NICs built into motherboards are crap but work well enough for most regular home users. I purchased a $70 NIC to put in my $80 motherboard for my desktop. My mini-server has a $140 NIC in its $80 motherboard.

I only work with Intel NICs at home. Of course there are better brands than Intel, but Intel is the cheapest of the good brands and plenty good enough for me.

Of course, some people don't care if they're eating a Big Mac or a homemade Black Angus burger.
 


I'd like to see some benchmarks on this, given I've pushed 117MB/s through an onboard NIC...
 

GearUp

Distinguished
Feb 19, 2010
quote: It is fully expected that 100 gigabits per second will be a reality utilizing standard RJ45-connected NICs.
Just tested a new Intel NIC with an updated driver using Windows Performance Monitor. I know it's hardly a benchmark and is highly dependent on system state, particularly for an 8-year-old system.
It showed 360 to 600 MB/s. The other systems never showed performance this high, onboard or otherwise.
 
I'm willing to forgive the absence of benchmarks and recommendations in this particular article, if only because it is clearly labeled an introductory piece (101-level courses are introductions/overviews, often meant to do little more than gauge interest and/or whet appetites). The implication, though, is that there will be more advanced articles coming along (hopefully fairly soon) that will indeed include benchmarks and recommendations.
 

Kewlx25

Distinguished


What's your CPU usage like when doing 117MB/s? I had an onboard Intel NIC (still Intel) that was at around 0.5% CPU while doing 114MB/s. My mini-server is a firewall and was at a measly 5% CPU while doing NAT and stateful firewalling at 1.5Gb/s, which was the limit of the integrated Intel NICs on my two desktops.

Once I upgrade my other desktop and put a decent NIC in it, I expect to get a full 2Gb/s (1Gb up + 1Gb down) through my firewall. During my 1.5Gb/s test, my firewall was only doing a total of 600 interrupts per second, load-balanced across 4 cores.

For comparison, a crappy Realtek NIC: 60,000 interrupts per second all to one core, lots of dropped packets, massive multi-second latencies, and the server was unresponsive during the test. The CPU was completely hosed, and because drivers run at kernel level with real-time priority, almost everything stopped working correctly.

 

hydac8

Reputable
Jan 13, 2015
I would have liked this article to be more informative, not just some generic talk about nothing much at all.
 
From way back in my days of building NetWare servers, Intel NICs have always had an excellent reputation.
I've learned that for NICs (wireless too), you do get what you pay for. Cheap ones satisfy average home users where distances are short and there isn't too much blockage or congestion, but a $40 wireless card will beat the stuffing out of a cheap $15 one.
 

g00ey

Distinguished
Aug 15, 2009
Great comments, at least some of them. That's what I mean: TSO may no longer be required with the NICs of today, but that's exactly what would make things interesting and insightful; the how and why of TSO no longer being required, so we get an understanding of what is happening with Ethernet technology. In other words, a tour of Ethernet development over the years wouldn't hurt.

I believe that Intel is the king of the hill in the wired Ethernet market. I'm sure you won't go wrong with an Intel server-grade NIC, and they go for rather decent prices. Even the dual-port server NICs from Intel sell for reasonable prices.

One thing to take into consideration is drivers, and Intel offers certified drivers for most Linux and Unix platforms that have proven stable through many years of runtime in large server environments. So I believe Intel is the best you can get, especially if you want to run FreeNAS, OpenIndiana, or another BSD/Unix- or Linux-based environment.

When it comes to wireless adapters, which we haven't talked about so far, I'd probably give the crown to Broadcom. I'm not sure about Intel, but I'm not that happy with their driver support; I've experienced a lot of crashes with their drivers on a Windows 7 machine. Broadcom, on the other hand, have been rather innovative and have released high-speed hardware, and people seem to be rather happy with their drivers. I don't know how well Broadcom works with Linux or BSD/Unix, though.

If one really wants low latency, then I guess a modern InfiniBand controller would be preferred. Many InfiniBand controllers also support 10GbE, but with that we are really crossing into the realm of enterprise-class hardware, which indeed is cool and interesting to know more about. I would guess Mellanox offers better hardware than what Intel or Broadcom offer in this product range, but I might be wrong. In any case, I believe it's a different market segment with different players.
 

jehanne

Honorable
Apr 3, 2012
Are you, Tom's Hardware, sure about the following:

"If the data is received from the Ethernet cable, the controller is responsible for stripping the first three layers of encapsulation prior to handing the data off to the computer."

It is my understanding that the NIC is only responsible for Layer 1 ("frames") of the IP protocol model (Layers 1 & 2 of the OSI reference model), and that the OS itself (whether Windows, Linux, etc.) is responsible for Layers 2 & 3 of the IP model, along with some Layer 4 processing.
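That matches how things look from the software side: the buffer the NIC hands to the OS is a complete Ethernet frame (the physical layer, and usually the frame check sequence, is handled in hardware), and everything from the IP header up is parsed by the driver and network stack. A minimal sketch with a hypothetical hard-coded frame, just to show where each layer's header sits:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Hypothetical received frame; in reality this buffer comes from the
     * NIC's RX ring after the hardware has validated the FCS. */
    uint8_t frame[64] = {0};
    frame[12] = 0x08; frame[13] = 0x00;   /* EtherType: IPv4 */
    frame[14] = 0x45;                     /* IPv4, header length = 5 words */

    /* OSI layer 2: the 14-byte Ethernet header (dst MAC, src MAC,
     * EtherType) is parsed in software. */
    uint16_t ethertype = (uint16_t)((frame[12] << 8) | frame[13]);

    if (ethertype == 0x0800) {
        const uint8_t *ip  = frame + 14;      /* OSI layer 3 (IP) starts here */
        int ihl            = ip[0] & 0x0F;    /* IPv4 header length in 32-bit words */
        const uint8_t *tcp = ip + ihl * 4;    /* OSI layer 4 (TCP/UDP) starts here */
        printf("IP header at offset 14, transport header at offset %ld\n",
               (long)(tcp - frame));
    }
    return 0;
}
```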
 