Unmanaged Gigabit Ethernet Switch Round-Up


bit_user

Polypheme
Ambassador


Actually, there's a whole roundup of low-cost switches here:

https://forums.servethehome.com/index.php?threads/gigabit-10gb-switches-under-550.6921/

I'd run across this before (though only after I bought my setup), but couldn't remember exactly where. Your post spurred me to dig it up. If you only need 2 SFP+ ports, the MikroTik CRS210-8G-2S+IN will do the job for less. D-Link has a 24 + 4 switch for those who need up to 4 SFP+ ports, but it's over $500.
 
10GbE is expensive for a reason. At the signaling speeds required to push 10Gbps over copper, you start bumping into physics, and vendors have to use some clever tricks to get around it. Typically this means you need very short cables without any sources of interference, like power cables, nearby. Fiber is the preferred medium for any sort of distance, with clients connecting via SFP+ 10GbE NICs or to a nearby switch via 10GBASE-T. This translates into some expensive gear, even on the "low end". It's almost always better / cheaper to connect 1GbE clients to the distribution switch and give that switch a 10GbE uplink to the rest of the network. Servers are the ones that need the big bandwidth, and these days we virtualize everything, so the ESXi cluster will typically have 12~16 (or more, depending on the size of the installation) 10GbE ports connected to the core, with bandwidth allocated internally.

Currently there is near-zero demand in the consumer market for 10GbE networking. Some enthusiasts want it, but that's more for bragging rights or playing with than for actual usage. A single 10GbE connection has a ridiculous amount of bandwidth: 1212 MBps, yes, megabytes per second, or about 1.2 GBps. A person would need RAIDed SSDs or RAM disks to even use that much bandwidth. And that's per port; the switch backplane would have 6~10x that much internal capacity. Cheap 1Gbps LAN is more than sufficient for streaming and file transfers; your media devices or storage I/O subsystem would become the limiting factor before the network interface would. Of course, this is in real-world usage scenarios with real computers running real software in real configurations, not the simulated, stripped-down, back-to-back configurations used for benchmarking. Running two systems on a switch with nothing else talking is about as useful as running a crossover cable between them.
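
If you want to sanity-check that number, here's a quick back-of-the-envelope sketch. It assumes standard 1500-byte MTU frames carrying TCP/IPv4 with no options; exact goodput varies with MTU and offloads, so it lands a hair under the figure above:

```python
# Rough goodput estimate for a 10GbE link, assuming 1500-byte MTU
# Ethernet frames carrying TCP/IPv4 with no header options.
LINE_RATE_BPS = 10_000_000_000   # 10GbE signaling rate

# Per-frame wire overhead: preamble+SFD (8) + Ethernet header (14)
# + FCS (4) + inter-frame gap (12) = 38 bytes
WIRE_OVERHEAD = 8 + 14 + 4 + 12
IP_TCP_HEADERS = 20 + 20         # IPv4 + TCP headers, no options

mtu = 1500
payload = mtu - IP_TCP_HEADERS   # 1460 bytes of user data per frame
on_wire = mtu + WIRE_OVERHEAD    # 1538 bytes per frame on the wire

goodput_mb_s = LINE_RATE_BPS * (payload / on_wire) / 8 / 1e6
print(f"~{goodput_mb_s:.0f} MB/s of usable payload")   # ~1187 MB/s
```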

Now eventually we will move to 10GbE in the home, but not for a while. There simply isn't a sufficient demand signal for manufacturers to mass-produce products. And that is how you get something cheap: mass production for millions of buyers.
 

bit_user

Polypheme
Ambassador
I'm sure the same thing was said of GigE, when it came out. Yes, 10 Gig needs more intensive signal processing and it's harder to drive at a distance, but its range over twisted pair is probably enough for most home use.

Again and again, it seems people think you need to be able to saturate 10 Gig to justify upgrading from 1 Gig. Wrong! The rationale for upgrading kicks in once you bump up against the limit of 1 Gig!

Back in 2010, I built a fileserver with 5x 7200 RPM disks in a RAID-6. Even using an inexpensive Phenom II CPU, I could get over 350 MB/sec, sustained. And short writes could burst faster, since the server would buffer them in memory. It was so sad to copy multi-gigabyte files over my network, knowing the server could write much faster. And they were often still cached in the client's RAM, having just been rendered or downloaded.

Last year, I built another fileserver with 3x SSDs in a RAID-5. Sustained xfer speeds are north of 800 MB/sec, and the machine has 16 GB of RAM for buffering writes & caching reads. The goal was to build a server so fast that there was no downside to using it directly, instead of local storage. So, yeah, I'd say 10 Gig was warranted.
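
To put numbers on the pain, here's a rough sketch of the wall-clock copy times involved. The throughput figures are ballpark assumptions (not measurements), and it assumes the storage on both ends can keep up:

```python
# Rough wall-clock time to move a large file, assuming storage on both
# ends outruns the network. Throughput figures are ballpark assumptions.
def copy_seconds(size_gb: float, throughput_mb_s: float) -> float:
    return size_gb * 1000 / throughput_mb_s

for size_gb in (10, 50):
    t_1g = copy_seconds(size_gb, 118)     # practical 1GbE goodput
    t_10g = copy_seconds(size_gb, 1100)   # practical 10GbE goodput
    print(f"{size_gb} GB: {t_1g / 60:.1f} min over 1GbE, "
          f"{t_10g:.0f} s over 10GbE")
```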


Another unrealistic requirement. Most people don't need that much aggregate bandwidth. In fact, I'd happily use a 10 GigE hub, were one available at significantly lower cost.

Well, for most home & small office setups, it's rare to see more than a couple ports maxed at any given time. And most connected devices will still be at 1 Gbps, so they're not going to stress it.

It's a classic chicken & egg problem. In a home environment, having a 10 Gig port on just one device is of almost no value. But once the ball gets rolling and high-end motherboards start shipping with 10 Gig ports, then I think prices will quickly fall.
 
I'm sure the same thing was said of GigE, when it came out. Yes, 10 Gig needs more intensive signal processing and it's harder to drive at a distance, but its range over twisted pair is probably enough for most home use.

Umm no.

I wasn't joking, there are some real physics issues you run into with 10GbE over copper that simply aren't a problem at 1GbE.

Anyhow, we could go back and forth, but the ultimate answer is there simply isn't enough market demand. There is no chicken vs. egg: home users don't even use much of 1GbE and thus don't require more bandwidth. You want more as a plaything or for bragging rights, and you don't want to pony up the cash to buy a low-demand niche product. Deal with it. The price won't go down until mass-market adoption happens, and that won't be for a long time.
 
Anyone with a NAS as primary storage (for multi-system file centralization or similar reasons) would like faster than gigabit. There's no way to say the extra bandwidth goes unused when you have to shuffle large files around, either.

I am not saying it will be cheap or become common in the near future, but I can certainly find no fault with wanting such network speed.
 
Anyone with a NAS as primary storage (for multi-system file centralization or similar reasons) would like faster than gigabit. There's no way to say the extra bandwidth goes unused when you have to shuffle large files around, either.

I have one of these, along with an ESXi virtual server, and I'm still not using anywhere near the full bandwidth on a day-to-day basis. The only time I saturate the network is when I copy very large multi-gigabyte files from an SSD to the file store. Otherwise, the large local 7200RPM HDD that holds my media files, games, downloads, Steam library and whatnot is the limiting factor. The system is waiting for you far more often than you are waiting for it.

This is what I mean by real-world usage scenarios and not back-to-back benchmarks. People don't use their computers to run and post benchmarks. They use their computers to watch media, play games, write stuff and communicate via social media. None of those consume enough bandwidth to saturate a 1GbE LAN connection. Which leaves an incredibly small number of people who might copy 10~30 GB worth of data to and from their network file storage on a daily basis. That market is so small that manufacturers wouldn't make money mass-producing cheap 10GbE components. And while mass production is the only way to make them cheap, they would still be more expensive than the older, mature 1GbE products already on the market.

Once 1GbE-or-faster internet is available and we're all streaming 8K video, then we can start talking about the need for cheap 10GbE home infrastructure. Until then, it's a toy for super early adopters to brag about.
 

InvalidError

Titan
Moderator

With 4K being done at 15-20Mbps with h265, 8K will likely end up at less than 100Mbps with whatever next-gen codecs come with it. Even if 8K streaming became mainstream, 1Gbps would still be vastly sufficient for sharing the connection with a handful of other devices.
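
The arithmetic is easy to sanity-check (the bitrates here are the ballpark figures above, not measurements):

```python
# How many simultaneous streams fit in a 1Gbps link at ballpark bitrates.
LINK_MBPS = 1000
for label, mbps in [("1080p h264", 10), ("4K h265", 20), ("8K est.", 100)]:
    print(f"{label:>10}: ~{LINK_MBPS // mbps} concurrent streams")
```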

There will always be fringe cases to "prove" someone's need for something, but as far as the mainstream is concerned, I expect 1GBase-T to stick around as the standard for another 10+ years, since the bulk of what most people do hardly requires more than 20Mbps.
 

bit_user

Polypheme
Ambassador
What's your point? You're just spouting meaningless generalities. I can play that game, too. Yes, each iteration of technology pushes closer to the limits of physics. We all know that. But then it's refined to a point where reliability goes up and cost & power go down. It may never be as cheap or low-power as 1 Gigabit is, but it can and certainly will get cheaper, and the range should be a non-issue for most (I've read 55 meters for Cat 6 twisted pair).

You really think anybody who copies multi-GB files between machines or uses a networked filesystem doesn't wish it were faster? I'm sure most do, and many would be willing to spend money to upgrade. Just not quite at current prices.

The chicken and egg is really that without the volume, the prices won't drop. But without the prices dropping, there won't be the volume.
 

bit_user

Polypheme
Ambassador
I do software builds on a network filesystem volume, so that I can develop, debug, and run the software on any of several different systems. That hits storage hard and ends up looking a lot like a benchmark. In fact, Linux kernel compilation is a popular benchmark, specifically because it's so storage-intensive and happens to be a use case many people have.

There are other real-world use cases, like CAD, content creation, etc. Don't pretend you know everyone's needs.

Something else to keep in mind is that an increasing share of wired networking users will be the power users. WiFi is adequate for most casual computer usage, and a lot of it is happening on devices that don't even have Ethernet jacks. That should mean the share of Ethernet users who want (and will pay more for) 10 Gig will increase faster than the absolute demand curve for 10 Gig.
 

InvalidError

Titan
Moderator

The lion's share of people who still want Ethernet of any form without necessarily being power users are gamers, who want, and often need, a wired connection's predictable and stable performance.

I used to use my tablet as a VoIP phone over WiFi but gave that up for an SPA112 because WiFi was not stable enough to provide repeatable call quality.
 

bit_user

Polypheme
Ambassador
Yeah, I did try to use measured language, specifically with gamers in mind. Still doesn't mean the trend I predicted is invalid, though.

FWIW, most of my cell phone calls are on WiFi (Google.Fi), these days. Quality has been carrier grade - waaay better than Skype.
 


Different strokes for different folks? I mean I max out my network pretty much daily.
My NAS acts as the home movie server, and pulling 3 relatively lightly compressed HD streams does a good job of maxing out the Gig port on my home server during regular use. I constantly do big 30-150GB file dumps from client HDDs to back up data before formatting a system, and having more than a sad 100MB/s of throughput would substantially speed things up, especially when modern single HDDs are capable of 150-250MB/s sustained and SSDs are capable of 500MB/s sustained.
I also do the occasional video editing project. I only have SSDs in my main rig, so all bulk storage is on the NAS. This means I can pretty much only hold one project on my machine at a time, and switching projects means offloading one project to the NAS and pulling the next one from it. Again, 100MB/s gets the job done after a few hours, but being able to use my drives' full 800+MB/s (RAID on both the server and PC side) would make very short work of the file shuffle. Heck, if I had a faster network, I could just edit with my files on the NAS! No point in bringing them to my local machine at all with that kind of bandwidth to play with!
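
Just to put rough numbers on where the bottleneck sits (throughput figures are ballpark, not measured):

```python
# The effective rate of a bulk dump is the slower of disk and network.
def dump_minutes(size_gb, disk_mb_s, net_mb_s):
    rate = min(disk_mb_s, net_mb_s)   # the bottleneck wins
    return size_gb * 1000 / rate / 60

# A 150GB dump from a ~200MB/s HDD:
print(f"over 1GbE:  ~{dump_minutes(150, 200, 118):.0f} min (network-bound)")
print(f"over 10GbE: ~{dump_minutes(150, 200, 1100):.0f} min (disk-bound)")
```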

The other thing is that we don't need a full 10Gig to make all of this work. Even a much simpler 4Gig or 8Gig connection would satisfy my needs. I know that these kinds of connections are available in the pro space... but if 10Gig is such an issue, then why not move to one of those standards for the enthusiast market instead? If I were making money with my system, I would have no issue paying the $$$$ for a high-end 10Gig setup... but since it's just a hobby, I only have so much money to blow on it every year.
 

OMG, WiFi is such a problem! People just don't seem to understand that WiFi is a shared medium! Assuming you have a decent-quality router/AP, N/AC works great in a house, but if you live in a busy apartment/condo complex, or are trying to run a whole lab of laptops and cell phones in a middle or high school classroom, the WiFi network simply becomes slow and unreliable. Such a pain.
I went to a friend's house to troubleshoot his slow WiFi and found he had some 20+ devices on an N network! I had to explain to him that simply streaming Netflix to his TV was enough to essentially bring the WiFi to a crawl. All stationary devices need a wired connection. Free up WiFi for the things that actually need the portability of being wireless.
 


A full HD video stream takes less than 10Mbps, nowhere near 1000Mbps, so you're off there by an entire order of magnitude. Your file copies are being restricted by the speed of your local 7200RPM HDD, which doesn't get anywhere near the ~100MBps that GbE caps out at. You would need to copy 30+ GB to or from an SSD for the network connection to be the bottleneck, in which case I'd have to ask why you're copying the exact same data over and over again.

Like I said previously, the usage scenario required to justify 10GbE is so rare that it's statistically non-existent in the consumer market. There will always be an incredibly niche segment of users who want to brag to their friends about their 10GbE network; they can go out and purchase the same 10GbE gear that's sold to small businesses that might actually have a business case for it. Small businesses are not the same market as home consumers; their purchases are driven by cost-vs-profit analysis, and they will purchase it at whatever cost enables them to generate a better profit.
 

InvalidError

Titan
Moderator

He said "lightly compressed", and when a raw HD stream is 1.8-4Gbps, "lightly compressed" could quite possibly be over 100Mbps. The only sources with that sort of bitrate, though, are HD video cameras, since even Blu-ray maxes out at ~50Mbps.

As for HDDs, my WD Black can do 160MB/s sustained sequential reads (up to 200MB/s on outer tracks) and 120MB/s sequential writes. If your HDD cannot do 100MB/s sequential, you might want to upgrade.
 
He said "lightly compressed", and when a raw HD stream is 1.8-4Gbps, "lightly compressed" could quite possibly be over 100Mbps. The only sources with that sort of bitrate, though, are HD video cameras, since even Blu-ray maxes out at ~50Mbps.

As for HDDs, my WD Black can do 160MB/s sustained sequential reads (up to 200MB/s on outer tracks) and 120MB/s sequential writes. If your HDD cannot do 100MB/s sequential, you might want to upgrade.

HD video streaming is ~10Mbps; that's a common standard.

Your HDD most certainly can't do 160MB/s sustained. The standard disk-to-buffer data rate for 7200RPM HDDs is around 128MBps, but that's not the effective transfer rate. The actual transfer rate is usually 40~50MBps per HDD, or 70MBps if you're really lucky or have well-optimized disk geometry.

This is because workloads are rarely serialized transfers from the disk's buffer into memory, and are instead transactional I/O requests. A series of requests is sent to the disk, which performs them and notifies the host controller when they are finished. File access is deeply abstracted, and all those layers of abstraction add additional I/O latencies which prevent that perfect serialized speed from being reached in real-world use. That is actually the single biggest benefit of SSDs: they are capable of handling an order of magnitude more I/O requests than HDDs and thus see much better-optimized data flow.
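
If you want to see the gap for yourself, here's a rough test sketch. The scratch-file path is a placeholder; run it against an HDD-backed volume, and note the OS page cache will flatter both numbers unless you flush it between runs or use a file larger than RAM:

```python
# Rough sketch: sequential vs random 4K reads from the same file.
# Caveat: flush the page cache between runs for honest numbers
# (Linux: sync; echo 3 > /proc/sys/vm/drop_caches).
import os, random, time

PATH = "scratch.bin"      # placeholder scratch file on an HDD volume
SIZE = 1 << 30            # 1 GiB
BLOCK = 4096

# Write real data so reads actually hit the platters (a sparse file won't).
with open(PATH, "wb") as f:
    chunk = os.urandom(1 << 20)
    for _ in range(SIZE >> 20):
        f.write(chunk)

def read_rate(offsets):
    with open(PATH, "rb", buffering=0) as f:
        start = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
        elapsed = time.perf_counter() - start
    return len(offsets) * BLOCK / 1e6 / elapsed   # MB/s

n = 25_000   # ~100 MB worth of 4K blocks
sequential = [i * BLOCK for i in range(n)]
scattered = [random.randrange(SIZE // BLOCK) * BLOCK for _ in range(n)]
print(f"sequential: {read_rate(sequential):.1f} MB/s")
print(f"random 4K:  {read_rate(scattered):.1f} MB/s")

os.remove(PATH)
```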

I can't stress enough how much people need to start thinking in real-world terms and not fictional, sanitized benchmarks. I've personally run these tests when assessing the disk performance required for various enterprise installations, and you aren't getting anywhere near those numbers from HDTach or CrystalDiskMark. The best-case scenario is a single ridiculously large file stored in sequential logical sectors that are perfectly aligned with sequential physical sectors across all platters, with zero fragmentation and no other system I/O happening. That is such an uncommon occurrence that any attempt to base a judgment on it is fallacious. Scenarios where it would be common would also dictate specialized data storage solutions.

Inevitably, it all goes back to 10GbE being so niche and unnecessary in the home as to be nothing but bragging rights for a few individuals. There are certainly business cases for its utilization, but those are all revenue-generating and wouldn't be your home variety anyway. This entire argument just smells of a few people being unhappy that they can't cheaply upgrade their networks to show off to their friends, and instead have to hunt down second-hand equipment or fork out heavy cash for new business-class equipment.
 
FWIW

Someone said "Even a much simpler 4Gig or 8Gig connection would satisfy my needs. I know that these kinds of connections are available in the pro space."

You don't get 4Gbit or 8Gbit in Ethernet. Those are typically Fibre Channel standard speeds. Ethernet went 10Mbit, then 100Mbit, then the current 1Gbit and 10Gbit, with 40Gbit and 100Gbit coming. The last two are not widely deployed yet.

Re wireless throughput: good luck getting the best AC wireless router even close to half of what a 1GbE connection actually delivers. The numbers ("1700 mbps") don't mean the same thing. Then there's interference from the neighbors, etc. http://www.tomshardware.com/reviews/802.11ac-wifi-router-testing,4107.html

"That is actually the single biggest benefit of SSD's, they are capable of handing an order of magnitude more I/O requests then HDD's and thus experience a much better optimized data flow." Very true of server workloads. Not really a help at all with typical Windows workloads where the queue depth rarely gets above 1 or 2. Most 2.5 inch SSDs internally use 8 channels that can operate in parallel, typical windows workloads only get one channel working so real world throughput for SSDs starts at 1/8 that of benchmarks (which are done with queue depth 32 of all things) and gets worse from there. Aside: this is why raid 0 striped arrays which really help server workloads so no gains in real world windows apps... only one of the drives in the array is in use at any time because there is only one outstanding IO request. Windows 7, 8 and 10's "resource monitor" does a good job of showing disk queue depth if you get interested.
 

InvalidError

Titan
Moderator

Online HD streaming, sure. HD video camera recordings, on the other hand, are frequently well over 30Mbps. My cheap photo camera can shoot HD video at 1080p60 and chews through the SD card at a write rate of about 60Mbps when doing so. Someone who archives and edits videos from his NAS would be using far more than 10Mbps.


It certainly can: I just copied the ~3GB Windows 7/8.1/10 ISOs from my Seagate HDD to my WD Black, and the whole copy happened at 130-170MB/s. That's between HDDs that haven't been defragmented in years. (Both the source and destination partitions are large file archives, so not much fragmentation happens there.)
 


Nice test. Real data is nice to have. WD specs a sustained data rate of 218 MB/s for their 6TB drives, down to 168 MB/s for their 3TB drive. Your 130 to 170 MB/sec says you were very close to best case. I'd guess you did not see any fragmentation issues because Windows has auto-defragged spinning HDDs since Vista, and the Windows file system allocates large contiguous blocks when it can for large transfers. http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-771434.pdf



 
re: "Someone who archives and edits videos from his NAS would be using far more than 10Mbps." Yes indeed. 10Mbps is a playback rate, not the rate at which people read and write files. Saving an updated clip back to the NAS would easily max out 1GbE throughput (unless the NAS is crappy). My kids often copy large game directories between systems to install game mods. All of those transfers sit pinned at the ~100 MB/sec Ethernet wire limit.
 
It certainly can: I just copied the ~3GB Windows 7/8.1/10 ISOs from my Seagate HDD to my WD Black, and the whole copy happened at 130-170MB/s. That's between HDDs that haven't been defragmented in years. (Both the source and destination partitions are large file archives, so not much fragmentation happens there.)

And that's not a real-world workload, or does your day consist of copying the Windows ISOs back and forth all day?

Now try to copy an entire directory structure containing a few thousand files of similar total size to that ISO. Watch that number plummet, because the file subsystem has to do tons of metadata lookups on the HDD before even starting the data transfer. Large sequential files are nice for benchmarking but rare in real-world usage. Real-world data access is tons of 4K random I/O, so no easy, clean reading off the HDD platters.
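
If you doubt it, try timing something like this. The paths are hypothetical placeholders; point them at an HDD-backed volume and flush caches between runs for honest numbers:

```python
# Rough sketch: copy one big file vs the same bytes spread across
# thousands of small files, and compare the elapsed times.
import os, shutil, time

BIG_FILE = "big.iso"       # one ~3GB file (placeholder)
SMALL_DIR = "many_files"   # thousands of small files, similar total size
DEST = "copy_dest"

def timed_copy(src, dst):
    start = time.perf_counter()
    if os.path.isdir(src):
        shutil.copytree(src, dst)
    else:
        shutil.copy(src, dst)
    return time.perf_counter() - start

os.makedirs(DEST, exist_ok=True)
print(f"one big file:     {timed_copy(BIG_FILE, os.path.join(DEST, 'big.iso')):.1f} s")
print(f"many small files: {timed_copy(SMALL_DIR, os.path.join(DEST, 'many')):.1f} s")
```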

Again, my argument is that 10GbE is completely unnecessary in real-world consumer usage scenarios. Every argument made so far has dodged that statement entirely.
 

InvalidError

Titan
Moderator

I always disable scheduled defrag, virus checks, indexing, etc., because system stuff causing disk activity when I'm not doing anything annoys the heck out of me. Everything I don't use and know how to disable, I disable.

 

bit_user

Polypheme
Ambassador
Wow, what decade are you in? A reasonably large 7200 RPM HDD has no problem exceeding 100 MB/sec. I think you've pretty much discredited yourself, right there.

No, you're just wrong. Do you honestly think we don't know how to monitor our network usage and see when it's maxed?

The problem with your argument is that you're trying to say that if my network is only maxed 1% of the time, then I shouldn't upgrade. By that argument, most people should probably still be using Pentium IIIs for their CPUs. But if that 1% of the time is spent with me sitting around waiting, then it's a problem.

You insult us, by implying that we don't know what we need. Please stop.
 

bit_user

Polypheme
Ambassador


Good points, but I just wanted to mention that it'd be easier to read if you'd use the [ quote ] and [ / quote ] tags (remove the spaces), instead of double-quote characters.
 

bit_user

Polypheme
Ambassador
Please cite your source on that. I'm betting it's probably ~10 years old.

I can't stress how much you need to stop injecting irrelevant and outdated data into this conversation. Modern filesystems on typical desktops aren't nearly as fragmented as the enterprise systems you probably analyzed. And the extent to which this whole conversation has veered off into HDD performance is just another case of you focusing on irrelevant, anachronistic data and use cases. I don't even have HDDs in any desktops anymore. My only HDDs are in my fileserver (the one I use for media & backups).

Again, I don't think anyone needs you telling them whether or not they should be dissatisfied with how long their file transfers take. If he's unhappy enough to justify an upgrade, then he should upgrade. It's as simple as that. Trying to assess whether he does it enough that the time savings will justify the cost is a tradeoff you really can't make for people.

This depends a lot on how the data is transferred, but it's rarely as bleak as you suggest. Only in some pathological worst case will the data be randomly distributed around the disk. And, there we go again, focusing on disks.
 