Marvell Has Low-Cost 10GbE For Home Users But Demand Is Still Low


bit_user

Polypheme
Ambassador
SFP+ cables are not only expensive, but they're also coded, and certain devices are restricted to working with certain cables. SFP+ wasn't primarily meant to be used this way; the copper cables are just a hack for cheap (relatively speaking) in-rack connectivity.
 

CRamseyer

Distinguished
In my experience the only add-in cards that are picky about the cables are the Intel products. The others seem to like whatever I plug in. I haven't had any issues with the switches but I use Mellanox, Quanta, Netgear and Supermicro enterprise switches.
 

USAFRet

Titan
Moderator
The number of people champing at the bit to rewire the house, and update all their wired endpoints and in-between points, is approximately zero.

The perceived value is just not there. (yet)
With the current 1GbE...
Can I play a movie from that box, and display it on this box? Yes.
Can I move data between devices pretty seamlessly? Yes.
Will this make a difference in all my wireless devices? No.

Of course, this assumes the 'early adopters' actually own the residence, and can make such alterations.

What's in it for me? Convince me why I need to change.
I am a power user. 20TB drive space in or connected to the NAS box, 2TB SSD space in my main system, software dev, video editing, lots and lots of image editing, comprehensive automated backup plan....blah blah blah.

If, for instance, an automated backup from my main system to the NAS box takes 10 minutes (1GbE) or 1 minute (10GbE), it matters not. Because I've got that scheduled for a time when I do not care. 10 minutes at 2AM is just the same as 1 minute.

Now...if that were to take 4 hours instead of the current 10 minutes...then we'd have an issue. But current data sizes do not justify a whole-house remodel.
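For anyone who wants to sanity-check the arithmetic, here's a rough sketch; the 70GB nightly delta and ~90% link efficiency are assumed figures for illustration, not numbers from the post above.

```python
# Back-of-the-envelope backup transfer times at different link speeds.
# Assumptions (not from the post): ~70 GB of changed data per night,
# ~90% effective utilisation of the nominal link rate.

BACKUP_GB = 70        # assumed nightly backup size
EFFICIENCY = 0.90     # assumed protocol/framing overhead factor

def transfer_minutes(link_gbps: float) -> float:
    """Minutes to move BACKUP_GB over a link of the given nominal rate."""
    effective_mb_per_s = link_gbps * 1000 / 8 * EFFICIENCY   # decimal MB/s
    return BACKUP_GB * 1000 / effective_mb_per_s / 60

for name, gbps in [("1GbE", 1.0), ("2.5GbE", 2.5), ("10GbE", 10.0)]:
    print(f"{name:>7}: ~{transfer_minutes(gbps):.1f} min")
# Roughly 10 min vs 1 min -- which is the point: at 2 AM, neither matters.
```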

If I were building new? Maybe. Yeah, probably.
But this will be a slow uptake.
 

bit_user

Polypheme
Ambassador
Here's my use case:

I have an i7-2600K that runs only Windows and a Xeon-based workstation that's dual-boot but runs mostly Linux. I originally set up a fileserver that uses RAID-6 across 5 disks, but it could feel a bit slow - especially after I got accustomed to the speed and responsiveness of SSDs.

So, I hatched a plan to build a lower-power fileserver with only SSDs: RAID-5 across 3 of them. I relegated the old fileserver to doing backups and storing bulk media (FLACs from CD rips, videos, etc.). Now GigE was well & truly the bottleneck, so I added a 10 Gig link between it & my Linux workstation. File access really does seem as fast as a local SSD. I can even edit & remux HD videos stored on the new fileserver with ease.

All of this stuff is in one room; wireless & GigE are sufficient for the rest of my domicile. Hence my interest in SFP+. Now, the old RAID-6 supports write speeds around 250 MB/sec and reads @ 325 MB/sec, so 2.5 Gigabit would be sufficient for it. I'd also like to upgrade the link between the new fileserver and my Windows box, though I'd settle for 2.5 or 5 Gig.
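As a quick conversion sketch for the numbers above (the ~94% payload efficiency is an assumption; real overhead varies with MTU and protocol):

```python
# What nominal link rate does a given disk throughput need?
# Assumes ~94% of the Ethernet line rate is usable payload (TCP/IP +
# framing overhead); an assumption, not a measured figure.

PAYLOAD_EFFICIENCY = 0.94

def required_link_gbps(disk_mb_per_s: float) -> float:
    """Nominal link rate (Gbps) needed to carry disk_mb_per_s of payload."""
    return disk_mb_per_s * 8 / 1000 / PAYLOAD_EFFICIENCY

for label, mb_s in [("RAID-6 writes (250 MB/s)", 250), ("RAID-6 reads (325 MB/s)", 325)]:
    print(f"{label}: ~{required_link_gbps(mb_s):.2f} Gbps on the wire")
# ~2.1 Gbps for writes and ~2.8 Gbps for reads, so 2.5GbE covers writes
# comfortably and only the peak sequential reads would spill past it.
```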

As tempting as that 4-port SFP+ fanless switch is, the price of that + 2x NICs + 3x SFP+ cables really adds up. So, I could just sit tight and wait for a good 2.5 or 5 Gig option.
 

jasonf2

Distinguished
I don't think we will see widespread integrated 10G until the switches become inexpensive and consumer-accessible. I cannot justify a $1,000 switch in a consumer network; corporate networks are another story. Until that switch is under $200 and purchasable at Best Buy, home network integration is going to be almost nonexistent. And yes, it will speed up your typical broadband internet setup: way too many people are completely relying on wireless for the final haul, and 802.11ac devices don't really play that well together.

I make it a personal point to get as many devices off wireless as possible. A good switch isolates traffic between the two devices that are actually communicating, rather than sharing the bandwidth between all devices. When you start to shove 10+ 802.11ac devices into the same bubble, you run into the same problem that early passive hub tech did: constant packet collisions, or the need for overhead-intensive token-ring-style time allocation. Getting devices that can back onto the wire dramatically improves latency and reliability.

The average consumer only looks at the bandwidth number on the box, though. When the box the Wi-Fi gear came in claims 3.5Gbps and the wired ports are only good for one, the wires simply aren't being put in. Before SSDs the bandwidth was moot, but with many consumer drives able to push 500MB/s+ today, there is a real consumer need for wired 10GbE and beyond.
 

bit_user

Polypheme
Ambassador

I agree about the superior reliability, etc. of wired, but just wanted to point out that MIMO is supposed to help reduce wireless collisions. I haven't yet taken the plunge, but since my cell phone and laptop now support multi-band AC, I'm starting to think about upgrading to a MIMO router. I think I am getting some interference from neighbors.
 

InvalidError

Titan
Moderator

In infrastructure-based WiFi, the WiFi router/AP handles scheduling time slots between active clients much like DOCSIS and GPON do, which nearly eliminates collisions between clients on the same WLAN. What MIMO does is improve the SNR by leveraging multipath propagation using multiple RX/TX antennas. What you likely had in mind here was MU-MIMO (Multi-User MIMO), which uses the propagation characteristics between each antenna and each client to equalize and time each antenna's stream to produce peak SNR at each client. Depending on how different the characteristics of each router-client path are, this can hypothetically allow multiple overlapping streams with little to no performance degradation.
 
There won't be cheap 10GBASE-T equipment until 14nm becomes cheap to manufacture in bulk. The current 28nm parts simply require too much power and run too hot to reliably drive cheap twisted-pair wiring, which is what's needed for mass adoption. Anyone can already do 10GbE home networking: decently priced switches are available, and you can purchase the NICs to go with them. I was already looking at this stuff for my VMware stack but decided it wasn't really worth it, which is the exact decision nearly every other enthusiast has made.
 

InvalidError

Titan
Moderator

14nm won't reduce the cost of 10GBase-T chips by much, as a large chunk of the die area is analog, which doesn't scale to 14nm - especially the line drivers and receivers, which need to be able to survive all the transients and other junk on the lines themselves. ADCs, DACs, differential amplifiers, etc. don't scale much below 120nm either.

What will reduce the cost of 2.5/5/10GBase-T is standardization (there are still a few competing specs on the market with more introduced to provide longer range on lower grade wiring), mass manufacturing and integration once the definitive standards are locked down for good.
 

bit_user

Polypheme
Ambassador

I've read that 10 Gig involves a pretty substantial amount of signal processing. Unless you know otherwise, perhaps that might constitute the majority of the power dissipation.
 

InvalidError

Titan
Moderator

Let's put it this way: you aren't going to drive a 100m long cat6a cable using a differential line driver made of 14nm transistors. Those alone will account for about 1W per port. If you are going to do the processing in a DSP, those 1GSPS ADCs and DACs will cost you another watt of mostly analog goodness.

The problem with shrinking and reducing power in analog circuitry is that the smaller and lower-current the circuits are, the noisier and the more difficult to accurately match they become. Mismatched circuits would defeat the whole point of bothering with using differential signaling to get rid of common-mode noise in the first place.
 

kal326

Distinguished
SSDs in more and more computers are making 1-gigabit networking a bottleneck. If I am trying to back up an SSD-based computer to my file server, which has a decent direct-attached storage array, the network is the bottleneck. Even platter drives can saturate that network connection. Internal storage movement, backups, file access, stuff of that nature is currently held back. So certainly there is a market for the connections; it is the switches and related gear that need to come closer to prosumer pricing. As for the argument that most people run wireless and don't have a switch beyond the ISP-provided router, please. I have a 24-port switch and plenty of smaller switches in various areas. I use Wi-Fi for things that move a lot, and otherwise a solid, reliable wired connection for the vast majority of PCs and devices.
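One way to frame that: the effective backup rate is just the minimum of source disk, network link, and destination array. The device speeds below are assumed ballpark figures, not measurements from this thread.

```python
# Effective end-to-end throughput = min(source, network, destination).
# All speeds are assumed ballpark figures in MB/s, for illustration only.

LINKS = {"1GbE": 117, "2.5GbE": 290, "10GbE": 1170}        # ~usable payload
SOURCES = {"HDD (sequential)": 180, "SATA SSD": 520, "NVMe SSD": 2500}
DEST = 800   # assumed: a decent direct-attached array on the file server

for src_name, src in SOURCES.items():
    for link_name, link in LINKS.items():
        parts = {"source disk": src, "network": link, "destination array": DEST}
        limiter = min(parts, key=parts.get)
        print(f"{src_name:16} over {link_name:6}: ~{parts[limiter]:4.0f} MB/s "
              f"(limited by the {limiter})")
# On 1GbE the network is the limiter for every source, even the platter drive.
```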
 

InvalidError

Titan
Moderator

Only on a freshly formatted drive. As soon as you have a little bit of fragmentation or deal with tons of small files, HDDs quickly dip well below 100MB/s.
 

bit_user

Polypheme
Ambassador

You overestimate the extent & impact of fragmentation on modern filesystems. Plus, modern operating systems do fairly aggressive read-ahead, which does a good job of hiding modest amounts of fragmentation during the sort of sequential access where you're likely to be bottlenecked by GigE networking.
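On Linux you can even hint the kernel explicitly that a read will be sequential, so it reads ahead more aggressively; a minimal sketch (the file path is a placeholder, and os.posix_fadvise is Unix-only):

```python
# Hint the kernel that we'll read this file sequentially, so it can
# schedule more aggressive read-ahead. Linux/Unix only; path is a placeholder.
import os

path = "/srv/media/some_large_video.mkv"   # hypothetical file
fd = os.open(path, os.O_RDONLY)
try:
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    while chunk := os.read(fd, 1 << 20):   # stream in 1 MiB chunks
        pass                               # ...hand the data to the consumer here
finally:
    os.close(fd)
```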
 

InvalidError

Titan
Moderator

My WD Black did up to 200MB/s sequential after I initially installed it. After two years of moderate use, copying files off the drive to my P20 thumb-drive (which can do ~270MB/s sustained write) averages 70-100MB/s on NTFS/Win10, with fragmentation at merely 2%.
 


He's talking about regular user data access patterns instead of the pure sequential reads that benchmarks like to use. The only time you run into large sequential reads is when copying huge files or doing high-end Photoshop/video work. Streaming video would count if it used more bandwidth, but currently it doesn't.

The way most I/O is done is about 60~70% random and only 30~40% sequential, and the reads are separated by pauses as the program analyzes the data it has read to determine the next chunk to open. Your biggest limiter won't be data transfer times but the I/O latencies. This is what makes SSDs "feel" so much faster: their random I/O latency is orders of magnitude better than a spinning platter's. It's also why I've abandoned using two fast disks in RAID0 for games and whatnot; the I/O latency improvement just wasn't there, and it made no difference in actual usage.
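A crude way to see that gap for yourself; this is an illustrative sketch rather than a proper benchmark (the path is a placeholder, and the file needs to be much larger than RAM or the page cache will hide the seeks):

```python
# Crude sequential-vs-random read comparison. Illustrative only: run it
# against a multi-GB file on the drive in question, ideally larger than RAM.
import os, random, time

PATH = "/path/to/large_test_file.bin"   # hypothetical test file
CHUNK = 64 * 1024                        # 64 KiB per read
READS = 2000

size = os.path.getsize(PATH)

def timed(label, offsets):
    fd = os.open(PATH, os.O_RDONLY)
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, CHUNK, off)         # positional read, Unix-only
    elapsed = time.perf_counter() - start
    os.close(fd)
    mb = READS * CHUNK / 1e6
    print(f"{label}: {mb / elapsed:6.1f} MB/s, {elapsed / READS * 1e3:.2f} ms/read")

timed("sequential", [i * CHUNK for i in range(READS)])
timed("random    ", [random.randrange(0, size - CHUNK) for _ in range(READS)])
# On a platter drive the random case is typically an order of magnitude slower.
```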
 


Yeah, that's a normal data access pattern vs. "theoretical benchmarked" numbers. When you're copying files/directories from a filesystem, the copy program just gets a large list of files from the file table and then starts at the top and works its way down; those entries are in no way guaranteed to be in sequence on disk. Even at 0% fragmentation, where every file occupies a single contiguous chunk, you still won't see anywhere close to theoretical speeds, because files are all of varying sizes, mostly small, which creates a ridiculous amount of OS overhead reading them into memory. Just run AS SSD with only the random benchmarks selected to see what happens when you're accessing data that isn't in one long multi-GB chunk.
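To put a number on that per-file overhead, you can time a whole-tree copy and compare the effective rate against the drive's sequential rating; a sketch with placeholder paths:

```python
# Time a directory-tree copy and report the effective MB/s. With thousands of
# small files this usually lands far below the drive's sequential rating,
# because of per-file open/create/metadata overhead. Paths are placeholders.
import os, shutil, time

SRC = "/path/to/source_tree"        # hypothetical source directory
DST = "/path/to/destination_tree"   # hypothetical target, must not exist yet

file_count = 0
total_bytes = 0
for root, _dirs, files in os.walk(SRC):
    for name in files:
        file_count += 1
        total_bytes += os.path.getsize(os.path.join(root, name))

start = time.perf_counter()
shutil.copytree(SRC, DST)
elapsed = time.perf_counter() - start

print(f"{file_count} files, {total_bytes / 1e6:.0f} MB in {elapsed:.1f} s "
      f"-> {total_bytes / 1e6 / elapsed:.1f} MB/s effective")
```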
 

bit_user

Polypheme
Ambassador

How big are the files you're talking about? I was addressing only your point about fragmentation vs. large files. If you've got lots of small files spread around the disk, there's obviously going to be at least that much seeking.

Did you perform the same test, while the disk was new? Or how was that write speed measured?
 

InvalidError

Titan
Moderator
200MB/s was the highest read speed from when the drive was new, and those were mostly files over 100MB.

Just tried it again out of curiosity, now getting 100-130MB/s. Microsoft must have optimized some things since the last time I paid attention to copy speed around a year ago.

Well, even if Microsoft tweaked things further and managed to reach the same 160+MB/s I was seeing on the drive when it was brand new, upgrading to 10Gbps would still make LAN-based backups only ~30% faster for 10X the cost, saving me about a minute per ~50GB worth of backup. If I simply tab away from the backup to do something else, it'll be done by the time I remember that I started it either way, and the net difference in value to me is effectively zero.
 

bit_user

Polypheme
Ambassador

Nobody is saying you should upgrade for a single HDD. @kal326 simply said "Even platter drives can saturate that network connection.", which is true.
 

bit_user

Polypheme
Ambassador
I quite like the idea of 2.5 Gbps. If it can be offered at a low premium over 1 Gbps, I can see the typical enthusiast opting for it. It's not often that we see 2.5x the performance, in the PC sector, over a single generation or performance tier.
 
Unless you're a network engineer and know WTF you're doing, this is stupid. I personally run 100Gb at home. The reason it doesn't catch on is PRICE. There is NO reason for the price to still be so damn high. Board makers should have had 100Gb NICs long ago.

And don't play with fibre, children. One strand in your bloodstream and you're DEAD.
 