Highest MB/s transfer rate on a home LAN?

Haha, you guys are funny. Well, here it is: I have a Dell 8400 desktop and it uses SATA for its hard disk connections. It has 4 ports and I'm using one right now with a 160GB hard drive. My average read speed is around 45 MB/s, and I'm also using the onboard gigabit Ethernet port. I recently made my brothers' PCs gigabit-capable with some cheap NICs, but I still don't have a router or switch.

And yes, since I don't need wireless anymore, could y'all recommend a good switch that handles gaming packets and throughput and all that good stuff? Switches will give internet sharing, right? Just by plugging in the cable modem's Ethernet cord?

Also, yeah, I decided I'll get 2 big hard drives with near-Raptor performance for the price of a 74GB Raptor. But which ones?
 

Maybe madwand can recommend a switch; if not, you'll have to do some research on your own to find a good one -- I haven't done any lately. You COULD look for the one I mentioned before, google around a bit, and see what people have to say about it. It seems to have good features, but I couldn't say for sure.

HDD-wise, you COULD find as close a match as possible to the one you have now and run the pair in RAID 0. They wouldn't give you the hoped-for magical 90 MB/s (a lot of people think RAID striping doubles your speed, but it won't), but it would be considerably faster than a single drive. You COULD also find some older-technology drives, like the one I have and benchmarked, for a 'cheap' price, and run those in RAID 0.
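One rough number to keep in mind here: gigabit Ethernet tops out at 1000 Mb/s, which is 125 MB/s of raw payload, and real-world network copies usually land noticeably below that. So even a perfect 2 x 45 MB/s = 90 MB/s stripe is already close to what the wire itself can move.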

Concerning Seagate, the best drives to get are the Barracuda models; poke around on Newegg, read the reviews, then google for a review or three. In the last year or so, barring the Raptors, Maxtor had one of the faster big SATA drives. I don't remember which one it is, but you should be able to find a review using Google. I can tell you what I'd do: I'd buy a Seagate -- well, two of them, at a price that would let me buy two. There are lots of other factors too: speed, warranty, storage size, etc. You'll just have to decide what your priorities are and adjust what you buy accordingly. Typically for me, the tradeoff is usually giving up some storage size for a better price, but I have no idea what priorities motivate you outside of speed.

My belief when buying new hardware is that it's always better to do your own research (homework) and to stay away from forums, word of mouth, and little-known reviewer pages. I use sites like tomshardware.com, anandtech.com, and firingsquad.com to figure out exactly what would make ME personally happiest with the type of hardware I'm after. Recently I did this to the extreme while looking for a digital camera, and I'll tell you what, I'm very pleased with the product I purchased.

So basically, for YOU the buyer to be as happy as possible, you're going to have to do a lot of homework on whatever it is you purchase; you know your motives better than anyone else.

Good luck :)
 
Oh wow, I bet you know how to woo the ladies with words, huh, haha. Well yes, for the price of 1 Raptor I can get 2 250GB hard drives that bench at around 58 MB/s or so each, and then hopefully get somewhere near 100 MB/s in RAID 0. Yes, I will do some research and go with that. Thanks, dude. I do understand fanboyism, so I'm aware of the forums' pitfalls. Thanks again.
 

You know what you could do about the NICs: buy one of the Intel PCI cards I linked to in that review (the one that did the best besides the onboard Intel GbE port), do a mini benchmark on your bro's PC with his current NIC, then drop the Intel in and see if the difference justifies buying two more Intel NICs for the other PCs in the house :)

Intel usually makes pretty good NICs, so I wouldn't be surprised if you wound up buying two more . . .
 
could y'all recommend a good switch that handles gaming packets and throughput and all that good stuff? Switches will give internet sharing, right? Just by plugging in the cable modem's Ethernet cord?

It's a router's job to handle communication to the internet. So you'd have to use your existing router, connect one of the ports on the switch to the router, and connect the clients to the switch. All the client to client traffic would go just through the switch; all internet traffic would have to go through the router via the switch.
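In other words, roughly: cable modem -> router -> gigabit switch -> all the PCs. PC-to-PC transfers then stay entirely on the switch, and only internet traffic has to cross the router.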

I have a D-Link DGS-1008D switch. It has 8 ports, supports 9kB jumbo frames (though I've never used them), and has performed well for me. However, it looks like it's history, and only the 5-port version is still generally available. Don't forget, though, that you lose a port to the router.

http://www.dlink.com/products/?sec=1&pid=230

The DGS-2208 looks like its replacement. It has less buffer memory, but would probably be just fine.

http://www.dlink.com/products/?pid=495&sec=1

The Netgear GS608 also seems to be a similar device (9kB jumbo frames), again with less buffer memory. Looks like it comes in two different cosmetics -- here's the nicer one (IMO):

http://www.netgear.com/products/details/GS608.php

There are similar 5-port versions (pretty cheap). If you don't foresee needing more than 4 ports (1 goes to the router), then you can consider them.
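One caveat if you ever do turn jumbo frames on: every NIC on the network has to have them enabled (at a matching size) and the switch has to pass them, otherwise you get odd drops; if in doubt, just leave everything at the standard 1500-byte MTU.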

The DGL-4100 is the DGL-4300's sibling router, with gigabit but without wireless. So if you want to potentially improve internet performance for torrents, use QoS prioritizing, or upgrade to a fancy internet connection, and don't need more than 4 ports, you might consider it. Bah, it's the same price as the wireless one on NewEgg ($100 AR), so I'd just get the wireless one going this route -- even if you don't use the wireless, it should have better resale value. But if your internet connection's fine, you don't need a new router.
 
You know what you could do about the NICs: buy one of the Intel PCI cards I linked to in that review (the one that did the best besides the onboard Intel GbE port), do a mini benchmark on your bro's PC with his current NIC, then drop the Intel in and see if the difference justifies buying two more Intel NICs for the other PCs in the house :)

This benching is a good idea -- you could even do it just now: direct-wire your computer to your brother's (a crossover cable might not even be needed), assign static IPs, and bench away using those IPs. You could measure TCP network throughput using PCATTCP, and file transfer throughput by copying.

The direct wire benchmark is also the reference for a switch's performance.
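If you'd rather not hunt down PCATTCP, here's a rough do-it-yourself equivalent in Python -- just a sketch, assuming Python is installed on both machines; the port number and the 512 MB transfer size are arbitrary:

    # rough TCP throughput test -- run "recv" on one box, "send <ip>" on the other
    import socket, sys, time

    PORT = 5001                    # arbitrary test port
    CHUNK = 64 * 1024              # 64 KB per send/recv
    TOTAL = 512 * 1024 * 1024      # push 512 MB total

    if sys.argv[1] == "recv":
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        got, start = 0, time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            got += len(data)
        secs = time.time() - start
        print("received %.0f MB in %.1f s = %.1f MB/s" % (got / 2.0**20, secs, got / 2.0**20 / secs))
    else:                          # "send <receiver ip>"
        c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        c.connect((sys.argv[2], PORT))
        buf = b"x" * CHUNK
        sent, start = 0, time.time()
        while sent < TOTAL:
            c.sendall(buf)
            sent += len(buf)
        c.close()
        print("sent %.0f MB in %.1f s" % (sent / 2.0**20, time.time() - start))

Run the 'recv' side on one PC first, then the 'send' side on the other, pointed at the receiver's static IP; the MB/s it prints is your raw TCP throughput, which you can then compare against an actual file copy.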

You can get the Pro 1000 MT for as little as $21 in the US; it's a good option.
 
You know what you could do about the NICs: buy one of the Intel PCI cards I linked to in that review (the one that did the best besides the onboard Intel GbE port), do a mini benchmark on your bro's PC with his current NIC, then drop the Intel in and see if the difference justifies buying two more Intel NICs for the other PCs in the house :)

Intel usually makes pretty good NICs, so I wouldn't be surprised if you wound up buying two more . . .

Yes, this is a good suggestion, but I already bought 2 Gbit NICs for the other 2 PCs. The extra 30-50 bucks can go toward the extra hard drive I wanna get from Seagate or Samsung.
Also, that was a good article you linked. Reading it got me thinking about doing something wicked, but I'm not sure if it will work. Can I buy a NIC for my PC, which already has an onboard gigabit connection, and get twice the bandwidth? Sort of like those dual-Gbit motherboards. 'Cause that would be awesome.

Also, I read the article, and it says that the PCI bus shares a total of 133 MB/s of throughput among all PCI devices across all the PCI slots. So if I can do what I just mentioned above, it wouldn't be useful to fill all my PCI slots with Gbit NICs, would it? But I could use another alternative like PCI-E 1x, 4x, and so on. What do you think? It's a very interesting question.
 
Well, I thought all we needed to do was plug in the modem. But considering I have a million things on my desk, I really can't afford another box sitting on CD racks or a lamp, lol. I have instruments, mixers, turntables, subwoofers, studio monitors, keyboards, monitors, and lots of other things. I think I'll just go for the D-Link 4300; you're right about the wireless, it's a few bucks more, but sooner or later I could use it, and if I don't, it will at least add to its value if I sell it. I really wanted an all-in-one solution, so I have no choice. I'm a neat freak, so I like the way my desk looks right now, and I dread the idea of a modem, router, and switch all on my desk. Thanks for clearing that up for me.
 
This benching is a good idea -- you could even do it just now: direct-wire your computer to your brother's (a crossover cable might not even be needed), assign static IPs, and bench away using those IPs. You could measure TCP network throughput using PCATTCP, and file transfer throughput by copying.

The direct wire benchmark is also the reference for a switch's performance.

You can get the Pro 1000 MT for as little as $21 in the US; it's a good option.

Nice, I'll see about doing that. But are you saying that I couldn't use my traditional method over the direct wire? Like, sadly but surely, using the trusty old instant messenger to send and receive? It seems to work fine with my current setup. I'll probably have to wait until I try it out. Also, are you sure I wouldn't need a crossover? 'Cause if I do, I can get one on the way home from work. 'Cause I don't think I own one.
 
Ha ha! madwand, I see you've been 'lurking' around on nVidia's forums!

Seems you've answered both of our concerns by contacting nVidia directly. Somehow I KNEW it wouldn't make sense to implement only half of 'link aggregation' 😉 So you DO know what Ethernet bridging is 😛

Since I'm a Debian fan, I won't hold my breath waiting for Linux support . . .
 
Nice, I'll see about doing that. But are you saying that I couldn't use my traditional method over the direct wire? Like, sadly but surely, using the trusty old instant messenger to send and receive? It seems to work fine with my current setup.

It should work fine as long as IM can use your IP.

Also, are you sure I wouldn't need a crossover? 'Cause if I do, I can get one on the way home from work. 'Cause I don't think I own one.

I couldn't tell you for sure -- it depends on your NICs. I have direct-wired several NICs without crossover cables though.
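(For what it's worth, gigabit ports generally do automatic MDI/MDI-X crossover in the PHY, which is why a plain patch cable usually works for a direct PC-to-PC link; it's older 10/100 NICs that are more likely to need a true crossover cable.)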
 
nVIDIA implied that Linux bonding can already handle it, which is already in the kernel AFAIK. Moreover, one mode of Linux bonding, balance-rr, might be the only one that can give you > 1 Gb/s over a single TCP/IP connection.

This also requires the switch to be smart, but can probably work direct-wired without a switch.

http://sourceforge.net/projects/bonding/
 
Also, that was a good article you linked. Reading it got me thinking about doing something wicked, but I'm not sure if it will work. Can I buy a NIC for my PC, which already has an onboard gigabit connection, and get twice the bandwidth? Sort of like those dual-Gbit motherboards. 'Cause that would be awesome.

Well yes, and no.

Yes, because you could implement this in Linux using Ethernet channel bonding. That would mean every single PC you wanted to do this on would have to be running Linux and have two GbE NICs, AND you'd need a switch OR router that supported link aggregation or Ethernet channel bonding. The main problems with this, besides the obvious need for Linux, are that documentation is hard to find if you're a Linux newb, plus a few other things I either don't know about or can't seem to remember at the moment.

No, because without the use of very expensive switches or routers (Cisco equipment, and maybe a company or two others, implement it), it is not possible in Windows, at least that I know of; madwand seems to know more about this aspect of networking than I do.

Anyhow, if you wanted to do some interesting reading on the subject, you could google 'cisco port trunking', 'Linux ethernet channel bonding', 'ethernet load balancing', '802.3 link aggregation', and probably a few other terms I'm forgetting. Just keep in mind that while all these terms may or may not (more or less) mean the same thing, you will get information pertaining to the subject, since no one can really agree on what is what (it depends on whose site you land on). At least that was the feeling I was left with when I researched it more than a year ago. Supposedly Cisco port trunking is not the same thing, according to a network-guru IRC buddy of mine (who is the west coast lead systems admin for Sun Micro), yet I found info similar to 'Linux channel bonding' when I googled the term. Same goes for all the terms I mentioned above. Anyhow, if you like technology and the neat things that are possible with it, you'll like the information regardless of its proper terminology.
 
nVIDIA implied that Linux bonding can already handle it, which is already in the kernel AFAIK. Moreover, one mode of Linux bonding, balance-rr, might be the only one that can give you > 1 Gb/s over a single TCP/IP connection.

This also requires the switch to be smart, but can probably work direct-wired without a switch.

http://sourceforge.net/projects/bonding/

Thanks for the link, but I've had that txt file archived for close to two years now :) It's getting really hard to find anymore, even though it's supposed to be included with the distro I use (supposedly).

And yes, according to what I've read and a friend of mine who has experience with channel bonding, direct-wired channel bonding can work. So you're saying, according to what you heard directly from nVidia, that > 1 Gbit links will not work with Windows without additional hardware?
 
Thanks for the link, but I've had that txt file archived for close to two years now :) It's getting really hard to find anymore, even though it's supposed to be included with the distro I use (supposedly).

Here's a relevant part of the doc, as I interpret it:

balance-rr: This mode is the only mode that will permit a single TCP/IP connection to stripe traffic across multiple interfaces. It is therefore the only mode that will allow a single TCP/IP stream to utilize more than one interface's worth of throughput.

The Linux bonding implementation has tons of options, way more than I've seen anywhere else. Linux bonding is also compatible with Cisco, Intel, and 802.3ad bonding. If none of those do single-connection > 1 Gb/s, then I think it's very unlikely that nVidia will.
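For reference, setting balance-rr up is only a few commands -- going from memory here, and the details vary by distro, so check the bonding doc for the exact syntax -- but it's roughly: load the module with 'modprobe bonding mode=balance-rr miimon=100', give bond0 an IP, then enslave the two NICs with 'ifenslave bond0 eth0 eth1'.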

And yes, according to what I've read and a friend of mine who has experience with channel bonding, direct-wired channel bonding can work. So you're saying, according to what you heard directly from nVidia, that > 1 Gbit links will not work with Windows without additional hardware?

I've done it, using Broadcom software and a mishmash of NICs (a pair of Broadcoms on one end, and a mix on the other end -- actually a built-in and a PCIe / PCI). I've found that single connections do not go > 1 Gb/s in any of the options that I've found to work under Windows. However, I have gotten 2 simultaneous connections through with > 1 Gb/s -- around 1.6 Gb/s to be precise, using 2 PCATTCP sessions over FEC/GEC 802.3ad static. A single file transfer wouldn't go faster, because it's one connection. You could try to parallelize your own file transfer, but then you put the disk subsystem under even greater stress, and unless it's very, very fast, you'd probably defeat your goal.
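(If you want to repeat that kind of test, it's just two listeners on different ports and two senders fired off at the same time -- I'm quoting the switches from memory, so check the PCATTCP usage text, but it's along the lines of 'PCATTCP -r -p 5001' and 'PCATTCP -r -p 5002' on the receiving box, with matching '-t -p ...' senders on the other.)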

I haven't actually pressed nVIDIA on that point, but from what I've read, and especially the fact that they don't need special switch support whereas Linux balance-rr does, I expect they will only do > 1 Gb/s over multiple connections, not for a single TCP/IP connection. But since you asked, I guess I'll ask nVIDIA for confirmation to be sure.

Edit: It was 1.6 Gb/s, not 1.4 Gb/s. I.e. 200 MB/sec.
 
heh, the cheapest 1 Gbit Layer 2 load-balancing switch I can find (in a quick search) is $700 USD (ish) =/

There are a number of "smart" switches that will do manual trunking, though not the full 802.3ad dynamic trunking. This is probably good enough. The cheapest one is probably the Dell 16-port smart switch at around $200 USD. I made a list of a few of them in another post asking for jumbo compatible switches.

At this level, you have to look carefully at the feature set, and also look out for unwanted features, such as noisy fans (which may be unavoidable in the end if you're looking for high performance / capability).
 
Can I buy a NIC for my PC, which already has an onboard gigabit connection, and get twice the bandwidth? Sort of like those dual-Gbit motherboards. 'Cause that would be awesome.

Also, I read the article, and it says that the PCI bus shares a total of 133 MB/s of throughput among all PCI devices across all the PCI slots. So if I can do what I just mentioned above, it wouldn't be useful to fill all my PCI slots with Gbit NICs, would it? But I could use another alternative like PCI-E 1x, 4x, and so on. What do you think? It's a very interesting question.

It is a very interesting question, but Yyrkoon's right: you need a fancy switch (> $200) for most forms of trunking, and you also need software that's compatible with your NICs and with the trunking mode on the other end.

Most vendors' teaming software doesn't like to support other vendors' NICs. SysKonnect, for example, only works with their own Marvell-based NICs; nVIDIA, only their own. Broadcom, however, supports other brands, although you have to have a Broadcom NIC to get started (to get the software), and of course Broadcom doesn't promise full compatibility with other NICs -- besides Intel, that is; they seem to have made a special effort for Intel.

Even with that, you have two big problems: (1) It looks like very few options, maybe just one (Linux), will do what we want -- give you > 1 Gb/s over a single TCP/IP connection. Most of the others are designed for enterprise use, failover, etc. -- there, it's not critical to support > 1 Gb/s over a single connection, because they have lots and lots of connections to deal with; they can keep it simple and let each connection take a single gigE path while still getting more than single gigE overall. (2) Your HDs probably won't keep up, and doubling gigE isn't going to give you 2 Gb/s -- even more bottlenecks will crop up at that level.
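Rough numbers on that last point: a single gigabit link at full tilt is about 125 MB/s each way, while plain 32-bit/33 MHz PCI offers roughly 133 MB/s shared across every card on the bus, so even one PCI GbE NIC can bump into the bus before it saturates the wire. A PCIe x1 slot, by contrast, gets its own ~250 MB/s per direction, which is why stuffing multiple gigabit NICs onto plain PCI buys you very little.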

I think (1)'s a killer for what we want to do here. I'm disappointed that the indications are that you can't get it in the official standard trunking, 802.3ad. The way out for now is Linux, but that's pretty tough for most of us. I'm sure I'll try it in time though.

But hey, even 30-40 MB/s is a heck of a lot faster than 10 MB/s, you gotta be happy with that at least.
 
http://www.newegg.com/Product/Product.asp?Item=N82E16833122018 -- a product similar to this is what I was talking about before. Last week it was on sale for 69 USD and had 8 ports (I think), but I'm having a hard time finding it now. It's possible it was a misprint, but I really hope not!

http://www.newegg.com/Product/Product.asp?Item=N82E16833122111 -- here it is, I think. If you read down near the bottom, it says it supports up to 2000 Mbit per port; it doesn't say it supports IEEE 802.3ad though 🙁

Those are both unmanaged switches, and I think the 2000 Mb/s is just fancy marketing, counting full duplex gigabit as dual gigabit. It ain't fair, and it's confusing, when we have to do twice the amount of searching on NewEgg to find the gigabit switches. All gigabit has to do full duplex! Bah.

What you need to look at is managed switches and smart switches, because you need to tell the switch about the trunks, or at least enable trunking and disable spanning tree. If you look around, you'll find lots of these. The cheaper ones won't always say 802.3ad, because they don't support the full 802.3ad dynamic trunking, but they will mention trunking and, importantly, how many trunks they support. The cheapest 16-port D-Link, for example, only supports 2 trunks. The Linksys SRW series is a good candidate, though with more noise and a bit more cost than others.
 
yeah, unfortunately. 🙁

Here I was, dreaming of a day when my bros and I could exchange 10GB in just a few minutes. 10 mins tops. But I guess I'll have to wait a few decades for Tbit NICs, huh. :cry:

OK, I take that back, 5 mins tops.
Wow, Tbit, sounds promising, huh. :lol:
 

I'd say in less than 5 years they MAY come out with copper 10GbE, which should mean you, me, and every other home user should be able to have in-home 10 Gbit networks relatively cheap. The same happened with 1GbE not too long ago . . .

Err, the other problem is, I'm unsure of the distance you'd be able to traverse with copper 10GbE . . . 100 meters is about all you can get without a repeater or switch on 1GbE; with 10GbE, I would THINK that 3-5 meters would be the max, at least for a while.
 
yeah, unfortunately. 🙁

Here I was, dreaming of a day when my bros and I could exchange 10GB in just a few minutes. 10 mins tops. But I guess I'll have to wait a few decades for Tbit NICs, huh. :cry:

OK, I take that back, 5 mins tops.
Wow, Tbit, sounds promising, huh. :lol:

10 GB / 30 MB/s = 333.33 s = 5.56 minutes.

Assuming big files and everything's working well, you should be able to do this soon.
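At the kind of speeds talked about earlier -- say a sustained 80 MB/s from a RAID 0 pair over gigabit -- the same 10 GB works out to roughly 10,000 MB / 80 MB/s ≈ 125 s, a bit over two minutes.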
 