Highest MB/s transfer rate on a home LAN?

Cool, so let's wait and see. Do you think they'll come out with something like 5 Gbit for midrange performance?

I'd have to say no. Judging from past 802.x specification jumps (at least for wired networks), they seem to advance <current tech> x10 (powers of ten?). *brain is numb* Been working out in the sun all day (112 F!)

lol, do your thing. Is there any simple program that can monitor your transfer rate and all the PCs on your network?

There are a few; you'd have to look around on the shareware sites, as I don't recall the names offhand. I write my own programs for that, and I don't circulate my code or executables.
 
Is there any simple program that can monitor your transfer rate and all the PCs on your network?

I use the built-in Performance Monitor, aka "perfmon". It can give you local and remote stats, and can also log them (separately) for long-term review. It's a freebie, so the UI isn't the best, but it's a serious system tool and can give you tons of information.
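
If you'd rather script it than click through the UI, here's a rough sketch of the same idea in Python -- it assumes the third-party psutil package is installed, and it's a quick-and-dirty monitor, not a perfmon replacement:

# Rough per-NIC throughput monitor. Assumes the third-party psutil
# package (pip install psutil); prints decimal MB/s once a second.
import time
import psutil

INTERVAL = 1.0  # seconds between samples

prev = psutil.net_io_counters(pernic=True)
while True:
    time.sleep(INTERVAL)
    cur = psutil.net_io_counters(pernic=True)
    for nic, now in cur.items():
        before = prev.get(nic, now)
        rx = (now.bytes_recv - before.bytes_recv) / INTERVAL
        tx = (now.bytes_sent - before.bytes_sent) / INTERVAL
        print(f"{nic}: rx {rx / 1e6:7.2f} MB/s  tx {tx / 1e6:7.2f} MB/s")
    prev = cur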
 
He may or may not have to enable the service; I can't speak for his install, but mine leaves it disabled by default (XP Pro SP2).
 
Thanks for the link, but I've had that txt file archived for close to two years now :) It's getting really hard to find anymore, even though it's supposed to be included with the distro I use.

Here's the relevant part of the doc, as I interpret it:

balance-rr: This mode is the only mode that will permit a single
TCP/IP connection to stripe traffic across multiple
interfaces. It is therefore the only mode that will allow a
single TCP/IP stream to utilize more than one interface's
worth of throughput.

The Linux bonding implementation has tons of options, way more than I've seen anywhere else, and it's also compatible with Cisco, Intel, and 802.3ad bonding. If none of those can push a single connection past 1 Gb/s, then I think it's very unlikely that nVidia will.

And yes, according to what I've read, and a friend of mine who has experience with channel bonding, direct wired channel bonding can work. So you're saying, according to what you heard directly from nVidia, that > 1 Gbit links will not work in Windows without additional hardware?

I've done it, using Broadcom software and a mishmash of NICs (a pair of Broadcoms on one end, and a mix on the other (actually a built-in and a PCIe / PCI)). I've found that single connections do not go > 1 Gb/s in any of the configurations I've gotten to work under Windows. However, I have gotten 2 simultaneous connections through at > 1 Gb/s -- around 1.6 Gb/s to be precise, using 2 PCATTCP sessions over FEC/GEC 802.3ad static. A single file transfer wouldn't go faster, because it's one connection. You could try to parallelize your own file transfer, but then you put the disk subsystem under even greater stress, and unless it's very, very fast, you'll probably defeat your goal.
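
(If anyone wants to repeat that kind of test without hunting down PCATTCP, here's a minimal single-connection throughput tester in Python -- just a sketch, with the port number and 1 GiB payload picked arbitrarily. Run "recv" on one box and "send <host>" on the other:)

# Minimal one-connection TCP throughput test, in the spirit of PCATTCP.
# Port and payload size are arbitrary. Usage: recv | send <host>
import socket
import sys
import time

PORT = 5001
CHUNK = 1 << 20   # 1 MiB per send/recv
TOTAL = 1 << 30   # push 1 GiB through the link

if sys.argv[1] == "recv":
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total, t0 = 0, time.perf_counter()
    while True:
        buf = conn.recv(CHUNK)
        if not buf:
            break
        total += len(buf)
    dt = time.perf_counter() - t0
    print(f"{total / dt / 1e6:.1f} MB/s ({total * 8 / dt / 1e9:.2f} Gb/s)")
else:  # send <host>
    s = socket.create_connection((sys.argv[2], PORT))
    payload, sent = b"\0" * CHUNK, 0
    while sent < TOTAL:
        s.sendall(payload)
        sent += len(payload)
    s.close()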

I haven't actually pressed nVIDIA on that point, but from what I've read -- especially the fact that they don't need special switch support, whereas Linux balance-rr does -- my guess is that they only do > 1 Gb/s over multiple connections, not for a single TCP/IP connection. But since you asked, I'll ask nVIDIA for confirmation to be sure.

Edit: It was 1.6 Gb/s, not 1.4 Gb/s. I.e. 200 MB/sec.

OK, I asked nVIDIA directly and finally got back an answer. Basically, no: you can't reliably split a single TCP/IP connection across multiple ports. The Linux balance-rr documentation also mentions the problems that can come up when you try. Since everyone would love to be able to do this, there has to be a damn good reason why only Linux even tries it, and that reason is that it really doesn't work.

I also found an entertaining writeup on this issue on Alacritech's web site.

http://www.alacritech.com/html/teaming.html

Other Algorithms

Round Robin Packet
Packets are distributed iteratively across all the ports.

Advantages: Traffic is evenly divided among all the ports in the team, maximizing throughput.

Disadvantages: Completely violates protocols in that a conversation isn't kept on a single port, and packets are not guaranteed to arrive in sequence.

Recommended for: Amusing yourself about how badly you can break your network. Never ever use this. It is a disaster.

I even tried Linux balance-rr. I only have one Linux box, so I tried connecting it to a Windows box running the Broadcom teaming, and the result was basically a disaster -- about half the throughput I got with single gigE.

Guess I'll have to take my own advice and try to be happy with single gigE :)
 
The thing is, I'm not sure what part of Ethernet channel bonding HE used, but I read an article by an IT professor who actually bonded something like 4 ports per PC across something like 16 PCs, using Linux (of course). I read the article about a year ago, and it was written well before that, but he successfully had 10/100 (Fast Ethernet, in parallel) outperforming a 1GbE controller. So it IS possible. Anyhow, I don't have the link anymore; perhaps I could find it. Basically it was a college professor experimenting with some form of CPU farm (again, it's been a year or more since I read the article).

But it does sound like DualNet in Windows has no chance of doing this atm, sadly 🙁
 
Here are some previous figures from pg. 1:

Here are some of my recent ones in the meanwhile:

Using a single 8.4 GB file (8,425,053,778 Bytes):

3x RAID 0 ---gigabit---> 4x RAID 0: t=108s => 78.0 MB/s
4x RAID 0 ---gigabit---> 3x RAID 0: t=112s => 75.2 MB/s

4x RAID 0 ---gigabit---> 5x RAID 5: t=131s => 64.3 MB/s
5x RAID 5 ---gigabit---> 4x RAID 0: t=118s => 71.4 MB/s

5x RAID 5 ---gigabit---> 3x RAID 0: t=109s => 77.3 MB/s
3x RAID 0 ---gigabit---> 5x RAID 5: t=137s => 61.5 MB/s
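
(For anyone checking the math, these MB/s figures are just the file size in bytes divided by elapsed seconds, in decimal megabytes -- e.g. in Python:)

# Decimal MB/s from the 8.4 GB test file and the times above.
size_bytes = 8_425_053_778
for secs in (108, 112, 131, 118, 109, 137):
    print(f"t={secs}s => {size_bytes / secs / 1e6:.1f} MB/s")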

I've been able to get a bit faster recently:

Directories processed = 1
Total data in bytes = 8,425,053,778
Elapsed time in sec. = 92.92
Files copied = 1

That's 8.4 GB / 92.92s => 90.7 MB/s

This performance is reproducible.

The hardware's the same. This was a 4x RAID 0 -> 3x RAID 0 transfer. I used xxcopy and a couple of additional tweaks this time; the 3x RAID 0 was a new configuration, so it has less data on it now and possibly a different stripe configuration.

I modified the TCP window parameters -- I'm not sure they did anything. I set one nVIDIA NIC to "optimize for CPU" and kept the other at "optimize for throughput" -- this, plus using xxcopy, seemed to do the trick, raising and stabilizing the transfer speeds a bit.
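
(Those TCP window changes were system-wide registry tweaks; an individual program can ask for something similar per connection by enlarging its socket buffers. A small Python sketch -- the 1 MiB figure is an arbitrary example, and the OS may clamp it:)

# Request larger per-connection socket buffers -- a per-application
# stand-in for system-wide TCP window tuning. 1 MiB is arbitrary.
import socket

BUF = 1 << 20

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
print("send buffer granted:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("recv buffer granted:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))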

(I can't say what the other transfers would be like, because my arrays are even more full than before -- completely full, and this brings massive fragmentation and performance loss into play. Have to add more drives now; of course that would change the performance parameters by itself.)

I've seen claims elsewhere of transfers using multiple SCSI drives hitting up to 98 MB/s, so there's still room for improvement.
 
E:\tools>dir n:\test\test0\10.gb
Volume in drive N is nvr533264
Volume Serial Number is 8CBC-EC83

Directory of n:\test\test0

08/24/2006 06:01 PM 10,000,000,000 10.gb
1 File(s) 10,000,000,000 bytes
0 Dir(s) 528,742,416,384 bytes free

E:\tools>time 0<nul
The current time is: 14:05:28.49
Enter the new time:
E:\tools>xcopy /y n:\test\test0\10.gb x:\test\test9\
N:\test\test0\10.gb
1 File(s) copied

E:\tools>time 0<nul
The current time is: 14:06:54.49

That's 10 GB / 86s = 116.3 MB/s.

Vista to XP-64 (Vista and 2003 also perform well as servers).
3-drive nVIDIA RAID 5 to 3-drive Intel RAID 0.
Marvell-based PCIe NICs with 9K jumbo frames.

(Could probably do this with 2-drive RAID 0 on source and destination provided drives were not crowded.)
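
(The time/xcopy trick above is easy to script; here's a rough Python equivalent that times a copy and prints the rate -- the paths are just the ones from the session above, so substitute your own:)

# Time one large-file copy and report decimal MB/s, mirroring the
# time/xcopy session above. Paths are placeholders -- use your own.
import os
import shutil
import time

SRC = r"n:\test\test0\10.gb"
DST = r"x:\test\test9\10.gb"

t0 = time.perf_counter()
shutil.copyfile(SRC, DST)
elapsed = time.perf_counter() - t0
size = os.path.getsize(SRC)
print(f"{size / 1e9:.1f} GB in {elapsed:.1f} s => {size / elapsed / 1e6:.1f} MB/s")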
 
I have Vista 64-bit and I usually get about 11.5 - 12.5 MB/s real-world speed when I transfer stuff through my 100 Mbps router. It's been that way no matter which hard drive speed I have. That's about as fast as 100 Mbps will go, I'd say.
 


Yes, 12.5 MB/s is the theoretical maximum of 100 Mb/s networking, so if you're approaching that, you're doing as well as you can.
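
(The arithmetic, if anyone wants it spelled out: divide the line rate in Mb/s by 8 to get the ceiling in MB/s, before protocol overhead. In Python:)

# Line rate in Mb/s over 8 gives the MB/s ceiling (before overhead).
for mbps in (100, 1000):
    print(f"{mbps} Mb/s link => {mbps / 8:.1f} MB/s theoretical max")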

However, if you have gigabit NICs on both ends, you could do a lot better by adding an inexpensive gigabit switch, connecting that to the router, and connecting all your gigabit-capable devices to that.
 
I am using a Justek gigabit switch and Gigabyte built-in gigabit LAN, and I'm getting 180 Mbps data transfer on average. 😀
 
Copying large files from XP to Vista SP1 over a gigabit link, I'm getting around 100 MB/s peak with 80+ average.

Notes:
XP is using that horrible 1.5 TB Seagate
Vista is using an almost-as-horrible 640 GB Seagate
Both using onboard LAN (Marvell and nVidia)
Using a cheap Asus gigabit switch

 



Can you post screenshots of this using LAN Speed Test (free, available here: http://download.cnet.com/LAN-Speed-Test/3000-2085_4-10908738.html) with an 8 GB transfer? I doubt you get these speeds using the TCP/IP protocol! Various studies have been done (http://www.yale.edu/fastcamac/gigabit/gbit_winnt.html, even though these guys state Mbits instead of Mbytes, which I think is a typo).
I have observed this transfer rate limit in many network environments that use a Windows OS.
I think some of the previous posts may have typos as well... MB = megabytes and Mb = megabits!!!
 

 
Go with 2 x 1 TB+ Seagate 7200.12 HDDs in RAID 0, or better, 4 x in RAID 10 (for data protection), with a good onboard RAID controller (both of mine have Intel Matrix Storage controllers). Two of the new 7200.12s can easily max out a gigabit LAN (1000 Mbit/s = 125 MB/s). In the real world, seeing 100 MB/s is good.

I routinely hit 90-105 MB/s between my gamer and my storage system. However, there's no router or switch in the way to slow anything down; I ran a crossover cable between the two 10/100/1000 onboard Intel LAN controllers. Most likely your hub/router/switch will be the slow point; it's worth a few extra dollars for a good gigabit switch (not just a hub). A switch directs traffic to avoid packet collisions and has the bonus of allowing full-duplex connections.

The gamer has 4 x 750 GB 7200.12 Seagate HDDs in RAID 10 on an Intel P55 mainboard (no southbridge, but the RAID controller is basically equal to an ICH10R).
The storage box has 4 x 500 GB 7200.11 Seagate HDDs in RAID 10 on an Intel G965 mainboard with the ICH8DH RAID controller. Four HDDs cost a few dollars extra, but you get RAID 0 speed plus RAID data protection.

Actually, on my gamer I split the array into two volumes: the boot drive is ~400 GB RAID 0 and the D drive is ~1.3 TB RAID 10.

The 4x RAID 0 is very nice: tested at 280 MB/s (best-case reads) and cost less than $400. To go faster, expect to spend a lot more on SSDs or SAS controllers. Between the two volumes, big file transfers hit 150 MB/s (as reported by the TeraCopy utility).

Both have plenty of memory (8 GB for Win7, 2 GB with Linux), were recently built/rebuilt, and are about 25% full.
 
Do any of you know the difference between B and b?
Some of you use both of them incorrectly IN THE SAME SENTENCE.
Is that stupid or what?
 
Theoretically you can run 1000 Mbps through your home, or up to the capacity of the switch you are using. For example, if the total switching capacity of a switch is 1000 Mbps and it has 24 1000 Mbps ports, then between any two computers, with no other computers sending or receiving data, you will be able to transfer 500 Mbps -- which then throws the burden of actually pushing a file over the network back onto the computers.

In a general situation, a computer (an Intel Xeon server with 32 GB of RAM) will only transfer ~5-7 GB/s (bytes, not bits) between virtual instances of an operating system, as tested with a small program called iperf. (That's a transfer over internal hardware, which far exceeds the capacity of a 10/100/1000 Mbps NIC, bringing this back into the realm of possibility. Such a computer is designed to host many operating environments, but they are still fairly inexpensive as equipment goes.)

In all practicality, with your Cat5e wiring I would recommend making sure all of your computers support 1000 Mbps, and grabbing a DGS-2208 (stock 16 Gbps forwarding capacity over 8 ports) or something similar. If you can get the matching D-Link NICs, it will not make much of a difference, but it would give you some advantage on the green technology side.

The way to make your computer run much more quickly is to install more RAM than the size of the files you typically open and use, including the OS, and to make sure you have a hard drive with a high transfer speed. (The processor can also limit speed; judge it by all of its aspects, from GT/s to GHz to bus speeds, and not just the GHz rating, or you may get the feeling that you've been ripped off.)
With processors, usually the higher the number the better, except for the size of the process technology used to produce the processor, where smaller is better. Don't ever forget to cool your processor.

The RPM of the hard drive doesn't matter as much as how much data the drive can read and write per second and get back to your motherboard.
(Though a lower-RPM hard drive will definitely never read or write data as quickly as one with a higher RPM.)
Hard disk drives are inefficient compared to solid state drives, which can fetch and use data almost instantaneously and can be combined in various forms of RAID, including 1+0, for the highest reliability and speed (depending on the controller; similar to RAID-DP from NetApp): http://www.hardwaresecrets.com/news/4392
SATA 600 is the best for the price. Fibre Channel gets expensive, but will blow SATA 600 away.

Every AMD processor I've ever used has been in a computer that burnt out after a few years, usually two to five. Every Intel processor I've ever used has worked without any problems, except for this 10-year-old box by my feet; I'm willing to bet the processor works perfectly fine and the hard drive is having some issues reading and writing data, but don't take my word for that ^_^ because Windows is not the best OS, and that's what I was running on that system. And as always, I will recommend a Mac running an INTEL processor only, and would never touch an older Macintosh EVER, because they are not good at all!!!

 
Just in case anyone's interested, I was able to pull off 100+ MB/s on a 14 GB file on my gigabit network with the following hardware:

Source:
SuperMicro 1U with 4 x 1 TB 7200 RPM Western Digital SATA drives in RAID 5
Areca Controller
Windows 2008 R2 Bare Metal Hyper-V

Destination:
Dell PowerEdge Server/Workstation
Windows 7
Hitachi 2TB 7200RPM SATA Secondary Hard Drive

Network:
2 x Netgear switches and about 50 feet of Cat5e/6

http://tweetphoto.com/21820752
 