No money for 10 Gbps SFP NICs, so what's faster than 1 Gbps and affordable between two computers?

Hi there guys,

Lately I've invested a LOT into my computer, and now I've built a server, first on a legal copy of Windows Server 2012 R2, now running Windows 8.1.

I'm waiting for some refurbished Mellanox 10 Gbps SFP adapter(s), but I'm not sure they will work.

In general I want to use the server as extended storage, with RAID 0+1 reaching speeds of 360 MB/s.

A 1 Gbps network on a direct connection manages barely 110-115 MB/s, which is roughly a third of what that RAID array can do, so I'm losing some serious firepower here. Since I work mainly with large .psd, .ai, and .avi/.mpeg/.mov files, I really need that bandwidth.
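Just to sanity-check those numbers, here's a rough back-of-the-envelope calculation (the ~94% efficiency figure is only an assumption for Ethernet plus TCP/IP overhead):

```python
# Quick check that 110-115 MB/s is simply gigabit line rate, not a misconfiguration.
link_bits_per_s = 1_000_000_000            # 1 Gbps link
raw_mb_per_s = link_bits_per_s / 8 / 1e6   # 125 MB/s theoretical ceiling
efficiency = 0.94                          # assumed Ethernet + TCP/IP overhead
usable = raw_mb_per_s * efficiency
print(usable)                              # ~117 MB/s, matching the 110-115 MB/s seen
print(360 / usable)                        # ~3x gap to the 360 MB/s RAID array
```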

Is there some other way to get a faster TCP/IP connection between two Windows 8.1 Pro machines?

Thank You in advance!

Regards

uplink
 

kyllien

Are you sure the network is your bottleneck? There are many solutions, but in general the faster you go above 1 Gbps, the higher the cost. You also end up buying commercial equipment, since consumer equipment doesn't get that fast.
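One way to check is a raw single-stream TCP test between the two machines, with the disks taken out of the picture. A minimal sketch (the port number is arbitrary, and this is not a substitute for a proper benchmarking tool):

```python
# Minimal single-stream TCP throughput test.
# Run "python nettest.py server" on one machine, then
# "python nettest.py client <server-ip>" on the other.
import socket, sys, time

PORT = 5201           # arbitrary test port
CHUNK = 1 << 20       # 1 MiB buffer
DURATION = 10         # seconds to send

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        total, start = 0, time.time()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
        elapsed = time.time() - start
        print(f"received {total / 1e6:.0f} MB at {total / elapsed / 1e6:.1f} MB/s")

def client(host):
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as sock:
        total, start = 0, time.time()
        while time.time() - start < DURATION:
            sock.sendall(payload)
            total += len(payload)
    print(f"sent {total / 1e6:.0f} MB at roughly {total / DURATION / 1e6:.1f} MB/s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

If that lands around 110-118 MB/s, the wire is saturated and the network really is the limit; if it comes in much lower, something else is wrong first.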
 
Port bonding will buy you nothing if your goal is to increase the speed of a single session. Port bonding does not load-balance per packet; it sends all the data for a single session over the same path, so you will still be limited to 1 Gbps. Even then it does not load-balance sessions very well. Port bonding is designed for a large number of machines accessing a server; it does not work well when you only have a small number of hosts.
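The reason is that bonded links pick a physical port by hashing the flow's addresses and ports, so every packet of one file copy takes the same wire. A toy illustration (the addresses and the hash choice are just examples, not how any particular switch or OS does it):

```python
# Toy model of LACP-style link selection: one flow always hashes to one link.
import hashlib

def pick_link(src_ip, dst_ip, src_port, dst_port, num_links):
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_links

# A single SMB session between workstation and server (example addresses):
for _ in range(5):
    print(pick_link("192.168.1.10", "192.168.1.20", 49500, 445, 2))
# Prints the same link index every time, so that one transfer never sees
# more than one link's worth of bandwidth.
```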

800-900 Mbit/s is about the best you will see with 1 Gbps Ethernet. There is quite a bit of overhead in the data packets and the file-sharing applications.

If you do not want to use Ethernet then, as mentioned, Fibre Channel is a good option, since it is designed for data networks.

Still, even if you were to use 10 Gb cards, there are huge bottlenecks all over the place. Go read the discussions about why there are only so many PCIe lanes available for video cards. A 10 Gb card must go into one of the slots you would put a video card in.
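For a rough sense of scale, here is the lane math (assuming PCIe 3.0 at 8 GT/s with 128b/130b encoding; real cards also vary in how many lanes they expect):

```python
# Approximate usable bandwidth per PCIe 3.0 lane vs. a 10 GbE port.
per_lane_mb_s = 8e9 * (128 / 130) / 8 / 1e6   # ~985 MB/s per lane
ten_gbe_mb_s = 10e9 / 8 / 1e6                 # 1250 MB/s of raw 10 GbE
for lanes in (1, 4, 8, 16):
    print(f"x{lanes}: {per_lane_mb_s * lanes:.0f} MB/s")
# Even an x4 link (~3.9 GB/s) can feed a 10 GbE port; the squeeze is that
# the lanes usually come out of the same pool the video card wants.
```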

These internal bottlenecks tend to be the reason you use Xeon processors in servers. They are designed to allow for things you generally do not need in a desktop workstation type of machine.
 
Hi there guys,

First of all, thank You for Your fast reactions!

Well, in general I built the server with a low-TDP i3 CPU, with enough power to run 1600 MB/s on an M.2 PCIe SSD so far. The other machine runs a delidded 4790K. So the buses shouldn't be a bottleneck.

I have PCIe 3.0 x16 on both motherboards.

The only question is whether the cheap cards will support TCP/IP. Any increase in speed will be great.

P.S.: The server writes around 220 MB/s to a USB stick. I know it's a different bus, but still, why should it bottleneck on PCIe 3.0 x16? On the server it has direct access to the CPU, and from there to the M.2 PCIe Kingston Predator SSD. That should do the trick, right?
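If it helps, a crude way to measure what the array actually sustains locally is a plain sequential-write test (the path below is a placeholder for the volume being tested, and OS caching can flatter the result, so treat it as an upper bound):

```python
# Crude sequential-write check: writes 4 GiB of zeros and reports MB/s.
import os, time

TEST_FILE = r"D:\bench.tmp"    # placeholder: a file on the array under test
CHUNK = 8 * 1024 * 1024        # 8 MiB writes
TOTAL = 4 * 1024 ** 3          # 4 GiB total

buf = b"\0" * CHUNK
start = time.time()
with open(TEST_FILE, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += len(buf)
    f.flush()
    os.fsync(f.fileno())       # make sure the data actually hit the disk
elapsed = time.time() - start
print(f"{written / elapsed / 1e6:.0f} MB/s sequential write")
os.remove(TEST_FILE)
```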
 
Ethernet is Ethernet; it really doesn't have a clue how fast the interfaces are. You generally must use jumbo frames to get rid of some of the overhead, but the downside is that you must then use that network as a dedicated data network, since you cannot pass jumbo frames through a router going to the internet.
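To put rough numbers on what jumbo frames buy you (assuming 38 bytes of Ethernet framing, preamble and inter-frame gap plus 40 bytes of TCP/IPv4 headers per packet):

```python
# Payload efficiency of a gigabit link at standard vs. jumbo MTU.
def efficiency(mtu):
    payload = mtu - 40        # TCP/IPv4 headers carried inside the MTU
    wire = mtu + 38           # Ethernet header, FCS, preamble, inter-frame gap
    return payload / wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {efficiency(mtu) * 100:.1f}% "
          f"-> {125 * efficiency(mtu):.1f} MB/s on gigabit")
# ~94.9% vs ~99.1%: jumbo frames claw back a few MB/s, nothing dramatic.
```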

If you can spare the PCIe slots for the 10 Gb card, there should be no bottleneck. A pair of video cards running in SLI mode can push close to that rate with no problems. You would of course not want to run a 10 Gb card and two high-end video cards in the same machine.

 
My rig info is a bit outdated; I already have a different one. I run an EVGA GTX 980 Ti at the moment and will switch to a Strix ASAP. I may go SLI in the future; I'll see about the consequences.

In general, the only thing I worry about is whether it will "work" at all. I can also rebuild the RAID array into 2x RAID 1, where I'll have 150-160 MB/s both ways, and that should suffice. But 90-115 MB/s is just not enough :/...
 
It makes a lot of sense to have a server with fast I/O, since it is supposed to serve multiple clients. But even if you provide a 3 Gbps channel between the server and your PC, if your PC cannot store data that fast, you're wasting money and effort just for bragging rights.
 
Alabalcho - really? This is a two-drive RAID 1; I'm planning on RAID 0+1.

On the left is how the RAID 1 made out of Red Pros performs over the LAN, through the cheap onboard NIC and a router. On the right are the local read/write speeds.

So bragging, You say?

Speeds will be around 350 MB/s in RAID 0+1. And You call that bragging?

[Attachment: lan-vs-local.png]
 
Alabalcho - .psd, .ai, .cdr, .max, .avi, .mpeg, .mkv... in general, I'm a technical designer. I work 90% of the time in Adobe Bridge and I need those previews really fast [there are ~1 million graphical files in my database at the moment]. Also, I work mostly with 300-700 MB .psd files, sometimes up to 4-5 GB per .psd. HDD/SSD speed is vital for me. That's why I use RAID 0 made out of SSD drives. It's not because I can, it's because it saves me TONS of time while working with a superfast scratch disk. It doesn't matter whether it's for linked embeds, or textures, or just unpacking a .psd file with all its smart objects [sometimes I have 6-8k pics within one .psd, and tens of such files].

I create large-scale prints [billboards, etc., 96-150 dpi], 600 dpi prints for magazines that require that dpi, and of course posters, business cards and other print materials. And I also render visualizations in 3D Studio Max/Maya/Cinema 4D and create animations, hence the .mkvs/.avis [uncompressed], image sequences [.tiff/.png], etc.

All I know is that there's a massive difference when I work with RAID 1, RAID 0, or RAID 0+1. I can feel every 10-20 MB/s, mostly when I use actions or automated processes in Photoshop/Bridge. E.g. some of my scripts push my SSDs up to 700-900 MB/s without breaking a sweat.

That's what for.

P.S.: I also use a RAM drive. The 32 gigs I have aren't for bragging. When SSDs aren't enough for thousands of small pictures, a RAM drive can do magic with them. It has superior IOPS.