Sharing GPU and CPU computing power externally...

liquidsnake718

Distinguished
Jul 8, 2009
1,379
0
19,310
Figuratively, the PS3s used by the military as well as by Stanford for Folding@home somehow link their GPU power via CUDA to aid in research. Could we also link up our PCs for better processing power via cloud computing, or "PC to PC" SLI, lol? I mean, wouldn't this be the next step? Instead of having dual CPUs onboard a motherboard like Xeon server boards, link PCs externally and use 4-6 GPUs and 2 CPUs? Even memory (RAM).

This is just an interesting theory, though, and I was wondering what your thoughts on it would be.
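
To make the idea concrete, here is a minimal Python sketch of the Folding@home-style model being described: a coordinator hands independent work units to workers over TCP, and each machine crunches its unit locally and sends back only the small result. The port, the coordinator/worker names, and the stand-in computation are all made up for illustration; real distributed-computing projects use far more robust protocols.

```python
# Hypothetical sketch of work-unit distribution (not any real client's protocol):
# a coordinator hands out independent units over TCP, workers compute locally
# and return only the result. This only helps when the units are independent.
import json
import socket
import threading

WORK_UNITS = [{"id": i, "payload": list(range(i, i + 1000))} for i in range(8)]
RESULTS = {}

def recv_all(conn):
    """Read until the peer closes its sending side."""
    chunks = []
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            return b"".join(chunks)
        chunks.append(chunk)

def coordinator(srv):
    """Hand out one work unit per connection and collect the result."""
    units = iter(WORK_UNITS)
    while len(RESULTS) < len(WORK_UNITS):
        conn, _ = srv.accept()
        with conn:
            unit = next(units)
            conn.sendall(json.dumps(unit).encode())
            conn.shutdown(socket.SHUT_WR)            # signal "unit fully sent"
            result = json.loads(recv_all(conn))
            RESULTS[result["id"]] = result["value"]

def worker(host="127.0.0.1", port=5000):
    """Fetch a unit, do the (stand-in) number crunching, send the answer back."""
    with socket.create_connection((host, port)) as conn:
        unit = json.loads(recv_all(conn))
        value = sum(x * x for x in unit["payload"])  # stand-in for real GPU/CPU work
        conn.sendall(json.dumps({"id": unit["id"], "value": value}).encode())

if __name__ == "__main__":
    srv = socket.create_server(("127.0.0.1", 5000))
    t = threading.Thread(target=coordinator, args=(srv,), daemon=True)
    t.start()
    for _ in WORK_UNITS:
        worker()   # in the real model, each worker runs on a different machine
    t.join()
    print(RESULTS)
```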
 

liquidsnake718

Distinguished
Jul 8, 2009
1,379
0
19,310

Yeah, I mentioned that, hehe. I meant Folding, not Folded. But does it truly boost GPU power? Can it work for Crysis? Let's say someone is using a dinky 9600GT and you have a setup with 5850s in CrossFire and a better GPU; how would this work, and would it work at all?
 

amnotanoobie

Distinguished
Aug 27, 2006
1,493
0
19,360


For games, it wouldn't.

Reason 1: Latency
Reason 2: Latency
Reason 3: Who'd buy it anyway.

For massive computations where a millisecond of delay is acceptable, this could be done (though with a lot of coding). There is a heck of a lot of latency when you link PCs via LAN. By the time the data arrives to be rendered, the player has already left that scene and the data is useless.

Edit:
- Ethernet latency: 350 microseconds (according to this: http://feedblog.org/2006/11/26/ethernet-latency-the-hidden-performance-killer/)
- InfiniBand latency: 2.5 microseconds (http://en.wikipedia.org/wiki/InfiniBand)
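
To put those figures in perspective, here is a quick back-of-the-envelope Python sketch of how much of a frame a single network round trip would eat. The 60 fps frame budget and the one-round-trip model are assumptions for illustration; the latencies are just the ones cited above.

```python
# Rough frame-budget math using the latency figures cited above.
FRAME_BUDGET_US = 1_000_000 / 60          # ~16,667 microseconds per frame at 60 fps

LINK_LATENCY_US = {
    "Ethernet (cited figure)": 350.0,     # one-way, per the feedblog.org article
    "InfiniBand (cited figure)": 2.5,     # one-way, per Wikipedia
}

for link, one_way in LINK_LATENCY_US.items():
    round_trip = 2 * one_way              # request out, computed/rendered data back
    trips_per_frame = FRAME_BUDGET_US / round_trip
    share = 100 * round_trip / FRAME_BUDGET_US
    print(f"{link}: {round_trip:.1f} us round trip, "
          f"~{trips_per_frame:.0f} trips per frame, "
          f"{share:.2f}% of a 60 fps frame each")
```

Even at the cited Ethernet figure, each round trip burns a noticeable slice of the frame, and that is before counting the bandwidth needed to actually move frame data or any back-and-forth synchronization between GPUs.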
 

Kewlx25

Distinguished
"Ethernet latency: 350"

Ethernet isn't a flat 350µs latency. They must be comparing a specific datagram/frame/packet size. I get a 0-1µs ping to my wife's computer over my gigabit switch.

In 1µs at gigabit speeds, you would've transferred 1000 bits, or 125 bytes. Not very useful, but large enough for a 64-byte ping packet.
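
As a sanity check on that arithmetic, a minimal Python sketch (the 350 µs and 40 Gb/s InfiniBand rows are just the figures mentioned elsewhere in the thread, added for comparison):

```python
# Bytes that fit "on the wire" during a given time window:
# bits = rate (bits/s) * window (s), then divide by 8 for bytes.
def bytes_in_window(link_gbps, window_us):
    bits = link_gbps * 1e9 * window_us * 1e-6
    return bits / 8

print(bytes_in_window(1, 1))     # 1 Gb/s for 1 us    -> 125.0 bytes (the figure above)
print(bytes_in_window(1, 350))   # 1 Gb/s for 350 us  -> 43750.0 bytes
print(bytes_in_window(40, 2.5))  # 40 Gb/s for 2.5 us -> 12500.0 bytes
```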

InfiniBand is awesome, though. It's actually cheaper than 10 Gigabit networking, but it does have a VERY short limit on cable length. For ~$1,500 you can get a new IB switch that could link three computers together at 40 Gb/s, but with a 12-meter max cable length, and the cables are expensive. Each IB card costs about $1,200, but a decent 10 Gb NIC costs about the same. Although, a fiber 10 Gb NIC has a range roughly 100x that of IB.


But yes, what amno said. Locality of the data is very important, since higher latency = slower computing. A multi-socket computer will have latency measured in nanoseconds to microseconds, while a "linked" computer would be in the microsecond-to-millisecond range.

Also, with heavy computing, the bottleneck really is how fast your CPUs are. It is better to pack more CPUs into the same computer, but after a while the complexity of a large computer starts to become more of a problem than a blessing. It's probably best to have dual-socket computers all networked together via a very high-speed LAN.
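
For anyone who wants to see the locality point first-hand, here is a rough Python sketch that times an in-process call against a TCP round trip on loopback. Loopback never touches a real network, so an actual LAN hop will only be slower; the port and iteration count are arbitrary choices for the illustration.

```python
# Crude comparison: an in-process call vs. a TCP round trip over loopback.
import socket
import threading
import time

def echo_server(srv):
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

def local_echo(data):
    return data

srv = socket.create_server(("127.0.0.1", 5001))
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

N = 10_000
payload = b"x" * 64

start = time.perf_counter()
for _ in range(N):
    local_echo(payload)
local_us = (time.perf_counter() - start) / N * 1e6

with socket.create_connection(("127.0.0.1", 5001)) as c:
    c.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # keep Nagle from skewing the timing
    start = time.perf_counter()
    for _ in range(N):
        c.sendall(payload)
        c.recv(64)
    net_us = (time.perf_counter() - start) / N * 1e6

print(f"in-process call: ~{local_us:.3f} us   loopback TCP round trip: ~{net_us:.1f} us")
```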
 
Solution