Virtualization cluster server or possibly render farm at home?

bda714

ADDED EDIT: After reading an article just now and thinking on it some more, I believe the proper name for what I ideally want is a "Beowulf cluster". If there are any experts on this subject, or anyone who could lend a hand with hardware/software configuration, PLEASE let me know. I am very anxious to free up some space in my cramped quarters by consolidating multiple systems into one.

Full details now:


First of all, I am sorry if this is not the right forum for this type of question, but I did not find a better-fitting category.

I am researching how to build a server at home that will serve as a cloud computing host. Basically, I want to take all the hardware I have accumulated over the past 4 years, put it in a server rack, place that rack in my garage or somewhere nearby, and run about 5 fully built systems on it. I would then like to access them through a browser, ideally even from a Chromebook, but at least from a low-end laptop.

If there is a way to power the systems on remotely so that I don't have to keep them all running 24/7, that would be great. I know my ASUS Sabertooth Z87 has that feature; it uses a Wake-on-LAN magic packet. Along with that board I have two ASUS Crosshair V Formula-Zs with 8-core processors at 4.7 GHz and 5.0 GHz, an i5 at 3.4 GHz in the Sabertooth, an HP with an i5 at 3.2 GHz, and I may put together one more system, probably on an Intel socket 1155, since I have a brand-new mini-ATX board just sitting around. I use my computers mostly for 3D modeling and CAD, but I also code and tinker with Kali and Debian.
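From what I've read, the magic packet itself is simple: six 0xFF bytes followed by the target NIC's MAC address repeated 16 times, broadcast over UDP. A minimal Python sketch, with a placeholder MAC (WoL also has to be enabled in each board's BIOS/UEFI):

```python
# Minimal Wake-on-LAN sender. The MAC below is a placeholder.
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    mac_hex = mac.replace(":", "").replace("-", "")
    payload = bytes.fromhex("FF" * 6 + mac_hex * 16)  # 6 + 96 = 102 bytes
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

wake("AA:BB:CC:DD:EE:FF")  # placeholder MAC for the Sabertooth Z87's NIC
```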

I have so many questions, but I don't expect anyone to take the time to answer them all, and I am a firm believer in due diligence. The reason I am posting is that I want to make sure I spend my time researching the right things. Is Oracle's VirtualBox the way to go, or Hyper-V, or something else? Also, what is the best way to structure this: would I pool all the processors' resources, like a render farm, to run one system hosting a bunch of virtual machines, or would it be better to run an actual OS on each system?

If anyone could point me in the right direction for where to do my learning on this, I would greatly appreciate it and pass on the good deed as well. Or if anyone else is interested in learning about this or starting a similar project and wants to link up to work together, that would be awesome. Thank you very much to anyone who takes some time to answer or read. Oh yeah, I also have two GTX 760s, about 8-10 TB in HDDs, 2 TB in SSDs, and 3 PSUs.

THANK YOU!

P.S. Just a quick yes or no: is running a remote gaming system feasible, or would the extra latency kill it? Even if I put a standalone system on the rack, meaning a single system with a single OS installed, not part of a render farm, but still accessed remotely, would there be added lag?

 

kanewolf

Moderator
If you have never done cluster programming or cluster control, I would say spend a couple hundred dollars and buy 6 to 8 Raspberry Pi 2 Model B boards. That will give you enough physical hosts to work out the logistics of controlling them without spending a lot of money. It's much more of a science experiment than application productivity, but you could then take the mini-cluster and use it as a Minecraft server.

Virtualization isn't really useful on the Pi (not enough memory), but having 6 or 8 of them is a lot like virtualization.
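As a taste of what controlling them involves, here is a minimal Python sketch that runs the same command on every board over SSH. The hostnames are placeholders, and passwordless SSH (keys) is assumed to be set up already:

```python
# Run one command on every Pi in parallel over SSH.
import subprocess

NODES = [f"pi{n}.local" for n in range(1, 7)]  # placeholder hostnames

def run_on_all(command: str) -> None:
    procs = [subprocess.Popen(["ssh", node, command],
                              stdout=subprocess.PIPE, text=True)
             for node in NODES]                # launch all at once
    for node, proc in zip(NODES, procs):
        out, _ = proc.communicate()            # wait and collect output
        print(f"{node}: {out.strip()}")

run_on_all("uptime")
```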
 
bda714,

I should say first that I am by no means an expert on Beowulf clusters, nor even on conventional servers.

A few months ago, a friend who was running a series of flight dynamics problems in MATLAB asked me about appropriate hardware to run his custom, multi-threaded algorithms on wind tunnel data. I did look into this, including a Beowulf cluster, but my recommended system was so expensive (more than $20,000) that he decided to hand the problems to an aerospace company.

My first comment is to try to understand the nature of what you'd like to accomplish, that is, the application of the proposed system. A Beowulf cluster's strength lies in high-calculation-density tasks that can be multi-threaded across all available cores/threads and/or coprocessors such as NVIDIA Tesla or Xeon Phi. The only conventional applications I know of that typically benefit from heavy multithreading are CPU rendering, some video editing, and some gas flow, thermal, and structural analysis programs.
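To make that concrete, the shape of work a cluster rewards is always the same expensive, independent calculation applied to many inputs, so throughput scales with cores (or nodes). A toy Python sketch, where simulate() is invented busywork standing in for a real job (a frame, a load case, and so on):

```python
# The shape of cluster-friendly work: one pure, CPU-bound function
# applied independently to many inputs.
from multiprocessing import Pool

def simulate(case: int) -> float:
    total = 0.0
    for i in range(1, 500_000):        # busywork standing in for real math
        total += ((case + i) % 97) ** 0.5
    return total

if __name__ == "__main__":
    with Pool() as pool:               # one worker process per core by default
        results = pool.map(simulate, range(32))
    print(f"{len(results)} cases done, checksum {sum(results):.1f}")
```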

So, question one is what uses the system will be put to.

I understand, too, that apart from needing a supercomputer, you might simply want to put all your gear to use in a parallel system. In that case, I'd recommend looking into PVM (Parallel Virtual Machine), which is still open source, free, highly scalable, and portable. You install PVM on every system (they can be quite different hardware), and it sets up the links between processors, memory, and files over a LAN. I'm not certain how graphics processing and output are utilized and linked in these multi-node setups, but it can be configured. I think of Beowulf clusters as being for high-density computational analysis and databases, not for visualization (3D CAD, animation, etc.).
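PVM itself is a C/Fortran library, but the master/worker pattern it formalizes, a head node handing work units to workers on other machines and collecting results, can be sketched with nothing beyond the Python standard library. The hostname, port, and authkey below are placeholders, and a real setup would handle multiple workers concurrently:

```python
# master.py - runs on the head node. A sketch of the master/worker
# pattern that PVM formalizes. Handles one worker connection at a time.
from multiprocessing.connection import Listener

tasks = list(range(1, 21))            # stand-in work units (frame numbers, say)
results = {}

with Listener(("0.0.0.0", 6000), authkey=b"cluster-demo") as server:
    while tasks:
        with server.accept() as worker:    # blocks until a worker connects
            while tasks:
                unit = tasks.pop()
                worker.send(unit)
                results[unit] = worker.recv()
print(f"collected {len(results)} results")
```

```python
# worker.py - runs on each node: receive a unit, compute, send back.
from multiprocessing.connection import Client

def do_work(n: int) -> int:
    return n * n                       # placeholder for the real job

with Client(("head-node", 6000), authkey=b"cluster-demo") as conn:
    try:
        while True:
            conn.send(do_work(conn.recv()))
    except EOFError:                   # master closed the pipe: no more work
        pass
```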

If your goal, though, is efficiency in cost, configuration time, and use, then as you may be aware, it's possible to configure a single PC with near-supercomputer capability for reasonable sums: very fast subsystems, a lot of fast-access storage, and so on. For example, a system with dual 6- or 8-core Xeons, a lot of RAM, a fast Quadro or FirePro plus a coprocessor, and a fast RAID controller with 8 or 10 SAS drives in a performance/redundant RAID. Used Tesla coprocessors can actually be very inexpensive, as so few PC users can use their capabilities.

This is to suggest that, depending on the scale and complexity of your work, you may be better off with a single, high-capability system. Simpler is always easier to configure, control, and maintain. In that case, I'd say sell all the gear and upgrade a used dual-CPU workstation, or build from a Supermicro SuperWorkstation. This can be surprisingly cheap to do:

Purchased for $171:

Dell Precision T5500 (2011) original: Xeon E5620 quad-core @ 2.4/2.6GHz > 6GB DDR3 ECC Reg 1066 > Quadro FX 580 (512MB) > Dell PERC 6/i SAS/SATA controller > Seagate Cheetah 15K 146GB > Linksys WMP600N WiFi > Windows 7 Professional 64-bit
[ Passmark system rating = 1479 / CPU = 4067 / 2D= 520 / 3D= 311 / Mem= 1473 / Disk= 1208]

Purchased for $320:

Xeon X5680
24GB DDR3 ECC Reg 1333

Had around:

Quadro 4000 (2GB)
Samsung 840 250GB
WD RE4 1TB

And this made:

Dell Precision T5500 > Xeon X5680 six-core @ 3.33/3.6GHz > 24GB DDR3 ECC 1333 > Quadro 4000 (2GB) > Samsung 840 250GB / WD RE4 Enterprise 1TB > M-Audio 192 sound card > Linksys WMP600N PCI WiFi > Windows 7 Professional 64 > HP 2711x (1920 x 1080)
[ Passmark system rating = 3339 / CPU = 9347 / 2D= 684 / 3D= 2030 / Mem= 1871 / Disk= 2234]

I also recently bought:

PERC H310 6Gb/s SAS/SATA RAID controller
2X Seagate ES.3 Constellation Enterprise 1TB (SATA III, 128MB cache)

and to which I can add:

a Dell T5500 CPU/memory/fan riser, a second X5680, and another 12GB of memory, for about $400.

This would then be, for about $1,400 total, not a supercomputer, but a reasonably high-capability, fast 12-core/24-thread system, and an excellent rendering engine.

Interesting project!

Cheers,

BambiBoom

HP Z420 (2015) > Xeon E5-1660 v2 six-core @ 3.7/4.0GHz > 16GB DDR3 ECC 1866 RAM > Quadro K2200 (4GB) > Intel 730 480GB > Western Digital Black WD1003FZEX 1TB > M-Audio 192 sound card > Logitech z2300 > Linksys AE3000 USB WiFi > 2X Dell Ultrasharp U2715H 2560 x 1440 > Windows 7 Professional 64
[ Passmark Rating = 4918 > CPU= 13941 / 2D= 823 / 3D=3464 / Mem= 2669 / Disk= 4764]
 

bda714

Thank you both for your advice. Regarding your response, BambiBoom, maybe I can clarify a little what I am looking to do. I just want to configure one host OS that will have 5-6 VMs running on it, which I can access from anywhere with a remote desktop, not just from the local network. That being said, I am hoping to use all the hardware I already have, because my budget is nonexistent; what I will actually be spending on is the server rack and a few 2U ATX chassis. As far as selling what I have to purchase the items you spoke of, if I could do that without downtime I would, but I don't know of a way that would be possible. Please advise, and thanks again.
 


bda714,

Thanks, that clarifies the situation.

It appears to me that you're more along the lines of combining a series of systems as nodes on a more or less conventional LAN than configuring a parallel computer that, in effect, chains together a heterogeneous set of CPU cores. This makes sense too, as a Beowulf cluster is a way to set a lot of threads to work on deeply multi-threaded, custom algorithmic scientific problems, one at a time. That is, I would think of your proposed system as individual, connected systems with central data storage.

In this kind of LAN, if you're running conventional programs in Windows, you could configure one system as a host server, possibly on MS Server, and then decide on the level of nested VMs within each node. I'd think of each node as task-specific: one for visualization work like 3D modeling and animation, another for programming/compiling, one as a database, and so on. Of course, you can run various Windows, Linux, UNIX, and Mac-like VMs inside each node. I mention this node-specialization idea because it makes all these systems a bit simpler to configure, access, and maintain; otherwise it will be massively complex.
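Since VirtualBox came up earlier in the thread, here is a rough sketch of how provisioning one node's VMs headlessly might look, with the VRDE remote display enabled so each VM is reachable over RDP from elsewhere. The VM names, memory sizes, and ports are all invented, and VRDE requires the VirtualBox Extension Pack on the host:

```python
# Hypothetical example: create several headless VirtualBox VMs, each with
# VRDE (RDP) enabled on its own port. Names/sizes/ports are illustrative.
import subprocess

def vbox(*args: str) -> None:
    subprocess.run(["VBoxManage", *args], check=True)

for i, name in enumerate(["cad", "coding", "kali", "database", "spare"]):
    vbox("createvm", "--name", name, "--ostype", "Debian_64", "--register")
    vbox("modifyvm", name, "--memory", "4096", "--cpus", "2",
         "--vrde", "on", "--vrdeport", str(5001 + i))

vbox("startvm", "cad", "--type", "headless")  # boot one VM with no display
```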

Some random notes for this kind of system:

1. Consider the stability of each node carefully. This applies especially to nodes with overclocked CPUs or RAM, and to the relative speed and type of drives, cooling, and the quality and condition of power supplies. Of course, the nodes may be quite different, but the closer they are to being similar, the more stable the network. You might look into running the system off a stabilized power conditioner. I buy used 11A or 12A Powervar and ONEAC isolation-transformer conditioners of the kind used on hospital equipment.

2. The central data storage server will need a good RAID setup, and I recommend looking for a recent LSI controller. I recently bought two of these: an HP (LSI) 9212-4i for an HP Z420, and a NOS Dell PERC H310 that will give the Precision T5500 a 6Gb/s disk subsystem. As you know, in RAID the drives should ideally be identical. I'm receiving, probably tomorrow, a few Seagate Constellation ES.3 SATA 1TB drives, which are enterprise grade, have 128MB of cache and self-encryption, and are rated at 1.4M hours MTBF. The Dell Precision 390 is going to get a PERC 6/i with a 146GB 15K SAS OS/programs drive and a 300GB 15K SAS data drive. That'll perk up the old (2006) girl a bit!

3. Keep in mind from the beginning how you will configure input and output, including Web access, scanners, and printers.

4. Organize a really clear overall file structure, as every node and VM will be running files specific to different OSes, programs, workspaces, security levels, printer setups, and so on; see the sketch after this list for one possible convention. On PCs I use a lot of partitions, but in a server configuration it may be folders.

5. If you're remounting the systems into 2U rack cases, which are designed for huge airflow, I imagine the system will be quite noisy, and it may need to run 24/7.
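As a trivial sketch of point 4, one way to pin a consistent layout down in code. All folder names and the mount point are invented; the point is only that every node and VM sees the same predictable tree:

```python
# Create the same top-level layout on every node/VM so paths stay
# predictable across the whole cluster. Names are illustrative.
from pathlib import Path

ROOT = Path("/srv/cluster")            # hypothetical shared mount point
LAYOUT = ["os-images", "projects/cad", "projects/code",
          "shared-data", "backups"]

for sub in LAYOUT:
    (ROOT / sub).mkdir(parents=True, exist_ok=True)
print(f"created {len(LAYOUT)} folders under {ROOT}")
```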

An interesting and ambitious project, full of potential both in use and in systems education.

Cheers,

BambiBoom