Render Farm Help.

CombatMuffin

Honorable
Mar 9, 2012
7
0
10,510
0
Hello everybody, yet another render farm thread, I guess.

I am looking to build a small render farm for Architectural visualization and other 3D rendering jobs. I have done my share of reading and research, including articles and forum posts here at Tom's Hardware, but I'd like some advice from you folks before I commit to choosing a build.

First off: The budget is between $10k and $12k USD. I understand that in most cases it is better to have quantity rather than quality (within reason!), so the aim here is to maximize the number of render nodes in that range. I am hoping to reserve some $2k for racks, UPS, switches, and whatever simple additions (such as a basic server) are needed.
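As a quick sanity check on node count, here is the arithmetic with a placeholder per-node cost (the real figure depends entirely on the parts chosen):

```python
# Rough budget arithmetic: how many render nodes fit in the budget?
# The per-node cost below is an assumption for illustration only.
budget_usd = 12_000      # upper end of the budget
reserved_usd = 2_000     # racks, UPS, switches, basic server
cost_per_node = 1_100    # assumed per-node build cost

nodes = (budget_usd - reserved_usd) // cost_per_node
print(f"Approximate node count: {nodes}")  # -> 9 at these prices
```

Small shifts in per-node cost move the count by a whole node, which is why nailing down the parts list first matters.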

Specific questions:

CPU: I am probably aiming for a quad-core Xeon. I've heard quad cores are the middle ground in performance-vs-price ratio; something no more than $450 USD. Newegg has a Xeon E3-1230 at around $250. Am I aiming right?

Motherboard: I was just aiming for whatever supports my processor. Should I absolutely go for something that supports ECC memory, or does the price shoot up unreasonably?

PSU: Since I am aiming this for commercial purposes, it is critical that my farm runs 24/7, but I also don't want a build that sucks half the city's power. What wattage should I be aiming for? 250w? 400w?

Memory: I am aiming for something like 16GB per node, since RAM is quite cheap these days; and if I end up getting an ECC motherboard, then obviously ECC memory. Two questions here: is there a performance difference between DDR3 1333 and DDR3 1600? And what's better: fewer sticks with more memory each, or more sticks with less? I assume fewer sticks is better since it allows for expansion.

The rest of the components would be whatever fits the budget. I'll probably go for small-capacity SSDs for the nodes and high-capacity HDDs for backup storage. I will obviously not purchase any GPUs, to keep the costs down.

I apologize for the long post, but I wanted to be as thorough as possible. As a final note, I am aiming for this render farm to pay for itself within a year, and from there on to expand it with more nodes.

Thank you for your time.
onichikun

Distinguished
Nov 13, 2009
304
0
18,860
39


Before I can give you the best recommendations, what software products are you going to be using? There are actually a few pieces of rendering software that will do better with a GPU accelerator like a Titan rather than just CPUs.

onichikun
Also, are you planning to run a Rocks-based cluster? And how big are your job input files? Having to send data over something like ethernet would be terribly slow if the job files are huge. How long does a job usually take on your current setup?
onichikun

A quad-core Xeon will be a fine choice. You may want to check if any of the programs you use can take advantage of a GPU accelerator, since that may change your render job execution time by an order of magnitude or more.


You should go with something ECC. Although some people don't really think it's worth it, if you are running compute-intensive jobs for weeks/months/years, you will start getting bit errors. ECC memory knocks the error probability down to almost nothing.
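To put a rough number on that intuition, here is a back-of-the-envelope sketch. The per-megabit upset rate below is purely illustrative (real-world rates vary enormously with DRAM generation, density, and environment), but the shape of the math is the point:

```python
import math

# Illustrative soft-error arithmetic for non-ECC RAM.
# FIT = failures per billion device-hours; the rate per megabit
# below is an assumed figure for illustration, not a measurement.
fit_per_mbit = 100
mem_mbit = 16 * 1024 * 8   # 16 GB expressed in megabits
hours = 24 * 365           # one year of 24/7 operation

expected_errors = fit_per_mbit * mem_mbit * hours / 1e9
# Probability of at least one upset, modeling upsets as Poisson events
p_any = 1 - math.exp(-expected_errors)
print(f"Expected upsets/year: {expected_errors:.0f}, P(at least one): {p_any:.3f}")
```

Even if the assumed rate is off by a couple of orders of magnitude, a node grinding 24/7 for a year will almost certainly see at least one flipped bit, which is the case for ECC.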


Depending on what you put in it, maybe max 500W. Just make sure you get an efficient power supply :)


Yes, there is a difference between DDR3 1333 and DDR3 1600 memory. In terms of raw benchmarks it's not much, but over time the small improvements you get from the faster memory add up, so for week- or month-long jobs you will notice a difference.

If your motherboard supports dual/quad-channel memory, taking advantage of that will give you pretty significant performance gains. So you may want to populate the channels accordingly if the motherboard you decide on has it.


EDIT: One other thing: you may want to avoid racks and just get some cheap server cases. Racks and rack mounts are expensive, and you can easily spend an additional 2-5 grand on just that equipment. It would be easier and quieter (especially if this is a home setup) to just get some cheap mid-tower cases with adequate cooling.

Another thing to keep in mind: a lot of rack-mount server chassis come with redundant power supplies that accept only 240V mains, not the auto-switching supplies you are likely familiar with that support both 120V and 240V. So you would have to run a 240V line to your "server room". Just be careful when selecting your chassis.
 

bambiboom

Dignified
Combatmuffin,

With rendering, in general, the key is to have as many cores as possible. There are GPU-based and CPU+GPU rendering applications, but for the most part the mainstream applications will be CPU-based.

My suggestion would be to buy about five of these (the configuration is surprisingly common; I recently saw someone selling ten at once):

http://www.ebay.com/itm/DELL-PRECISION-T7500-2x-XEON-3-33GHZ-QUAD-CORE-CPUS-24GB-MEM-2x-250GB-9800GT-/380682356107?pt=Desktop_PCs&hash=item58a26fc58b

That is a completed sale of a Dell Precision T7500 with dual quad-core Xeon W5590s @ 3.33GHz and 24GB of ECC 1333 RAM for under $1,000. That's 8 cores / 16 threads per system at a very good clock speed, and, importantly, you can be up and running quickly with several systems.

Dell Precisions are ideal for this: they are extremely rugged, they use server-grade hardware built for constant use, and everything is error-correcting.

Add more RAM; remember that with dual CPUs the RAM is divided between the two, and I would have a minimum of 16GB per CPU, though 64GB total would be better if the files are large. Also a network card, and possibly new drives, which don't have to be large. With a render farm the graphics cards are immaterial; I've heard of people buying thrift-store 17" LCDs, setting one on each system, and running off old Quadros like the FX 1700.

I'm assuming you have a content creation system, but if not, you can pair the render-farm sloggers with a very fast system, quite different from the farm, to set up the renderings. That system doesn't need all the cores/threads, but it does need a good graphics card; the fewer the cores, the higher the clock speed can be. For the content creation system, concentrate on clock speed and the GPU to set up the renderings, then send them over the network to the farm. If you need a suggestion for this system, I can provide an idea, using the economical but fast E5-1620 (3.6/3.8GHz) quad core or the E5-1650 V2 (3.5/3.9GHz) six core, which is still only a $650 CPU, on an Intel C602 motherboard. This system would have extensive storage in RAID and be mission control: the network hub that farms jobs out. The GPU would be a Quadro or FirePro depending on your applications (CUDA or OpenGL). Do not be tempted by gaming cards; the drivers are made for an entirely different purpose.

Depending on how many systems you intend to have, by going the used Precision route, you could be in business really quickly and well under your budget.


Cheers,

BambiBoom

Content > HP z420 (2013)> Xeon E5-1620 quad core @ 3.6 / 3.8GHz > 24GB ECC RAM > Firepro V4900 (soon Quadro K4000 or 5000) > Samsung 840 SSD 250GB / Seagate Barracuda 500GB, (soon Seagate constellation ES.3 1TB) > Windows 7 Professional 64 > to be loaded > AutoCad, Revit, Inventor, Maya (2011), Solidworks 2010, Adobe CS4, Corel Technical Design X-5, Sketchup Pro, WordP Office X-5, MS Office

Rendering > Dell Precision T5400 (2009)> 2X Xeon X5460 quad core @3.16GHz > 16 GB ECC 667> Quadro FX 4800 (1.5GB) > WD RE4 / Segt Brcda 500GB > Windows 7 Ultimate 64-bit > purchased for $500 when 2 years old.

onichikun
I somewhat disagree that the CPU is the most important thing in mainstream apps: GPU-accelerated Blender, for example, turns a 10-minute render into less than a minute.

Granted, I haven't used any of the professional studio software, but I would assume it would be accelerated even better using GPGPU.

If he is going to be using software that can take advantage of GPU acceleration, one GPU would be worth more than one more CPU.
bambiboom
onichikun,

As mentioned, anyone contemplating a system for rendering should match the hardware to the software. I can see where a person who has never used these applications would be confused; I use them all the time, and every couple of years I find the situation has changed.

Rendering is the calculation of the position of polygons and/or lines. Yes, GPU acceleration is an advantage, but the basic engine resides in the CPU, with the GPU acting as a parallel co-processor. Renderings can use all the available threads, which is why rendering systems have so many cores, and the number of cores applied is selectable. When I run renderings, I apply 14 of 16 threads to the rendering, leaving one for the OS and one for the application.
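That thread-budgeting habit (use all threads minus a couple reserved for the OS and the application) is easy to script in any batch setup. A minimal Python sketch, where `render_tile` is a made-up stand-in for real per-tile render work:

```python
from multiprocessing import Pool
import os

def render_tile(tile_id):
    # Stand-in for real per-tile render work (illustrative only)
    return tile_id * tile_id

if __name__ == "__main__":
    total = os.cpu_count() or 4
    # Leave one thread for the OS and one for the application,
    # mirroring the 14-of-16 allocation described above.
    workers = max(1, total - 2)
    with Pool(workers) as pool:
        results = pool.map(render_tile, range(32))
    print(f"Rendered {len(results)} tiles on {workers} workers")
```

Most render engines expose the same knob directly in their settings; the script only illustrates the allocation logic.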

Both Adobe and Autodesk applications, the "mainstream" applications in 3D modeling/animation/rendering, are CPU-based but use CUDA acceleration, though Maya is becoming more OpenGL-oriented and Sketchup is OpenGL-accelerated. These applications are almost all hybrids, with few pure GPU rendering engines; I think even Octane is hybrid now.

One trend is to add GPU parallel coprocessing with piles of CUDA cores, like NVIDIA Tesla, which in effect uses the GPU as a many-core CPU, and you'll see that the new Kepler Quadros keep adding CUDA cores. This is going to become more common.

It's a complicated subject, but if you look at any specialty firm producing systems for rendering, like BOXX, you'll see these tend to have either a lot of CPU cores/threads, or Quadro Keplers like the K5000 with 1,500+ CUDA cores, or dedicated coprocessor units like the K2075 or K20.

Cheers,

BambiBoom
onichikun
I am actually a hardware engineer who has done quite a bit of CUDA and OpenCL coding for GPGPU applications, so I know the topic quite well :)

GPUs are SIMD architectures: they can do huge parallel computations on parallel data much faster than a generic x86 pipeline, with or without SSE intrinsics. This makes them ideal for many stages of rendering, including occlusion, lighting calculations, etc.

The real question is the state of the software. While GPUs are amazing in SIMD applications, the accelerated render cores are still lacking, although the push for OpenCL standardization will hopefully resolve that in the near future. If the software supports it, though, a GPU can easily outperform several CPUs on many rendering-related algorithms.
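As a loose illustration of the SIMD idea (in NumPy rather than CUDA, since a vectorized array operation follows the same "one instruction, many data elements" pattern a GPU executes in hardware), here is a Lambert-style lighting term computed for many surface normals at once; all names are illustrative:

```python
import numpy as np

# SIMD-style data parallelism, loosely analogous to GPU execution:
# compute the Lambert lighting term max(N . L, 0) for 100k surface
# normals in one vectorized operation, with no per-vertex Python loop.
rng = np.random.default_rng(0)
normals = rng.standard_normal((100_000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light = np.array([0.0, 0.0, 1.0])   # light direction, unit vector

intensity = np.clip(normals @ light, 0.0, None)
print(intensity.shape)  # (100000,)
```

The win comes from applying the same arithmetic to a large batch of independent data points, which is exactly the shape of occlusion and lighting passes.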

So as I said before, if his software suite supports it, GPUs are the way to go rather than just more CPU nodes.
CombatMuffin
Thank you so much for your replies; there are some very helpful tips here that I am considering. I really appreciate it.

As far as software goes, I am limiting myself to what's used most by architecture studios in the local area: Sketchup and 3ds Max, most likely coupled with V-Ray. While in theory the project demands could be huge, most projects here are not memory intensive (they very rarely use custom textures or maps, only very light AO, and I've never seen dynamics yet). Project sizes are bound to be small.

The goal is to buy about 8 nodes or so and provide rendering solutions for many of the architects here, most of whom work on their personal laptops (so to them, it is a huge leap in workflow). While GPU rendering is very attractive, I am a little unsure whether the kind of projects I'd be facing would really benefit from investing in an expensive piece of hardware such as a GPU (and power costs can go up considerably, too). I live and work in Mexico, so electricity costs can really cripple you if you aren't careful.

As promised, I've nailed down some more specific components, which I think might help you in providing more feedback. I like Newegg as my site for purchasing these, by the way...

CPU: Intel Xeon E3-1270V3 Haswell 3.5 GHz (Quad Core, 80W).
Motherboard: Supermicro MBD-X10SL7-F-O uATX LGA 1150 DDR3 1600
RAM: Wintec 16GB (2x8GB) 240-Pin DDR3 SDRAM ECC
PSU: Athena Power Zippy P1H-6400P 400W Single 1U
Case: SUPERMICRO CSE-510T-200B Black 1U Rackmount Server Case.
Storage: ADATA S510 Series AS510S3-60GM, 60 GB SATA III SSD. (I'm thinking of putting the big storage on the server and using these in the standard nodes, so they have enough swap.)

All of these came to about $1,127.00USD or less.

I was a little insistent on a Haswell CPU because I heard they were designed to be a little more power efficient than Ivy Bridge, and they also gain a small boost in performance (maybe 5-7%?).

I was really unsure of which PSU to buy. The workstation I originally built 5 years ago had a Corsair PSU and it has lasted through A LOT. I am probably going to heed the advice on the case: ditch racks in favor of standard tower cases for now; that way I can use bigger PSUs.

Of course, there's still other considerations, such as cooling fans, but I want to worry about cooling solutions and networking after I have the basic performance hardware down.

Any advice? Critiques? Keep in mind, the projects shouldn't be too taxing, technically speaking, and I am aiming for flexibility right now. The goal is to work like this for a year and, depending on how it goes, begin expanding with more nodes (and maybe more RAM, too) after that.

Thanks again for taking the time!


onichikun
Looks like your setup (3ds Max + V-Ray) typically gets up to an order of magnitude of acceleration (10x) using CUDA. Sketchup and AutoCAD have similar results.

You don't have to spend $1,000 on a Titan; a GTX 580/680 can still do wonders, and that's half the price of one node.

Power-wise, a GPU will take maybe 200-260W max under load, which is less than a node under load.

So a GPU can give you a 10x speedup (effectively 10 more nodes working on a single render) over one node, at half the cost and less power usage than a single node.
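Putting those numbers together as a back-of-the-envelope comparison (the 10x figure is the best-case claim above, not a guarantee, and the node's load power is an assumed figure):

```python
# Rough value comparison of adding one GPU vs. one more CPU node,
# using the figures quoted in this thread. The 10x speedup is a
# best-case claim; real gains depend on the renderer's GPU support.
node_cost, gpu_cost = 1100, 500          # USD
node_power, gpu_power = 300, 260         # watts under load (node figure assumed)
node_speedup, gpu_speedup = 1.0, 10.0    # relative render throughput

for name, cost, power, speedup in [
    ("CPU node", node_cost, node_power, node_speedup),
    ("GPU",      gpu_cost,  gpu_power,  gpu_speedup),
]:
    print(f"{name}: {speedup / cost * 1000:.1f} throughput per $1k, "
          f"{speedup / power * 1000:.1f} throughput per kW")
```

On these assumptions the GPU wins by roughly 20x on cost and 10x on power, which is why the software-support question matters so much.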

I am a bit of an HPC guy, so that's just the route I would go... I will shuddup about GPUs now :)

I would definitely go for desktop cases unless you are planning on mounting this setup where space is at a premium. Server cases are LOUD, since they use small, high-RPM fans.

EDIT: If you have a server room at your office, the rack solution would probably be best for future expansions. Just as long as employees don't have to listen to them all day.

What OS are you going to use?

GL

CombatMuffin
I don't mind a discussion on GPUs at all! I'm just a little skeptical because I have personally never relied on GPUs to render things. I can learn, that's no problem, but I am worried that my clients' scenes and projects might need optimizing or other tweaks that make it too uncomfortable for them. I'll probably end up with about 8 nodes to start (if I keep that $1,100 budget per node), so I'd need to move stuff around if I add a GPU. I don't mind having fewer nodes with GPUs, as long as the performance gain is substantial.

GPU rendering could benefit my business goals, since I would be providing something nobody around here offers, and it prepares my nodes for the next 3-5 years (GPU rendering is the future, after all). So I am open to any suggestions, including hybrid CPU/GPU rendering, ups and downs, etc. :)



We have not yet decided where these servers will be placed (different areas here have different electricity rates, so we are hunting), but noise and space shouldn't be a problem: we can accommodate those 8 nodes and the server in a single, well air-conditioned room, and have any potential employees working in another room, connecting to the server via VNC or something of the sort.

I am planning to use Windows 7 64-bit to start, simply because I don't have experience with Linux (if I did, that would be the route, since it's stable). Like I said, a lot of people around my area work with laptops as their workstations, and some of them even work on MacBooks. That's why I am trying to at least stick to Intel CPUs; and of course, their entire scene will have to be rendered on the farm, to prevent any discrepancies between CPU specs.


onichikun
Sounds like a plan. Definitely play around with GPUs when you get a chance.

The other thing you need to think about is what kind of interconnect you are using. How large are your input data files for a scene? I am assuming there are going to be quite a few textures and other assets that will be moving around your render farm, unless you are going to be assigning specific employees to specific nodes.

Are you going to work on the model where each employee uses VNC or RDP to reach your cluster from a thin client and then does all their work in 3ds Max installed on the remote machine?

Have you done any tests with 3dsmax performance over VNC or RDP?

With Linux it's a little nicer, since you can have one node providing thin-client access and the rest of the nodes dynamically scheduled for handling render jobs. But you would probably need to hire a system admin to manage that for you.
CombatMuffin
To be honest, I haven't considered the networking aspect just yet, and I'll need as much help as possible because I'm not highly knowledgeable in it.

I am mostly going for small scale projects made by younger or freelance architects that need the extra horsepower, so the input files shouldn't be too big. Big studios have the money and scale to hire big render farms, after all.

I have not tried 3ds Max through VNC yet, but here's what I want to do:

Every single node connects to a main server. The main server will handle all large storage, scene files, etc., and will command each node as necessary. Each node's SSD is meant to provide enough swap space for its process, but not so much that it cripples my budget.
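A toy sketch of that head-server dispatch model in Python (purely illustrative; a production farm would use a render manager such as Backburner or a commercial queue rather than hand-rolled code, and every name here is made up):

```python
import queue
import threading

# Toy model of the dispatch scheme described above: a central queue
# of scene files on the head server, with worker threads standing in
# for render nodes that pull the next available job.
jobs = queue.Queue()
results = []
lock = threading.Lock()

def node_worker(node_id):
    while True:
        try:
            scene = jobs.get_nowait()
        except queue.Empty:
            return                      # queue drained, node goes idle
        with lock:                      # stand-in for "render this scene"
            results.append((node_id, scene))
        jobs.task_done()

for scene in [f"scene_{i:02d}.max" for i in range(12)]:
    jobs.put(scene)

threads = [threading.Thread(target=node_worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{len(results)} scenes dispatched across 4 nodes")
```

The key property is that nodes pull work rather than having work pushed to them, so a slow node simply takes fewer scenes instead of becoming a bottleneck.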

The only person with complete access to the server will be me. Specific employees will have access to a limited "account" so they can reach a very specific folder on the server with their projects and use a portion of the render farm to speed up their workflow. Employees will at no time be able to copy the files from the server onto their individual workstations (for security reasons).

I am also planning to design this in such a way that students and small studios can rent my render nodes. Ideally, they'd be given temporary remote access to the servers so they can render. Worst-case scenario, they send me the file and I manually run it through the server.

What would be the best solution for this? I had planned to purchase a switch large enough to handle at least 15-20 nodes. What considerations should I take into account for the server (as economical as possible, of course), and what could I do to improve the network layout?
