How To: Building Your Own Render Farm


Draven35

Distinguished
Nov 7, 2008
806
0
19,010
People have been saying that for several years now, and Nvidia has since killed Gelato. Every time there has been a push toward GPU-based rendering, a change in how things are rendered has made doing so impractical.
 

borandi

Distinguished
Jan 7, 2006
150
0
18,680
With the advent of OpenCL at the tail end of the year, and given that a server farm is a centre for massively parallel processing, GPGPU rendering should be around the corner. You can't ignore 1.2 TFLOPS per PCIe slot (if you can render efficiently enough), or 2.4 TFLOPS per kilowatt, as opposed to ten old Pentium Dual-Core chips in a rack.
 

Draven35

Distinguished
Nov 7, 2008
806
0
19,010
Yes, but it still won't render in real time. You'll still need render time, and that means separate systems. I did not ignore that in the article; in fact, I discussed GPU-based rendering and ways to prepare your nodes for it. Just because you may start rendering on a GPU does not mean it will be in real time. TV work is now rendered in high definition (usually finished at 1080p), and film work is rendered at least at that resolution, or at 2K-4K. If you think you're going to use GPU-based rendering, get boards with an x16 slot and riser cards, then put GPUs in the units when you start using them. Considering software development cycles, it will likely be at least a year (i.e., SIGGRAPH 2010) before a GPGPU-based renderer built on OpenCL is available from any 3D software vendor. Most 3D animators do not and will not develop their own renderers.
 

ytoledano

Distinguished
Jan 16, 2003
974
0
18,980
While I've never rendered any 3D scenes, I did learn a lot about building a home server rack. I'm working on a project that involves combinatorial optimization and genetic algorithms; both need a lot of processing power and can easily be split across many processing units. I was surprised to see how cheap one quad-core node can be.
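For anyone curious how that splitting looks in practice, here's a minimal sketch of farming out fitness evaluations across CPU cores with Python's multiprocessing module; the population and fitness function are just placeholders, not my actual project:

[code]
# Toy example: evaluate a genetic-algorithm population in parallel across CPU cores.
# The fitness function is a stand-in; a real one would be far more expensive.
import random
from multiprocessing import Pool

def fitness(candidate):
    # Placeholder objective: just sum the genes.
    return sum(candidate)

if __name__ == "__main__":
    population = [[random.random() for _ in range(50)] for _ in range(1000)]
    pool = Pool()                         # one worker process per core by default
    scores = pool.map(fitness, population)
    pool.close()
    pool.join()
    print("best fitness this generation:", max(scores))
[/code]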
 

MonsterCookie

Distinguished
Jan 30, 2009
56
0
18,630
Due to my job, I work on parallel computers every day.
I have to say: building a cheapo C2D might be OK, but nowadays it is better to buy a cheap C2Q instead, because the price/performance ratio of the machine is considerably better.
However, please DO NOT spend more than 30% of your money on useless M$ products.
Be serious, keep cheap things cheap, and spend your hard-earned money on a better machine or on your wife/kids/beer instead.
Use Linux, Solaris, whatever...
Better performance, better memory management, higher stability.
IN FACT, most real design/3D applications run under Unix-like operating systems.
 

ricstorms

Distinguished
Jul 24, 2007
6
0
18,510
Actually, I think if you look at a value analysis, AMD could give decent value for the money. Get an old Phenom 9600 for $89 and build some ridiculously cheap workstations and nodes. The only thing that would kill you is power consumption; I don't think the 1st-gen Phenoms were good at undervolting (of course, they weren't good at a whole lot of things). Of course the Q8200 would trounce it, but Intel won't put its quads south of $150 (not that it really needs to).
 

eaclou

Distinguished
May 22, 2009
102
0
18,680
Thanks for doing an article on workstations -- sometimes it feels like all of the articles are only concerned with gaming.

I'm not to the point yet where I really need a render farm, but this information might come in handy in a year or two. (And I seriously doubt GPU rendering will make CPU rendering a thing of the past within two years.)

I look forward to future articles on workstations
-Is there any chance of a comparison between workstation graphics cards and gaming graphics cards?
 

cah027

Distinguished
Oct 8, 2007
456
0
18,780
I wish these software companies would get on the ball. There are consumer-level software packages that will use multiple CPU cores as well as the GPU, all at the same time. Then someone could build a four-socket, six-GPU box, all in one, that would do the work of several cheap nodes!
 

sanchz

Distinguished
Mar 2, 2009
272
0
18,810
Correct me if I'm wrong, but wouldn't 30 million hours be 30,000,000 / 24 = 1,250,000 days, which would in turn be 1,250,000 / 365 = 3,425 YEARS!!! o_O
Please, someone clarify this. How could they render a movie for 3,000 years? Did they have these render farms hidden in Egypt??
 

nemi_PC

Distinguished
Jan 31, 2007
19
0
18,510
Some thoughts for small nodes:
1) Cases capable of taking a two-slot graphics card would future-proof a node set up now, in case GPU rendering does become applicable over the lifetime of the node. So (m)ATX cases, not rack mounts.
2) Reselling a regular-looking (m)ATX desktop to home users a few years down the road is easier than reselling a rack-mount server, so factor that into the value.
3) With 500GB-1TB drives being the sweet spot for GB/$, I would go with those and use the render nodes as a distributed (redundant) backup solution as well; this addresses where you are going to store all your work over the years.
 

eyemaster

Distinguished
Apr 28, 2009
750
0
18,980
I'm with you, sanchz, but I think they mean per single processor. Say you had a common desktop computer and tried to render the whole Transformers 2 movie: it would take thousands of years. If you have 10,000 processors doing the job, you can do it within a year or less.
 

mlcloud

Distinguished
Mar 16, 2009
356
0
18,790
[citation][nom]sanchz[/nom]Correct me if I'm wrong, but wouldn't 30 million hours be 30,000,000/24 = 1,250,000 days which would in turn be 1,250,000 / 365 = 3,425 YEARS!!! o_OPlease someone clarify this. How could they render a movie for 3,000 years? Did they have this render farms hidden in Egypt??[/citation]

What do you think the meaning of parallel processing is? Doing a lot of that work at once, right? If we have a huge render farm of 5,000+ processors, we cut that time down to less than a year, don't we?

Of course, a lot of that depends on how fast each processor in the render farm is, but the general public won't care about that; just give 'em the huge numbers and don't tell them you were using 1.6GHz Celerons in your render farm.
 

one-shot

Distinguished
Jan 13, 2006
1,369
0
19,310
Hmmm. The standard electrical voltage for residential dwellings (United States) on a 120/240V split-phase installation is 120V plus or minus 5%, not the 110V that is usually quoted. So 15A * 120V = 1800VA, or watts, not 15A * 110V.
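To turn that into nodes per circuit, here's a quick back-of-the-envelope calculation; the per-node draw is an assumed figure for illustration, not something from the article:

[code]
# Rough capacity check for a 15A / 120V branch circuit feeding render nodes.
# The per-node wattage is an assumption; measure your own boxes under load.
breaker_amps = 15
volts = 120
continuous_derating = 0.80        # continuous loads are held to 80% of breaker rating
node_watts = 250                  # assumed draw of one render node while rendering

circuit_watts = breaker_amps * volts                    # 1800 W
usable_watts = circuit_watts * continuous_derating      # 1440 W
print("circuit capacity:", circuit_watts, "W")
print("usable for continuous load:", usable_watts, "W")
print("nodes per circuit:", int(usable_watts // node_watts))   # 5 nodes at 250 W each
[/code]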
 

ossie

Distinguished
Aug 21, 2008
335
0
18,780
In view of eventual future GPU offloading, at least a 3U case would be needed to hold four two-slot PCIe GPUs, so the upgradeability of 1U cases is limited to a single one-slot GPU. But such a monster would easily pass 1kW, posing more challenges for power supply and cooling (the noise it generates aside).
As MonsterCookie already pointed out, use an OS that scales well across multiple processors/nodes for good distributed performance (m$ doesn't apply).

Finally a decent article on TH... almost without the usual vi$hta or $even (aka vi$hta SP2+) m$ pu$hers behind it.
What? xpire x64 is working for TH? Almost unbelievable...
Also, none of the usual m$ fankiddie and gamer comments, (at least) so far... :)
 

dami

Distinguished
Jul 17, 2009
3
0
18,510
Another example, getting away from the computer jargon...

If a task took 100 man-hours, that means it took 2 guys 50 hours each to do it. If you did it with 10 guys, it would take each man 10 hours of work. There is a point of diminishing efficiency, which is mentioned in the article; at the extreme, it would take 100 men 1 hour of work each to complete the same task, and by then the efficiency has been drastically reduced.

This is what's being done in these render farms. A bunch of processors are put together, tasked with a job, and they belt out the results. If you did it with just one processor, it would take the 3,000 years in Egypt to come up with a result.
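Putting actual numbers on the figure being debated above (the core count here is made up purely for illustration):

[code]
# Convert a total render-hours figure into wall-clock time on a farm.
# 30,000,000 render-hours is the number from the thread; the core count is invented.
total_render_hours = 30_000_000
cores_in_farm = 5_000                   # e.g. a few hundred multi-core nodes

wall_clock_hours = total_render_hours / cores_in_farm   # 6,000 hours
wall_clock_days = wall_clock_hours / 24                 # 250 days
print("wall-clock time: about", round(wall_clock_days), "days")   # well under a year
[/code]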
 
Guest

Guest
[citation][nom]borandi[/nom]And soon they'll all move to graphics cards rendering. Simple. This article for now: worthless.[/citation]
I do agree about graphics-card rendering, but I don't think this article is worthless!

When I read about Xeons, I also read about AMD making similar low-power processors (45nm or lower?), with a TDP of around 65W, which is 30W lower than their previous processor line.
It might not be beneficial to buy Xeons, but perhaps it is when going with AMD.
 

aspireonelover

Distinguished
Jun 16, 2009
109
0
18,680
I'd rather spend the money on helping developing countries. (I know this has nothing to do with render farms and such, but the amount of money spent on these machines is incredible.)
Like buying a few XO netbooks for developing countries and sponsoring lots of children.
 
Guest

Guest
How about network booting the cluster? We have found it easier to manage upgrades/patches, since you just need to reboot the nodes that need upgrading. It also makes each node a little cheaper and saves a fair bit on power consumption.
Another idea we have been playing with is using cheap USB key fobs either as system drives or to persist config data, etc.: much faster boot times, very low power consumption, and great MTTF.
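For what it's worth, the "reboot only what needs it" step can be a trivial script; the host names and upgrade flags below are placeholders for whatever inventory you keep, and it assumes passwordless SSH to the nodes:

[code]
# Minimal sketch: reboot only the render nodes flagged as needing the new image.
# Host names and flags are placeholders; a net-booted node picks up the updated
# image from the boot server when it comes back up.
import subprocess

nodes = {
    "node01": True,     # True = still running the old image
    "node02": False,
    "node03": True,
}

for host, needs_upgrade in nodes.items():
    if needs_upgrade:
        print("rebooting", host)
        subprocess.run(["ssh", host, "sudo reboot"], check=False)
[/code]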
 

Greg_77

Distinguished
Nov 11, 2008
334
0
18,780
[citation][nom]aspireonelover[/nom]I rather spend the money on helping developing countries. (I know this has nothing to do with farm render and stuff but the amount of money they've spent on these machines is an incredible amount)Like buy a few XO netbooks for the developing countries, and sponsor lots of children.[/citation]
Although giving computers to kids in developing countries is an honorable thing to do, I don't see what that has to do with building a render farm that is necessary for your business. Maybe the better thing to do is donate part of the profit from your business to the charity of your choice (e.g., the XO laptop project).
 
Guest

Guest
You can always use some existing middleware to build a scalable approach on top of hardware replicas... It only took a couple of hours to implement this free software prototype: http://code.google.com/p/rendering-cluster/
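To give a flavour of the idea without pointing at any particular middleware (this is not the linked project's code, just a toy sketch with a placeholder render command):

[code]
# Toy frame dispatcher: split a shot into frame ranges and render the chunks in parallel.
# The command below is a placeholder; a real setup would invoke your renderer
# or hand the chunk to a render node.
import subprocess
from multiprocessing import Pool

FIRST_FRAME, LAST_FRAME = 1, 240
CHUNK = 24                            # frames handed out per job

def render_chunk(start):
    end = min(start + CHUNK - 1, LAST_FRAME)
    cmd = ["echo", "render scene.blend frames %d-%d" % (start, end)]
    subprocess.run(cmd, check=True)
    return (start, end)

if __name__ == "__main__":
    starts = range(FIRST_FRAME, LAST_FRAME + 1, CHUNK)
    pool = Pool(4)                    # four local workers standing in for nodes
    finished = pool.map(render_chunk, starts)
    pool.close()
    pool.join()
    print("finished chunks:", finished)
[/code]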
 

shapr

Distinguished
Jul 17, 2009
1
0
18,510
I recently purchased a used IBM BladeCenter and seven QS20 dual-Cell blades on eBay for a total cost of $2,500. That gets me about 2.5 teraflops of rendering power and adds about $50 to the monthly power bill in my apartment.

There are cheaper options if you're willing to go a bit further afield.
 

Draven35

Distinguished
Nov 7, 2008
806
0
19,010
[citation][nom]eaclou[/nom]-Is there any chance of a comparison between workstation graphics cards and gaming graphics cards?[/citation]

There was one done several weeks ago while this article was being written.
 