Is it a good idea to have a 2nd pc for rendering, when doing 3D design and rendering and video editing?

Solution
If you need to render and still use the PC, then a second one would be a good idea. Rendering uses the PC pretty heavily and will slow down anything else running. If you can set it to render and then walk away, then one PC is fine, for example if you set it to render overnight while you sleep.

I'm not a pro myself, but this is my understanding of how rendering uses a PC.
 
Solution
It's a great idea to have a second PC for rendering. If you have 8 cores on your workstation and 8 cores on a second machine, then you can use 16 cores for a faster render. I am a multimedia artist myself, and I am planning to turn my office into a small render farm. I currently have two PCs in my office that are not used much, and both of them are i5 quad-cores.

If you can, invest in a second PC and your render times will be much faster. There's also the benefit that Math Geek speaks of.
 


g335,

Much depends on the proportion of time spent on each kind of work. Since 2010, I have had a faster main system with four cores at a higher clock speed and a second system with eight cores (dual Xeon) at a somewhat slower speed. However, I realized that I was spending proportionally very little time rendering, and that the industrial design renderings I was doing were only taking 20-30 minutes on the faster system. I didn't have a LAN, so I had to transfer the model and set up the rendering on the second system.

In the end, I still have two systems, and it's logical that separate systems may be better optimized for their uses, but my new thinking is that I should perhaps put all the assets into a single very fast system that does everything well. The logic is that if there are a lot of fast cores / threads, a wide-bandwidth OpenGL workstation GPU, a lot of RAM, and a fast disk, it's possible to assign two thirds of the CPU to the rendering and still model.
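As a rough illustration of that "two thirds of the CPU" idea, here's a minimal sketch using the third-party psutil package on Windows or Linux (the PID and renderer are hypothetical; most renderers also expose a thread-count setting that achieves the same thing):

```python
# Pin a running render job to two thirds of the logical CPUs, leaving
# the rest free for modeling. Requires: pip install psutil
import psutil

render = psutil.Process(12345)            # hypothetical PID of the render job
cpus = list(range(psutil.cpu_count()))    # all logical CPUs, e.g. 0..11
render.cpu_affinity(cpus[: len(cpus) * 2 // 3])  # renderer keeps to the first 2/3
```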

But everything depends on the proportions of use. Your situation may be quite different, and if you're doing a lot of rendering you might consider the tactic of having a fast four-core modeling system with a great GPU, then buying a high-quality, previous-generation Dell Precision or HP Z-series (systems that can use dual Xeons) and upgrading it to sit in the corner and render. A month ago I bought:

Dell Precision T5500 (2011) (Original): Xeon E5620 quad core @ 2.4 / 2.6 GHz > 6GB DDR3 ECC Reg 1333 > Quadro FX 580 (512MB) > Dell PERC 6/i SAS /SATA controller > Seagate Cheetah 15K 146GB > Linksys WMP600N WiFi > Windows 7 Professional 64-bit
[ Passmark system rating = 1479 / CPU = 4067 / 2D= 520 / 3D= 311 / Mem= 1473 / Disk= 1208]

> on eBay for $171.

I bought a Xeon X5680 for $200 and had some parts from the upgrade of my main system, which I transferred to the T5500, and it became:

Dell Precision T5500 (Revised) > Xeon X5680 six-core @ 3.33 / 3.6GHz, 24GB DDR3 ECC 1333 > Quadro 4000 (2GB) > Samsung 840 250GB / WD RE4 Enterprise 1TB > M-Audio 192 sound card > Linksys WMP600N PCI WiFi > Windows 7 Professional 64 > HP 2711x (1920 X 1440)
[ Passmark system rating = 3339 / CPU = 9347 / 2D= 684 / 3D= 2030 / Mem= 1871 / Disk= 2234]

> and the cash outlay was about $650. I plan to add a second X5680 for 12 cores / 24 threads and a PERC H310 SATA/SAS RAID controller for a 6Gb/s disk system. I'm also weighing whether I will use it enough, or whether I should sell it and use my trusty Precision 390 [Xeon X3230 4-core @ 2.67GHz > 8GB > FirePro V4900 (1GB) > 2X WD 320GB > Windows 7 Professional 64-bit].

Again, if you are running renderings or video effects processing often / constantly and it interferes with continuing modeling / animation, etc., by all means consider a separate system, and you might look into upgrading a high quality older system for that purpose.

Cheers,

BambiBoom

HP z420 (2015) > Xeon E5-1660 v2 six core @ 3.7 /4.0GHz > 16GB DDR3 ECC 1866 RAM > Quadro K2200 (4GB) > Intel 730 480GB > Western Digital Black WD1003FZEX 1TB> M-Audio 192 sound card > Linksys AE3000 USB WiFi > 2X Dell Ultrasharp U2715H 2560 X 1440 > Windows 7 Professional 64 >
[ Passmark Rating = 4918 > CPU= 13941 / 2D= 823 / 3D=3464 / Mem= 2669 / Disk= 4764]
 
My point exactly. I am thinking of buying a new PC for 3D, video rendering, and a lot of other work. Here are my specs:

Intel i7 4790K processor
16GB Corsair Vengeance 1600MHz DDR3
120GB Samsung 850 SSD
1 TB 7200rpm WD Blue HDD
Non Reference R9 290 (Probably Sapphire)
Asus Z97 Extreme 4 Motherboard
700W Corsair or Cooler Master PSU


In the future I'm thinking of buying the same or a better configuration and plan to use this PC as a render slave.
 
CPU renderers can usually be run without massive interference to the responsiveness of the machine, because task scheduling and prioritization are very mature in modern operating systems. The renderer can be run as a low-priority task, effectively using only idle CPU cycles. There will still be a mild hit to the responsiveness of the machine, but not enough to bother most people.
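For example, a minimal sketch of kicking off a CPU render at low priority (the renderer command line here is a placeholder; many renderers and render managers expose the same knob in their own settings):

```python
# Launch a hypothetical command-line render job at low CPU priority so it
# soaks up idle cycles without hurting desktop responsiveness.
import subprocess
import sys

cmd = ["renderer", "--scene", "shot01.scn", "--frames", "1-240"]  # placeholder

if sys.platform == "win32":
    # Windows: create the process below normal priority (Python 3.7+).
    proc = subprocess.Popen(cmd, creationflags=subprocess.BELOW_NORMAL_PRIORITY_CLASS)
else:
    # Unix-likes: raise the nice value (= lower scheduling priority) in the child.
    import os
    proc = subprocess.Popen(cmd, preexec_fn=lambda: os.nice(19))

proc.wait()
```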

The issue of responsiveness is much more prominent when doing GPU-accelerated export rendering. There isn't a good system in place for dealing with task management on GPUs. Any heavy GPU workload, even if "throttled" to a percentage of available throughput with renderer settings, is still apt to interfere with the responsiveness of the machine and reduce viewport performance significantly. So if you use a GPU-accelerated export renderer like Octane, FurryBall, V-Ray GPU mode, Iray, etc., then a separate render rig may be a good idea...

Another main reason to have separate modeling and render rigs is that the best-value CPU and GPU for modeling and design work are often not the best-value CPU and GPU for export rendering.

Example (using Maya here):

If you do modeling and design work in Maya, then the best-value CPU for the job is going to be an E3 Xeon or i7-4790K. On a tighter budget, even an i3 or i5 makes a superb CPU for viewport performance in many modeling applications, as the workload is predominantly single-threaded. Core performance is more important than core count here. The best-value GPUs for use in Maya are actually the AMD FirePro W4100/5100/7100 series, as they offer the best OpenGL optimizations for this viewport and the most raw hardware for the money among workstation GPUs.

The export renderers that you might use in Maya (of which there are many: mental ray / Iray, V-Ray, Octane, FurryBall, etc.) are CPU or GPU intensive (or in some cases can use both at the same time pretty effectively), and they scale into parallelism really well (many cores, or many GPUs, or both). Most of the GPU export renderers have been developed for CUDA cards, which, as you probably know, is an NVIDIA proprietary technology. So while the best viewport performance in this particular application comes from a FirePro card, the best export render performance will come either from many-core Xeon builds, or from as many GeForce cards as we can cram into a machine, or both. With the exception of rare situations, high-end gaming cards actually offer the most bang for the buck here, as export rendering does not benefit from any of the advanced driver optimizations or error-correction technology found on high-end Quadro cards.
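As a quick sanity check of what a CUDA renderer will actually see on a given box, something like this works (a sketch assuming the NVIDIA driver and its bundled nvidia-smi tool are installed; AMD/FirePro cards simply won't appear in the list):

```python
# List the CUDA-capable NVIDIA GPUs on this machine via nvidia-smi.
import subprocess

out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv,noheader"],
    text=True,
)
print(out.strip())   # e.g. "0, GeForce GTX 980, 4096 MiB"
```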


An i3-4160 + W4100 (~$250) will perform better in the Maya viewport than a machine configured with 2X E5-2650 V3s + 4X GTX TITANs (~$6000+). Sometimes the cost to build a nice dedicated modeling machine is so inconsequential in the grand scheme of a big workstation build that *not* separating the build into two machines doesn't even make sense, especially considering the performance and value advantages...

-------------

Let's say, for instance, we wanted to build a workstation that could provide BOTH excellent performance in advanced OpenGL viewports AND excellent CPU- and GPU-based export rendering:

Since there is no official support for mixing GeForce and Quadro cards in the same machine (it would require unsupported hacks), the build might wind up looking something like the following in order to work properly:

2 X E5-2687W V3 ~$4200
2 X Noctua NH-U14S ~$140
ASUS Z10PE-D8 WS ~$550
8 X 8GB DDR4 ECC RDIMM ~$800
4 X Quadro K5200 ~$7600
512GB Samy/Tosh/Sandisk/Crucial/Kings semi-pro/enterprise drives ~$250
WD/Seagate Enterprise ~$80+
EVGA 1.6KW G2 ~$300
EATX Chassis: ~$150

~$14,000

Contrast that with the value of the following machine optimized as an export rendering node:

2 X E5-2697 V3 ~$5400
2 X Noctua NH-U14S ~$140
ASUS Z10PE-D8 WS ~$550
8 X 8GB DDR4 ECC RDIMM ~$800
4 X GTX980 ~$2300
256GB Samy/Tosh/Sandisk/Crucial/Kings semi-pro/enterprise drives ~$125
EVGA 1.6KW G2 ~$300
EATX Chassis: ~$150

~$10,000

$4,000 less expensive, and an estimated ~20% faster in CPU export rendering and ~50% faster in GPU export rendering. A good dedicated modeling machine can be built for ~$2500, so we can actually get more performance for our money by building separate rigs here...

-------------

Now look what happens when we "split" even further and build 2 render nodes...

4 X E5 2650 V3 ~$4440
4 X Noctua NH-D9L ~$220
2 X ASUS Z10PE-D8 WS ~$1100
16 X 4GB DDR4 ECC RDIMM ~$1100
8 X GTX970 ~$2700
2 X 256GB Samy/Tosh/Sandisk/Crucial/Kings semi-pro/enterprise drives ~$250
2 X EVGA 1.3KW G2 ~$400
2 X EATX Chassis: ~$300

~$10,500

By buying into the higher value of reduced compute density, we now have 25% more CPU render performance and ~70% more GPU render performance than the single, similarly priced render node listed above...
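To put the value math from these comparisons in one place, here's a quick sketch (the throughput figures are just the rough estimates quoted above, normalized to the combined workstation):

```python
# Rough performance-per-dollar comparison of the three builds above.
builds = {
    "combined workstation": {"cost": 14000, "cpu": 1.00, "gpu": 1.00},
    "single render node":   {"cost": 10000, "cpu": 1.20, "gpu": 1.50},
    "two render nodes":     {"cost": 10500, "cpu": 1.20 * 1.25, "gpu": 1.50 * 1.70},
}
for name, b in builds.items():
    print(f"{name:21s}  CPU perf/$1k: {b['cpu'] / b['cost'] * 1000:.3f}"
          f"  GPU perf/$1k: {b['gpu'] / b['cost'] * 1000:.3f}")
```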

The effect can continue to an extent with lesser machines. Some people build render clusters out of lots of consumer-grade hardware, which takes up a lot of space and power but can offer a lot of render performance for the money. The best balance of compute density and performance/$ will vary from implementation to implementation.

---------

The examples given above assume a specific scenario involving the combination of an advanced OpenGL viewport and support for a wide range of export rendering modes. If you use all-CPU-based unbiased export renderers, then a separate render rig is largely unnecessary, as you can throw a single workstation GPU into a many-core Xeon box and not have any areas of "poor" value (like using Quadro cards as export rendering devices).

Whether or not to split the build into separate machines depends heavily on the specific workflow (what applications and renderers are being used, and how).
 




Thanks for the examples

I was thinking of an i7 5820K and an i7 4790K as the machines: one for rendering, one for modeling. I will also use other programs such as Photoshop, Illustrator, etc.

Or

I was thinking of i7 5820K and FX-8350 systems.

How are those choices?

Is there any use for my old Q6600 PC? It has 4GB of DDR2 and the CPU is overclocked to 2.9GHz.
 
Photoshop? Illustrator?

nevermind, nevermind!!! NEVERMIND!!!!!!

Those programs have absolutely no implementation or way to scale performance to multiple machines. Don't bother, there's no point.

Your original post implied that there would be a 3D modeling program involved. So???

---------

How "big" are your photoshop and illustrator project files?
 


Yes, I will be doing 3D work and video editing.

I will use all of the programs.

My files will be pretty big. I am making animation.
 


I will use LightWave, Octane, and ZBrush for starters, and afterwards I will go for Maya.

I will use Premiere CC for the editing, and later an Avid system.

 
Depending on budget then, yes, I think you would benefit from an external render node...

Maya is a good reason to have a workstation GPU. Having a workstation GPU in your modeling machine is in itself a good reason to build an external render node for GPU rendering in Octane, since scaling export render performance with workstation GPUs is not cost effective, and using the GPUs in your modeling machine for rendering will interfere with its responsiveness and viewport performance.

ZBrush uses KeyShot, which is an all-CPU-based renderer that supports network rendering. This would actually run fine on a high-core-count workstation used as the modeling machine, but since you already have a good reason to build a render node, it makes sense to put some decent CPU power into that render node and use it to speed up KeyShot.
 


g335,

Perhaps it's a good time in this thread to restate / summarize:

Generally, for professional use involving large-scale projects, I'm an advocate of maintaining a separate, networked rendering system, as it's possible to optimize each system for its use. My approach has been to have a modeling system with fewer but faster cores and the best GPU for navigation. In my view, if you depend on this professionally, I suggest buying a base proprietary system and upgrading it as necessary. I've bought two new HP z420s for relatively little money, with fast CPUs but basic GPUs and disks. The most recent, about three weeks ago, has an E5-1660 v2 (6-core 3.7 / 4.0GHz) and 16GB 1866 ECC, for under $1,000. To put that into perspective, the CPU alone costs $1,100, so it would be impossible to build this system for even double the price paid. After adding a Quadro K2200, Intel 730 SSD, and WD Black, it is the highest rated of the roughly 200 z420s on Passmark, for a total investment of about $1,800.

As I use Autodesk, Dassault (SolidWorks), and Adobe products, which are CUDA accelerated, almost all my GPUs have been Quadros. For the separate rendering system, I like to revise older dual-CPU Dell Precisions, since many rendering programs (Premiere too) can utilize every available core. Dell Precisions are beautifully made and ultra-reliable by design, and Dell supports them generously long after they're out of warranty; you can still download updated BIOS and drivers. A month ago I bought a 2011 T5500 for $171 and upgraded it to a 6-core @ 3.33 / 3.6GHz, and with a Quadro 4000, a Samsung 840 250GB, and a PERC H310 6Gb/s RAID controller, it's already very fast. I can add a second CPU for 12 cores / 24 threads, and the total cost will be about $1,000.

However, this approach depends on need and budget. If your budget is restricted, I'd suggest having one very good system that can do everything as well as possible: a fast 6-core CPU, a lot of RAM, a good GPU, and a 250 or 512GB SSD for the OS and applications. For Maya and Adobe CS6, I've had excellent results from a Quadro K2200, but a K4200 is advisable for heavy animation.

My recommendation in either case is to do one system at a time. Buy a proprietary, fast 6-core base system and upgrade it, in preference to building. You'll be productive much sooner, perhaps the same day, and the support is important for avoiding work interruptions. In some cases you can get to work and upgrade gradually. If you find the first system is diverted to rendering too much of the time and you're left waiting, consider upgrading a used Precision that can sit in the corner and render / process. After getting my new HP going properly, I'm beginning to think I don't really need the rendering system, but your use suggests that if the budget allows, a second system would be useful.

With the first system quickly online for work and continuously usable, the second, rendering system can be put together methodically while hunting for good prices. This spreads the cost and effort over time while maintaining productivity.

But I'd repeat the comment made in so many posts in this thread: set a budget, a total cost. Without a cost limit, no one can give you a specific answer.

Cheers,

BambiBoom

 


Just so you know, if you ever want to use your GPU for CUDA rendering, NVIDIA cards are the only ones that work for that.
 


The E5-1650 V3 offers the same performance as the E5-1660 V2 for under $600. Intel's price tag may still say $1100 on the 1660 V2, but nobody would pay that for a new system build unless they were ignorant of current hardware. Price tag and real-world value are not the same thing.

$1800 for the completed build on those HP Z420s is certainly not a bad deal, but it really isn't as special as you are making it out to be...

The following build offers the same performance/quality, IMO:
1650 V3 ~$580
SilverStone AR01 ~$35
Supermicro X10SRA ~$270
2x8GB DDR4 2133 ECC RDIMM ~$180
K2200 ~$420
730 240GB ~$120
WD SE 1TB ~$80
PSU: Seasonic SSR-550RM ~$80
Case: Fractal Design Arc Midi R2 ~$100

<$1900
 


mdocod,

Firstly, the specifics of relative CPU performance are not so much the point as the overall economics of the situation.

But, starting with the CPU:

The E5-1650 v3 on Passmark: rating No. 30, score 13124.

The E5-1660 v2: rating No. 27, score 13643.

> which means the v2 performs about 4% above the E5-1650 v3. This is probably due to the 1660's higher clock speed: 3.7 / 4.0GHz versus the 1650's 3.5 / 3.8GHz. Not a highly significant performance advantage, but the cost / benefit advantage of the 1660 v2 is.

As mentioned, the significance of the recommendation is economic. Here is the current listing for a new HP z440 with an E5-1660 v3 and 16GB DDR4 2133:

http://shopping1.hp.com/is-bin/INTERSHOP.enfinity/WFS/WW-USSMBPublicStore-Site/en_US/-/USD/ViewProductDetail-Start?ProductUUID=9fQQ7EN5H7EAAAFI.yhFN.5n&CatalogCategoryID=&JumpTo=OfferList

> and as you can see, the z440 with 16GB RAM and a Quadro K2200 costs $2,699. For comparison, I bought the z420 with an E5-1660 v2 and 16GB DDR3 1866 for $940, shipping included.

My z420 with a Quadro K2200, 16GB RAM, and 480GB SSD is rated at 4918 on Passmark. A similar-specification z440 with a K2200, 16GB RAM, and a similar disk score is rated 4693. Now, I think anyone discussing the economics of a system, no matter how anxious they are to prove superior knowledge, would see that a higher performance rating for 50% of the cost is quite a good cost / benefit ratio. The $1900 build you propose would be a very good performer, but it's still double the cost of my system. And, one of the most important aspects: your system would require research, ordering, waiting for delivery, assembly, configuring, and troubleshooting. With my system, I installed the K2200 and SSD, loaded a system image from my previous z420, and was at work within 2-3 hours of opening the box. With your double-cost proposed system, the same task might be done over a period of 30 hours, and those hours would start and stop, probably over more than a week; very disruptive. If a person is billing hours, just the cost of the lost work could pay the difference for buying a finished system. Plus, the purchased system has a warranty and single-point access to updates, spares, and drivers.

Every architectural and engineering office I've ever known has considered building rather than buying. Most have never done it, as anyone with experience knows the true cost of disaster. The couple of offices that have built systems have never done it twice. As may be imagined, this factor is even more important in a single-person firm: all those hours are lost work, as no one is producing anything. For a lot of individual consultants, your system would cost $1,900 plus $3,000 in labor; it's actually more of a $5,000 system, and, I would guess, a far more stressful process for someone without a lot of experience.

Try to analyze systems in terms of complete cost / benefit within the market and work-production context, and not starting from a predetermined result.

Very good discussion!


Cheers,

BambiBoom

 
> and as you can see, the z440 with 16GB RAM and a Quadro K2200 costs $2,699. For comparison, I bought the z420 with an E5-1660 v2 and 16GB DDR3 1866 for $940, shipping included.

You said $1800 in a prior post for that system; now you're saying $940. I was under the impression that it was $1800 to buy and outfit ONE of those machines, hence the basis for my response.
 




So if I understand correctly: a high-core-count PC for the rendering, and a lower-core-count but higher-speed PC for the modeling?

Would the i7 5820K and the i7 4790K be fine? Or the FX-8350 and the i7 4790K?
 
g335,

Yes, in general, given the software you're using, and if you'd benefit from two systems, my suggestion would be "a high-core-count PC for the rendering, and a lower-core-count but higher-speed PC for the modeling."

But, before going any further:

1. What is your budget?

2. Do you definitely want to have two systems?

3. Will you or would you prefer to build this system yourself?

4. Would you object to an upgraded, used system for the rendering system?

My overall preference would be a fast 4-core LGA2011 system for modeling and possibly a dual Xeon with 12 cores for rendering, but it's impossible to say more until the cost limit is known.

Cheers,

BambiBoom
 
An i7-4790K with a workstation GPU for your modeling machine, and a 5820K + 2-4 GTX970/980s for export rendering in KeyShot (CPU) and Octane (GPU), should work fine...

I would personally want to do a cost analysis comparing one 5820K machine with 2-3 FX-8320E machines.

A single FX-8350 machine doesn't really make sense as a render node for an i7-4790K, as the 4790K would be faster at export rendering anyway. A render node that is slower at rendering than the modeling machine doesn't strike me as particularly awesome, unless you implemented several of them as a small render cluster instead. The 8350 is, unfortunately, awful for compute efficiency. I would stick to 8320Es for cheap AMD render nodes.

Of course, if the budget has room, I would advise something higher density (a multi-socket motherboard with Haswell-EP Xeons), or maybe Bambi can help you locate some good previous-generation hardware.
 




My budget is about $550 for an i7 4790K build (I only have to buy the CPU and motherboard; that's not counting GPU and OS prices). Right now I have a case, a 120GB SSD, a DVD-RW drive, and 16GB of 2133 DDR3 RAM (all new), and I'll reinstall Windows 8.1 on the new PC.
I am going to re-use my 750W PSU and my GTX660.

If I go for the i7 5820K first, I can take the RAM back and use the money to get DDR4 RAM, then buy the CPU, motherboard, and RAM. I have about $850 for this.
As mentioned before, I have a case, a 120GB SSD, and a DVD-RW drive that I can use (all new), and I'll reinstall Windows 8.1 on the new PC.
I am going to re-use my 750W PSU and my GTX660.

Now, that does not include the budget for a new GPU. I have about $600 for a new GPU and am looking at the GTX980 because of its 4GB of GDDR5 memory.

I will buy a 2560x1440 IPS 27" monitor.

Please remember I will game lightly on one of the computers (whichever one has the proper GPU).

I think having two will be good for me, so that while one is rendering I can still use a PC for normal activities as well.

After I start making money in this field, I want to upgrade to Xeon workstations and more expensive workstation GPUs.

If I put a workstation GPU in one of the i7 systems, I need to get the lowest-priced one that will help. Those cards are expensive.

So which should I start with: the i7 4790K or the i7 5820K?
I thought the FX-8350 was better than the FX-8320?
 


g335,

I think the most forward-looking approach would be to start with the LGA2011-v3 system. Not only does that platform allow up to 18-core CPUs, the memory bandwidth is 68GB/s instead of 25.6GB/s, and there are up to 40 PCIe lanes instead of 28. Besides all the cores for CPU-based rendering (Premiere can use them when editing, too), this makes the system more expandable in the future: M.2 SSDs, dual GPUs, and so on.

The i7-5820K is 3.3 / 3.6GHz, a very good CPU, and excellent value, but it has a limitation in having only 28 PCIe lanes. It supports 64GB RAM, which is probably sufficient. My attitude is not to overclock workstations, so the "K" of the 5820 is, in my opinion, not a factor. It's worth considering going close to your $850 maximum and using components that could be the foundation for a very powerful workstation:

CPU: Intel Xeon E5-1650 v3 six-core Haswell processor, 3.5 / 3.8GHz, 15MB cache, LGA2011-v3, retail > $579

http://ark.intel.com/products/82765/Intel-Xeon-Processor-E5-1650-v3-15M-Cache-3_50-GHz

http://www.superbiiz.com/detail.php?p=E51650V3BX&c=fr&pid=3289ab9aa40f3b4b614980982f4bb0ad9ea369362838b3fc2590281e1f777c00&gclid=CPrts7e1h8QCFahZ7AodYkYA6w

Motherboard: ASUS X99-A LGA 2011-v3 Intel X99 SATA 6Gb/s USB 3.0 ATX Intel Motherboard > $254

http://www.newegg.com/Product/Product.aspx?Item=N82E16813132261&cm_re=x99-_-13-132-261-_-Product

___________________________
TOTAL= $833

The X99 chipset is very fast, and these boards are arranged to use M.2 SSDs, which consume additional PCIe lanes to reach their higher throughput.

I'm not an expert in gaming, but I think the compute density of the Xeon and its slightly higher (+200MHz) clock speed would make it very good at games; it's the non-overclocking cousin of the i7-5930K.

It's possible, of course, to use the i7-5820K and save $200, but if your idea is to gradually work towards a workstation configuration, it may save money in the long run to use the E5-1650 v3 and the speed and features of X99.

There's the subject of DDR4 RAM: for either the i7 or E5 choice, I'd suggest 2X 8GB of ECC if the budget extends to it.

The 750W PSU is a good size.

As for AMD, the CPUs they call 8-core, like the 8350, are excellent value but are actually the equivalent of an Intel hyperthreaded 4-core. When you assign the number of cores to rendering, an FX-8350 will show 8, as will a quad-core i7, while the i7-5820K or E5-1650 v3 will show 12: +50%.
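A quick way to see what a renderer's core-count setting is working with (a sketch using the third-party psutil package):

```python
# Logical CPUs are what most "render on N cores" settings count. An FX-8350
# and a hyperthreaded quad-core i7 both report 8 logical CPUs; an i7-5820K
# or E5-1650 v3 reports 12. Requires: pip install psutil
import os
import psutil

print("logical CPUs:  ", os.cpu_count())
print("physical cores:", psutil.cpu_count(logical=False))  # how the OS models them
```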

Cheers,

BambiBoom


 
G335,

You have a conflict of interest introduced by gaming and a limited budget.

The i7-4790K is the best consumer-class (non-enterprise, no ECC memory support) CPU to use for viewport and gaming performance. Unfortunately, the best-value GPUs for a 3D modeling workstation are poor-value GPUs for gaming. Ideally, you'd want to use this machine for gaming so that you could play games while the render node is busy. So you may find yourself compromising here, and gaming on a workstation GPU at lower visual quality settings. For all-around performance and support in the applications you have specified, I would normally advise a Quadro K2200 for ~$420; for gaming, this will provide the same visual quality as a GTX750Ti. If you want a workstation GPU that is a better balance between gaming and workstation use, the W7100 for $650 will give you the same gaming performance as an R9 285. Unfortunately, LightWave viewport performance and stability have historically been hit and miss on AMD hardware, so that may not be a good solution.

In the short term, if you're not using Maya initially, just use the GTX660 as your workstation GPU; it's an excellent-value card for LightWave and the entire Adobe suite. ZBrush's viewport is a proprietary, all-CPU-based deal anyway, so it will work fine on any GPU. Of course, the GTX660 is not a bad gaming GPU either.

Alternatively, in terms of visual quality capability, your render node (if you choose to build a machine with a 5820K) is apt to wind up far better equipped for gaming than your modeling node, as it *should* be equipped with multiple high-end GeForce GPUs for best results, and you could even enable SLI mode across equal installed GPUs for gaming (disabling SLI when using the machine for rendering). So ultimately, your render node is apt to be the most impressive gaming machine in terms of visual quality; however, you'd only be able to game on it while it's not being used for export rendering, so you'll probably wind up gaming more often on your modeling workstation, not the render node.

In the long run, you'll probably want to "upgrade" the GTX660 in the modeling machine to a Quadro K2200 or K4200 for better Maya viewport performance without sacrificing LightWave viewport performance. At that point, you could move the GTX660 into the render node, where it will just add to your collection of CUDA devices in that machine.

-------------

Let's do a bit of cost analysis here...

3D modeling and gaming rig (example using E3 Xeon):
CPU: E3-1231V3 ~$250
HSF: Raijintek Aidos ~$23
MOBO: ASRock Rack C226M WS ~$180
RAM: 2 x 8GB DDR3 ECC UDIMM's ~$160
GPU: GTX660 (owned)
SSD: MX100 256GB ~$100
Storage: ST1000NM0033 ~$80
PSU: Antec TPC 550W ~$80
Case: CM N200 ~$50
~$925


Render rig (example using 5820K):
CPU: i7-5820K ~$380
HSF: Raijintek Themis ~$25
MOBO: GA-X99-UD3 ~$210
RAM: 4X4GB "value series" DDR4 ~$175
GPU: GTX970 ~$350 (4X $1400)
SSD: 120GB (owned)
Storage: Probably not needed
PSU: 750W (owned) (upgrade to 1.3KW EVGA G2 ~$180)
Case: Phanteks Enthoo Pro ~$90
$1250 - $2460


Render Rig Cluster (example using 2X FX-83XX rigs):
rig1
CPU: FX-8350 ~$170
HSF: Raijintek Themis ~$25
MOBO: GA-970A-UD3P ~$85
RAM: 2x4GB "value" DDR3: $55
GPU: USED GTX480 ~$85 (throw your GTX660 in here when the time comes)
SSD: 120GB (owned)
Storage: Probably not needed
PSU: 750W (owned)
Case: Basic ATX mid-tower (200R, 220 etc) ~$50
rig2
CPU: FX-8350 ~$170
HSF: Raijintek Themis ~$25
MOBO: ASRock 970 Performance ~$100
RAM: 2x4GB "value" DDR3: $55
GPU: USED GTX480 ~$85 (3X $255)
SSD: 120GB $60
Storage: Probably not needed
PSU: EVGA 1.3KW G2 ~$180
Case: Rosewill Stealth ~$70
$470 - $1385

------------

So yeah, you can do a pair of AM3+ based render rigs for a lot less than the cost of the 5820K machine, at the cost of space and power. 2X FX-8350 should offer nearly 40% better CPU rendering performance than a single 5820K. Each used GTX480 you snag actually offers about half the Octane performance of a GTX970 while costing about a quarter as much, so there's a lot of value to be extracted there if you're willing to go with used parts. Obviously, you still have the option to scale up with GTX970/980s.
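The GTX480-vs-GTX970 value claim works out like this (a sketch using the rough Octane throughput and street prices quoted above):

```python
# Render throughput per dollar, with the GTX970 as the performance baseline.
gpus = {
    "GTX970 (new)":  {"perf": 1.00, "price": 350},
    "GTX480 (used)": {"perf": 0.50, "price": 85},
}
for name, g in gpus.items():
    print(f"{name:14s} perf per $100: {g['perf'] / g['price'] * 100:.2f}")
# The used GTX480 comes out to roughly double the throughput per dollar.
```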

-------------

On the subject of the 8320E vs the 8350: the 8350 is about 25% faster, but uses a ton of power, and the difference is more significant than their "TDP" ratings might suggest. In order to use the 8350 for sustained continuous workloads, it must be installed on a motherboard with 6+2 or 8+2 phase power regulation for the CPU; anything less and you'll burn it out or have throttling problems. The advantage of the 8320E is that you can run it on cheaper motherboards with far fewer problems, so if you wanted to focus more heavily on KeyShot performance, it would probably be more cost effective in the big picture to use 8320Es on GA-78LMT-USB3 motherboards. (If you live near a Micro Center, you can buy the CPU + motherboard for about $110.) Otherwise, if you're going with cheap/dirty solutions filled with old Fermi cards like the builds I posted above, the 8350 makes sense, I suppose. I'd be tempted to overclock 8320Es to save money, but chances are that would be a huge time sink that may not be worth cutting into your productivity.

Regards,
Eric

----------


Bambi,

I don't believe there would be any advantage to the additional PCIe lanes of alternative LGA2011-v3 CPUs for this sort of application, as 28 lanes is plenty to run 4 GPUs for GPGPU compute work (in fact, we could probably get by with far fewer for 4 GPUs). Think about it: the whole of the communication necessary to keep the render node busy happily fits within the bandwidth of Ethernet. From my understanding, the kernel is run in place on each GPU, right from VRAM, so the bandwidth requirements should be very low.

On a 5820K with 4X GPUs installed, one or two of the cards will run at 4X. As I understand it, we could probably install GTX980s on 1X-to-16X powered PCIe ribbon-cable adapters and run six of them on a $50 motherboard (BIOSTAR Hi-Fi H81S2) in an open-air riser-mount system with very little penalty to performance (messy, but it's an option). Large-scale amateur GPGPU clusters are often thrown together like this, often on plywood structures or other cheap furnishings, to maximize value.
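The bandwidth reasoning above can be sanity-checked with some quick arithmetic (a sketch; the per-lane figure is the standard usable PCIe 3.0 rate of ~985 MB/s):

```python
# Compare a gigabit Ethernet feed against PCIe link widths: even a single
# PCIe 3.0 lane dwarfs the LAN link that keeps a render node supplied.
GIGE_GBPS = 1.0         # gigabit Ethernet
PCIE3_LANE_GBPS = 7.88  # ~985 MB/s usable per PCIe 3.0 lane

for lanes in (1, 4, 16):
    bw = lanes * PCIE3_LANE_GBPS
    print(f"PCIe 3.0 x{lanes:<2}: {bw:6.1f} Gbit/s  ({bw / GIGE_GBPS:.0f}x GigE)")
```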

I would not use the ASUS X99-A, or any X99 board, with E5 V3 Xeons; there are too many gray areas in memory support to bother with that hassle. The C612 chipset is the one to use for Haswell-EP. Besides, assuming g335 wants to keep a "clean" build (no messy risers, open chassis, etc.), the X99-A only has room for 3X GPUs, while the less expensive GA-X99-UD3 actually has room for 4, as long as it's installed in an 8+ slot case.

Believe it or not, clock for clock, a Piledriver module is actually slower than a hyperthreaded Haswell core; the Haswell core has more execution resources. A quad-core 3.5GHz E5-1620 V3 has about the same compute performance as a 4.4GHz FX-9370. The "dual-system 16-core" cluster example I gave above only manages about a 40% compute advantage over the 6-core 5820K 😉 As expected, since it is lower density and less efficient, it's also cheaper.

Normally I would advise nothing less than a proper dual-socket Haswell-EP build to maximize compute density, efficiency, and scalability. The problem with new enterprise hardware is that there's a price-scaling mechanism around compute density and efficiency: vendors can charge a huge premium because commercial rack space and power dissipation are factored into implementation costs.

For ultra-budget-conscious, small-scale cluster computing projects, consumer hardware and previous-generation enterprise hardware go a lot further. Six-core Xeon 5600-series chips for <$100 on eBay might be worth a look if a decent dual-socket motherboard could be found; those 6-core Westmere chips offer performance comparable to the FX-8320/8350. The problem is the availability of good-condition dual-socket motherboards for cheap (retailers are still selling new ones at full price, and this is ~5-year-old tech!). Bummer on that front...
 
Hello everyone

Thanks for your help. It has really helped me to understand everything.

I lost the receipt for my 16GB of DDR3 2133 RAM.

So it looks like I will build the modeling PC first.

The only things I need to buy to get started are a CPU and motherboard.

Or I could just go for the cheap render farm for the rendering.

The memory can be used with an i7 4790K or an FX CPU.

So: either an Intel CPU and motherboard for about $540, or an FX-8350 and motherboard for about $336.

Would one PC with the FX-8350 cover me for modeling and rendering until I get an Intel build?

Or should I go for the i7 4790K first?
 
