Computer for architecture

nbarrett

May 30, 2016
Hi all,

New to the forum so bear with me!

I'm working for an office that's going down the route of high-end 3D work. The plan is to have me working on a lot of the 3D work, particularly things like 3D Studio Max, in-house rendering (probably with VRay), in-house videos (either VRay or Lumion), VR models (most likely with Unreal Engine), and then the usual stuff like Revit/CAD/PS. The plan is to invest in a fairly high-spec machine that will be capable of handling all this, and it's going to be my call for the most part. The other side of all this is that the budget is flexible, and I'm familiar with a variety of rendering software, so I'm happy to go with a higher-spec GPU with a view to using GPU-based rendering software. Even though I've never built an entire machine myself, I've done things like swapping out a PSU/GPU/RAM before, so I think I'd be confident enough to build one, although if there is a company that would fit the bill, that's also an option.

Having done a bit of research, there's a few things I'm considering with this, and right now, I'm just looking for some suggestions. It'll be this fall/winter that I'll be getting the machine, so it's still early days.

So right now, I'm just looking for some feedback/info/suggestions on things like single vs dual CPUs, any recommendations on graphics cards, etc. Budget-wise, I'm anticipating this being somewhere in the 6-10k bracket, but that's flexible for the moment. Like I said, it's still early in the research side of things.

Any help/tips/advice would be hugely appreciated.
 


nbarrett,

For this list of uses, the optimal system comes from optimizing for each use, in order of priority, for the exact software used. 3D modeling and Revit need very high single-threaded performance, while CPU rendering and video editing can in some cases use every CPU core. On the other hand, Adobe CS/CC maximizes processing efficiency in Premiere at 5-6 cores, tapering off quickly by 8 cores, and a dual CPU actually detracts from performance. Unfortunately, both Autodesk and Adobe have been concentrating on maximizing profits by forcing subscriptions and updates rather than improving the multi-core efficiency of their applications, whereas Solidworks rendering scales fully across CPU cores. On the GPU side, GPU rendering is many times faster than CPU rendering, although image quality can be reduced, and Adobe does not recognize multiple GPUs. As a compromise, I would suggest using CPU rendering for important single-image renderings and GPU rendering for animation and video editing/processing. The best explanations of these topics are the articles on the Puget Systems site.
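The tapering described above is the classic Amdahl's law pattern: if part of a workload is serial, extra cores give diminishing returns. Here is a minimal sketch of that effect; the 85% parallel fraction is a made-up illustrative value, not a measured figure for Premiere or any other application.

```python
# Amdahl's law sketch: why an application that is only partly parallel
# stops scaling past a handful of cores. The parallel fraction p below
# is a hypothetical example value, not benchmark data.

def amdahl_speedup(p: float, n: int) -> float:
    """Theoretical speedup on n cores when fraction p (0..1) parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    p = 0.85  # assumption: 85% of the workload parallelizes
    for n in (1, 2, 4, 6, 8, 16):
        print(f"{n:2d} cores -> {amdahl_speedup(p, n):.2f}x speedup")
```

With these numbers, going from 6 to 16 cores adds well under 1.5x, which is why a second CPU can be wasted money for software like this.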

There is also the matter of the current transition of Xeon E5's to version 4 / 14nm. By the time you initiate the proposed system, there will be a whole range of E5-1600 and 2600 v4's available. For example:

Xeon E5-1680 v4 (8-core @ 3.4 / 3.8 GHz, 8 × 256 KB L2, 20 MB L3, 140 W, LGA 2011-3, DMI 2.0, 4 × DDR4-2400, Q2 2016) will be very tempting. See:

https://en.wikipedia.org/wiki/List_of_Intel_Xeon_microprocessors#Xeon_E5-16xx_v4_.28uniprocessor.29

In this environment, currently the best recommendation in my view is to use a single Xeon E5 v3 8-core with the highest single-threaded performance on an X99 workstation motherboard supporting extremely fast M.2 NVMe SSD.

An example for the proposed system:

BambiBoom PixelCannon Modelanimetricgrapharific iWork TurboBlast ExtremeSignature SuperModel 9600 ®©$$™®£™©™_ 5.30.16

SYSTEM 1 > Modeling and Animation

1. CPU: Intel Xeon E5-1680 v3 8-core @ 3.2 / 3.8GHz, 20M Cache, 140W > $2057

____ http://ark.intel.com/products/82767 > (Passmark: average CPU score = 17166, the No. 10 CPU overall)

2. CPU Cooler: Cooler Master Hyper 212 EVO CPU Fan > $32.

3. Motherboard: ASUS X99-E WS LGA2011-v3/ Intel X99/ DDR4/ 4-Way CrossFireX & 4-Way SLI/ SATA3&USB3.0/ M.2&SATA Express/ A&2GbE/ CEB Workstation Motherboard > $494

____ http://www.superbiiz.com/query.php?s=ASUS+X99-E+WS+

4. RAM: 128GB Samsung DDR4-2133 8X 16GB/2Gx72 ECC CL15 Samsung Chip Server Memory > $696 ($87 each)

____ http://www.superbiiz.com/detail.php?name=D416GE21S

5. GPU: PNY Quadro M5000 VCQM5000-PB 8GB 256-bit GDDR5 PCI Express 3.0 x16 Full Height Workstation Video Card > $1,800

___ http://www.newegg.com/Product/Product.aspx?Item=N82E16814132052&cm_re=Quadro_M5000-_-14-132-052-_-Product

6. Drive 1: Samsung SM951 256GB (NVMe) MZVPV256HDGL-00000 MZ-VPV2560 Gen3 M.2 80mm PCIe 3.0 x4 256G SSD OEM> $224

____ http://www.newegg.com/Product/Product.aspx?Item=9SIA12K3MH3372&cm_re=samsung_nvme-_-9SIA12K3MH3372-_-Product

7. Drive 2: Intel 750 Series AIC 400GB PCI-Express 3.0 x4 MLC Internal Solid State Drive (SSD) SSDPEDMW400G4X1 > $394 (Active Projects)

____ http://www.newegg.com/Product/Product.aspx?Item=N82E16820167359

8. Drive 3,4: 2X Seagate Constellation ES.3 ST4000NM0033 4TB 7200RPM SATA3/SATA 6.0 GB/s 128MB Enterprise Hard Drive (3.5 inch) > $406 ($203 each) (RAID 1)(Files, Backup, System Image)

____ http://www.superbiiz.com/detail.php?name=HD-ST40NM3

9. PSU: CORSAIR RMx RM1000X 1000W ATX12V / EPS12V 80 PLUS GOLD Certified Fully Modular Power Supply > $160

____ http://www.newegg.com/Product/Product.aspx?Item=N82E16817139140

10. Optical Drive: Pioneer Black 16X BD-R 2X BD-RE 16X DVD+R 12X BD-ROM 4MB Cache SATA Blu-ray Burner BDR-209DBK > $64

11. Case: LIAN LI PC-A75X No Power Supply ATX Full Tower Case (Black) CA-A75 > $179.99

12. Operating System: Microsoft Windows 7 Professional 64-bit w/ SP1 (1-Pack, DVD), OEM MSFQC04649 > $138.99
________________________________________________________

TOTAL = $6,646

Performance should be remarkably good, and the Quadro supports accelerated viewports, 10-bit color, and batch frame processing enhancements.

Notice that this is referred to as "SYSTEM 1 > Modeling and Animation", as there is a strong economic argument for two systems, the other being SYSTEM 2 > Rendering / Processing. The second system would use dual older-generation, heavily depreciated CPUs with high thread counts and clock speeds. In this approach, the modeling system may use a faster 6-core.

I have similar uses to those listed and my two system solution is:

Modeling:

1. HP z420 (2015) > Xeon E5-1660 v2 (6-core @ 3.7 / 4.0GHz) > 32GB DDR3 1866 ECC RAM > Quadro K4200 (4GB) > Intel 730 480GB (9SSDSC2BP480G4R5) > Western Digital Black WD1003FZEX 1TB> M-Audio 192 sound card > 600W PSU> > Windows 7 Professional 64-bit > Logitech z2300 speakers > 2X Dell Ultrasharp U2715H (2560 X 1440)>
[ Passmark Rating = 5064 > CPU= 13989 / 2D= 819 / 3D= 4596 / Mem= 2772 / Disk= 4555]

System: Purchased new /open box for $937 including shipping
Quadro K4200 used: $520
16GB RAM: $160
Intel 730 480GB: $200
WD Black 1TB: $80

TOTAL = $1,737

Rendering:

2. Dell Precision T5500 (2011) (Revised) > 2X Xeon X5680 (6-core @ 3.33 / 3.6GHz), 48GB DDR3 1333 ECC Reg. > Quadro K2200 (4GB ) > PERC H310 / Samsung 840 250GB / WD RE4 Enterprise 1TB > M-Audio 192 sound card > Logitech z313 > 875W PSU > Windows 7 Professional 64> HP 2711x (27", 1920 X 1080)
[ Passmark system rating = 3844 / CPU = 15047 / 2D= 662 / 3D= 3550 / Mem= 1785 / Disk= 2649] (12.30.15)

System: Purchased for $190 including shipping:

Dell Precision T5500 (2011) (Original): Xeon E5620 quad core @ 2.4 / 2.6 GHz > 6GB DDR3 ECC Reg 1333 > Quadro FX 580 (512MB) > Dell PERC 6/i SAS /SATA controller > Seagate Cheetah 15K 146GB and 300GB > Windows 7 Professional 64-bit
[ Passmark system rating = 1479 / CPU = 4067 / 2D= 520 / 3D= 311 / Mem= 1473 / Disk= 1208]

Upgrade:

CPU 1: $220 CPU 2: $170
Quadro K2200, used $330
2nd CPU /Memory/ Fan Riser: $75
RAM: $140
PERC H310 SAS/SATA RAID controller: $60 (converts the disk subsystem to 6 Gb/s)
Samsung 840 240GB SSD Value $60 (depreciated /reused from other system upgrade)
WD RE4: $85

TOTAL = $1,330

The PERC 6/i and disks were reused in an upgraded Dell Precision T3500.

I mention this approach as buying a proprietary system and upgrading it is both faster and much more cost-effective, and it saves complex research, ordering, assembly, configuration, and testing time.

The performance is very good for both systems, as the E5-1660 v2 has among the highest single-threaded ratings of any Xeon E5, at 2105. The E5-1680 v3's single-threaded rating of 2153 is the highest for any Xeon E5, but keep in mind that that CPU alone costs more than twice as much as the entire HP z420 E5-1660 v2 system. Notice too that the rendering system's 12 cores / 24 threads at up to 3.6GHz rate 15047 on Passmark and cost a total of $390. A pair of new 6-core Xeon E5 v3's with a combined CPU score of around 15000 would cost about $2,100, so on a Passmark-points-per-dollar basis the $390 used LGA1366 CPUs deliver the same performance for a fraction of the price. Both of these systems have had 100% reliability.
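The points-per-dollar comparison above can be made explicit. This sketch just reuses the prices and Passmark scores quoted in the post; they are not fresh benchmarks.

```python
# Cost-per-performance comparison using the figures quoted in the post:
# $390 for a used pair of Xeon X5680's scoring 15047, versus an
# estimated $2,100 for a new pair of 6-core E5 v3's scoring ~15000.

def dollars_per_point(price_usd: float, passmark_score: float) -> float:
    """Hardware cost per Passmark CPU point."""
    return price_usd / passmark_score

used_pair = dollars_per_point(390, 15047)    # used X5680 pair
new_pair = dollars_per_point(2100, 15000)    # new E5 v3 pair (estimate)

print(f"Used X5680 pair: ${used_pair:.3f} per Passmark point")
print(f"New E5 v3 pair : ${new_pair:.3f} per Passmark point")
print(f"New costs about {new_pair / used_pair:.1f}x more per point")
```

On these numbers the used CPUs come out roughly 5x cheaper per unit of rendering throughput, which is the whole argument for a second-hand render box.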

Many companies will not risk used systems or any used components, but the MTBF of workstation components is astoundingly good; since 2010 I've run six used systems, mostly on used components (disks excepted), without a single failure of a single part.

The cost/performance is impossible to match with new components. There is some inefficiency in the duplication of components, but the key is that the long-duration work is separated out: the real-time work of modeling can continue, fully utilizing the user's time, and since rendering/processing can be queued, both systems may in effect be operated simultaneously by one person.

Anyway, sorry for the very long post but this might start the conversation.

Cheers,

BambiBoom





 
Hmmmm.... Plenty to think about there BambiBoom! I really appreciate the input! I suspect there's some savings in there to be had, although there's a few other bits to point out.

Firstly, the easy issue, software. You're completely right about the performance issues of adobe/autodesk. However, even with my current machine, which is about 6 years old and desperately underspec'd, things like AutoCAD, Revit, PS etc are all running fine. I'll never be doing more than 2d in cad, and with revit, our projects and models, even with things like Mech + Elec, Structure etc in them, are not complex enough to start to see a slow-down in performance.
The software issue really comes up on the visual side of 3D. By that I mean 3D Studio Max, Rhino, Sketchup, Unreal Engine, and the various rendering packages we use. The thing with these packages is that during the modeling phase they're demanding on the processor and RAM, but when it comes to rendering, it depends on the engine. We're looking at going down the route of something like Octane, which is more GPU-heavy, but the RT rendering with it is really top end. Similarly, VRay supports multiple GPUs, and by the looks of things, 3ds is going to get increasingly better at supporting more threads, as it's in Autodesk's interest to keep their market share.

So, with that in mind, it brings me to the two system suggestions you've come up with. At this point, I think I can rule out the system 2 option of 2 machines. We really want to go down the route of things like VR, and I would be apprehensive that the machines would become dated quite quickly. With system 1, a lot of the components, even basic ones like ram and hard drives are things I didn't really think about. My plan was to have 3 drives; 2 x ssd's (one for system, one for saving renderings) and 1 big normal drive for saving files. With the VR side of things, there's quite a few graphics cards coming out that are VR ready, and the general consensus seems to be that the NVidia gaming cards are outstanding at both GPU rendering and VR stuff. Granted, the likes of the PNY card you suggested, and those that are "designed" for things like architecture and rendering are excellent, but the NVidia gaming cards with VR ready support seem to be better value.

So, with all that in mind, and having re-read your initial point about multiple cpu vs single cpu, I'm now wondering whether I would be better off putting in a bit more money to all this, get better processors in a dual cpu setup, and sticking with multiple gpus for the rendering side of things.

Again, many thanks for your input on all this. I'm very aware of what I don't know in all this, so I hope I'm not coming across like I'm dismissing your points, I'm just throwing the question back out there.
 


nbarrett,

Your clarification of the applications and forward thinking is very useful.

I wish I were as optimistic as you concerning the improvement in multithreading of visualization applications. However, it seems to involve a basic restructuring of millions of lines of code for fairly low-volume, specialized programs, and the software makers are moving very slowly. At Autodesk, Revit, given it's both visualization and heavy dataset/computational, is I think in the greatest need of this of any application but Inventor; Maya and 3ds could benefit too. Autodesk should look at Solidworks for lessons. With Adobe, the only recent good news on that front has been that Premiere will get back an improved version of "render frames simultaneously", where each thread works on a frame at the same time. This was used in Premiere 2014 but removed for 2015, and will be back for version 2016, if they hurry, so it's not v. 2017. Sketchup is another pet peeve, and I'm struggling now with two projects. I had to redraw several parts of a large building (600,000 sq. ft.) as I'd used 600 segments on curved walls 1,200 ft. long and the model was 75MB. It took forever to edit intersected/subtracted faces. Intersecting the faces on a 4.0GHz Xeon E5 / Quadro K4200 / 32GB system took over two hours. Watching that process in Task Manager, Sketchup was using three of the twelve threads and, 97% of the time, only 11% of the processor. I've also had terrible problems because Sketchup is only accurate to .1 degree, while in AutoCad I use .0001. Also, AutoCad drawings imported into Sketchup are out of scale by maddeningly small amounts; last time it was .976 scale. There is a new related "SketchCad" that has better 2D. Here endeth the rant.

However, it's not necessary to tell an architect to spend more money twice, and what follows is an idea that responds to your comments. This is the most modern concept possible, as the proposed CPU was released on May 20, 2016, eleven days ago, so they're still fresh! This is based on the Xeon E5-2690 v4, a 14-core @ 2.6 / 3.5GHz. It produces a quite astounding Passmark CPU rating of 23199 for a single one. More impressive is that even with so many cores, the Passmark single-threaded rating is 2023. The highest single-threaded performer is the E5-2637 v3 at 2154, but that is a 4-core. The Xeon E5-2690 v4 is so capable that it's now the 2nd highest rated CPU of all, after the E5-2697 v4 at 24509 and ahead of the previous No. 1, the E5-2699 v3 with 22740 and costing $3,800. The E5-2690 v4's rating and single-threaded performance are such that this proposal uses one CPU to start with on a dual-CPU board, and I suspect a second would probably never be needed.

The CPU is mounted in a Supermicro SuperWorkstation SYS-7048A-T which comprises a case, chassis, dual LGA2011-3 motherboard, 1200W power supply, and includes two CPU coolers. This means that many complicated decisions have been made and it's only necessary to plug in the CPU, RAM, GPU, and drives. This saves researching performance, compatibility, and prices of every single component, ordering, assembling, wiring, configuration, and testing. Three hours instead of forty.

A pair of Intel 750 PCIe disks are used for OS/programs and active projects, and a RAID 5 of mechanical disks for storage. I believe that by the time this system is done, there will be a number of other attractive alternatives, for example M.2 NVMe drives that will work with any motherboard.

BambiBoom Pixel Cannon Cadarendermodeanimagraphilicious iWork TurboSignature Extreme ModelBlast 9900 ®©$$™®£™©™_5.30.16


Case /Motherboard /Power supply : Supermicro SuperWorkstation SYS-7048A-T Dual LGA2011 1200W 4U Rackmount/Tower Workstation Barebone System (Black) > $1,000

https://www.supermicro.com/products/system/4U/7048/SYS-7048A-T.cfm
http://www.superbiiz.com/detail.php?name=SY-748AT

CPU: Intel Xeon E5-2690 v4 14-Core 2.6/3.5GHz, 35MB LGA 2011-3 CPU, 135W > $2,090

Passmark: Average CPU Mark (single CPU) = 23199 / Single-Threaded Rating = 2023

http://ark.intel.com/products/91770/Intel-Xeon-Processor-E5-2690-v4-35M-Cache-2_60-GHz?q=e5%202690%20v4

____ Motherboard > X10DAi > https://www.supermicro.com/products/motherboard/Xeon/C600/X10DAi.cfm

Memory: 128GB (8x16GB) SAMSUNG 16GB 288-Pin DDR4 SDRAM ECC Registered DDR4 2400 (PC4 19200) Server Memory Model M393A2G40DB1-CRC > $752 ($92ea.)

http://www.newegg.com/Product/Product.aspx?Item=N82E16820147563

PNY Quadro M5000 VCQM5000-PB 8GB 256-bit GDDR5 PCI Express 3.0 x16 Full Height Workstation Video Card > $1,800.

http://www.newegg.com/Product/Product.aspx?Item=N82E16814132052&cm_re=M5000-_-14-132-052-_-Product

RAID Controller :LSI MegaRAID SAS 9361-4i (LSI00415) PCI-Express 3.0 x8 SATA / SAS High Performance Four-Port 12Gb/s RAID Controller (Single Pack)--Avago Technologies > $379

http://www.newegg.com/Product/Product.aspx?Item=N82E16816118228&cm_re=lsi_megaraid-_-16-118-228-_-Product

Drive 1: Intel 750 Series AIC 400GB PCI-Express 3.0 x4 MLC Internal Solid State Drive (SSD) SSDPEDMW400G4X1 > $395 (Operating System / Programs)

http://www.newegg.com/Product/Product.aspx?Item=N82E16820167359&cm_re=intel_750-_-20-167-359-_-Product

Max Sequential Read: 2200 MBps
Max Sequential Write: 900 MBps

Drive 2: Intel 750 Series AIC 400GB PCI-Express 3.0 x4 MLC Internal Solid State Drive (SSD) SSDPEDMW400G4X1> $395 (Active Projects / Libraries)

Drives 3, 4, 5: 3X Seagate Constellation ES.3 ST4000NM0033 4TB 7200RPM SATA3/SATA 6.0 Gb/s 128MB Enterprise Hard Drive (3.5 inch) > $612 ($204 ea) (RAID 5)(Files, Backup, System Image)

http://www.newegg.com/Product/Product.aspx?Item=N82E16822236624&cm_re=Western_Digital_Black_2_TB-_-22-236-624-_-Product
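Since SYSTEM 1 used a RAID 1 mirror and this build moves to RAID 5, it may help to spell out the usable-capacity arithmetic behind the two schemes. These are the textbook formulas; real arrays lose a little more to formatting overhead.

```python
# Usable capacity for the two redundancy schemes used in these builds.
# RAID 1 mirrors every drive; RAID 5 stripes data with one drive's
# worth of capacity consumed by parity.

def raid1_usable_tb(drive_tb: float) -> float:
    """Mirror: usable space equals a single drive, regardless of count."""
    return drive_tb

def raid5_usable_tb(n_drives: int, drive_tb: float) -> float:
    """Striping + parity: (n - 1) drives' worth of usable space."""
    if n_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (n_drives - 1) * drive_tb

print(f"2x 4TB RAID 1 -> {raid1_usable_tb(4):.0f} TB usable")
print(f"3x 4TB RAID 5 -> {raid5_usable_tb(3, 4):.0f} TB usable")
```

So the three-drive RAID 5 here yields 8 TB usable with single-drive fault tolerance, versus 4 TB from the two-drive mirror in the first build.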

Optical Disk: ASUS Black 16X BD-R 2X BD-RE 16X DVD+R 5X DVD-RAM 12X BD-ROM SATA Blu-ray Burner BW-16D1HT > $83

http://www.newegg.com/Product/Product.aspx?Item=N82E16827151266

Operating System: Microsoft Windows 7 Professional SP1 64-bit English (1-Pack), OEM > $139.

http://www.newegg.com/Product/Product.aspx?Item=N82E16827135306&cm_re=blu_ray_drive-_-27-135-306-_-Product
_______________________________________

TOTAL = $7,645

Performance should be incredibly good, and the potential to add a second E5-2690 v4 and a second GPU or Tesla coprocessor would make this one of the fastest, highest computational density visualization, animation, simulation, or scientific workstations in existence, and still less than $13,000. However, in architectural use, I can see this level being surplus to need for the next five years regardless of improvements to software, the addition of VR, etc. It could do anything.

Cheers,

BambiBoom

 
Yeah, I've always found that with Autodesk you could be waiting ages for an upgrade, and once it comes out, it's either staggeringly amazing or makes things worse. As for Solidworks, I do have a colleague who uses it and sings its praises all the time. However, that presents a whole new host of problems for things like compatibility. It's a similar case with some of the other software, so I think that giving you an insight into both the office workflow and my own workflow will help to clarify things.

For most projects, we go back and forth between Revit, Autocad, Rhino and Sketchup. Typically, early designs might be done in Rhino/Sketchup, depending, for example, on the complexity of the geometry. Where we need parametric modelling, we use Rhino with Grasshopper. Eventually, the workflow splits somewhat. Other members of the office take the model into Revit on their machines to produce a more detailed model, so things like the structure and services (ducting, ventilation etc) can be explored in more detail. In the meantime, I take the model into 3ds, where things like materials and the fine-tuning of the geometry go on. Ultimately, my side of this will probably end up in rendering of both still shots and videos. However, our plan is to eventually take the 3D model from 3ds into Unreal Engine for use in VR. HOWEVER, the likes of VRay seem to be going down the route of VR, particularly with their RT engine, so whether this becomes a reality from within 3ds itself is anyone's guess. In short, I'll be doing mostly rendering; other members of the office will have a more Revit-heavy workflow.

Back to the fun stuff!

Going for a single CPU in this case is sounding more and more like a better choice, especially if some software manufacturers are being slow to adapt. I'll have to do a bit of research into some of the other bits of software I use, like the rendering engines (some split between cpu and gpu), however, when it comes to modeling, which is very heavily weighted on CPU/RAM, a single, heavy duty processor seems to be the better choice.

So am I correct in thinking that essentially the best solution might be to make an initial investment in a current but top-end single processor, spend a good bit on the GPUs, but get a dual-CPU motherboard in the event that the software catches up and a second CPU is going to make a big difference? Also, have you any input on going with a single GPU vs multiple GPUs?

again, many many thanks for this! I bow down to your knowledge!
 


nbarrett,

In turn, I very much appreciate your description of office sequences. As a designer of the scribbling/sketching school, I was isolated a long while from CAD, from 1980 to 1995, when it became practical and affordable. My brother is an architect also and is going to retire never having learned a thing about CAD. In all but quite large offices, increasingly, one person does everything, or will do everything following a medium level of design development. AutoCad, Revit, and often Sketchup are extremely common in every office that I visit. There is more ArchiCad lately too, and it appears to me better integrated 2D/3D; every object is 3D, and really ArchiCad is like having AutoCad 2D, 3D, Sketchup, and a slice of Revit together.

The problem is the variation in the best hardware to suit all the applications. A person would have to be incredibly methodical to understand in depth all the software/hardware implications. I began separating systems to particular hardware for optimization, but it's proving inefficient in some ways, as there is duplication of hardware, and transferring projects between systems takes careful tracking of history and versions. That aspect, by the way, is another great feature of Solidworks; the history and automatic updating is fantastic, or would be if I could ever learn it well enough.

As for the fun part, yes, I think the most currently effective and most forward-looking solution is to have a strong all-rounder system that can do everything well: 3D modeling, animation, and both CPU and GPU rendering. Building with a dual-CPU-socket motherboard but only populating one means a fantastic 14-core / 28-thread start, and a 28-core / 56-thread future.

The system listed above, though, is possibly more than necessary, and between now and the assembly of the proposed system there will be more information on the performance of the new E5-2600 v4 series. For example, the E5-2643 v4 is a 6-core @ 3.4 / 3.7GHz but still over $2,000. Or would a single E5-2687w v4 12-core be the sweet spot of cores to single-threaded performance? Or would a single E5-1680 v4 8-core (3.4 / 3.8GHz) be sufficient?

There are other, slightly more adventurous alternatives; for example, I might in particular suggest considering an LGA2011 version of the Supermicro Superworkstation and buying a pair of used Xeon E5-2687w v2's, which are 8-core @ 3.4 / 4.0GHz. It's also possible to buy a used Precision T7610 with one of these in it already. The E5-2687w v2 has a Passmark rating of 16670 (24501 for a pair) and a single-threaded score of 2060, near the top for dual E5's. This might be completely adequate to your use, and by the time you buy them, perhaps $800-900 each. A T7610 / E5-2687w v2 sold yesterday (5.30.16) on Ebahhh for only $1,500. It was all I could do not to buy it, but it's too near a duplication of the E5-1660 v2 system I have, and I don't think a dual Xeon is in my near future, as I have the Precision T5500 with dual 6-cores. The 4GHz turbo clock speed is the highest of any Xeon E5, shared only by the E5-1660 v2.

So, it's a complicated equation with infinite answers and given the release of so many new Xeon E5's, you're wise to work through the specification process carefully.

Cheers,

BambiBoom





 
Yeah, the problem with software is that there's no one correct answer. In many ways, they all do the same thing. Take Revit, for example. I've used Vectorworks for a very long time, and in many ways it does the same as Revit. However, most engineers/consultants we work with will favor Revit, so we have to take that into account too. As a young professional, it is a bit of a risk to line myself up as "the 3d guy" in the office. Fortunately, the practice I work for is extremely design driven, so I spend most of my time designing rather than just producing images. It's a bit of a rare setup, and believe me, I'm insanely lucky to find myself in such a pleasant position.

I'm liking the sound of this single-cpu option with a future expansion to dual. It seems to be the best value for money, and leaves a big open option for upgrading. I'd love to go down the route of liquid cooling too, especially as our office is very stuffy and hot in the summer months, but liquid cooling will be something to look at in due course. I also need to do some research on the GPUs, but as an initial start on my research, I'm sold on the dual-socket motherboard with single cpu. The other components like ram, hdds, psu etc, all tick the boxes. The GPU is definitely next to think about.
 


nbarrett,

CAD has certainly blurred the definitions of office positions. My last employment for someone else was in 1998, and in that 60-person firm, already those more expert in AutoCad (it was R14) had the upper hand, as even quite junior staff with little experience became the go-to authority. The older draftsmen had to consult the new guys, and the dynamic was different. And now, just about everyone has to be able to do complete sets, organize presentation materials, and so on. The nerds will inherit the Earth.

The single CPU on a dual motherboard does seem the most adaptable direction. Intel is slowing processor development, and I think LGA2011-3 will be the thing for quite a while, at least 5-7, who knows, maybe even 10 years. The Xeon v4's are 14nm, and it's said that 7nm is the absolute limit for lithography; the transistors are only a few atoms apart.

As for the GPU, that too is in a state of flux. At the moment, given the use of 3ds and Rhino with viewports, and the need to support multiple light sources (which GTX does not), I'd say that a Maxwell Quadro is the best choice, either the M4000 or M5000, both with 8GB of memory. The M4000 is so good, performing at the level of the previous-generation K5200 and with plenty of CUDA cores for GPU rendering, that it may be the worthier contender. I use the previous generation, a Quadro K4200 (4GB) (Kepler), and being as bad and slow a draftsman as I am, I don't think I'm pushing it to its limit. I did four or five 3180 x 1964 VRay / Sketchup renderings earlier today, quick tests of a building shell, and each took perhaps 5-6 minutes at most. These had only single light sources. I'm very impatient setting up renderings and had very odd results, as some planes in cast concrete apparently had interior surfaces in glass, so here are renderings with 60-foot-span steel/concrete beams rendered entirely in green glass. It's a stupid mistake but actually looks fantastic. I'd love to do that deliberately and light them inside, real "light beams". I really need to study rendering when I change programs, but I just start pushing buttons.

The thing is, in the same way that Xeon E5's are going to v4, Quadros are going to be moving to Pascal GPUs. There is already one Pascal Tesla GPU coprocessor, the roughly $13,000 Tesla P100, and so I assume the Maxwell M's will become Pascal P's.

There are some strong arguments in favor of GTX in certain uses, particularly animation and video editing, and I've thought a long while about the possibility of setting up multiple VMs in one system, each VM the leanest possible, some only running two or three programs, others more general. That system would be based on a special Supermicro motherboard with four PCIe x16 GPU slots; it's made for GPU computing. In this scenario, there would be four GPUs: one Quadro, one Tesla, and two GTX. There would be a VM for Quadro only, one for Quadro plus Tesla (Maximus), and one for the dual GTX. The primary GPU would have to be selected in BIOS for each VM/configuration, but that would avoid driver conflicts. Just a thought.

So, in summary, I'd say think about the Quadro/GTX situation and keep an eye on new Pascal Quadros over the summer. There may be back-to-school announcements. Have a look at the excellent articles on the Puget Systems site.

[Optional content: I thought I'd mention something that really surprised me. A few weeks ago I was visiting a research facility (particle physics) and saw their computational experimentation/simulation system. It comprises eleven parallel dual Xeon E5 systems mounted on the Supermicro 4-GPU motherboard mentioned. Each motherboard has four Tesla K20X's (Tesla / Kepler). Fair enough, as they need that computing power to run experiment simulations.

I spoke to the head draftsman, who models the superconducting beam accelerator units, immensely complex and precise. They are using Siemens NX, which is becoming more common for aircraft, ships, and cars. I think Catia may be losing a bit of ground; not sure. Anyway, I was curious as to what kind of super workstation could possibly run models of these incredibly complex, ultraprecise (0.0001mm) devices. The answer? Dell Precision T3500's with Quadro K6000's and M6000's! That's a $400 system with a $4,000 or $5,000 GPU. I have much more respect for the T3500 I bought for $53, but overall, I now question everything I ever knew! It appears GPUs can solve every problem.]

Cheers,

BambiBoom
 
A few more developments today. I had a chat with some of the people in the office. It's a bit of a trade-off at the moment because I'm going to be partly funding the machine, and the office is funding the rest. However, I've plenty of bargaining chips. They want to go in the direction of this high-end graphics stuff, more complex geometry, more visuals etc. So showing them videos of some of the things produced in UE, and even some of the stuff from Ronen Bekerman's site (http://www.ronenbekerman.com/), has them very interested. The other big bargaining chip is that they like the idea of an initial investment in a system with a single CPU, 2 x GPUs, and plenty of RAM and HDD space, with a view to upgrading the system in a year or two if we find that it needs it. So as far as getting this up and running, they're on board. As I'm sure you know, this is a big step.

So having done some research on the GPU side of things, I've been reading through some articles like the puget ones you posted, and some of the articles by toms hardware too. If you take that the two front runners are a GTX Titan vs Quadros (particularly some of their newer models) it's both interesting and frustrating to see that they both have certain criteria where they wipe the floor with the other one. So in many ways, it's going to come down to deciding exactly what our priorities are. However, the GTX titan, at the moment, might just have the edge.

So the other big question that I have is on liquid cooling. I like the idea of this for many reasons, especially as our office at the moment is very hot and stuffy. Looking at sites like frozencpu, it doesn't seem to be excessively expensive, or that difficult to setup, but then there's things like fitting sizes and such that I don't know about. Have you any input on liquid cooling or tips?
 


nbarrett,

Very few architectural/engineering and research groups will commission special systems, preferring the safety of proprietary machines, but given the current odd collision of software packages that each have different demands, I think it's essential to refine the specification to suit the exact use.

The single CPU on a dual-CPU MB seems the best, most flexible approach. I think the Xeon E5 v4's may be worth waiting for as well. There may be quite a few more released by September or so.

The GPU is still a difficult decision and has to be based on having the capabilities all the software demands. When looking, keep the new GTX 1070 and 1080 in mind- the 1080 at least is faster than the Titan X. This is why I end up with Quadros, as I keep thinking that I will work in Solidworks a lot- for which GTX are terrible- but these days I'm thinking of trying a GTX again, as so much of the 3D I'm doing is Sketchup. I might buy a used GTX 780 Ti and see what happens. Also, I'm thinking Rhino may be a better choice for the industrial engineering and mesh/surface projects.

As for liquid cooling, I'd advise being around it awhile. As CPU's- and GPU's even more so- have dropped in power use, a good quality fan/heatsink provides sufficient heat transfer on CPU's that are not overclocked. The Dell Precision with 2X 130W 6-core 3.7GHz CPU's has fairly small-diameter remote fans that blow on the heatsinks in a shroud. The modern fan/heatsinks have much larger fans- 120 and 140mm- one pushing air and the other drawing it from the other side. The real problem is that many liquid coolers make an odd, annoying noise. I would need the case to be under a desk several feet away. The custom configurations can be quite a project in themselves, and I would be constantly worried about failures- spilled liquid- and maintenance. Overall, my tendency would be to use the system with a strong fan/heatsink and monitor temperatures. The worst case would be that a $60-80 fan/heatsink would have to be replaced.

I enjoyed seeing Ronen Bekerman's site- an amazing standard.

Cheers,

BambiBoom

 
Well now, the 1080 is certainly very interesting. Newegg has them listed for $700, and the benchmarks vs the Titan X speak for themselves! It looks very impressive, and considering its price, it could allow for some other upgrades to the system.

I definitely overlooked the maintenance side of liquid cooling. I forgot that the coolant itself has to be replaced from time to time, which is a hassle in itself. The noise wouldn't really bother me so much. I guess it's one of those things where I was looking to do it for the sake of doing it, rather than for any real benefit. I would still like to do it to be honest, but it does seem to be a lot of hassle where realistically, the solution you suggested would be both sufficient and more practical. As far as monitoring temperatures, is that done through software or through actual temperature probes?
 


nbarrett,

For your use, the GTX 1080 and 1070 are worth considering, as these will be the fastest in both 3D modeling and GPU rendering. On Passmark there are now 27 systems tested using the GTX 1080. How about:

Rating: 6849
CPU: 12434 (i7-6700K)
2D: 1035
3D: 16098 (12065 is average so that is possibly a pair- not sure)
Mem: 3775 (32GB)
Disk: 12545 (Samsung 950 Pro 256GB NVMe)

For comparison, the average Titan X 3D rating is 10850 and the GTX 980 Ti = 11570; the fastest Quadro, the M6000 ($5,000), scores 10580, and my favorite for use by mortals, the M4000, scores 6455. The Quadro K4200 I use now scores 4555 in my system.

The lessons from this are: 1. The GTX 1080 is quite amazing. 2. Use a CPU with the highest possible single-threaded performance. 3. Use a Samsung 950 Pro 256GB NVMe.

Liquid cooling: that is useful for overclocked systems, but modern Xeons are designed for continuous running at full performance- for example in servers- which is why the multipliers are locked against overclocking; plus, power use continues to be reduced, which means less heat. Servers do typically have roaring general case fans, but in a workstation, I think something like this would accommodate continuous rendering on a hot day:

Noctua NH-D14 120mm & 140mm SSO CPU Cooler > $73

Airflow: 64.96 CFM (NF-P14) / 54.36 CFM (NF-P12)
Airflow with U.L.N.A.: 49.29 CFM (NF-P14) / 37.34 CFM (NF-P12)

Noise Level
Acoustical Noise: 19.6 dB(A)(NF-P14)/ 19.8 dB(A) (NF-P12)
Acoustical Noise with U.L.N.A.: 13.2 dB(A) (NF-P14) / 12.6 dB(A) (NF-P12)

And compare:

Corsair Hydro Series H100i GTX Extreme Performance Water / Liquid CPU Cooler. 240mm > $105

Fan Air Flow
70.69 CFM

Fan Noise
37.70 dBA

As dB are logarithmic, 37dB is proportionally much louder than 19dB, but not terrible. It could probably be heard 12' away. I simply don't like computer noise- any. It's possible with custom systems to have them be relatively quiet, and there is something to be said for tying into the liquid system with water blocks on a top-performing GTX, but over time with computers I've become simplicity-oriented. When I added the PERC H310 RAID controller to the Precision T5500, it took many tries to get the system to recognize it as the boot drive, and so far it has never been able to see the system image from the z420, whereas it did instantly on the T3500 where I'd installed the PERC 6/i taken from the T5500!
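To put numbers on the logarithmic comparison, here's a quick sketch of the arithmetic using the two figures quoted above. (The "+10 dB sounds about twice as loud" rule is a common psychoacoustic approximation, not an exact law.)

```python
# Gap between the Corsair (37.7 dBA) and Noctua (19.6 dBA) figures above.
delta_db = 37.7 - 19.6  # 18.1 dB

# Sound intensity scales as 10^(dB/10).
intensity_ratio = 10 ** (delta_db / 10)

# Rule of thumb: each +10 dB is perceived as roughly twice as loud.
loudness_ratio = 2 ** (delta_db / 10)

print(f"intensity: {intensity_ratio:.0f}x, perceived loudness: ~{loudness_ratio:.1f}x")
```

So the liquid cooler emits roughly 65 times the sound energy of the Noctua, and subjectively sounds maybe three to four times as loud.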

As for monitoring, I like HWMonitor- free, and it tells the story. For example, currently the z420 CPU is running between 43 and 54C at 3.8 to 3.9GHz, and the Quadro K4200 between 43C and 46C.
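If you'd rather script the monitoring than watch a GUI, here's a minimal sketch. It assumes the third-party psutil package; note that psutil's sensors_temperatures() only reports on some platforms (notably Linux), so on Windows a GUI tool like HWMonitor stays the practical choice.

```python
# Sketch of scripted temperature monitoring (assumes psutil is installed).

def summarize(readings, warn_at=80.0):
    """Format a mapping of sensor label -> degrees C, flagging hot sensors."""
    lines = []
    for label, temp in sorted(readings.items()):
        flag = "  <-- WARM" if temp >= warn_at else ""
        lines.append(f"{label}: {temp:.0f}C{flag}")
    return lines

def read_cpu_temps():
    """Return {label: current_temp_C}, or {} when sensors are unavailable."""
    try:
        import psutil  # third-party: pip install psutil
    except ImportError:
        return {}
    get_sensors = getattr(psutil, "sensors_temperatures", None)
    if get_sensors is None:  # attribute is absent on unsupported platforms
        return {}
    out = {}
    for name, entries in get_sensors().items():
        for i, entry in enumerate(entries):
            out[f"{name} {entry.label or i}"] = entry.current
    return out

if __name__ == "__main__":
    # Fall back to demo values (roughly the z420 figures above) if no sensors are visible.
    temps = read_cpu_temps() or {"CPU core": 47.0, "GPU": 44.0}
    print("\n".join(summarize(temps)))
```

A script like this could be left polling in the background during an overnight render and log any reading that crosses the warning threshold.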

As the E5-1660 v2 is a 6-core, 130W part that runs at up to 4GHz and is cooled by a fan blowing from one direction in a shroud towards a heatsink, the Noctua NH-D14 should maintain a good margin of safety on any Xeon E5.

Cheers,

BambiBoom

 
Yeah, there's actually a nice video by NVidia of Adam Savage (from Mythbusters) testing out NVidia's new VR Fairground on a GTX 1080, and it's impressively smooth and detailed. Benchmarks on things like Octane are also staggeringly good, and at ~$700 it seems a good option to go with something along the lines of 4 x GTX 1080s and, as mentioned, a top-end Xeon CPU. An SSD is definitely the way to go, and some high-speed RAM would do the trick.

As for the cooling, I had a look at both Corsair and Thermaltake this morning. Particularly with Thermaltake, they have some exquisite cases with a nice cross-ventilation-type cooling setup: 2 x 200mm (or possibly 160mm) fans at the top of the chassis and 2 at the rear. Paired with some good CPU cooling, that seems a good option. I'd be tempted to slap on some RAM fans and maybe some extra GPU fans too. It might be overkill, but considering the cost of some extra cooling, it seems a relatively good option.

As for monitors, I already have 2 x Samsung 17" monitors in work, which are pretty good. They're only a year old, so I'll probably stick with those for the moment.

The other thing you might be able to advise me on is the motherboard. I was looking at the ones on Newegg, and there's a selection of SuperMicro dual-CPU E-ATX motherboards. BUT, what I don't quite know is what I should be looking for when comparing them. There's a few hundred dollars' difference in prices, but I'm not quite sure what I should be looking at. Even when I narrow it down to just E-ATX dual-CPU motherboards, there's still a selection of 9 or 10.
 


nbarrett,

Certainly the GTX 1080/1070 are already causing a stir, and I think the 1070 is still not out for a few more days/a week. When I was visiting a research facility a few weeks ago they had an Oculus Rift running a test augmented-reality program- CG objects in the real space- and it was uncanny. That was interesting. However, I tried the roller coaster ride and I was seasick in about ten minutes. I've never been motion sick in my entire life- except for a few minutes during "Avatar" and on the Rift roller coaster. I can see fantastic possibilities in the scientific world for VR, and I assume it will be used in the military/law enforcement, medicine, and all kinds of things.

The CPU cooling is an important component. One thing to keep in mind is to decide on the motherboard first, as it should be chosen with an eye on the clearance between the RAM slots and the CPU coolers. There are boards- and it's almost inevitable on a dual-CPU board with 16 RAM slots- where the RAM slots are crowded up against the CPU sockets. If the CPU cooler doesn't have a narrow base with some height to it, the RAM will have to be selected for its height. Overall, it's better to find a motherboard with a generous layout.

These aspects, plus the number of subtly different iterations of motherboards, are the reason I recommended:

Case /Motherboard /Power supply : SuperWorkstation 7038A-I Dual LGA2011 900W 4U Rackmount/Tower Workstation Barebone System (Black) >

http://www.superbiiz.com/detail.php?name=SY-7038AI >$650

And there is another version for about $750 or so. As the case, fans, MB, CPU coolers, and power supply are included, everything is solved. The motherboard included (X10DAi) accommodates three GPU's, and while it's not the very highest performer (the ASUS WS gets a bit more out of the CPU, but is almost $500 by itself), it's close and ultra-reliable. With the Superworkstation, you just plug in the CPU(s), RAM, GPU(s), and drives- very fast and easy- and those systems are rated to be especially quiet.

In my view, starting with a strong, well-thought-out barebones system solves so many detailed issues that are distracting- like the cooling and power supply- but really fairly generic. The motherboard has to be the right socket, have a good chipset- there are really only two choices for dual Xeons- and have the right array of PCIe slots. With all that fuss already sorted, the user is free to concentrate on the particular choices of CPU, GPU, and drives.

I would say that the most direct and efficient method for the best cost-and-effort/performance system would be to select a Superworkstation and then focus all the remaining effort over time on the effectiveness of the CPU, GPU, and drives (the RAM being less critically different) in the programs and for the project scale.

Very good discussion.

Cheers,

BambiBoom
 
Ok, so I think I'm FINALLY narrowing this all down. The next big thing will be to sit down with my boss about it. I didn't realize that the Supermicro board only takes 3 GPUs, so I've got a rank outsider option. Here's my FIRST DRAFT CONFIGURATION:

Motherboard:
Asus Z9PE-D8 https://www.asus.com/Motherboards/Z9PED8_WS/

CPU (Depending on budget, 2 of these):
Intel Xeon E5-2640 V4 2.4 GHz LGA 2011 90W BX80660E52640V4 Server Processor

HDD1:
SAMSUNG 850 EVO 2.5" 500GB SATA III 3-D Vertical Internal Solid State Drive (SSD) MZ-75E500B/AM

HDD2: (Note, this will primarily be for storing 3d models of trees, cars, people etc)
Seagate Desktop HDD ST4000DM000 4TB 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive Bare Drive

PSU:
EVGA 120-G2-1300-XR 80 PLUS GOLD 1300 W 10 yr Warranty Fully Modular NVIDIA SLI Ready and Crossfire Support continuous

Case:
Thermaltake Chaser A71 VP400M1W2N Black SECC ATX Full Tower Case

Ram:
CORSAIR Vengeance LPX 128GB (8 x 16GB) 288-Pin DDR4 SDRAM DDR4 2133 (PC4 17000) Memory Kit - Black Model CMK128GX4M8A2133C13

CPU Cooling:
Noctua NH-D9L 92mm SSO2 Low-profile Premium CPU Cooler, NF-A9 PWM Fans

GPU:
4 x NVidia 1080 (these are out of stock in most places like Newegg etc. I'll wait till they become more available, but they seem to be the way to go)

Now that's all just a rough draft for the moment, but as a starting point, it seems to tick all the boxes. Coming out at approximately 5-7k (Canadian) depending on whether I go for things like a second CPU.
 


nbarrett,

The row of GTX 1080's is interesting. However, look more into the efficiency of SLI, as the benefits drop off quickly at three GPU's, and for example Adobe CS/CC will not recognize more than one GPU- not even the GTX 690, which has two GPU's in one chassis. As with the CPU, consider a motherboard supporting three but start with two and see.

The ASUS Z9PE-D8 WS is a very good performer, but it does have a potential layout conflict if the goal is 4 GPU's. There are a number of motherboards with 7 x16 PCIe slots, but a double-height GPU will cover the adjacent slot, and the last slot will not have clearance. The Supermicro X10DRG-Q is the only motherboard that realistically accommodates four GPU's:

Motherboard: Supermicro X10DRG-Q (4X PCIe x16 GPU slots) > $499 (Superbiiz)

This board is, however, also a proprietary size. While there will be cases that can be modified- I'd suggest a CaseLabs, which are also great designs- there's also the Supermicro 7048GR-TR, which is called a "GPU Superworkstation", and that is a great solution as it provides the case with 8X hot-swap drive bays, the X10DRG-Q MB, CPU coolers, and 2000W redundant power supplies. The 7048GR-TR is expensive at $1,700. Serious stuff, but really, having four GPU's is serious.

The Xeon E5-2640 v4 10-core @ 2.4/3.4GHz is a good choice. On Passmark a single one has a CPU mark of 15776, and a dual configuration scores 22730. The single-thread rating is 1860, though, and for the 3D modeling use I think that is not sufficient. An E5-1620 4-core @ 3.6/3.8GHz- $75 used today- has a single-thread mark of 1930. For that use I'd recommend a pair- well, starting with one- of the Xeon E5-2687w v2, 8-core @ 3.4/4.0, with a CPU mark of 16666 (dual: 24501) and a single-threaded mark of 2059. Compare this to the Xeon E5-2640 v4. The thing is, all the data suggests that the relatively rare programs that are multi-threaded appear to peak in efficiency at 5-6 cores (Solidworks excepted), and so much of the time- in 3D modeling- the single-threaded rate is the critical one. Of course, the E5-2687w v2 is LGA2011 and not LGA2011-3.
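To make the single-thread cost/performance point concrete, here's a back-of-the-envelope comparison using only the Passmark single-thread marks and prices quoted in this thread (the used E5-2687w v2 price isn't given above, so it's left out):

```python
# Single-thread Passmark marks and prices as quoted in the thread.
cpus = {
    "Xeon E5-2640 v4 (new, Newegg)": {"single_thread": 1860, "price_usd": 985},
    "Xeon E5-1620 (used)":           {"single_thread": 1930, "price_usd": 75},
}

for name, d in cpus.items():
    marks_per_dollar = d["single_thread"] / d["price_usd"]
    print(f"{name}: {marks_per_dollar:.1f} single-thread marks per dollar")
```

For interactive modeling, where the single-thread rate dominates, the old budget chip is by far the better value; the v4's worth is in its 10 cores for rendering runs.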

The Noctua NH-D9L appears to be a very good choice- and it has several configurations.

Of course, a lot will happen in the next few months and I'm watching the Xeon E5- v4 releases as the first offerings are impressive.


Cheers,

BambiBoom





 
With the GTX 1080s, four is probably a bit much, but they are considerably more affordable than I was anticipating. Also, although the performance benefit drops off, I'm thinking strategically here, as VRay allows you to set a GPU memory limit. That way, I could have a render running and allocate some memory for doing modelling in the background. At least that's the theory.

I did a bit of digging on the motherboard with clearance and found an interesting image: http://rog.asus.com/wp-content/uploads/2012/02/ASUS-Z9PE-D8-WS.jpg
Now that to me looks like the GPU is in the slot closest to the ram slots. It's tight, but it looks like it fits. What do you think? It might save me a few bucks versus the 7048 you posted.

Also, in waiting for the new E5 v4 processors, I suspect there's a chance that other v4s might come down a bit in price, so I could potentially step up to the 12- or 14-core options. The single-thread rating for the one I picked doesn't bother me a whole lot. 3ds, for example, has really improved its multithreading support, and I've just sent an email off to a friend of mine who has a Xeon-based machine, just to find out which one he has. I know he does 3ds models as complex as mine. If his 2-year-old machine can handle it, this one should be ok.... I think.
 


nbarrett,

Yes, the ASUS WS does have the required slots, and these are reference-spaced, but the issue is that using all double-height cards means the GPU's will cover all the other slots, leaving no room for any other card. Also, the last card probably will not have enough room on the end and will conflict with components mounted on the edge of the board. The Supermicro X10DRG-Q spaces the slots so there can be 4 double-height GPU's, and the other PCIe slots are grouped together separately so every slot is available.

If the single-threaded performance of the E5-2640 V4 is acceptable, then I think it's a great choice. At $985 (Newegg) the cost /performance is impressive.

I find some references to multi-threading confusing. I was working recently on a big Sketchup model and I had to do an intersect-faces on the whole 58MB model, which was exploded, and this took almost 1-1/2 hours. However, when I watched the Task Manager Performance graph, it appeared as though 6 of the 12 threads on the E5-1660 v2 were working on the problem in pulses- 10% usage interspersed with incredibly short 100% spikes. I need to study!

The E5 v4 story so far has been encouraging, as I didn't feel the E5 v3's were an important advance over the v2's, except that M.2 took off. At the least, if a CPU has more but slower cores it should have a different designation. For example, the E5-1660 v2 (6-core, 3.4/4.0GHz) has both a higher CPU and single-threaded rating than the v3 (8-core, 3/3.5GHz). Probably Intel feels the market appeal is for CPU's as a line, but I think the v-naming should be abandoned. How about series, number of processors, number of cores, base clock speed, turbo clock speed, socket? So the E5-2640 v4 would be: "E5-2.10.24.34.2011-3"- more 21st century. Here ends the rant.

Cheers,

BambiBoom

 
Yeah, I'm now looking at other motherboards. It's tricky because I don't really want to go down the route of a proprietary-sized board and a special case, but finding a dual-CPU board with enough space for the 4 cards is definitely tricky. However, with the WS, there's this image: http://cdn.overclock.net/9/9e/9e52f963_DSC03434.jpeg
That looks very encouraging I think! However, there's also an EVGA one, the SR-X, which looks like this when populated: http://i.imgur.com/ofQnjLJ.jpg
Granted, the second one has liquid cooling, but I kind of think the Asus offers a little more space. The difficulty is sourcing the motherboard- it seems to be sold out everywhere, so I'm wondering now if it's discontinued.

On another note, I find myself back looking at liquid cooling. I know it's not really necessary, and having to drain/clean the loop at least once a year does sound like a pain, but now I'm looking at closed-loop systems, which seem to require less maintenance. I'm still with you on the not-really-necessary front, but it's indulgently tempting! (although realistically, doubtful that I'll do it.)
 


nbarrett,

The EVGA Classified SR-X does extract a lot from the first- and second-version Xeon E5's, but it's from 2012 and I'm quite sure it's no longer being made. It looks fast standing still, but that closely spaced row of PCIe slots still means probably only three double-height GPU's and nothing else.

As you have time to make hardware decisions, my thought at this point is to cycle back from focusing on hardware and more fully re-review the way the software structures its performance, the priorities of use, and the expectations for performance. It's quite easy today to end up with a variety of software that demands high performance in every way.

Any system is a compromise of cost/performance, and any system will be better at some tasks than others and work faster with particular software. In the priority of use, my attitude is that I want the real-time working to be fast, as I'm waiting to do something else (= high single-threaded performance). With rendering/processing/analysis/simulation runs that take a lot of time, once the run is set up, I'm not going to be watching, and if it's multi-threaded or responds to GPU coprocessing, that is almost a different system. This is why I've ended up with the HP z420 with a 6-core 4GHz E5 and a Dell Precision T5500 with 12 cores at 3.6GHz- LGA1366, $1,600 CPU's that now cost $200.

If your firm is doing a lot of rendering, animation, videos, etc. and has more than 6 people designing/drafting, I'd say set up a dedicated dual-CPU/multiple-GPU rendering engine with a queue- it just plugs in as a node on the office network. Then the modeling system is a single-CPU, single-GPU, single-thread hotrod- whatever 6- or 8-core E5 has the highest single-threaded rating (it's the E5-1680 v3 8-core at the moment). The rendering system could be a Supermicro Superworkstation with a pair of used E5-2690's (8-cores @ 2.9/3.8GHz- those cost $250 these days), 128GB of RAM, three GTX 970's, and a very fast disk system with a RAID 5 OS/programs volume and RAID 5 storage. This would not be the absolute fastest rendering possible, but for a reasonable cost it frees the modelers, drafters, and designers to continue with real-time tasks. All the final output is also in a single location and protected. There is some duplication of hardware, but the work time is made much more effective.

Trying to make a single, sort-of-ultimate system that does everything is going to involve more compromises unless the budget is $12,000-$14,000- unnecessarily expensive- and if it's used by only one person, the cost of it being unavailable for real-time work is costing the firm heavily in labor.

By the way, how long is a single rendering taking to run currently?

Anyway, a thought to take a step back a bit to priorities and hardware to labor allocation.

Cheers,

BambiBoom


 
Yeah the software side of things has been a bit of a discussion in the office. I'm new enough to the practice, so right now it's a mix of software I'm more familiar with and a learning curve of new ones. For example, Revit is not my strong point, but I'm working on projects in Rhino and Grasshopper so we can parametrically manipulate geometry.

The software side of this machine is going to come down to three brackets: daily use, presentation, and advanced visuals. For daily use, it will be Revit, AutoCAD, Rhino with Grasshopper, and Sketchup. AutoCAD and Revit will be no problem for it, as even my current machine can handle these with ease. With the other two, unless you're rendering (I'm currently using VRay but will hopefully be moving to Octane), their demand tends to be on the CPU/RAM. With Grasshopper, a more powerful CPU/RAM combo allows for quicker computations. The easiest way to explain this is that in some instances it can figure out the geometry instantly; in others, a single change can take 10-15 mins.

On the presentation side, I mean mostly still shots from renders. This will be either VRay or Octane, and our aim is to move to 3ds, as there are some plugins I've used and worked with, like Forest Pack, that would make a big difference. This is where software has a big impact. VRay is CPU-heavy, although my understanding is that it's gradually moving towards a split between CPU and GPU. Fortunately for VRay, you can allocate the amount of memory to dedicate to rendering, and they recently removed the limit on this (it used to be 8GB max). On the other hand, Octane is pure GPU as far as I know. Videos are not a great concern. I've used Lumion for years, and this is most likely the way we will go. Again, very light on resources.

Finally, there's what I call advanced visuals. This is things like the VR side. For this, the workflow will most likely go Rhino > Revit > 3ds > Unreal Engine. The good thing about our workflow here is that the other members of the office can easily collaborate at different times. Rhino can sort of sync to Revit, and Revit can sync to 3ds. At the end, we're aiming to have a full VR model in Unreal Engine. I've seen very impressive videos of Unreal running on a single GTX 1080, so I think the system should be fine for this.

There's another point on rendering that makes a big difference. The benefit of a high-spec single machine is the use of real-time rendering with VRay or Octane. It wouldn't necessarily be a final render during the day, but it makes it easier to set up 5 or 6 scenes, render them overnight, and still be confident of how the final render will come out.

I think the real objective of this machine is the VR stuff. My current machine- a 3rd-gen i7, 32GB RAM, and a measly 1GB card- can render a decent image in 3-4 hours. But I'm having to hold back on renders because my GPU struggles. We're not aiming for 30-second final renders, because I think the priority is VR.
 
It's just gotten even more confusing. The GTX 1080 is only going to support 2-way SLI, although people have been saying that, based on the layout of the SLI bridge ports, it will support at least 3-way. So I wonder, if I got 2, is it likely that in due course it'll support 3 or 4? And if it does, would a software/driver update allow me to add 2 more 1080s, or would it mean buying 4 new cards?
 


nbarrett,

In my view, it would be worthwhile to refine the complexity/cost/performance expectations for the new system. For example, the efficiency of SLI in graphics acceleration has been described as falling off seriously after 2 GPU's. Very related to this is the ability of programs to utilize multiple threads efficiently: Autodesk and Adobe programs that can use multiple cores appear to peak in efficiency at 5-6 cores and begin to drop off at 8 cores, and in the case of Adobe CS/CC, performance can actually drop in dual-CPU systems.
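One way to see why multi-GPU (and multi-core) returns diminish is Amdahl's law. The parallel fraction p below is an assumed illustrative value, not a measured figure for SLI or any particular renderer:

```python
def speedup(n, p):
    """Amdahl's law: ideal speedup on n processors when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# With 90% of the work parallelizable, each added GPU buys less than the last.
for n in (1, 2, 3, 4):
    print(f"{n} GPU(s): {speedup(n, 0.9):.2f}x")
```

With p = 0.9 this gives 1.00x, 1.82x, 2.50x, and 3.08x- the fourth GPU adds only about 0.6x where the second added about 0.8x, which matches the reported SLI fall-off after two cards.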

The other, more personal aspect is the expectation for performance. In my view, with a new system, have the initial configuration be the simplest possible. For the proposed system: one CPU, half the target maximum RAM (64GB), a single GPU, and no RAID. I set up a new system on a mechanical HD so I can repeatedly optimize the disk by defrag, consolidation, and placing system files first, and then migrate to the SSD. It's hardly necessary with SSD's, but I noticed that when first done, the Intel 730 scored 4706 on Passmark where the average for that drive is 3950 (though many seem to be in the 4300 range).

Assemble, configure, test, and use the system for at least a month, sorting out and updating software, and refining the disk and startup etc. In this way, any performance deficit will become obvious, as, more importantly, will the effectiveness of the changes. If there is no reference, then additions and changes may be unnecessary or have an unfavorable cost/performance ratio. In this aspect, I keep thinking of the visit to the research facility a few weeks ago at which I learned that ultra-precise and extremely complex particle accelerator modules were being drawn in Siemens NX on Dell Precision T3500's with a Quadro K6000. Only a LAN trip away, they could have used 11X paralleled dual 14-core Xeons with 4X Tesla K20X processors each- but didn't, as that would be dramatic over-specification of hardware for the work. That restraint is a signal of extremely refined understanding of the task and focus on the necessary hardware.

Cheers,

BambiBoom



 
Realistically, the Adobe/Autodesk software side of it is not a major concern. The extent of the Adobe stuff we use is minimal: a bit of post-production in Photoshop, and that's really it. With Autodesk, it's Revit, and we're not rendering in it, so if my own machine can handle it, this one will be fine. The only remotely demanding Autodesk software will be 3ds. They're getting much better at supporting multithreading, and even then, the actual final renders- still shots from 3ds- will be done with Octane or VRay, both of which are GPU-heavy. On the actual modelling side, as the complexity of the model increases, the demand falls on the CPU and RAM as opposed to the GPU, which really is just used for anti-aliasing. Autodesk still has a bit to go with multithreading, but they've been making progress, which is encouraging at the least. Video rendering will be done in Lumion, which uses a mixture of both CPU and GPU, and we do little or no post-production on our videos. As far as the system goes, the spec is definitely enough to handle Lumion, Revit, etc. It's 3ds with things like RT rendering that will be the real test.

I like the idea of doing a mid-point system and evaluating it over a few months. I'm already lined up to do 4 videos in Lumion on separate projects, but my computer's CPU and GPU make it a time-consuming challenge. That being said, they're nearly ready to give the go-ahead on this machine. Once it's up and running and we do the evaluation, it'll tie in with me testing out things like Octane, 3ds, and UE. We can see how the machine handles it and make adjustments as necessary.

The good thing with the GTX 10 series is that it IS 3/4-way SLI compatible; they just won't be releasing a new high-bandwidth bridge any time soon. So you can do it, but it's not as straightforward as anticipated. I'm actually now leaning towards the 1070, because it's a bit cheaper and still offers more than enough performance for our current situation. I think the 1080 is more "gaming"-driven, but for the architecture field, 2 x 1070s in SLI should cover us for what we're looking to do.