Workstation Graphics: 14 FirePro And Quadro Cards



Are you looking for the world to end or something? :D
 

mapesdhs

Distinguished
BambiBoom writes:
> mapesdhs, Many excellent points.

Thanks! 8)


> Yes, I agree completely that pure clock speed is useful and desirable
> in workstations, my point was that if I were predominately rendering,
> I would rather have more cores / threads than a high clock speed. ...

There's the irony though; due to the efficiency losses when spreading
a rendering load across multiple cores/threads, a higher-clocked CPU
with fewer threads does a lot better than one might expect.

For example, what score do you get from your Dell T5400 for running the
Cinebench 11.5 benchmark? My Dell T7500 gives 10.90 (2x Xeon X5570,
8 cores, 16 threads @ 3.2GHz), whereas my 5GHz 2700K gives a very
impressive 9.86 despite it only having half as many cores. Meanwhile,
the Dell is beaten even by a stock 3930K (scores 11.13) while my current
3930K oc (4.7GHz) gives 13.58. Of course there are differences such as
ECC RAM, etc., but even so, the way the higher clock makes up for having
fewer cores is fascinating. Have a look at my CPU results:

http://www.sgidepot.co.uk/misc/tests-jj.txt
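
A quick back-of-the-envelope way to see that clock-vs-cores trade-off, using the scores quoted above. The per-core normalisation is my own simplification and ignores turbo, SMT and memory effects:

```python
# Rough scaling comparison from the Cinebench 11.5 scores quoted above.
# Purely illustrative: real scaling also depends on memory bandwidth,
# turbo behaviour, SMT and scheduler overheads.

systems = {
    # name: (CB11.5 score, physical cores, approx clock in GHz; base clock for stock parts)
    "2x Xeon X5570 (T7500)": (10.90, 8, 3.2),
    "i7-2700K @ 5.0GHz":     (9.86,  4, 5.0),
    "i7-3930K stock":        (11.13, 6, 3.2),
    "i7-3930K @ 4.7GHz":     (13.58, 6, 4.7),
}

for name, (score, cores, ghz) in systems.items():
    per_core = score / cores
    print(f"{name:22s}  {per_core:.2f} pts/core  {per_core / ghz:.3f} pts/core/GHz")
```

The 5GHz 2700K's per-core throughput works out to roughly 1.8x that of the dual-Xeon box, which is why half the cores nearly close the gap.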

Since high clocks benefit certain pro apps to such a strong
degree, it's no wonder an oc'd consumer chip gives good results when
paired with a pro GPU.

The sad part is it looks like there will not be a desktop enthusiast
8-core Haswell. Sheesh, two generations on and still no 8-core for X79. :\

Note that some 'consumer' enthusiast X79 boards do support a range of
XEONs, eg. the Asrock X79 Extreme11 can use the 8-core SB-EP E5, and the
board does support ECC RAM, but alas those CPUs are costly and
don't have unlocked multipliers, so an oc'd 3930K would outperform a
single E5-2687W. Such CPUs make more sense on multi-socket boards where
as you say they really fly (two 2687Ws score about 25+ for CB11).

It's likely that being able to run the RAM on my non-Dell systems at
much higher speeds helps for some tasks as well. The T7500 is locked to
1333, whereas I'm running 2400 with my 3930K, and 2133 with the 2700K.


> ... But yes, I'd love a couple of twelve core Xeons at 4.5GHz. ...

Compromise - sell your soul, get a UV 2000. ;D


> 12-15 core, use DDR4, and be quite fast, though I've not heard any
> specific number. Intel seems to do development from the lower speeds
> at first.

Progress does seem somewhat slower now though. Lack of competition once
again perhaps.


> Your comments are also very welcome as you mention some of the
> important experiential qualities that come into play when using
> workstation applications. ...

Thanks! Many people who don't use pro apps seem to think judging one pro
product vs. another is a simple benchmark comparison process, but as you
clearly are aware from your own experiences, reality is a lot more complex.

One of the most impressive 3D demos I ever saw was on an SGI Onyx RE2
almost 20 years ago (1995 I believe; I was doing Lockheed-funded
undersea ocean systems VR research at the time). A multi-screen
simulation system created by BP, it depicted a small section of an oil
rig, complete with real-time shadows (something the BP guys thought was
very important for safety testing and assessment), rendered with full-
scene subsample AA, etc. See:

http://www.sgidepot.co.uk/misc/oilrig.jpg

The update rate was about 7 to 10Hz. At first I thought that was kinda
slow, but then the BP guy explained how the system worked...

They did not have the model available in the native IRIS Performer
database format used by the gfx system for rendering real-time 3D
scenes; instead, they only had their multiple internal proprietary
databases which their engineers use every day for their work, eg. steel
infrastructure, electrical piping, gantry ways, water supplies, emergency
systems, cooling ducts, etc.

BP wanted something that could bring these all together to give them a
broader picture and to reveal things which the separate databases couldn't
show, eg. if a gas feed pipe in one database was spatially going to
interfere with a gantry way from another database. Preventing such issues
can save hundreds of thousands, perhaps millions, when a maintenance job
includes the attendance of other sea vessels at the rig.

Thus, the system combined and converted ALL of these databases into a
Performer database again and again, for every single frame.
Though the frame rate was low at the demo (note the later IR gfx runs it
at 60Hz no problem), I was more impressed by it than some of the other
demos because I realised the others were not pushing what the system
could really do (at least not according to the tech guy I spoke to
afterwards who turned out to be one of the designers of the later IR
gfx), whereas the oil rig demo was showing something fundamental about
the system, namely that it could handle very complex real-life problems
that don't match the simplistic idea most people have of what a 3D
task must be (ie. pushing polygons).

It was obvious talking to the BP guys afterwards that they were very
proud of what they'd achieved. They said that the model, good though
it was, only used 0.5% of the available image data since the full oil rig
database contained 3.5 trillion triangles. Their system was doing
custom LOD management to stop the whole demo's complexity racing away.
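
Purely as a conceptual sketch (not the actual BP/IRIS Performer code; the budget figure and structure are invented for illustration), the per-frame combine-and-LOD idea looks something like this:

```python
# Conceptual sketch only -- not the actual BP/IRIS Performer code.
# Each frame: gather objects from every engineering database, convert
# them into the renderer's scene format, and apply a triangle budget
# (crude LOD management) so total complexity stays bounded.

from dataclasses import dataclass

TRIANGLE_BUDGET = 2_000_000            # hypothetical per-frame cap

@dataclass
class SourceObject:
    name: str
    lods: list                         # available triangle counts per LOD

def pick_lod(obj, remaining):
    """Choose the finest LOD that still fits the remaining budget."""
    for tris in sorted(obj.lods, reverse=True):
        if tris <= remaining:
            return tris
    return 0                           # nothing fits; cull the object

def build_frame(databases):
    used, frame = 0, []
    for db_name, objects in databases.items():     # steel, piping, gantries...
        for obj in objects:
            tris = pick_lod(obj, TRIANGLE_BUDGET - used)
            if tris:
                frame.append((db_name, obj.name, tris))
                used += tris
    return frame, used

databases = {
    "steel":  [SourceObject("girder_A", [5_000, 80_000, 900_000])],
    "piping": [SourceObject("gas_feed", [2_000, 40_000, 1_500_000])],
}
print(build_frame(databases))
```

The real system also had to do the format conversion and spatial queries every frame, which is why host CPU and I/O mattered as much as the graphics pipe.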


Though this is an old example, these concepts of real-world complexity
still apply today. Defense imaging, GIS, medical, aerospace and other
tasks involve very large datasets (tens to hundreds of GB) and various
kinds of host preprocessing, often on a per-frame basis just like the
oil rig system above. Running these kinds of tasks on gamer hw isn't
remotely viable.


> ... One of the problems in this kind of
> discussion is that those with gaming oriented systems have not
> experienced use of 3D CAD and rendering applications to the level
> where the workstation cards become not only useful, but mandatory.

Exactly! I've talked to lots of people who use professional systems for
an incredibly wide range of tasks. Gamers can no doubt appreciate the
basics of the demands of, say, 3D animation and video editing, but other
tasks in industry are a lot more demanding and often quite unexpected in
their performance behaviour. Everything from controlling textile
knitting/printing machines (datasets as large as 5GB+) to more efficient
cutting of pork carcasses involves complex real-time processing
of 2D/3D data with great precision, where reliability is critical.

Pro-type benchmarks are useful, but they can only be one of many data
points used in the decision making process. In many cases other factors
are more important, especially reliability. I know places still using
SGIs that are 20+ years old because reliability is so important,
especially in the fields of medical imaging, aerospace and industrial
process control. I hear from companies who've been running the same
system for almost 15 years non-stop. :D When they finally decide it's
time to replace it with an all-new setup, they struggle to find anything
remotely similar in terms of long-term reliability. Someone at BAe told
me they have to design systems which they will have to maintain for as
long as 50 to 75 years.


> Especially important are the viewports , artifacts and reliability.

Back in about 2000, when SGI really started to lose out to emerging pro
cards for PCs, I had an opportunity to compare an SGI Octane2 V12 to a
P4/2.4 PC with a GF4 Ti4600, running Inventor. The Ti4600 was about five
times faster than the V12; in the research department in which I worked
at the time, this difference was a key factor in their opting for PCs as
upgrades over the existing lab of old SGIs. However, the image quality
of the GF4 was absolutely dreadful. The texture mipmapping was rubbish,
the geometry precision was poor, etc. But the GF4 was massively cheaper,
and that's where industry as a whole was heading - low cost above all
else. At that time, SGI kept opting for ever greater image quality &
fidelity such as 64bit colour, instead of raw performance improvements,
even though many customers especially in film/TV/CAD wanted the latter.
In the end, this mistake killed their gfx business.

Anyway, although consumer cards have come a long way since then, the
same type of quality issues still apply today, though these days
performance differences are typically related to driver optimisations.

Also, sometimes the obvious choice based on a simple test can lead to
unexpected issues. In 2002, after upgrading from SGI VW320 systems to P4
PCs with the above Ti4600, the researchers found the PCs to be as much
as one hundred times slower than the old SGI dual-PIII/500 VW320s. The
cause was the way in which the researchers had created their large 3D
models. Exploiting the VW architecture, which allows "unlimited" textures
to be used, they had built urban models with a lot of texture (typically
200MB+) but not that much geometry, using a great many large composite
16K textures as a means of referencing multiple subtextures in the
models. This worked fine on a VW320 (since main RAM
and video RAM are the same, so loading a texture means just passing a
pointer), but on a normal PC back then it was VRAM thrash-death,
dropping the frame rate from about 10 to 15Hz on a VW320 to less than
one frame per minute on the PC. So the researchers had to completely
redesign their models: no more composite textures, lots of detail
management, always mindful of the 128MB VRAM limit on the GF4. Only
once this was done did they finally see the big speed improvements
they'd been expecting.
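
To put rough numbers on why those composite textures were lethal on a discrete card (my own arithmetic, assuming uncompressed 32-bit RGBA texels and a full mipmap chain):

```python
# Why 16K composite textures killed a 128MB card: rough footprint of one
# texture, assuming uncompressed 32-bit RGBA and a full mipmap chain
# (~1.33x the base level).  Compression would shrink this, but not by
# enough to matter here.

def texture_mb(size, bytes_per_texel=4, mipmapped=True):
    base = size * size * bytes_per_texel
    return (base * 4 / 3 if mipmapped else base) / 1024**2

print(f"one 16K x 16K composite texture : ~{texture_mb(16384):.0f} MB")   # ~1365 MB
print("GF4 Ti4600 VRAM                 : 128 MB")
print("VW320: textures live in shared main RAM (UMA), so no copy is needed")
```

A single composite texture was roughly ten times the card's entire VRAM, so every frame turned into texture paging over AGP.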

The irony is that SGI's custom architecture - designed to solve certain
3D bottlenecks - ended up at least in this case making the researchers
kinda lazy in how they created their models. Perhaps these types of
experiences are why performance improvements today so often tend to be
from brute-force changes rather than anything really innovative. Note
that many ex-SGI people moved to NVIDIA, eg. when I was testing the O2
for SGI in the late 1990s, the SGI manager I reported to, Ujesh Desai,
is now a senior person at NVIDIA. Later, some of the same people then
moved to ATI and elsewhere.


> After using a Dell Precision T5400 with the original Quadro FX 580

That reminds me, I have an FX 580 I've not tested yet...


> ... and I could always add a second one in SLI if needed.

Such a shame that so many pro apps can't exploit SLI, and even when
they can, the driver lockouts linked to mbd/chipsets are infuriating.


> instead of x16. The navigation in my large Sketchup models is not
> blazing fast, but it doesn't freeze in Solidworks- in short all
> problems solved. ...

:cool:

How does the 5500 compare to the 5800? Makes me wonder if even a
used Quadro 4000 would give you a good speedup.


> mentioned, are among the most important aspects of evaluating
> workstation graphics cards and missing in a speed-only focus.

Indeed. Real-world decisions can be messy & rather involved.


> When Quadro K5000's are sold used for $1,000,...

:D

Ian.

 

mapesdhs

Distinguished
skit75 writes:
> This forum needs more tales from the field in this department, such
> as yours. ...

Most welcome! I just listen to what I'm told by those who do use pro
systems, in what I've learned is a bewilderingly wide variety of ways.


> ... The people who would actually be considering these pro
> cards need these examples to confirm what they already suspect and

That's why I've been doing so many tests, to try and fill in the gaps.
Never enough hours in the day though.


> the gamer/pc hobbyist system builders need these examples to
> realize they are talking about apples while standing in an orange grove.

:D

Ian.

 

Rob Burns

Distinguished
Why no 3DS Max in this article??? I've been waiting for years for exactly this comparison, but really wanted to see the results with 3ds Max. It's a very popular app and seems to have been getting neglected by TH lately.
 

FormatC

Distinguished

Take a look at the CUDA section (but I have only iRay). The Direct3D part is similar to AutoCAD (2D and 3D).
 

Rob Burns

Distinguished
I wondered if 3DS Max would be a similar story to the AutoCAD results, since both are Direct3D. It's hard to know for sure, since specific software drivers seem to create unique hardware preferences, but I guess I'll have to assume the results would be pretty similar. Anyway, thanks for a really great article; as a graphics professional I'd love to see more articles like this!
 

factory3

Honorable
Too bad that Cinema 4D wasn't included in this review. I am in the process of upgrading and would love to see a comparison between these cards. Does anybody have information on a review somewhere else?
 

somebodyspecial

Honorable


I'd say you have that backwards. They run four CUDA tests, then to be fair run nearly a DOZEN OpenCL tests, most of which don't make any money because they aren't used in practice, i.e. Basemark (useless; do the same in REAL APPS). I'd use Adobe Photoshop CS6 with CUDA as any pro would, or Premiere for video editing, again with CUDA. I'm not sure why they REFUSE here at Tom's to test OpenCL vs. CUDA and instead opt for FAKE synthetic stuff for OpenCL. Surely you can run some tests on photos or videos with the same operations in Adobe with CUDA vs. whatever with OpenCL (or even use OpenGL for AMD in Adobe). They pull out their Adobe license when it suits them, but not when it SHOULD be used, like here. Why Adobe is missing with CUDA is a mystery. You can directly compare anything in Adobe with CUDA vs. the same app running AMD and OpenGL etc., or pick another app that supports OpenCL to do the same operations (Gaussian blur is Gaussian blur as long as you're doing the same thing to the same pic etc... LOL). Why not compare the two?

Bitcoin mining... ROFL, whatever...
F@home... LOL... whatever.
LuxRender? How about testing Blender CUDA (iray or Octane?) vs. LuxRender OpenCL (yeah, Lux runs in Blender too!). Why so afraid to pit CUDA against OpenCL?
Maya can do the same. Pick an OpenCL rendering plug-in vs. a CUDA plug-in.

How about 3ds Max running iray or Octane/CUDA for NV vs. 3ds Max LuxRender/ratGPU OpenCL... Why afraid of this? WE WANT NV VS. AMD. NV hates OpenCL, so why keep using it with them? You should always run CUDA when available. DUH.

Cinema 4D can be done the same way: CUDA using Octane/iray/mental ray etc. vs. Lux/OpenCL. Again, why afraid of giving what we REALLY need, which is REAL comparisons? Not just cuda, cuda, cuda... and then 10 OpenCL-only benchmarks. WE WANT THEM COMPARED!

DAZ Studio, again you can go Lux vs. Octane etc. (OpenCL vs. CUDA here again!). What the heck is the problem?

Sandra for more OpenCL? Again, FAKE. Use a REAL app and test the same encrypt/decrypt operation with OpenCL for AMD and CUDA for NV. Jeez, just google CUDA AES and check the results. You act like NV has to use OpenCL and has no other options. So far from true.

The sum total of all the OpenCL stuff just highlights that there isn't much REAL software out there using it, or you wouldn't be using Sandra or Basemark for the majority of the tests. How about doing some of the above suggestions for some REAL results we can use, showing how good OpenCL or CUDA is for MAKING MONEY, which is the point of plunking down for a WORKSTATION GPU, right? These continued screwy tests make me laugh.
 

deanjo

Distinguished
It would have been nice of Tom's to at least have run the DP tests on the Titan while it was in DP mode, since that is a major feature of the card.
 

sb26

Honorable
"The driver situation in Inventor is identical to what we saw in AutoCAD 2013, since both applications are part of the AutoCAD Design Suite 2013 Premium."

This is not true-- Inventor and AutoCAD are two completely different applications, and the fact of their appearing together in *Autodesk* Design Suite Premium does not mean that they have any commonality at all in their graphics systems.

In addition, the screen shot appearing at the top of the article is not of Inventor, but of Inventor Fusion, a third distinct application which does not necessarily share graphics characteristics with either of the other two. Might a re-test be in order using Autodesk Inventor?
 

Horkheimer

Honorable
A really useful article, thank you Igor!

It would be really great to see some of these benchmarks for the latest GTX range.
Does anyone know where this could be found?
Thanks!
 

mapesdhs

Distinguished
Horkheimer wrote:
> It would be really great to see some of these benchmarks for the latest GTX range.

I don't see the need. There's already plenty of data to know that, as always, compared to pro cards,
the latest GTX cards will perform very poorly for the vast majority of pro apps, with the occasional
exception such as Ensight.

Have a look at my results and just extrapolate from the GTX 580 data and the Quadro 4000 data.
The conclusion is the same as always: apart from a few very specific exceptions, use a pro card for a pro app.


FormatC, any chance you could add K6000 numbers to the charts? Might NVIDIA be willing to lend
you one even though it's so new?

Ian.

 

cypeq

Distinguished

Quadro and Tesla are tuned for computation tasks (two different workloads).
Titan is tuned for high frame rendering speed with disregard for quality (glitches, artifacts, skipped frames, you name it), aka FPS in games.
It's only natural that Titan will win in many performance benchmarks.
Compare them to performance cars, which can also be tuned for different tasks:
an old-school muscle V8 can beat some Ferraris in a drag race; the question is how well it corners.
All three have the same GPU architecture but are vastly different.
 

xtzman

Honorable
So many tests, and you miss the one that matters most for many people:
3DS MAX VIEWPORT PERFORMANCE (Nitrous/OpenGL).

3ds Max is one of the most used 3D packages, and 98% of our time is spent moving objects in the viewport: scaling, rotating, moving the camera, etc.
 

folem

Distinguished
I'm not sure if some of the other comments noted this or not, but it is important to note. It is impossible to do compute - OpenCL or CUDA - on a Quadro. With Kepler, NVIDIA removed double precision capabilities from GeForce and Quadro entirely to accomplish their extreme efficiency. If you want to do compute on NVIDIA you have to buy a Tesla and run it alongside a K2000 or better - branded as NVIDIA Maximus. Yes, you get much better performance per watt that way, but it will also run you at least $2200. All of the AMD cards can do OpenCL (and soon Mantle), granted the W5000 and W7000 have horrible double precision performance. Also, only the K5000, W8000, and W9000 have ECC memory; the rest are plain GDDR5 (or, in the case of the K600, DDR3). All of the server GPGPU cards (Tesla or AMD S series) except the S7000 have ECC memory.
 

folem

Distinguished


You make a point of ECC memory, but of the cards listed here, only the K5000, W8000, and W9000 feature ECC memory; the rest are plain GDDR5. Also worth noting is that none of the NVIDIA cards are capable of performing GPGPU tasks - something most professionals would be interested in - with Kepler they reserved CUDA and OpenCL for the Tesla cards to accomplish their extreme power savings; whereas all of the AMD cards are OpenCL compatible (with Mantle coming soon), albeit the W7000 and lower are extremely slow in double precision performance.
 

mapesdhs

Distinguished


I don't know where you've gotten that from but it's absolute nonsense.
Just look at the AE benchmark thread on Creativecow, or the Arion
benchmark page; there are lots of Quadro-based results submitted
which are all obtained by using the GPUs for CUDA. Please edit your
posts to remove this false info.

Or would you like me to submit a screenshot showing a Quadro 4K
for CUDA in After Effects? Sheesh.

I've been running CUDA tests on Quadro cards for months, and I've
built mixed systems which exploited both types of card for CUDA,
eg. a Quadro 4000 and two 800MHz 1GB GTX 460s. However, it
makes sense to exclude the Quadro from the CUDA pool when the
GPUs are mismatched, eg. I recently upgraded the aforementioned
system to have three 3GB GTX 580s instead of the two 460s, so
now the Q4K is dedicated to OpenGL while the 580s are for CUDA,
which is a very nice optimal setup for AE (excellent performance).

Anyway, key point is you're wrong about Quadros and CUDA.

Ian.

 

folem

Distinguished


OK, I shouldn't have said no CUDA; they do have CUDA, just not for serious CAE or scientific computing. The Kepler cards do not have double precision capabilities; they can only use CUDA for integer, vector and single precision compute - with three exceptions: the Tesla K20, the Quadro K6000, and the Titan (the Titan and K6000 use the same GPU, with the Titan having one block of cores turned off). All of the cards you mentioned were older than Kepler, which had the double precision capabilities and thus used considerably more power. Kepler cards are the Quadro K series, Tesla K20, and GeForce 600 and 700 series. They also market the Tegra 4 (in the Shield and Surface RT 2) as Kepler-based even though it is an entirely different platform.

My source is on http://www.develop3d.com/, I don't remember the link to the exact article because it was well over a month ago that I first read it.
 

mapesdhs

Distinguished


Not all 'serious' work needs 64bit CUDA. Absolutely true that the really
heavy stuff suffers on the gamer cards from having very limited 64bit
CUDA (1/24th of the shaders usually; more on Titan but at a lesser
clock with the mode switch). It very much depends on the task.
I would class working with AE for special effects as being a serious
task in that it's in the professional space & nothing to do with gaming,
but it doesn't need 64bit CUDA most of the time.
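
For a sense of scale, here's the effect of those FP64 ratios on paper. Core counts, clocks and ratios are the published figures; these are peak numbers only, and the clock reduction Titan applies in DP mode isn't modelled:

```python
# Peak theoretical throughput from the FP32:FP64 ratios mentioned above
# (1/24 on most Kepler GeForce parts, 1/3 on GK110 with DP mode enabled).
# Peak numbers only; real throughput is lower, and Titan's DP mode also
# reduces the clock, which isn't modelled here.

def peak_tflops(cores, clock_ghz, fp64_ratio):
    fp32 = cores * clock_ghz * 2 / 1000     # FMA counts as 2 flops
    return fp32, fp32 * fp64_ratio

for name, cores, clk, ratio in [
    ("GTX 680",   1536, 1.006, 1 / 24),
    ("GTX Titan", 2688, 0.837, 1 / 3),
]:
    fp32, fp64 = peak_tflops(cores, clk, ratio)
    print(f"{name:10s} FP32 ~{fp32:.1f} TFLOPS   FP64 ~{fp64:.2f} TFLOPS")
```

That order-of-magnitude gap is why gamer cards are fine for AE-style GPU work but a poor fit for heavy FP64 codes.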

But for QCD/CFD & all that sort of thing, sure, a boat load of Teslas
would be just the ticket. 8) Assuming, that is, it's not more sensible
to run a particular code on a large shared memory system instead, if
the granularity is unfavourable to the GPU approach, eg. galaxy
simulation is one such example IIRC (big datasets), hence the
Cosmos machine being used for this task, though a lot of work is
going into scaling the codes beyond 512 CPUs.

Ian.

 

ZeeeBrush

Honorable
I am currently considering purchasing a new machine. I would like to
help other people out there who are new to the 3D industry and could easily be confused by some comments on here.

I am going to disagree with a lot of what Bambiboom said about rendering: years ago, before GPU renderers, his points made sense, but now... not so much. He does give useful information, but in my opinion the conclusions are misleading. It is all about your workflow, and this is what you should consider before purchasing anything.

Here are the main things you need to consider:

1.) WHAT IS YOUR WORKFLOW GOING TO BE LIKE?
2.) WHAT PROGRAMS WILL YOU BE USING TO MODEL WITH?
3.) VIEWPORT PERFORMANCE IN MAYA/MAX VS. RENDERING PERFORMANCE.
4.) HOW WILL YOU RENDER YOUR ANIMATIONS?

Common Industry Workflow:
1.) Sketch. Model the high-poly version in ZBrush (it's meant to handle millions of polygons in a viewport and is software rendered). Your performance here does not depend on a graphics card; a fast CPU can help.

2.) Retopologize to low poly. Export high-poly displacement maps/normal maps. These are used at render time in a program like Maya/3ds Max to bring back most if not all of the resolution of your model, by applying a displacement node to your rendered models based on the grayscale levels of the maps (V-Ray, Mental Ray, etc.).
These maps are also used in game engines like UDK, CryEngine etc.
when developing for games.

Why is this done? Because rotating a 60 million poly model in the Maya viewport is impossible and unnecessary. I am not even sure you would get good performance with a K6000. Working with a 20 thousand poly model is easily done in Maya with a decent NVIDIA GTX card (see the rough numbers below).
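
As a rough illustration of why (my own crude, uncompressed estimates, not Maya's actual memory use):

```python
# Rough, uncompressed memory estimates: raw 60M-poly sculpt vs. a
# retopologised 20K-poly model plus baked 4K displacement/normal maps.
# The bytes-per-triangle figure is a crude assumption.

def mesh_mb(triangles, bytes_per_tri=100):      # verts, normals, UVs, indices...
    return triangles * bytes_per_tri / 1024**2

def map_mb(size=4096, bytes_per_texel=4):
    return size * size * bytes_per_texel / 1024**2

high_poly = mesh_mb(60_000_000)
low_poly  = mesh_mb(20_000) + 2 * map_mb()      # + displacement and normal map

print(f"60M-poly sculpt in the viewport : ~{high_poly / 1024:.1f} GB")
print(f"20K-poly model + 4K baked maps  : ~{low_poly:.0f} MB")
```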

OK, so what does this mean? ZBrush can rotate these models in the viewport
very well. Does any of your hardware make a difference? Kinda, sorta.
ZBrush is completely software rendered, so having a fast card for this is fairly pointless at the moment. Fast RAM (MHz) will make a small difference. Also, ZBrush is 32-bit and only accesses 4GB of RAM at the moment, yet through its amazing technology it can handle crazy high poly counts that Maya/Max could only dream of rotating in a viewport.

3.) Viewport performance. Here is where a lot of the confusion is.
Do you ever really need to rotate super high-poly models in Maya? NO.
Why would you need to, when you have tools like ZBrush/Mudbox to manipulate and model with, which are far superior to the modelling tools in Maya/Max? Rendering, animation and low-poly modelling are where Maya/Max excel, plus all the special effects: particle systems, hair, materials, plugins etc. that will be used.

Now, if your machine is going to be used to set up renders of super complex animations at a studio, you might want to consider a Quadro/FirePro based solution for viewport performance, purely to set up the scene and animate with good model manipulation performance. Otherwise, in the case of a student or home user wanting to build a portfolio or make short turntable animations to show off models, do not even consider wasting your money on a Quadro/FirePro; it is useless.

4.) Rendering: Will you render using a renderer that is CPU based or GPU based? As an up-and-coming 3D modeler/animator, will you really be rendering out scenes like you see in the movies? Probably not. Those are done on dedicated rendering machines with many-core processors, usually a large network of them, and these machines will not be used for modeling at all; they are sent jobs and dedicated only to rendering.

GPU VS. CPU
With solutions like Octane and all the new unbiased GPU renderers that are emerging, you can render much faster with, say, 2000+ CUDA cores than with any 6-core/12-core CPU based solution. Do the math and it's easy to see. The results in this article support this.
What's good about this? You can use that Titan or GTX card; the software doesn't need a five thousand dollar K6000 to do it. In fact, if you look at the rendering performance, the Titan smokes the workstation cards in this area.
There is a stipulation here: scene size is limited by the VRAM on the card. But you can still use your GTX card for gaming.
Now, depending on the look you want to achieve in your animation, you might want to consider some of the CPU based renderers, but they will be much slower.
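
On the VRAM stipulation, a quick way to sanity-check whether a scene will fit in an in-core GPU renderer (crude uncompressed estimates of my own, not how Octane actually packs data):

```python
# Crude check of whether a scene fits in GPU memory for an in-core GPU
# renderer.  Byte counts are rough uncompressed estimates; the point is
# that VRAM, not system RAM, caps scene size -- and multiple cards do
# not pool their memory.

def scene_gb(triangles, textures_4k, bytes_per_tri=100, bytes_per_texel=4):
    geometry = triangles * bytes_per_tri
    textures = textures_4k * (4096 * 4096 * bytes_per_texel)
    return (geometry + textures) / 1024**3

scene = scene_gb(triangles=20_000_000, textures_4k=40)
for card, vram_gb in [("GTX 680 2GB", 2), ("GTX Titan 6GB", 6)]:
    verdict = "fits" if scene <= vram_gb else "too big"
    print(f"{card:14s} scene ~{scene:.1f} GB -> {verdict}")
```

With numbers like these, a 2GB card is already out of the running for a moderately textured scene, while the Titan's 6GB leaves headroom.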

A Sample Workflow:
ZBrush high poly > ZBrush low poly > UV mapping > Photoshop manipulation of maps > 3D-Coat or Mari PTEX in some cases > export low-poly models to Maya/Max > set up scene > attach displacement or normal map nodes > render the scene using Octane or a GPU plugin renderer, unless it requires realistic hair simulation, <OR> render using V-Ray/Mental Ray with CPU based rendering. I am not worried about viewport performance because I work only with low poly models in the viewport, and this is good practice.

Conclusions: I would get a good GTX card over a Quadro/FirePro for a home user or studio. I would spend the money you save on RAM, a good CPU and a monitor. You can do a lot with the four thousand you save by not buying that K6000. Also, ECC RAM is overrated.

Sample System I am considering:

i7-4770K (overclock this)
32GB of 2133MHz or faster RAM (check your motherboard's QVL)
*with this much RAM you can make RAM disks, which can be handy*
ASUS Maximus VI Extreme motherboard
2x 256GB Samsung 840 PRO SSDs (RAID 0)
WD Black 2-3TB (store all those models, textures and animation files)
GTX Titan (for me, because its 6GB of VRAM means bigger scenes in Octane)
*remember that if you have two of these you still only get 6GB of VRAM for scene size, which is why I chose the Titan over, say, 2x GTX 680.*
27" 2560x1440 or 30" 2560x1600 monitor
PC Power & Cooling 1200W Platinum-rated PSU

If you are modeling primarily, you can scale this system back a lot
if you have a tight budget.
 

ZeeeBrush

Honorable
Also, look at Nitrous now in 3ds Max. SPECviewperf is a synthetic OpenGL benchmark, while Max is on the Nitrous driver, and that makes a huge difference!
Also, V-Ray RT is going to use the GPUs. Workflow is everything; consider what you want to do first. Peace.
 

somebodyspecial

Honorable


No DP should mean these cards would fail, right?
http://www.tomshardware.com/reviews/best-workstation-graphics-card,3493-25.html
http://www.tomshardware.com/reviews/best-workstation-graphics-card,3493-27.html
You can say they suck at it, but you can't say they don't do it at all or a DP test would fail.
http://www.creativebloq.com/nvidia-quadro-k5000-12123060
He's noting DP drops for K5000 vs the card it replaced, but not that it can't do it. They just upped the SP (because they are aimed at that) and dropped some DP (a lot). If you're doing DP all day clearly you need to buy the appropriate card, but these CAN do it.
 

mapesdhs

Distinguished
ZeeeBrush writes:
> i7 4770k (overclock this)

Why not a 3930K-based system instead? Much better performance,
and re your comments about the sw tools, surely the far higher mem
bw will help, ditto the waaaay better PCIe lane provision. Get an
ASUS P9X79E WS - same oc'ing quality as the ROG series, excellent
PCIe setup, and only a dribble more than the M6E (the extra cost is covered
by the saving suggested below). Oh and re the CPU, if you can't afford a 3930K in
the first instance, get a 4820K to begin with - same price as the
4770K, faster base clock (3.7 instead of 3.5, though they both have
the same max Turbo), easier to oc due to higher TDP of S2011, etc.


> 2X 256GB Samsung 840 PRO SSD's (RAID 0)

Bad idea. RAID0 of SSDs is asking for trouble. It will not offer any
useful speed gain over a single SSD of the quality of a top-end
model such as the 840 Pro, and if one does break then you're
screwed. Just use one, spend the saved cash on something else,
such as an extra WD Black in order to have a RAID1 of your valuable
models & textures - that's a far more sensible config than two SSDs
in RAID0 and only one mechanical.
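
The RAID point in numbers (a simple independent-failure model with a made-up per-drive failure rate; real failures are messier and often correlated, so treat it as illustrative only):

```python
# Simple independent-failure model for the RAID0 vs. RAID1 advice above.
# p is a hypothetical chance of a single drive failing in the period you
# care about; real-world failure modes are not truly independent.

p = 0.03                                   # hypothetical per-drive failure rate

raid0_loss = 1 - (1 - p) ** 2              # either drive failing loses the array
raid1_loss = p ** 2                        # both drives must fail to lose data

print(f"single drive : {p:.4f}")
print(f"RAID0 (2x)   : {raid0_loss:.4f}  (worse than a single drive)")
print(f"RAID1 (2x)   : {raid1_loss:.4f}  (far better than a single drive)")
```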

Ian.

 