Workstation Graphics: 19 Cards Tested In SPECviewperf 12

Page 2

Rob Burns

Distinguished
Oct 9, 2010
14
0
18,510
0
No offense meant to you personally, FC; you did a great job with the review, and I can see it was a ton of work, probably above and beyond the usual commitment. I'm just frustrated with a larger trend at Tom's Hardware that has nothing to do with you. As a graphics professional it's frustrating to see far less of the software I use reviewed here these days, and I need to voice that.
 

FormatC

Distinguished
Apr 4, 2011
981
0
18,990
1
Hi Rob,
as media we have one big problem: we can only use licensed software. To cover a wide range of content we have to organize or buy a lot of very expensive packages. Just one example: it is nearly impossible to get a stand-alone license for SolidWorks as a press sample, and I'm lucky to be able to borrow a friend's machine, with more or less similar hardware, for my tests. But that means he can't work on it for a few hours or days. All this testing is very difficult because we get no support from the software vendors.
 

mapesdhs

Distinguished
Jan 22, 2007
2,507
0
21,160
111
Hologram writes:
> So let me get this straight. As A Maya user the 780 Ti or even a titan Black
> would be a better choice than a Quadro???

If you're OK with less reliable drivers, less precise geometry, flawed antialiased-line support and a greater chance of hardware failure, then sure, use a gamer card. But my results show quite clearly you're better off using an older workstation card like a Quadro 4000, paired with an overclocked CPU like a 5GHz 2700K (I'm talking specifically about Maya here). Or to put it another way, even a lowly Quadro 600 will be faster for Maya than a traditionally strong gamer card like a GTX 580 (PM me for a link to my Viewperf 11 results).

A 780Ti will be the same speed as a Titan if the core clocks are the same,
simple as that, unless the data set can't fit into the 780Ti's 3GB RAM, in
which case the Titan will have an advantage, at least for the moment, given
it's very likely we'll see 6GB 780Tis soon. The Titan's real strength is as a
cheaper entry card to 64bit fp CUDA, but even then it lacks many features
present in Tesla cards, especially ECC RAM, a full speed PCIe return
path, various caching functions, etc. Not having ECC rules out the Titan for
numerous pro tasks. Ok for development maybe, but not for production
deployment.
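To put the 3GB point in perspective, here is a back-of-envelope VRAM estimate. All the figures (vertex layout, scene sizes) are my own illustrative assumptions, not numbers from the post:

```python
# Rough sketch: estimate whether a mesh-heavy scene fits in a
# 780 Ti's 3 GB of VRAM. Figures are illustrative assumptions.

def scene_vram_bytes(n_vertices, n_triangles, n_textures, tex_pixels):
    # Per-vertex: position + normal + UV = (3 + 3 + 2) floats * 4 bytes
    vertex_bytes = n_vertices * 8 * 4
    # Per-triangle: three 32-bit indices
    index_bytes = n_triangles * 3 * 4
    # Uncompressed RGBA8 textures: 4 bytes per pixel
    texture_bytes = n_textures * tex_pixels * 4
    return vertex_bytes + index_bytes + texture_bytes

# e.g. 40M vertices, 80M triangles, 50 textures at 4096 x 4096
total = scene_vram_bytes(40_000_000, 80_000_000, 50, 4096 * 4096)
print(f"{total / 2**30:.1f} GiB")  # prints "5.2 GiB": past 3 GiB before driver overhead
```

A dataset like this spills out of a 3GB card regardless of clock speed, which is where the 6GB Titan pulls ahead.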

FormatC is correct, as has been discussed before in earlier articles; those
using gamer cards for pro apps are asking for trouble. It's merely that for
pro tasks which don't use OGL (or only use old OGL standards), a gamer
card can offer decent performance, but it'll have missing features and exhibit
less reliable/consistent viewport behaviour. For Maya, where wireframe and
shaded modes are important, this really matters if you value precision in
your modelling & design.

And of course for a production environment, the warranty setups are different,
with a higher failure rate expected of gamer cards.

Having said all that, the one thing I do recommend for the Quadro 4000 is
replacing the stock cooler with a Gelid Icy Vision II which makes a huge
difference to temps/noise & thus long term usage. 8) I sold a 5GHz 2700K
with 32GB RAM and Quadro 4K to an engineering company last year for
use with CATIA, ProE and various other CAD apps; it blew away the typical
single-socket Dell Xeon they'd been considering, and saved them 50% as well.

hologram, if you want to see what I mean, try getting hold of a cheap Quadro 600
off eBay, they go for not much these days. Compare it for Maya via Viewperf11
against your existing gamer card.

Ian.

PS. It's a pity Viewperf12 doesn't include ProE because a lot of companies
still use it.

 

mapesdhs

Distinguished
Jan 22, 2007
2,507
0
21,160
111


Indeed, sw costs are a pain. It's taken me years to get hold of various original Flame/Smoke/Inferno, etc.
systems so I can run benchmarks. Bought another Smoke SGI recently for only 250 UKP. Funny old world...

Ian.

 

FormatC

Distinguished
Apr 4, 2011
981
0
18,990
1
SPECapc for Maya is not the right way to test it. Maybe someone has a better workload to test both the OpenGL performance (some wireframe/shaded scenes) and Viewport 2.0 with heavy memory usage? The problem with most projects I found around the web: they are too small.

Feel free to send me a mail or PM if we can find a good Maya solution, and for 3ds Max too. I have the licensed AutoCAD Design Suite and Autodesk Maya Entertainment Creation Suite here, both Premium 2013. If we can figure out a handful of really good workloads for benchmarking, I will be glad to use them here :)
 

edhap

Reputable
Apr 14, 2014
3
0
4,510
0
> Sure the SPECviewperf viewsets are synthetic, but they represent actual traces of how the application is engaging with graphics subsystems.

That makes no sense. Cherry-picked traces? You must be in marketing! :)

But I get some of your other points. Still, the result is something that is useless for me (and must be useless for everyone else using workstation applications). It does nothing to tell me whether a card is right for me, so I need to determine that myself.


 

ddna

Reputable
Apr 16, 2014
1
0
4,510
0
It's unclear why nobody pays attention to the fact that consumer cards beat professional cards in the CATIA benchmarks. This fact alone is sensational and warrants a separate investigation.
 

edhap

Reputable
Apr 14, 2014
3
0
4,510
0
> It's unclear why nobody pays attention that consumer cards beat professional cards for CATIA benchmarks. This sole fact is sensational and warrants a separate investigation.

Well, it shows that synthetic benchmarks are flawed. Only a rookie or a layman can justify using a consumer card for workstation applications. Workstation cards are superior in how they are tuned for the applications. This is true for performance, but more importantly, for quality. Our engineers make six-figure salaries; we don't want to see them stuck in the IT department with issues. Workstation cards pay for themselves quickly for professionals. That said, it does not mean you should buy the really expensive ones. We find the best value is the W7000 and K4000 level of card for almost everything we do. A few users go beyond those two, and I think we could go lower, but then the engineers feel less "special". We did try consumer cards. That went badly.
 

Para_Franck

Reputable
Apr 17, 2014
1
0
4,510
0
Nice review, it's always interesting to see the workstation side of the spectrum. You guys really try to help everyone, not just gamers. At the price of these cards, a good review is a must to select the right cards for your computer park, based on the workload you intend to run.

I mostly work with Autodesk Inventor; any benchmarks on this one? I know it is more Direct3D oriented than the others, so the results would be interesting. I use a FirePro W5000, which has become very difficult to purchase here (it's been out of stock for several months), and I really do not know which card to buy for my new computers, because I have not found any other card that offers as much performance per dollar, and I am hesitant to go Nvidia, because all my computers run FirePros, which simplifies the driver update process and so on.

Also, are there any benchmarks that showcase compute power for things like finite element analysis? Rendering is one thing, but I do a lot of FEA here. So far I have not seen any GPU-compute-enhanced FEA programs, and I don't understand why: FEA problems are giant matrices to be solved, and GPU compute should be excellent at that. Accelerating it would enhance my productivity.

(Sorry for my bad English guys, it's not my native language)
 

Relayer

Distinguished
Oct 11, 2008
25
0
18,530
0
> It is a shame Tom's did not include the results of the latest AMD FirePro 9100 card. They actually have this card in house for eval and testing, and it's a mystery to me why they chose not to include the results here.

tsk tsk tsk
What? Doesn't Tom's crap on AMD hardware hard enough for you? Now you've gone and hurt their feelings. I can't wait to read their next AMD review: "AMD hardware to blame for climate change".
 

mapesdhs

Distinguished
Jan 22, 2007
2,507
0
21,160
111
Para_Franck writes:
> I mostly work with Autodesk Inventor, any benchmarks on this one? I know it is more Direct 3D oriented ...

IIRC Inventor is one of the few apps that does run well with gamer cards, precisely because it's
more D3D. One thing though: the old Inventor was CPU-bound, so I recommend exploring any CPU
bottleneck issues before choosing your card. ProE is limited in this way, i.e. a strong CPU and a
Quadro 600 is better for ProE than a standard CPU and a much more expensive Quadro. Having said
that, performance issues aside, you should (as another post said) bear in mind that consumer cards
come with quality/reliability issues. Thus, if you did decide to buy a gamer card on performance grounds,
then at least get a good one; e.g. taking the GTX 580 as an example, find an MSI Lightning Xtreme
3GB instead of a reference card (better made, should last longer, runs much cooler, more reliable).

Speaking of which, even when it comes to Quadros, one can improve them. Replacing the awful stock
cooler on a Quadro 4000 with a Gelid Icy Vision II makes a massive difference to core/PCB temps
(drops by more than 35C) and noise levels (almost silent). Varies I guess to what extent this could be
done with other Quadros. PM if you'd like some pics of the Q4K mod.

Do you know if there's a standalone Inventor benchmark? I could run it on a couple of cards from
which you could extrapolate a degree of info. My own Inventor tests are no use though, they were
designed for SGIs.


> hesitant to go Nvidia, because all my computers run firepros, wich simplifies driver update process and stuff.

TBH I've always found driver management easier with NVIDIA cards, but there ya go.


> Also, is there any benchmarks that showcases compute power for stuff like finite element
> analysis? Rendering is one thing, but I do ALOT of FEA here. ...

SGIs used to be strong for FEA, but I've no idea what the modern solution would be for such
work. Are there any FEA forums on which you could ask?

Searching around though, it does seem to be the case that one can use CUDA to solve
FEA problems, in which case there are numerous options depending on your budget,
your purchasing goals and the nature of FEA calculations (I forget offhand, does it use
a lot of 64bit fp? If not, then one or more used GTX 580 3GB would be quite good, and
cheap; otherwise, Titan would be better, unless you really could afford a Tesla). See:

http://www.nvidia.com/object/tesla-abaqus-accelerations.html
http://www.youtube.com/watch?v=MsHm-KBVsLU
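For a flavour of why FEA maps onto GPU compute: the core of most solvers is an iterative sparse linear solve of a stiffness system K·u = f, typically conjugate gradient. A minimal pure-Python sketch (dense and tiny, purely illustrative; real packages use huge sparse matrices, which is exactly the workload CUDA accelerates):

```python
# Minimal dense conjugate-gradient solver for K @ u = f,
# the kind of symmetric positive-definite system FEA produces.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]              # initial residual (x starts at zero)
    p = r[:]
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Tiny symmetric positive-definite "stiffness" matrix and load vector
K = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
f = [1.0, 2.0, 3.0]
u = conjugate_gradient(K, f)
print(u)   # exact solution is [2/9, 1/9, 13/9]
```

Nearly all the time goes into `matvec` and `dot`, which are embarrassingly parallel; that's the part GPU FEA libraries offload.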


On a related note, these are rather interesting:

http://www.youtube.com/watch?v=-QJ4bAtS2rk
http://www.youtube.com/watch?v=hUezoHa1ZF4


> For now, I have not seen any GPU compute enhanced FEA programs, ...

I found the above with 5 seconds on Google. ;D
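On the 64-bit question: the reason FP64 throughput matters for some solvers is accumulated rounding error over millions of operations. A small sketch, emulating float32 with `struct` (Python floats are already double precision):

```python
import struct

def f32(x):
    # Round a Python float (double) to the nearest IEEE-754 float32
    return struct.unpack('f', struct.pack('f', x))[0]

N = 100_000
s32, s64 = 0.0, 0.0
for _ in range(N):
    s32 = f32(s32 + f32(0.1))   # every operation rounded to single precision
    s64 = s64 + 0.1             # native double precision

print(f"float32 sum: {s32}")
print(f"float64 sum: {s64}")
print(f"errors vs 10000: {abs(s32 - 10000.0):.3g} (f32) "
      f"vs {abs(s64 - 10000.0):.3g} (f64)")
```

The single-precision error is orders of magnitude larger after only 100k additions; an FEA solve performs vastly more operations than that, which is why cards with crippled FP64 (or, worse, no ECC) are a poor fit for production simulation.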


> (Sorry for my bad english guys, it's not my native language)

Don't apologise, IMO it's better than the quality of English spoken by most native Brits.
You should hear teenagers yabbering away on a bus with their mobiles... yeah, like,
whatever & stuff, y'know?

Ian.

PS. I've been collecting results when I can, but alas the Viewperf sets don't include Inventor:

http://www.sgidepot.co.uk/misc/viewperf.txt

 

Stewartlud

Reputable
Mar 22, 2014
3
0
4,510
0
This test does not take into account many real-world situations, like even screen draws. A Quadro card handles the entire scene fluidly, whereas most gamer cards freeze and chug away. Also, the newer Kepler cards are WAY slower at rendering/compute than the previous Fermi architecture was. It's obvious there is "creative" accounting-style comparison being touted by all manufacturers. A CUDA core today is not equal to a CUDA core from two years ago, and thus is now an irrelevant measurement. Shame on manufacturers for creatively "warping" what's inside the cores.
 

folem

Distinguished
Oct 9, 2011
55
0
18,630
0
Something worth noting is that the W8000 isn't simply a higher-end model than the W7000. The W7000 is heavily optimized for graphics applications, like the ones tested here, whereas the W8000 is heavily optimized for compute (specifically OpenCL) applications that weren't really featured here. This showed in the benchmarks but wasn't mentioned explicitly, so some people may be confused looking at them, since the W8000 costs about 2.5 times as much as the W7000. Also worth noting is that the K6000 is the only Kepler-based Quadro card with full double-precision compute capability; double-precision calculations must be performed by the CPU on the rest of them, effectively rendering them useless for simulations and the like. Putting these two together, I bought W7000s for my company, even though we really could have afforded to spend about twice as much.
 

 

mapesdhs

Distinguished
Jan 22, 2007
2,507
0
21,160
111


That's why GTX 580s are still so good for CUDA, especially tasks like AE. The shaders are 2X faster,
the memory bandwidth per core is a lot higher, and the 64-bit FP rate is comparatively stronger at
1/8th of FP32. NVIDIA halved the shader clock after the 580 for various reasons (heat & power
delivery mostly), so newer cards need a lot more cores to offer the same performance, and in some
cases a 580 can still beat a 780, etc.
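The per-core point is easy to put into numbers. The specs below are the commonly published ones (core counts, shader clocks, FP64 ratios), and this is peak arithmetic only, not measured performance:

```python
# Peak-throughput arithmetic for the 580-vs-780 comparison.
# Published specs: GTX 580 = 512 cores at a 1544 MHz "hot" shader clock,
# FP64 at 1/8 of FP32; GTX 780 = 2304 cores at ~863 MHz (no hot clock),
# FP64 cut to 1/24 on GeForce Kepler.

def peak_gflops(cores, shader_mhz, fp64_ratio):
    fp32 = cores * shader_mhz * 2 / 1000   # 2 FLOPs per core per cycle (FMA)
    return fp32, fp32 * fp64_ratio

gtx580_fp32, gtx580_fp64 = peak_gflops(512, 1544, 1 / 8)
gtx780_fp32, gtx780_fp64 = peak_gflops(2304, 863, 1 / 24)

print(f"GTX 580: {gtx580_fp32:.0f} GFLOPS FP32, {gtx580_fp64:.0f} GFLOPS FP64")
print(f"GTX 780: {gtx780_fp32:.0f} GFLOPS FP32, {gtx780_fp64:.0f} GFLOPS FP64")
# Despite far fewer cores, the 580's FP64 peak beats the 780's.
```

The 780 wins comfortably on FP32 peak, but the halved shader clock plus the 1/24 FP64 ratio means the old 580 still comes out ahead for double-precision CUDA work.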

I fitted my 3930K AE setup with four MSI 580 3GB L.X. cards, making it faster than two Titans for CUDA.

Ian.

 

Mike Hoopes

Reputable
Jun 9, 2014
3
0
4,510
0
Question: I have an HPE-510t desktop machine (Sandy Bridge Core i7-2600) with PCI Express 2.0 x16 and a 300 W power supply. Would the FirePro W5000 @ 75 W require a PSU upgrade, and does my PCIe 2.0 x16 bus mitigate any of its performance advantage over the Nvidia K2000?

Thanks, Mike
 

mapesdhs

Distinguished
Jan 22, 2007
2,507
0
21,160
111
It would probably be ok with your existing PSU, though personally I wouldn't trust any 'pro' system
to such a low-end model of PSU. Such low-wattage units tend to be of lesser quality in general.

Don't worry about the PCIe bandwidth, really not an issue with a card as far down the scale as
K2000s or equivalent. If you were doing work that needed such bandwidth, you'd be better off
with a more powerful card that has more RAM anyway, like the K5000, and hence a better
system in general.
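A rough power-budget check backs this up. The component figures below are typical published TDPs, assumed for the sketch rather than measured on that machine:

```python
# Rough power budget for the 300 W HPE-510t question.
# TDP figures are typical published numbers, assumed for this sketch.

psu_watts = 300
draws = {
    "i7-2600 CPU": 95,
    "FirePro W5000": 75,
    "motherboard + RAM": 40,
    "drives + fans": 25,
}
total = sum(draws.values())
headroom = psu_watts - total
print(f"estimated load {total} W, headroom {headroom} W")  # 235 W, 65 W spare
```

So on paper the 300 W unit copes even at full load; the reservation above is about the quality of low-wattage OEM units, not the wattage itself.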

Ian.

 

Draven35

Distinguished
Nov 7, 2008
806
0
19,010
9


It might require a PSU upgrade depending on the other components in the machine, but I doubt you're fully saturating the PCIe 2.0 bus.
 