Workstation Graphics: 14 FirePro And Quadro Cards

Gentlemen,

A valiant effort, but in my view a very important aspect of the comparisons has been neglected, namely image quality.

It is useful to make quantitative comparisons of workstation cards performing the same tasks, but when gaming/consumer cards are compared only in terms of speed, the results do not necessarily reflect how these cards behave in content creation. Yes, speed is critical when navigating 3D models and shifting polygons, but the end result of those models is likely to be renderings or animations, in which the final quality (refinement of detail and subtlety) is more critical than in games.

A fundamental issue affecting the results in this comparison is that the test platform, built around an i7-3770K, is not indicative of the workstation platforms for which these cards were designed and their drivers optimized. There are very good reasons for Xeons and Opterons, and especially for dual-CPU systems with lots of threads. Other aspects of these components bear on the results as well; for example, the memory bandwidth of the i7-3770K is only about half that of a Xeon E5-1660. Note too that there are good reasons why Xeons have locked multipliers and cannot be overclocked: raw speed is not their measure of success; precision and extreme stability take priority. Also important in this comparison is ECC memory, present in both system RAM and workstation GPU memory, which was treated a bit lightly but is essential for precision, especially in simulations and tasks like financial analysis. ECC also affects system speed through its error-correcting duties and parity checks, and therefore runs slower than non-ECC memory. Again, to be truly indicative of workstation cards, it would be more useful to run the comparisons on a workstation.
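
To put a rough number on the bandwidth claim, here is a back-of-the-envelope sketch, assuming the published memory configurations (dual-channel DDR3-1600 for the i7-3770K, quad-channel DDR3-1600 for the Xeon E5-1660); the helper function is purely illustrative:

```python
# Theoretical peak memory bandwidth: channels x transfer rate (MT/s) x 8 bytes per transfer.
def peak_bandwidth_gbs(channels: int, mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    return channels * mt_per_s * bytes_per_transfer / 1000.0   # MB/s -> GB/s

print(f"i7-3770K     (2 x DDR3-1600): {peak_bandwidth_gbs(2, 1600):.1f} GB/s")  # ~25.6
print(f"Xeon E5-1660 (4 x DDR3-1600): {peak_bandwidth_gbs(4, 1600):.1f} GB/s")  # ~51.2
```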

An aspect of this report that was not sufficiently clarified is that the rendering-based applications are largely a reflection of CPU performance. Rendering is one of the few tasks that can use all the available system threads, and anyone who renders images from 3D models, and especially animations, will today have a dual-CPU six- or eight-core Xeon system. The comparisons involving rendering applications were made on a four-core machine, under conditions in which the number of cores/threads matters significantly, and I believe that some of the dramatic differences in Maya performance in these tests may have been related to the platform used. I have a previous-generation dual four-core system, eight cores in total at 3.16GHz (Xeon X5460), and during rendering all eight cores go from 58C to 93C and the RAM (DDR2-667 ECC) from 68C to 85C in about ten minutes.
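
As a toy illustration of why rendering scales with every core and thread you can throw at it (each image tile is independent work), here is a minimal sketch using Python's multiprocessing; the per-tile "work" is a stand-in for real ray tracing, not any actual renderer:

```python
# Tile-based rendering is embarrassingly parallel: each tile can be handed to a
# separate worker process, so all cores/threads stay busy for the whole job.
import os
import time
from multiprocessing import Pool

def render_tile(tile_index: int) -> float:
    """Stand-in for per-tile rendering work: pure CPU-bound arithmetic."""
    acc = 0.0
    for i in range(1, 2_000_000):
        acc += 1.0 / (i * i)
    return acc

if __name__ == "__main__":
    tiles = list(range(64))                      # e.g. an 8x8 grid of image tiles
    start = time.perf_counter()
    with Pool(processes=os.cpu_count()) as pool:
        pool.map(render_tile, tiles)
    print(f"{len(tiles)} tiles rendered on {os.cpu_count()} workers "
          f"in {time.perf_counter() - start:.2f} s")
```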

Also, it's possible that some of the significant variation in rendering performance was due to system throttling and to differences in how the GPU drivers finish each frame. In this kind of task, image quality depends on precise polygon calculation and, for example, particle placement, such that there are no artifacts and shadows and color gradients are accurate and refined. Gaming cards emphasize frame rates and are optimized to finish frames more "casually" in order to achieve higher frame rates. This is also why a GTX can't be used for Solidworks modeling: tasks like structural, thermal, and gas-flow simulations need error-correcting memory, and Solidworks can use as much as 128x anti-aliasing where a GTX will produce 16x. When a GTX is pushed in this way, especially on a consumer platform, it performs poorly. Again, the image precision and quality aspect was lost in favor of a comparison of speed only.

The introduction of tests involving single and double precision, and the comments regarding the fundamental differences in driver priorities, were useful and in my view could have been more extensive, as this gets closer to the heart of the differences between consumer and workstation cards.
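
As a small illustration of the single- versus double-precision point (a generic numerical example, not a measurement of any card; the exact drift depends on the machine):

```python
# Accumulate the same value a million times in float32 and float64.
# The float32 running sum drifts visibly from the exact result because each
# addition rounds to 24-bit precision; float64 stays essentially exact here.
import numpy as np

N = 1_000_000
step32, step64 = np.float32(0.1), np.float64(0.1)
total32, total64 = np.float32(0.0), np.float64(0.0)

for _ in range(N):
    total32 += step32
    total64 += step64

print(f"float32 sum: {total32:.2f}")
print(f"float64 sum: {total64:.2f}")
print(f"exact value: {0.1 * N:.2f}")
```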

Making quantitative comparisons of image quality is almost a contradiction in terms, but in my view quality is fundamental to an understanding of these graphics cards. It would also help explain to content consumers the most important reason content creators are willing to spend $3,500 on a Quadro 6000 when an $800 GTX will do some things faster. Yes, AutoCad 2D is purposely made to run on almost any system, but when the going gets tough***, the tough get a dual Xeon, a pile of ECC, and a Quadro / FirePro!

***(Everything else!)

Cheers, BambiBoom

 

falchard

Actually, most of these tests don't hit on why you get a workstation card. In a CAD environment the goal is to get the most polygons on screen in real time. SPECviewperf is the benchmark suite to test this, and you can see the difference the card makes.
In the tests I found the CUDA numbers disappointing, but you would get a Tesla card for CUDA, not a workstation card.
The OpenCL numbers paint a different picture, where there is almost no difference between the consumer card and the workstation card. I was actually expecting the workstation cards to perform better, but once again I think that's an avenue for the FireStream and Tesla cards.
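
For what it's worth, it is easy to check what a given card's OpenCL runtime actually exposes. A minimal sketch, assuming pyopencl is installed (the fields printed here are just illustrative):

```python
# Minimal OpenCL device survey using pyopencl: lists each device's compute units,
# global memory, and whether it advertises double-precision (FP64) support --
# the kind of capability gap that separates compute-oriented boards from consumer ones.
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        has_fp64 = "cl_khr_fp64" in dev.extensions or "cl_amd_fp64" in dev.extensions
        print(f"{dev.name.strip()} [{platform.name.strip()}]: "
              f"{dev.max_compute_units} CUs, "
              f"{dev.global_mem_size // 2**20} MiB, "
              f"FP64 {'yes' if has_fp64 else 'no'}")
```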
 

rmpumper



People buy workstation cards for better viewport performance and better image quality, and as you can see from the SPECviewperf numbers, gaming GPUs are completely useless for that.
 

catmull-rom

I don't really get why pro cards are recommended so easily. I know this site wants manufacturers to keep sending it cards, but the data just doesn't support such a simple conclusion.

I totally get that in some work areas you want ECC, certified drivers, as much stability and security as possible, and/or extra performance in specific areas. Compared to the cost of the work, the hardware cost is of little importance, so I totally agree: get a pro workstation with a pro card. You want to be on the safest side while doing big engineering projects, parts for planes, scientific and/or financial calculations, etc.

But that being said, especially in the content creation / entertainment / media sector, you really need to think about whether a pro card is useful and worth it. Most 3D apps work great on game cards, and as you can see, as far as rendering is concerned game cards are your best choice for speed if you can live with the limitations. For a lot of CAD work you can also get away fine with a game card.

So it's not just AutoCAD or Inventor that don't need a pro card. Most people will be just fine with a game card in 3ds Max and the like, Rhino, or Solidworks.

I don't get why there are no test scores with Solidworks and game cards in this article. Game cards mostly work fine, and pro cards offer little extra feature-wise in this app. The driver issue really seems like a bad excuse not to include some game-card scores.

Also, I have never really paid much attention to SPECviewperf. It seems to heavily favor pro cards, and it doesn't tell you that most apps will work fine with game cards.
 

vhjmd

Wonderful article. You should make an update with the Intel HD 3000 and HD 4000, because at least Siemens NX now officially supports those cards, since their OpenGL performance is good enough.
 

mapesdhs

With the pro cards at last not hindered by slower-clocked workstation
CPUs, we can finally see these cards show their true potential. You're
getting results that more closely match my own this time, confirming what
I suspected, that workstation CPUs' low clock rates hold back the
Viewperf 11 tests significantly in some cases. Many of them seem very
sensitive to absolute clock rate, especially ProE.

And interesting to compare btw given that your test system has a 4.5GHz
3770K. Mine has a 5GHz 2700K; for the Lightwave test with a Quadro 4000,
I get 93.21, some 10% faster than with the 3770K. I'm intrigued that you
get such a high score for the Maya test though, mine is much lower
(54.13); driver differences perhaps? By contrast, my tcvis/snx scores are
almost identical.

I mentioned ProE (I get 16.63 for a Quadro 4K + 2700K/5.0); Igor, can you
confirm whether or not the ProE test is single-threaded? Someone told me
ProE is single-threaded, but I've not checked yet.


FloKid, I don't know how you could miss the numbers but in some cases
the gamer cards are an order of magnitude slower than the pro cards,
especially in the Viewperf tests. As rmpumper says, pro cards often give
massively better viewport performance.


bambiboom, although you're right about image quality, you're wrong about
performance with workstation CPUs - many pro apps benefit much more from
the higher absolute speed of a single CPU with fewer threads, rather than just
lots of threads. I have a dual-X5570 Dell T7500 and it's often smoked for
pro apps by my 5GHz 2700K (even more so by my 3930K); compare to my
Viewperf results as linked above. Mind you, as I'm sure you'd be the
first to point out, this doesn't take into account real-world situations
where one might also be dealing with large data sets, lots of I/O and
other preprocessing in a pro app such as proprietary database traversal,
etc., in which case yes indeed a lots-of-threads workstation matters, as
might ECC RAM and other issues. It varies. You're definitely right though
about image precision, RAM reliability, etc.


falchard, the problem with Tesla cards is cost. I know someone who'd
love to put three Teslas in his system, but he can't afford to. Thus, in
the meantime, three GTX 580s is a good compromise (his primary card
is a Quadro 4K).


catmull-rom, if I can quote, you said, "... if you can live with the
limitations.", but therein lies the issue: the limitation is with
problems such as rendering artifacts which are normally deemed
unacceptable (potentially disastrous for some types of task such as
medical imaging, financial transaction processing and GIS). Also, to
understand Viewperf and other pro apps, you need to understand viewport
performance, and the big differences in driver support that exist between
gamer and pro cards. Pro & gamer cards are optimised for different types
of 3D primitive/function, e.g. pro apps often use a lot of antialiased
lines (games don't), while gamer cards use a lot of 2-sided textures (pro
apps don't). This is reflected in the drivers, which is why (for example)
a line test in Maya can be 10X faster on a pro card, while a game test
like 3DMark06 can be 10X faster on a gamer card.
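
A minimal sketch of what such a line test boils down to, assuming PyOpenGL with GLUT is available; the timing mixes Python overhead with driver work, so treat it only as an illustration of the antialiased-line path that pro drivers optimise, not as a benchmark:

```python
# Draw a large batch of antialiased lines (the CAD-style primitive) and time it.
import time
import numpy as np
from OpenGL.GL import *
from OpenGL.GLUT import *

N_LINES = 200_000
# Random endpoints in normalized device coordinates; two vertices per line.
vertices = np.random.uniform(-1.0, 1.0, size=(2 * N_LINES, 2)).astype(np.float32)

def display():
    glClear(GL_COLOR_BUFFER_BIT)
    glEnable(GL_LINE_SMOOTH)                 # antialiased lines
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST)
    glEnableClientState(GL_VERTEX_ARRAY)
    glVertexPointer(2, GL_FLOAT, 0, vertices)
    start = time.perf_counter()
    glDrawArrays(GL_LINES, 0, 2 * N_LINES)
    glFinish()                               # wait for the GPU so the timing means something
    print(f"{N_LINES} antialiased lines: {(time.perf_counter() - start) * 1000:.1f} ms")
    glutSwapBuffers()

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
glutCreateWindow(b"AA line sketch")
glutDisplayFunc(display)
glutMainLoop()
```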

Also, as Teddy Gage pointed out on the creativecow site recently, pro
cards have more reliable drivers (very important indeed), greater viewport
accuracy, better binned chips (better fault testing), run cooler, are smaller,
use less power and come with better customer support.

For comparing the two types of card, speed is just one of a great many
factors to consider, and in many cases is not the most important factor.
Saving several hundred $ by buying a gamer card is pointless if the app
crashes because of a memory error during a 12-hour render. The time lost
could be catastrophic if it means one misses a submission deadline; that's
just not viable for the pro users I know.

Ian.

 

mapesdhs



A strange thing to say given how well AMD's cards clearly do in many of the tests. Can you please
explain in exactly what manner toms is being biased? Are you saying they're rigging the tests in
some way? If so, how?

Build the system, install the OS/drivers/apps, run the tests. If the data ends up looking better for
NVIDIA in some cases, that's a problem for AMD, not THG.

Ian.

 



mapesdhs,

Many excellent points.

Yes, I agree completely that pure clock speed is useful and desirable in workstations; my point was that if I were predominantly rendering, I would rather have more cores/threads than a high clock speed. But yes, I'd love a couple of twelve-core Xeons at 4.5GHz. These may be coming too, as the next generation of 14nm E7s (2015) is said to be 12-15 core, use DDR4, and be quite fast, though I've not heard any specific numbers. Intel seems to start each generation at the lower clock speeds.

Your comments are also very welcome, as you mention some of the important experiential qualities that come into play when using workstation applications. One of the problems in this kind of discussion is that those with gaming-oriented systems have not used 3D CAD and rendering applications at the level where workstation cards become not only useful but mandatory. Especially important are the viewports, artifacts, and reliability.

After using a Dell Precision T5400 with the original Quadro FX 580 (512MB), I soon realized that 3D CAD, for which I bought that system, would need more 3D capability and memory. I fell for the idea that, as I was primarily a designer and not working at an extraordinarily high level in 3D CAD, a GeForce GTX should be adequate, possibly faster at that level than a Quadro, and far less expensive. I bought a GTX 285 (1GB) because it was more or less a 1GB version of the 4GB Quadro FX 5800: same GPU, same 512-bit bus, 240 shaders, only less memory, $350 instead of $3,200, and I could always add a second one in SLI if needed.

The GTX 285 ostensibly had all the right hardware, and in Sketchup the 3D navigation at first seemed blazingly fast. But after the Sketchup model became larger, the navigation developed a quirk: it would spin in any direction, but if I stopped moving for even a second it would freeze, such that most often I'd have to close the program. I stumbled and stuttered around in monochrome, including as little visible geometry as possible, but it was no good; the moment I included another large component, it would freeze.

The model eventually grew to 125MB, and when I added textures and tried shadows, it produced impossible artifacts: a rain of short black lines from any polygon, shadows that became solid planes at bizarre angles, and textures that would sometimes drop out. Extracting renderings from the model, the whole point, was useless, as the rendering application would import for about 25 minutes and then crash Sketchup. I was never able to make a single rendering of a model larger than 9 or 10MB with the GTX 285.

Then I began learning Solidworks in preparation for a 6,000-part assembly (great first project?), and the system would not open viewports, and the limited anti-aliasing made curves so crude I couldn't make accurate solid intersections.

In short, the situation was impossible, and I realized how extremely expensive my cost savings had been. I went back to the idea of my favourite Quadro, the FX 5800, and bought an FX 4800: same GPU, but 384-bit instead of 512-bit, 192 CUDA cores instead of 240, and 1.5GB in place of 4GB. Perfect renderings, working viewports, and 128x anti-aliasing instead of 16x. The navigation in my large Sketchup models is not blazingly fast, but it doesn't freeze in Solidworks; in short, all problems solved. Eventually, I added a second Xeon X5460 and went from 12GB to 16GB to have more cores/threads for rendering, and all is well, though this system gets very hot during rendering (it's the DDR2).

Sorry for the long historical ramble, but I think these experiential episodes are the kind of information that, as you mentioned, is among the most important aspects of evaluating workstation graphics cards, and it is missing from a speed-only focus.

When Quadro K5000's are sold used for $1,000,...

Cheers, BambiBoom

"No matter your wealth, power,or friends, the cheapest things in life are free."

[ Dell Precision T5400 > 2X Xeon X5460 quad core @3.16GHz > 16 GB ECC 667 > Quadro FX 4800 (1.5GB) > WD RE4 / Segt Brcda 500GB > Windows 7 Ultimate 64-bit > AutoCad 2007, Revit 2011, Solidworks 2010, Sketchup 8 Pro, Corel Technical Designer X-5, Adobe CS4 MC, WordP Office X4, MS Office2007]







 

happyballz

This just shows that the software itself has very poor support for gaming-oriented cards. Most high-end gaming cards can do just as good a job, if not better.
 

FormatC

Ok, a lot of stuff and questions...

I have a dual-Opteron workstation (4284) here with 32 gigs of ECC too, but I have the same problem with it as with AMD and their test bench with a current Xeon: these CPUs limit the newer, more powerful pro cards in many cases. I can't show you the real difference in what these cards are capable of if I use those workstations. If we benchmark gamer cards with 5 GHz CPUs, nobody is crying. But how many readers actually use such high clocks for gaming? It is the same here: we want to show the performance of the cards alone (without limitations), not of the complete workstation.

Yes, the older Pro/E is mostly single-threaded.

AMD was not able to send us workstation APUs because the big OEMs (Dell, HP) are not interested in building such systems. It is better for them to sell a system with an expensive CPU AND an expensive, separate VGA card.

ECC on VGA cards is important for some things, but how many cards actually support ECC? The new K5000 can't do it. I only have the older big Quadros and the W8000/9000 from AMD.
 

FormatC

I've tried to get something in this direction. No response from Dell and no comment from AMD.
This is only good for buyers but not for OEMs like Dell :D
 

FormatC

I saw this board from Sapphire a long time ago, but those guys were also not able to ship me a single real, existing retail board. A lot of rumors and question marks.
 
@mapesdhs

This forum needs more tales from the field in this department, such as yours. The people who are actually considering these pro cards need these examples to confirm what they already suspect, and the gamer/PC-hobbyist system builders need them to realize they are talking about apples while standing in an orange grove.
 

JPForums

You've got the W8000 batting for the green team in the Tropics benchmark graph.

That aside, it is interesting to see AMD's consumer-grade cards excelling in some of these benchmarks relative to their Nvidia counterparts, while Nvidia dominates the very same benchmarks with its workstation cards. Either there is significant bleed-over between the two AMD driver sets, or there is a significant effort on Nvidia's part to cripple workstation performance on its consumer cards. A third option also exists: AMD's hardware is just that much better than Nvidia's, but the AMD workstation driver team doesn't have a clue.

Somehow I doubt AMD would put in the effort to add workstation optimizations to its consumer-grade cards; they have enough to worry about with gaming. I could see Nvidia intentionally crippling its consumer cards' workstation performance to maintain the appearance of value in its workstation lineup, but as far ahead as the workstation cards are, that seems like a waste of time and money. I also find it hard to believe that AMD's hardware is simply stronger than Titan; I'll give them compute superiority, as their architecture is clearly geared towards it, but rendering should go to Titan. Furthermore, it seems to me that AMD would have a workstation driver team at least as good as its consumer driver team; after all, there is far more return on investment in the workstation arena. While AMD's consumer drivers aren't the equal of Nvidia's, they don't suck this badly. I guess that leaves me with a combination of Nvidia slightly crippling its consumer cards and AMD needing a better (bigger) driver development team for workstations.
 
I see the difference professionally certified drivers make in those apps, and I can't help but ask: what would it be like to have consumer gaming cards with fully optimized drivers for certain games?
 

cadder

I've used AutoCAD 2D for 24 years on a wide variety of hardware, and I've learned that there is no speed advantage to an expensive workstation card vs. a standard desktop card. However, particularly when running under a 64-bit OS, it can be tricky to get reliable drivers unless you are running a professional video card. I have the same opinion of dual-Xeon workstations vs. a single i7 at a high clock rate.

I've been trying to convince people that the same holds true for Revit. Many industries that were dominated by AutoCAD have now switched to Revit.

Lots of people are asking "why buy a workstation card?" One reason is that these are the cards marketed to professionals, who believe they are better without doing any research or tests. The other reason is that in some situations the drivers that come with a gaming card are very unreliable for AutoCAD and Revit.

AMD vs. Nvidia people need to find something else to do. I built workstations for my company a few years ago. I used the cheapest Nvidia workstation cards and the cheapest AMD workstation cards, and they worked equally well under Revit and 2D AutoCAD. At the beginning of that effort I put a basic workstation card in one of the machines and could not get it to reliably run Revit. OTOH, my standard Dell Latitude laptop has no problem with AutoCAD or Revit, and it has generic video.

Keep in mind that if you run something other than AutoCAD or Revit, my experience may not be applicable.
 