[quotemsg=11116733,0,117741]With the pro cards at last not hindered by slower-clocked workstation CPUs, we can finally see them show their true potential. You're getting results that more closely match my own this time, confirming what I suspected: that workstation CPUs' low clock rates significantly hold back some of the Viewperf 11 tests. Many of them seem very sensitive to absolute clock rate, especially ProE.
It's interesting to compare btw, given that your test system has a 4.5GHz 3770K while mine has a 5GHz 2700K; for the Lightwave test with a Quadro 4000, I get 93.21, some 10% faster than with the 3770K. I'm intrigued that you get such a high score for the Maya test though; mine is much lower (54.13). Driver differences perhaps? By contrast, my tcvis/snx scores are almost identical.
I mentioned ProE (I get 16.63 for a Quadro 4000 with the 2700K at 5GHz); Igor, can you confirm whether or not the ProE test is single-threaded? Someone told me it is, but I've not checked yet.
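In the meantime, here's a rough way to check it yourself while the viewset is running - a quick sketch using Python's psutil module. The process name "viewperf" is only a guess on my part, so substitute whatever the benchmark binary is actually called on your system:
[code]
# Rough check of whether a running benchmark is effectively single-threaded.
# Requires the psutil package; the process name below is an assumption -
# replace it with the real binary name shown in Task Manager / top.
import time
import psutil

TARGET = "viewperf"   # assumed process name, adjust as needed

proc = None
for p in psutil.process_iter(["name"]):
    if p.info["name"] and TARGET in p.info["name"].lower():
        proc = p
        break

if proc is None:
    print("No process matching '%s' found - start the benchmark first." % TARGET)
else:
    proc.cpu_percent(None)   # prime the counter
    for _ in range(10):
        time.sleep(1)
        # ~100% on a multi-core box means one busy thread (single-threaded);
        # several hundred percent means the test really does use many cores.
        print("threads: %d   cpu: %.0f%%" % (proc.num_threads(), proc.cpu_percent(None)))
[/code]
If the ProE viewset hovers around 100% there regardless of core count, that would settle it.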
FloKid, I don't know how you could miss the numbers, but in some cases
the gamer cards are an order of magnitude slower than the pro cards,
especially in the Viewperf tests. As rmpumper says, pro cards often give
massively better viewport performance.
bambiboom, although you're right about image quality, you're wrong about performance with workstation CPUs - many pro apps benefit far more from the higher absolute speed of a single CPU with fewer threads than from lots of threads. I have a dual-X5570 Dell T7500 and for pro apps it's often smoked by my 5GHz 2700K (even more so by my 3930K); compare to my Viewperf results as linked above. Mind you, as I'm sure you'd be the first to point out, this doesn't take into account real-world situations where one might also be dealing with large data sets, lots of I/O and other preprocessing in a pro app such as proprietary database traversal, etc., in which case a lots-of-threads workstation does indeed matter, as might ECC RAM and other issues. It varies. You're definitely right though about image precision, RAM reliability, etc.
falchard, the problem with Tesla cards is cost. I know someone who'd love to put three Teslas in his system, but he can't afford to. In the meantime, three GTX 580s are a good compromise (his primary card is a Quadro 4000).
catmull-rom, if I can quote, you said, "... if you can live with the limitations.", but therein lies the issue: the limitations include problems such as rendering artifacts, which are normally deemed unacceptable (and potentially disastrous for some types of task such as medical imaging, financial transaction processing and GIS). Also, to understand Viewperf and other pro apps, you need to understand viewport performance, and the big differences in driver support that exist between gamer and pro cards. Pro & gamer cards are optimised for different types of 3D primitive/function, e.g. pro apps often use a lot of antialiased lines (games don't), while games use a lot of 2-sided textures (pro apps don't). This is reflected in the drivers, which is why (for example) a line test in Maya can be 10x faster on a pro card, while a game test like 3DMark06 can be 10x faster on a gamer card.
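For anyone wondering what an "antialiased line" workload actually means at the API level, here's a minimal sketch - assuming PyOpenGL and GLUT are installed - that just spins a fan of smoothed lines. The GL state it enables (GL_LINE_SMOOTH plus blending) is the sort of path pro drivers accelerate and gamer drivers often don't:
[code]
# Minimal antialiased-line demo. The line-smoothing state below is the kind
# of path CAD viewports lean on heavily; this is only an illustrative sketch,
# not any particular Viewperf test. Assumes PyOpenGL + GLUT are installed.
import math
from OpenGL.GL import *
from OpenGL.GLUT import *

angle = 0.0

def display():
    global angle
    glClear(GL_COLOR_BUFFER_BIT)
    glLoadIdentity()
    glRotatef(angle, 0.0, 0.0, 1.0)
    glColor3f(1.0, 1.0, 1.0)
    glBegin(GL_LINES)                # plain line primitives, as CAD apps use
    for i in range(180):
        a = math.radians(i * 2)
        glVertex2f(0.0, 0.0)
        glVertex2f(0.9 * math.cos(a), 0.9 * math.sin(a))
    glEnd()
    glutSwapBuffers()
    angle += 0.2
    glutPostRedisplay()

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
glutInitWindowSize(600, 600)
glutCreateWindow(b"antialiased line test")
glEnable(GL_LINE_SMOOTH)                        # antialiased lines on
glEnable(GL_BLEND)                              # smoothing needs blending
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST)
glutDisplayFunc(display)
glutMainLoop()
[/code]
Run something like that on a Quadro and a GeForce of similar vintage and the frame-rate gap in the smoothed-line path tends to make the driver difference obvious.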
Also, as Teddy Gage pointed out on the
creativecow site recently, pro
cards have more reliable drivers (very important indeed), greater viewport
accuracy, better binned chips (better fault testing), run cooler, are smaller,
use less power and come with better customer support.
When comparing the two types of card, speed is just one of a great many factors to consider, and in many cases it is not the most important one. Saving several hundred dollars by buying a gamer card is pointless if the app crashes because of a memory error during a 12-hour render. The time lost could be catastrophic if it means one misses a submission deadline; that's just not viable for the pro users I know.
Ian.
[/quotemsg]
mapesdhs,
Many excellent points.
Yes, I agree completely that pure clock speed is useful and desirable in workstations; my point was that if I were predominantly rendering, I would rather have more cores/threads than a high clock speed. But yes, I'd love a couple of twelve-core Xeons at 4.5GHz. These may be coming too, as the next generation of 14nm E7s (2015) are said to be 12-15 core, use DDR4, and be quite fast, though I've not heard any specific numbers. Intel seems to start each generation's development at the lower clock speeds.
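To illustrate why that is: a final render splits into independent tiles, so it scales with thread count almost regardless of clock. A toy sketch with Python's multiprocessing (render_tile() is only a stand-in for a real render kernel):
[code]
# Toy illustration of why renderers favour core count: the frame splits into
# independent tiles, so N cores give close to N-times throughput.
# render_tile() is just a CPU-burning stand-in for a real shading kernel.
import time
from multiprocessing import Pool, cpu_count

def render_tile(tile):
    acc = 0.0
    for i in range(1, 2000000):   # pretend this is shading one image tile
        acc += 1.0 / i
    return tile, acc

if __name__ == "__main__":
    tiles = list(range(32))
    start = time.time()
    with Pool(cpu_count()) as pool:
        pool.map(render_tile, tiles)
    print("rendered %d tiles on %d cores in %.1f s"
          % (len(tiles), cpu_count(), time.time() - start))
[/code]
Viewport interaction, by contrast, mostly lives on one thread, which is where your high clock speed wins.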
Your comments are also very welcome, as you mention some of the important experiential qualities that come into play when using workstation applications. One of the problems in this kind of discussion is that those with gaming-oriented systems have not used 3D CAD and rendering applications to the level where the workstation cards become not only useful, but mandatory. Especially important are viewports, artifacts, and reliability.
After using a Dell Precision T5400 with the original Quadro FX 580 (512MB), I soon realized that 3D CAD - for which I bought that system - would need more 3D capability and memory. I fell for the idea that, as I was primarily a designer and not working at an extraordinarily high level in 3D CAD, a GeForce GTX should be adequate, possibly even faster at that level than a Quadro, and far less expensive. I bought a GTX 285 (1GB) because it was more or less a 1GB version of the 4GB Quadro FX 5800 - same GPU, same 512-bit bus, same 240 shaders, only less memory - $350 instead of $3,200, and I could always add a second one in SLI if needed.
The GTX 285 ostensibly had all the right hardware, and in Sketchup the 3D navigation at first seemed blazingly fast. But as the Sketchup model became larger, the navigation developed a quirk: it would spin in any direction, but if I stopped moving for even a second it would freeze, such that most often I'd have to close the program. I stumbled and stuttered around in monochrome, including as little visible geometry as possible, but it was no good - the moment I included another large component, it would freeze.
The model eventually reached 125MB, and when I added textures and tried shadows, it produced impossible artifacts: a rain of short black lines from any polygon, shadows that became solid planes at bizarre angles, and sometimes textures that would drop out. Extracting renderings from the model - the whole point - was useless, as the rendering application would import for about 25 minutes and then crash Sketchup. I was never able to make a single rendering of a model larger than 9 or 10MB with the GTX 285.
Then I began learning Solidworks in preparation for a 6,000-part assembly (a great first project?), and the system would not open viewports, and the limited anti-aliasing made curves so crude I couldn't make accurate intersections of solids.
In short, the situation was impossible, and I realized how extremely expensive my cost savings had been. I went back to the idea of my favourite Quadro, the FX 5800, and bought an FX 4800 - same GPU, but 384-bit instead of 512-bit, 192 CUDA cores instead of 240, and 1.5GB in place of 4GB. Perfect renderings and viewports, and 128x anti-aliasing instead of 16x. The navigation in my large Sketchup models is not blazing fast, but it doesn't freeze, nor does Solidworks - in short, all problems solved. Eventually I added a second Xeon X5460 and went from 12GB to 15GB to have more cores/threads for rendering, and all is well - though this system gets very hot during rendering (it's the DDR2).
Sorry for the long, historical ramble, but I think these experiential episodes are the kind of information that, as you mentioned, is among the most important aspects of evaluating workstation graphics cards, and is missing from a speed-only focus.
When Quadro K5000s are sold used for $1,000,...
Cheers, BambiBoom
"No matter your wealth, power,or friends, the cheapest things in life are free."
[ Dell Precision T5400 > 2X Xeon X5460 quad core @3.16GHz > 16 GB ECC 667 > Quadro FX 4800 (1.5GB) > WD RE4 / Segt Brcda 500GB > Windows 7 Ultimate 64-bit > AutoCad 2007, Revit 2011, Solidworks 2010, Sketchup 8 Pro, Corel Technical Designer X-5, Adobe CS4 MC, WordP Office X4, MS Office2007]