AMD FirePro W8000 And W9000 Review: GCN Goes Pro

[citation][nom]fuzznarf[/nom]Are you smoking crack?!?! The 6K Quadros are there. Perhaps it would help if you read the whole article.[/citation]

shin0bi272 clearly stated consumer cards. Quadros are not consumer cards.

[citation][nom]shin0bi272[/nom]Love how you left out the 6 series nvidia cards in your consumer card tests. Very sneaky THG.[/citation]

The consumer Kepler cards are next to useless if used in double-precision compute tasks compared to Fermi and GCN. I think that it would have been nice to see exactly where they stand, but they might have been left out because they simply can't compete except in single precision tasks.
 
Nice to see AMD finally getting serious about their professional cards... it has always been rather embarrassing how far behind they were in the past. These new cards (young drivers and all) are a clear improvement over the old series, and the fact that they can handle such high AA without blinking is extremely impressive!

However, trading blows with a two-year-old Nvidia card is nothing to be proud of, especially with the new Quadros being released very soon. But at least they are in the game this time, where before they were clearly behind.

What are the odds of aftermarket cooling solutions being made for these cards to shut them up a bit? It has always amazed me that they would charge so many thousands of $$$ and not put on a better thermal solution.
 

lordstormdragon

Distinguished
Sep 2, 2011
153
0
18,680
As a long-time Maya user, I think it's great to see AMD pushing ahead with their GPU tech.

That said, for the price (again, primarily as a Maya user), there's simply no reason not to stick with the 7K series Radeons or the 500 series Nvidia cards. For Maya, these cards yield sharply diminishing returns and would take many months to pay for themselves, if they ever did.

Mid to high-end gaming cards perform almost as well in Maya's viewport(s) at a fraction of the price. I'd have liked to see more gaming cards tested in the Maya section here, but understand why they were not. It'd be a low blow to show a GTX550Ti smoking a $4,000 "pro" card.
 

nebun

Distinguished
Oct 20, 2008
2,840
0
20,810
[citation][nom]mayankleoboy1[/nom]Typical of AMD: releasing cards without proper drivers. I bet most professionals won't touch these cards until at least 3-4 driver revisions. These cards are newer, yet perform worse than the competition's older ones.[/citation]
That's the main reason why I will never own one again... the drivers are crap, literally.
 

nebun

Distinguished
Oct 20, 2008
2,840
0
20,810
[citation][nom]rmpumper[/nom]The review needs at least one gaming GPU as comparison.[/citation]
these aren't mainstream cards...why game on them?
 

nebun

Distinguished
Oct 20, 2008
2,840
0
20,810
[citation][nom]CaedenV[/nom]If you ask AMD and nVidia really nice do you think they would let you borrow 2 of the same cards to do a SLi vs xFire scaling test? I am curious about the new 0 core tech.[/citation]
Zero Core tech is a gimmick, I don't see the point in it... most of Nvidia's cards will clock down if no 3D graphics are detected... much more efficient if you ask me
 

falchard

Distinguished
Jun 13, 2008
2,360
0
19,790
The results are a bit predictable. The professional market is not like gaming, where everything is based on the same API. Nvidia and AMD each have software that caters better to their hardware. It's up to the person buying to do the research and figure out which card would be best in their price range. Brand loyalty has very little to do with the decision. Neither AMD nor Nvidia is subpar in this space.
Also, yeah, still no Blender?
 
[citation][nom]nebun[/nom]Zero Core tech is a gimmick, I don't see the point in it... most of Nvidia's cards will clock down if no 3D graphics are detected... much more efficient if you ask me[/citation]

Zero Core goes beyond just clocking down; it turns pretty much EVERYTHING off that isn't necessary. It can disable all auxiliary AMD Radeon 7000-based GPUs in a multi-GPU computer. Nvidia can't do this yet, and that makes its cards use more power in situations where Zero Core is working. AMD's display-off advantage with it is also fairly good and lets AMD cards idle well below even the Kepler cards. This isn't a gimmick; it's a very practical technology that works quite well.
 
[citation][nom]nebun[/nom]these aren't mainstream cards...why game on them?[/citation]

Some professional users like to game too. If they're already buying an expensive professional card, then there's not much point in buying another consumer card for gaming if the pro card can do it quite well.
 

markem

Honorable
May 1, 2012
37
0
10,530
[citation][nom]nebun[/nom]Zero Core tech is a gimmick, I don't see the point in it... most of Nvidia's cards will clock down if no 3D graphics are detected... much more efficient if you ask me[/citation]

Go away troll
 

markem

Honorable
May 1, 2012
37
0
10,530
We all know the drivers need optimizing. AMD has this in the bag; this is an embarrassing time for Nvidia. Give it another month and the pro market will have forgotten Nvidia, apart from a few benches/software here and there that Nvidia may still win.

AMD's driver team will work their magic as they did with the HD 7000 series gamer cards (wait and see).
 
[citation][nom]markem[/nom]We all know the drivers need optimizing. AMD has this in the bag; this is an embarrassing time for Nvidia. Give it another month and the pro market will have forgotten Nvidia, apart from a few benches/software here and there that Nvidia may still win. AMD's driver team will work their magic as they did with the HD 7000 series gamer cards (wait and see)[/citation]

Compute performance is more complex than gaming performance, and from the looks of things with the current drivers, I would say that which card you get depends entirely on what you're doing. I'm not sure any amount of driver improvement can change that. It might mean AMD wins more often and/or doesn't lose as badly when it loses, but I doubt AMD can get a clean sweep, especially since some software still relies on CUDA.
 

mapesdhs

Distinguished
mayankleoboy1 writes:
> 1. How does the CPU performance affect the benchmarks? ...

(I don't know why people have marked you down. Perfectly sensible questions...)

Alas, CPU performance can affect the results quite a lot. I've tested a range of Quadro cards
and gamer cards with a number of CPUs/mbds; the results are most revealing. See:

http://www.sgidepot.co.uk/misc/viewperf.txt

In particular, compare the results for my systems with the highest absolute clocks, irrespective of the
no. of cores. The 2700K @ 5GHz dominates, but the i3 550 @ 4.7GHz is not far behind (I'm
expecting good results when I sort out my i5 670 which ought to run over 5GHz no problem).

Indeed, some of the Quadro 600 results I've obtained completely obliterate the data in the
article for supposedly much more powerful cards. CPU speed really does matter, in many
cases absolute clock rate. It's a shame this is the case, but worth noting if one needs good
performance on a budget (very much depends on the task though).

Note especially how my 'consumer' systems like the 2700K beat my Dell dual-XEON X5570
setup (the latter has two 4-core CPUs, so 8 cores, 16 threads, 24GB DDR3/1333 RAM, 600GB
15K SAS).


> ... that changing the CPU to much better Intel Xeons won't affect the performance much?

Perhaps not to older XEONs, but the newer versions would probably make a difference since
their IPC is better.


> 2. Also, how do the consumer cards perform in these pro applications?

In general, rubbish, the exception being Ensight and Maya in certain cases. Likewise, don't
use pro cards for games (I've tested that too; Quadros are terrible for 3DMark06).


Caveat: what Viewperf can't show you is how a pro system will cope with a 'real' dataset,
which often is very large and thus may need lots of cores to run well, and lots of RAM. To
some extent Viewperf is rather narrow because of this, eg. it can't convey how good a
system would be for visualising medical datasets, GIS, huge sat images, etc., all of which
also depend on strong I/O.

Ian.

 

mapesdhs

Distinguished
[citation][nom]lordstormdragon[/nom]... I'd have liked to see more gaming cards tested in the Maya section here, but understand why they were not. It'd be a low blow to show a GTX550Ti smoking a $4,000 "pro" card.[/citation]

What one doesn't get from gamer cards is top-end image quality, and their AA line performance is woeful (or at least it used to be).

Ian.

 

sam_m

Honorable
Aug 13, 2012
3
0
10,510
mapesdhs / ian - thanks for the post with info and your viewperf results. Can I add another caveat to consider... There are professional applications that use DirectX (e.g. nearly the entire Autodesk range has moved over from OpenGL) in the same way there are games that use OpenGL instead of DX.

Viewperf only measures OpenGL performance of cards, so it's not representative of their ability with those professional applications that run DirectX - so a pro system benchmarked highly with viewperf may well not be an ideal pro system for the end user who's not using OpenGL.

All workstation cards need to excel at both OpenGL and DirectX, which means DirectX benchmarking alongside Viewperf. It's just a shame the majority only associate DirectX with games and thus think gaming benchmarks are suitable, which is questionable. But that's not to say gaming cards (arguably tailored more to DirectX than OpenGL) shouldn't be considered when benchmarking DirectX professional software, as the pro software is now kicking about in their (API) camp.

As a follow-on pondering... if we ignore OpenGL entirely, then what do these expensive cards add for those users above and beyond the cheaper gaming cards, as the OpenGL enhancements of workstation cards are pretty much wasted on professionals using DirectX software?
 
[citation][nom]sam_m[/nom]mapesdhs / ian - thanks for the post with info and your viewperf results. Can I add another caveat to consider... There are professional applications that use DirectX (e.g. nearly the entire Autodesk range has moved over from OpenGL) in the same way there are games that use OpenGL instead of DX. Viewperf only measures OpenGL performance of cards, so it's not representative of their ability with those professional applications that run DirectX - so a pro system benchmarked highly with Viewperf may well not be an ideal pro system for the end user who's not using OpenGL. All workstation cards need to excel at both OpenGL and DirectX, which means DirectX benchmarking alongside Viewperf. It's just a shame the majority only associate DirectX with games and thus think gaming benchmarks are suitable, which is questionable. But that's not to say gaming cards (arguably tailored more to DirectX than OpenGL) shouldn't be considered when benchmarking DirectX professional software, as the pro software is now kicking about in their (API) camp. As a follow-on pondering... if we ignore OpenGL entirely, then what do these expensive cards add for those users above and beyond the cheaper gaming cards, as the OpenGL enhancements of workstation cards are pretty much wasted on professionals using DirectX software?[/citation]

Professional cards are optimized for compute performance. DirectX is not a compute language as far as I'm aware, although DirectCompute might count. Much (perhaps most) professional software now supports one or more of CUDA, OpenCL, and DirectCompute, all of which can be used to accelerate programs whose workloads run highly parallel on GPUs and other such highly parallel processors. OpenGL and DirectX, to my understanding, aren't really designed for this, so supporting one over the other doesn't change most of what workstation/compute cards do, nor how they do it.
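To make "highly parallel" a bit more concrete, here's a minimal sketch of the sort of kernel those compute APIs are designed for. It's written in CUDA only because that's compact; the same pattern maps onto OpenCL and DirectCompute, and the kernel/array names and sizes are just made up for illustration - one thread per array element, with thousands of them running at once:

[code]
#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles one array element: the classic data-parallel pattern
// that CUDA/OpenCL/DirectCompute accelerate on pro and consumer GPUs alike.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                        // one million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));     // unified memory keeps the sketch short
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);   // launch one thread per element
    cudaDeviceSynchronize();

    printf("y[0] = %.1f\n", y[0]);                // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
[/code]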
 

mapesdhs

Distinguished
sam_m writes:
> mapesdhs / ian - thanks for the post with info and your viewperf results. ...

Most welcome! Note that I will be testing with some AMD CPUs at some point (I have a 6000+,
Athlon II X4 635, Athlon II 250, Ph2 X4 965, though no X6 as yet), but so far I've not obtained
any FireGL cards.


> ... There are professional applications that use DirectX ...

Interesting, do you have some definite examples?


> (e.g. nearly the entire Autodesk range has moved over from OpenGL) ...

AFAIK the IFFFS suite, Maya, etc. are still all OGL based. Ditto Nuke and numerous others.


> ... If we ignore OpenGL entirely then what do these expensive cards add to those users
> above and beyond the cheaper gaming cards as the OpenGL enhancements of workstation
> cards is pretty much wasted on those professionals using DirectX software.

Most of the pro users I know do use apps which employ OGL though, so I'm not sure it's really
that much of an issue. Plus, as blazorthon says, most apps if they've any sense will use OpenCL,
CUDA, etc. for hw acceleration support - remember the origins of OGL were for real-time vis sim
(check the original specs of IR, on which the GF256 was based), not compute acceleration, though
that's always been possible via the ARB extensions (thus the real-time imaging functions on SGI
IRIX workstations, eg. real-time roam/pan/zoom/rotate/brightness/contrast/sharpness/etc. on
large images, all the way up to multi-GByte images on the Group Station for Defense Imaging).

The main difference is driver optimisation, which for performance can mean up to an order of
magnitude gap between a gamer card and a pro card for a particular task. For example, games need
2-sided textures, pro apps don't; likewise, pro apps need high quality AA lines, games don't. Check
individual test results for Maya and other apps: you can see pro cards doing some tests 10X faster
than gamer cards (imagine the difference that would make to an animator or CAD person working
with a large wireframe model), but these differences are often 'smoothed' out of the final average
by other tests where the gamer cards are not so bad (a quick toy example below shows how). In
reality it really is a false economy to use a gamer card for a pro task, with just the occasional
exception such as Ensight. The real advantage
of gamer cards is raw cost, but their utility re productivity is a different matter. Also, pro cards use
parts which should last longer, the support is far better, and so on. Years ago it was the case that
some cards could be modded back & forth in various ways between pro & gamer, but it's not like
that now.
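
To see how that smoothing happens, here's a toy calculation - made-up per-test speedups and a plain geometric-mean composite rather than Viewperf's actual weighting - showing how a single 10X sub-test win shrinks in the headline number:

[code]
// Toy illustration only: hypothetical pro-vs-gamer speedups on four sub-tests,
// combined with a simple geometric mean (plain C++; builds with g++ or nvcc).
#include <cmath>
#include <cstdio>

int main()
{
    const double ratio[] = {10.0, 1.1, 0.9, 1.2};   // one huge win, three near-ties
    const int n = sizeof(ratio) / sizeof(ratio[0]);

    double log_sum = 0.0;
    for (int i = 0; i < n; ++i)
        log_sum += std::log(ratio[i]);

    // The 10X outlier is diluted to roughly a 1.9X composite advantage.
    printf("composite speedup: %.2fX\n", std::exp(log_sum / n));
    return 0;
}
[/code]

So a card that transforms one specific workflow (e.g. large wireframes) can still look only modestly better in the overall score.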

The optimal approach of course is to use both: pro card for the main app/display(s), gamer card(s)
for CUDA acceleration. I'll be testing this soon with CS6 (AE); I have a test render dataset which
takes about 4 hours to render CPU-only on a 5GHz 2700K.

Ian.

 

sam_m

Honorable
Aug 13, 2012
3
0
10,510
I'm a design engineer that uses something called Autodesk Inventor - 3d CAD software like Solidworks. They're both pretty equivalent pieces of kit and very comparable - but there is a different hardware requirement because Inventor uses DirectX and Solidworks uses OpenGL.

They're both single-threaded applications (at the time of writing), so there is no support/benefit of CUDA or DX11's GPU-assisted processing (or OpenCL, DirectCompute, etc.). To pre-empt the flames: strictly speaking that's not quite true - there are specific tasks like FEA analysis and image rendering where more than one core is used - but for the meat-and-veg 3D modelling and design work it's only one core. The speculation is that this comes down to the nature of the programs: a model is a linear progression of features, so how can the calculations be parallelised when feature y cannot be considered without first calculating feature x? I hope that makes sense.
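
To picture that dependency, here's a tiny sketch (hypothetical Model/Feature types, nothing to do with Inventor's actual internals) showing why the rebuild is a serial chain - each feature needs the model produced by the one before it, so there's no independent work to hand to extra cores:

[code]
// Hypothetical stand-ins, not Autodesk's API (plain C++; builds with g++ or nvcc).
#include <functional>
#include <vector>

struct Model { int featureCount = 0; /* real geometry state would live here */ };
using Feature = std::function<Model(const Model&)>;

// Rebuilding the feature tree is inherently serial: feature k needs the output
// of feature k-1, so the loop below cannot be split across cores or a GPU.
Model rebuild(const std::vector<Feature>& history)
{
    Model m;
    for (const Feature& f : history)
        m = f(m);              // each step depends on the previous result
    return m;
}

int main()
{
    // 100 dummy "features", each one transforming the model built so far.
    std::vector<Feature> history(100, [](const Model& prev) {
        Model next = prev;
        ++next.featureCount;   // stand-in for "apply one modelling feature"
        return next;
    });
    return rebuild(history).featureCount == 100 ? 0 : 1;
}
[/code]

The FEA and rendering cases are different: there, each element or pixel can be computed independently, which is why those tasks can use all the cores.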

Why is Inventor DirectX? Well, historically it was OpenGL too, but Autodesk moved to DirectX about 6-7 years ago with their first Vista release. When they were programming it, M$ specified that all Vista drivers had to be WHQL signed, so Autodesk feared problems with OpenGL and moved over to DirectX. If you are interested (and having trouble sleeping), here is a PDF collecting comments from an Inventor programmer at the time of the announcement, explaining the reasoning for the change:
http://archicad-talk.graphisoft.com/files/autodesk_inventor_opengl_to_directx_evolution_788.pdf

Autodesk also moved its other professional CAD software over to DirectX, e.g. AutoCAD (and its variants like Mechanical, Electrical, etc.), Revit, etc. Tbh, I thought 3D Studio was moved over to DirectX too.

Since the move over to DirectX all those years ago, there is still confusion in the Inventor community over the need for/benefits of these workstation cards, as there is a general lack of information and education everywhere from end users to hardware vendors. Speaking of which, we still have the problem of hardware vendors pushing workstation cards, quoting OpenGL benchmarks and saying "it's what CAD software uses." From personal experience, if we ask for comparisons against gaming cards and for benchmarks of DirectX applications, we get told workstation cards are "more accurate" etc., but again, I believe those kinds of comments relate to the OpenGL environment and the workstation cards' better certification against the OpenGL standards compared to gaming cards (I think I've read that somewhere).

Due to all the confusion within the Inventor CAD community there have been a number of unofficial user benchmarks and comparisons, where the results have almost always swung towards the gaming cards (and usually Nvidia ones). The cynic in me thinks hardware vendors are rubbing their hands while selling expensive workstation cards to Inventor users who possibly don't need them.

So, I hope this explains the existence of professional DirectX software and my desire for non-gaming DirectX benchmarking of workstation cards (and the need to include gaming cards within any non-gaming DX benchmarking).

And sorry if there's any confusion from my use of "professional" software to mean the CAD market; I admit I know little of the requirements elsewhere - I guess that's the problem, as technically there's an almost unlimited range of "professional" software...

(sorry to keep posting in this topic about DirectX benchmarking, just trying to explain my situation better each time).
 

mapesdhs

Distinguished
sam_m writes:
> I'm a design engineer that uses something called Autodesk Inventor - ...

Ahh, brings back memories... *sigh* I was using Inventor back in the early 1990s when it
was still all-SGI, all OGL, and incidentally indeed all single-threaded. I developed a bunch of
3D tests based on it for SGI systems, though these ended up being somewhat CPU bound for
large models (viewing the same model in perfly could be an order of magnitude faster). See:

http://www.sgidepot.co.uk/perfcomp.html

It's the "Inventor 3D (single-buff)" link, top-left.


> OpenGL. They're both single-threaded applications (at the time of writing) so there is no

After all these years and they didn't change it. Pretty lame really.

Oddly enough though, this means you're exactly the kind of user who would - on a budget - get
good results from the kind of oc'd dual-core systems I've been testing, eg. i3 550 in the 4.7 to
5GHz range.


> creating the models - how can the calculations be parallelised when feature y cannot be
> considered without first calculating feature x? ...

Perhaps the nature of Inventor has changed, but back when I was using it the system could
certainly have been threaded, but performance in that sense wasn't its focus at the time, and
it certainly wasn't intended as the basis for a CAD system (VRML was more the target market
back then; I was using it for undersea ocean systems modelling).


> moved to DirectX about 6-7 years ago with their first Vista release. When they were programming

And behold I am become Vista, the destroyer of worlds. :}


> over to DirectX, all those years ago, there is still confusion in the Inventor community over the
> need/benefits of these workstation cards as there is a general lack of information and education

In reality you'd see a bigger gain by having a much quicker CPU. Just look at my i3 550 results
with the Quadro600 for many of the tests.


> again, I believe these kind of comments are related to the OpenGL environment and the better

Hmm, I'm not sure that an app using OGL rather than DirectX would affect exactly how a GPU
will render a texture wrt mipmap quality... I wouldn't have thought so. Comments blazorthon?
Not sure myself.


> there's almost an unlimited range of "professional" software...

Doesn't help that Autodesk now owns almost all of them. Very unhealthy IMO. The media is
quick to pounce on competition issues wrt high street banking & all sorts of other areas, but
when it comes to sw we can end up with one corp owning the whole show & nobody seems
to care. Remember the days when Max was cheap & Maya was expensive? It's the other
way round now...


Anyway, sorry I can't help with DX pro benchmarks. I'm kinda limited in what tests I can run,
partly due to time constraints.

Ian.

PS. What system do you have btw? Take this to PM/email if preferred (mapesdhs AT yahoo DOT com).

 

Shin-san

Distinguished
Nov 11, 2006
618
0
18,980
It's awesome that you benchmarked video games with these cards. I know that's not their purpose, but a lot of us are simply curious. Also, I can see some companies looking for something that performs well in both compute and gaming loads.
 
Guest

Guest
yosoyyon.com: I like your project. I wonder how it looks when it's rendered and put in a scene. I do 3D on my site also, but I wonder how long you usually take to do something like that. Very good.
 