AMD FirePro W8000 And W9000 Review: GCN Goes Pro

Guest
"We used Maya 2013 with a modified script based on the "MayaTest.mel" (SPECapc 2009) file. We decided on the Nvidia Quadro 6000 as our reference card at 100% and converted all other cards' performances to a percentage of its result."

Why? Is it because you didn't want to show how badly the less expensive AMD boards slaughter the NVIDIA boards? Case in point: in TexturedHQ, the W9000 is 3.4 TIMES faster than the Quadro 6000, and it's about the same in Smooth ShadedHQ. Why don't you re-do the graph to match all the others?

And where's the W7000? I'd love to see how the $770 W7000 compares to the $3700 Quadro 6000 on Maya...
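
Re-baselining against a different card (or just plotting the raw numbers) would be trivial if the raw scores were published; roughly something like this, with made-up placeholder scores rather than the article's data:

[code]
# Express each card's benchmark score as a percentage of a chosen reference
# card. The scores below are made-up placeholders, not the article's data.
raw_scores = {
    "Quadro 6000": 42.0,      # reference card, becomes 100%
    "FirePro W9000": 142.8,
    "FirePro W8000": 118.3,
}

reference = "Quadro 6000"
baseline = raw_scores[reference]

relative = {card: 100.0 * score / baseline for card, score in raw_scores.items()}

for card, pct in sorted(relative.items(), key=lambda kv: kv[1], reverse=True):
    print("%s: %.0f%%" % (card, pct))
[/code]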
 

mapesdhs

Distinguished
[citation][nom]Shin-san[/nom]That is awesome that you benchmarked video games with these cards. I know that's not their purpose, but a lot of us are simply just curious. Also, I can see some companies looking for something that offers both in compute and video game loads.[/citation]

I tested a Quadro 4000 recently; it wasn't too bad for the older-feature aspects of 3DMark06, but as one might
expect it was pretty awful for 3DMark11. See:

http://www.3dmark.com/3dm06/17017198
http://www.3dmark.com/3dm11/5263959

Ignore the overall scores for these tests; focus on the individual test frame rates. CPU performance severely distorts
3DMark06 results and is beginning to have the same effect on 3DMark11 results.

I'll run all the other tests later (X3TC, Stalker, etc.)

Ian.

 

mapesdhs

Distinguished
[citation][nom]blazorthon[/nom]... OpenGL and DirectX, to my understanding, aren't really capable of this, so supporting one over the other doesn't change most of what workstation/compute cards do nor how they do most of what they do.[/citation]

Not entirely sure how it might relate to your comments about compute, but remember OGL does have the ARB imaging
extensions, which offer hardware acceleration of a wide range of 2D image processing operations on supported hw.
Whether vendors of PC gfx cards ever bothered including this in their drivers, I have no idea, but SGI supported it
on its systems from the mid-1990s, allowing real-time pan/roam/zoom/rotate/haze/brightness/contrast/sharpness/etc.
adjustments on large images using the gfx hw alone, even with the supplied free program imgview (it uses texture tiling
to split the image and feed the pieces to the 3D subsystem - more on this in the various technical reports). In the
professional market (especially defense imaging) there was the ELT (electronic light table) system, able to manipulate
images up to 115K pixels across (eg. the Group Station for Defense Imaging).

Have NVIDIA/AMD included the ARB extensions in their OGL drivers? They certainly ought to. It's old though, so maybe
they've been replaced by something else since the early days, I don't know.
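
One quick way to find out on a Linux box is just to look at the extension string the driver advertises, eg. something like this (a rough sketch; assumes the glxinfo tool from mesa-utils is installed, and I haven't run it against these particular drivers):

[code]
# Check whether the driver advertises the ARB imaging subset by searching the
# OpenGL extension list reported by glxinfo (mesa-utils). A rough sketch only;
# not verified against these particular drivers.
import subprocess

output = subprocess.run(["glxinfo"], capture_output=True, text=True).stdout
if "GL_ARB_imaging" in output:
    print("GL_ARB_imaging is advertised by this driver")
else:
    print("GL_ARB_imaging is not advertised by this driver")
[/code]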

Anyway, just thought I'd mention it, as it does mean OGL offers hw acceleration of various 2D tasks if the drivers support
it, but I'm not sure whether this counts as 'compute' acceleration within your post's context.

Ian.

 

folem

Distinguished
With the older FirePro V-series cards beating the newer W-series cards (in some tests the V5900 even beat the V7900), there is a problem. Either those tests are invalid/outdated, or AMD has some serious driver development to do. Looking at the raw hardware on the cards, the W-series should be able to massively overpower the V-series (and the Quadro cards).
 

FormatC

Distinguished
Take a look at the date please :)

Consumer cards or workstation cards, the situation is the same. AMD took over a year to optimize Catalyst for GCN, and Catalyst Pro was no better.
 

mapesdhs

Distinguished
He has a point though. One would think AMD would test their new cards against the old,
and notice when basic tests might show older products appear initially faster. Not exactly
good for sales/launch PR...

Ian.

 
[citation][nom]mapesdhs[/nom]He has a point though. One would think AMD would test their new cards against the old, and notice when basic tests might show older products appear initially faster. Not exactly good for sales/launch PR... Ian.[/citation]

These are professional workloads and they vary to an extreme. That some of them favor, or at least don't mind, older architectures shouldn't be too surprising, especially when you look at Nvidia's results too.
 

mapesdhs

Distinguished
I don't think that was his point; rather, unless I misread, he's saying that a newer product isn't
any better than an older product. OTOH, if he's referring to comparisons that are/were not at
the same price point then that's another matter, but he's absolutely right if he's saying a new
(for example) mid-range product is slower than an old mid-range. One can argue it's a driver
issue, but the whole *point* of paying more for Pro cards is that the drivers are supposed to be
far better optimised and debugged prior to launch.

This of course also assumes the bottleneck in whatever test is the GPU. If it's not, then again
that's another matter.

I have a lot of different Quadro cards and there is a huge degree of performance overlap, but
as I say this is usually something that crosses over target price ranges which doesn't really
matter so much.


folem, can you give a more specific example please?

Ian.

 
[citation][nom]mapesdhs[/nom]I don't think that was his point; rather, unless I misread, he's saying that a newer product isn't any better than an older product. OTOH, if he's referring to comparisons that are/were not at the same price point then that's another matter, but he's absolutely right if he's saying a new (for example) mid-range product is slower than an old mid-range. One can argue it's a driver issue, but the whole *point* of paying more for Pro cards is that the drivers are supposed to be far better optimised and debugged prior to launch. This of course also assumes the bottleneck in whatever test is the GPU. If it's not, then again that's another matter. I have a lot of different Quadro cards and there is a huge degree of performance overlap, but as I say this is usually something that crosses over target price ranges which doesn't really matter so much. folem, can you give a more specific example please? Ian.[/citation]

I wouldn't chalk it up to driver issues as much as simply being differences in architecture. Driver quality may be the issue, but there doesn't seem to be as much evidence supporting that as there is evidence supporting the difference in architectures being the issue, especially if we compare all of the Nvidia and Ati/AMD cards as a whole rather than just one or the other.

Overall, the newer cards beat the older ones, but there are simply some tasks that didn't take as well to the architectural change. At least there are only a few true losses compared to the majority of wins for the newer cards. Heck, in a few situations, we've got Nvidia trailing by minuscule fractions, and the other way around. I'd think that only having a few losses is not too bad here, since you shouldn't expect newer to necessarily mean better in every possible way (something we see proof of all too often).

If the issue was the drivers, then better drivers will probably come out and fix it, assuming that they haven't already (can't ignore the age of the article). If the issue wasn't drivers, then there's no reason to blame the driver quality.
 

mapesdhs

Distinguished
I agree. And it doesn't help that some apps respond a lot more to particular kinds of CPU power
(I don't necessarily mean no. of cores), such that an apparently weaker CPU at a high clock with a middling
card can give benchmark scores that beat a typical quad-core 'pro' CPU with a newer card (ProE is a
good example). Such results don't necessarily reflect real application workloads, which might involve
large datasets or other types of processing, such as was often the case with SGIs that did on-the-fly IRIS
Performer database conversion prior to each frame being rendered (eg. interactive oil rig models). Benchmarks
can only take one so far wrt making a purchasing decision.

This is why I'm running AE tests with real workloads (tests that grab as much as 40GB RAM) to at least gain
some insight & offer practical info for that particular app. But as you say, it's hard to know how data for one
app might apply to another, and the reality might be different to what benchmarks suggest. AE is a good
example: site reviews talk a lot about final scene render times (something that's easy to test & measure) but
that's not as important as the degree of interactive response available to the artist (much harder to measure).
There can also be the unexpected: certain types of visual effect in AE can take *longer* to render via CUDA
when done with multiple GPUs, because of the way AE tries to process the scene - the sort of thing no
benchmark will reveal.

Ian.

 

Haravikk

Distinguished
I know I'm a bit late here, but it seemed to be the most recent workstation GPU review on the site (sorry if I missed one).

What I'd like to say is that we could really do with some more tests that highlight the benefits of the workstation GPUs over the consumer ones; in the DirectX and OpenCL sections the consumer cards seem to perform just as well. While I realise that a part of what makes the workstation cards good is the optimised drivers for AutoCAD etc., there are supposedly hardware differences too, but reviews never really do anything to highlight these.

A good test would be a stress test where a FirePro card performs a long-running task for several days, to see how much work is done in total alongside how it performed (speed, temperature, noise, maybe errors if a test can detect those). Then do the same for the equivalent consumer card.

For example, the consumer cards seemed to do somewhat better at Bitcoin mining, which may be symptomatic of the programs being optimised to run on these cards, but it might be nice to see if that lead is maintained after running for several days, and what kind of heat and noise each card generates during the test.
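
Even a crude polling script logging temperature and load every few seconds for the whole run would produce the kind of over-time data I mean; something like this, say (just a sketch, and it assumes an Nvidia card with nvidia-smi available - a FirePro would need the vendor's own monitoring tool instead):

[code]
# Crude long-duration logger: sample GPU temperature, utilisation and clock
# every 10 seconds and append them to a CSV for graphing later. Assumes an
# Nvidia card with nvidia-smi on the PATH; an AMD card needs a different tool.
import csv
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=temperature.gpu,utilization.gpu,clocks.sm",
         "--format=csv,noheader,nounits"]

with open("gpu_log.csv", "w", newline="") as log:
    writer = csv.writer(log)
    writer.writerow(["elapsed_s", "temp_c", "util_pct", "sm_clock_mhz"])
    start = time.time()
    while True:  # let it run for days; stop with Ctrl+C
        temp, util, clock = subprocess.check_output(QUERY, text=True).strip().split(", ")
        writer.writerow([int(time.time() - start), temp, util, clock])
        log.flush()
        time.sleep(10)
[/code]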

It's just something to keep in mind for future workstation GPU reviews, but we could really do with this being highlighted as no-one ever seems to have addressed whether there are true hardware differences or not.
 

mapesdhs

Distinguished


The difficulty comes when considering what kind of test would
better reflect the usage of such cards. For example, a real-world
task on a pro card might involve a large data set, lots of I/O,
probably some preprocessing, and perhaps may be of a type
that must run for a long time without failure. GIS, medical,
engineering analysis, all sorts of things. Thus, apart from the
features of a pro card that may benefit relevant applications,
the reliability of a pro card may be just as or more important,
something that's hard to quantify in a simple performance
benchmark. No point using a cheap gamer card if it means
the system crashes after a while because the card can't
cope with the sustained load and large amounts of data
being processed.

Hence, a 'realistic' test of a pro card may end up hiding
important aspects of the card among other things, whereas
a deliberately smaller dataset that gamer cards can handle
as well could justifiably be discounted because it's not realistic.
For example, the AE test dataset I've been using grabs about
40GB RAM and causes a lot of I/O. GIS tasks would be much
more severe.

This is why Viewperf can be misleading, ie. it doesn't use
real-world datasets. Real problems often involve layers of
complexity that are hard or impossible to include in a benchmark.

Ian.

 

Haravikk

Distinguished

Oh absolutely, and it'd be great to see tests for these kinds of things, even if they can only end up comparing other workstation cards in the end.

But even just running a crypto-currency mining test and leaving it running for several days (which is a more relevant form of such a test anyway) could give some indication of how the performance, temperature etc. cope over a long period. I mean, given the generally slightly lower clock-speeds of workstation cards it isn't surprising that consumer cards will outperform them for less money in short bursts, so it'd be nice to at least see long tests to give us more performance graphs for the results, rather than just the usual bar-charts that don't show much of what's really going on.

I mean, when it comes down to it, graphics cards for gaming aren't expected to be pushed to 100% for long periods since games simply don't do that. But how does that show up in usage over-time when they are pushed hard? i.e - will consumer cards spike initially but be forced to ease off with time? How does the workstation card compare; can it ramp up to full speed and then stay there like we expect them to? Even just these kinds of basics aren't really covered enough IMO, and I think it makes sense for a review to do so as every time a new workstation card comes out there are the inevitable questions of what exactly you're paying for, whether it's all just a scam etc.
 

mapesdhs

Distinguished
Haravikk writes:
> Oh absolutely, and it'd be great to see tests for these kinds of things, even if they can only
> end up comparing other workstation cards in the end.

One often sees a demand by readers for a gamer card to be included in any review of a pro
card. Fine & fair enough for certain apps that are not OGL-based, but really it misses the point.


> But even just running a crypto-currency mining test and leaving it running for several days (which
> is a more relevant form of such a test anyway) could give some indication of how the performance, ...

Perhaps, but when it comes to writing a review, such lengthy testing times are not practical. Also
rather expensive re power consumption.


> ... I mean, given the generally slightly lower clock-speeds of workstation cards it isn't surprising
> that consumer cards will outperform them for less money in short bursts, ...

Depends on the task as I said; in many cases pro cards can be an order of magnitude faster
than a gamer card.


> I mean, when it comes down to it, graphics cards for gaming aren't expected to be pushed to
> 100% for long periods since games simply don't do that. ...

Hmm, don't think I can agree there; games often do hammer cards constantly, and gamers
will play games for many hours.


> But how does that show up in usage over-time when they are pushed hard? ...

In most cases they handle it just fine, because they have better coolers.


> ... i.e - will consumer cards spike initially but be forced to ease off with time? ...

Except for AMD's recent cards with their new throttling mechanism, no.


> How does the workstation card compare; can it ramp up to full speed and then
> stay there like we expect them to? ...

Yes. Caveat: old cards could do with a clean. Dust build-up in something like a
Quadro 4000 can mean it gets a bit hot under load.

It's a good idea to replace the stock cooler on a pro card if it's viable. I did this with a
Quadro 4000 recently; it reduced load temps by a massive 40C, and the PCB
temp dropped by 35C.


> as every time a new workstation card comes out there are the inevitable questions of what
> exactly you're paying for, whether it's all just a scam etc.

Apart from the optimised drivers for pro apps which give much faster performance most of
the time, what you're really paying for is better warranty, reliability, consistency, etc. At least
in theory anyway. And (I hope) the ability of the product to be able to cope with heavy loads
that go beyond just stressing the GPU and the CPU.

I discuss this more here:

http://forums.creativecow.net/thread/2/1019120#1038491

Ian.



 

Haravikk

Distinguished

What about a follow-up review or something? e.g - leave some of the current cards running to get this data now, then when the next workstation review comes around, cover all the basics that can be done in a timely fashion, and afterwards do the same long tests on the new cards to compare those as well as a part 2 or an update or whatever?

The power issue I can understand though; I'm one of those bastards that abuses a good electricity deal to keep my current Mac Pro running affordably ;)


Sorry, I really meant longer periods than that, i.e - overnight renders and that kind of thing. I know gamers who do play solidly for some ungodly amount of time, but they do need to sleep, work or go to school eventually!

I do understand most of the differences in theory, as I'm sure a lot of people do, but there's very little evidence that really shows any of it, which is what I think would be nice to see in a review, if possible. Especially when consumer cards are featured, as in the tests they're shown for they usually appear to beat the workstation cards quite handily, which ends up being misleading. I know I've never been in a situation where I can try a workstation card against a near equivalent (architecture) consumer card as my upgrades always come generations apart and I can rarely afford to just buy a GPU out of curiosity (meanwhile I can get workstation cards as a business expense, though only every few years).
 

mapesdhs

Distinguished
Haravikk writes:
> What about a follow-up review or something? ...

Still rather impractical to do this. Would take too long to gather the data for multiple cards.


> The power issue I can understand though; I'm one of those bastards that abuses a good
> electricity deal to keep my current Mac Pro running affordably ;)

Heh, that's nothing, I know someone who owns a hefty Onyx3800 system; every time he
uses it, he puts 5 Euro in a jar to cover the power it'll use that evening. :D


> Sorry, I really meant longer periods than that, i.e - overnight renders and that kind of thing.
> I know gamers who do play solidly for some ungodly amount of time, but they do need to
> sleep, work or go to school eventually!

Depends. I played Crysis for 16 hours non-stop once. :D

Doesn't matter though; a gamer card is not going to succeed in the market if it can't withstand multiple
hours of being under load, and it's unlikely that a card will be able to cope with hours under load but
not a day or two. After all, what exactly are you thinking of in terms of a physical scenario that would
cause a failure after (say) 48 hours but not after 6 hours?


> I do understand most of the differences in theory, as I'm sure a lot of people do, but there's very little
> evidence that really shows any of it, ...

Which differences are you referring to? There is plenty of performance data showing the issues I
describe in my Creativecow post.


> as in the tests they're shown for they usually appear to beat the workstation cards quite handily, ...

Again, largely not true, and don't get blind-sided by GPU acceleration tests. Check my Viewperf
results. Gamer cards only perform well for apps that don't use OGL, such as Ensight and Inventor.
Beyond basic performance, there are other issues as well, as my C-cow post explains. There are
also big differences in available support, warranty length, etc.

The right tool for the right job. Tight budget? Using ProE? Get a used Quadro 4K and an oc'd 2700K.
Using AE? Get lots of GTX 580 3GB cards for an X79 with max RAM.

Ian.

 

Haravikk

Distinguished

It doesn't have to fail; I've just never seen any data on how well the cards compete over longer times, i.e - will performance eventually degrade over 48 hours, or will it settle somewhere, and how does it compare to the workstation cards? Where does performance spike (if at all) and so on.

Another interesting thing to see would be performance with increasing VRAM requirements; I know games can push VRAM quite hard too, but it'd be good to see some kind of test that can scale VRAM usage over time so we can see how a consumer card copes with 512MB of textures, 1GB of textures, 2GB etc.
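
Even something rough like stepping up the allocation size and timing a transfer at each step would show where a card starts to fall over; a sketch along these lines (uses PyOpenCL, purely illustrative rather than a proper benchmark):

[code]
# Rough VRAM-scaling sketch: allocate progressively larger device buffers and
# time a host-to-device copy at each step, to see roughly where a card starts
# to struggle or runs out of memory. Uses PyOpenCL; purely illustrative.
import time

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

for size_mb in (512, 1024, 2048, 3072, 4096):
    host = np.random.rand(size_mb * 1024 * 1024 // 8)  # float64 array of size_mb MB
    try:
        buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=host.nbytes)
        start = time.time()
        cl.enqueue_copy(queue, buf, host)
        queue.finish()
        print("%d MB copied in %.3f s" % (size_mb, time.time() - start))
        buf.release()
    except cl.MemoryError:
        print("%d MB allocation failed (out of device memory)" % size_mb)
        break
[/code]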

It just seems to me that there should be tests that can highlight some of the hardware differences (if any) between consumer and workstation cards, as AMD at least have claimed that FirePros do use different hardware (to a degree), more aggressive binning and so on, but no-one ever seems to really show it, as all we see is where the drivers differ.
 

mapesdhs

Distinguished
Alas I think what you're asking for just isn't practical in terms of time & preparation for a
review involving multiple cards and other possible component combinations. It's not as if
review sites have multiple systems so they can test more than one GPU at the same time
(far too costly to support such a thing).

Besides, I highly doubt performance would degrade after that amount of time. If it did,
most likely someone would have noticed by now.

How an application makes use of a GPU is probably more important than how a card
behaves over extended periods. AE can use lots of GPUs, but often it does not split
the workload that well, eg. resulting in 4 GPUs running at maybe 40% load each (with
no option for round-robin frame loading AFAIK).

Consumer cards cope with VRAM loads exactly as one would expect them to and as
benchmarks generally show - go consistently over a card's limit and games will begin
to stutter (collapsing minimum frame rates), though again some games are coded better
than others, eg. Crysis2 handles it very well, and of course having an SSD can help a
lot (eg. the opening-sequence stutter in the Stalker COP benchmark is greatly reduced
with an SSD).

Point is, the hw differences between cards are just not as important as the differences
in the drivers, which already result in performance behaviour that can be an order of
magnitude apart, eg. AA line performance in apps like ProE. Exceeding the VRAM
limit of a card isn't something I'd describe as a hw "difference" though, since choosing
a card while bearing in mind VRAM issues is a subject on which there is plenty of info
available these days.

It's much more complex to cover the VRAM issue for pro cards though, because the
variance in VRAM usage can span several orders of magnitude, eg. just a few tens of MB
for various CAD tasks, up to tens or hundreds of GB for GIS/medical/defense/AEC.

There's only so much a review site can do. Beyond that, much of what matters for
sensible usage is down to the buyer: use a quality case with plenty of room and air
flow, good PSU, good fans, and a water-cooled CPU setup if that can help, etc.

Ian.

 