AMD FirePro W8100 Review: The Professional Radeon R9 290

"The video shows that the AMD FirePro W8100 is bearable when it comes to maximum noise under load. This also demonstrates that a thermal solution originally designed for the Radeon HD 5800 (which hasn't changed much since) deals with the W8100’s nearly 190 W a lot better than the W9100's 250 W."
Wait a minute. If this kind of cooling is better than the one used on the R9 290, and they've had this technology since the HD 5800 series, then why in the hell didn't they use it on the R9 290 series instead of the crap cooler they shipped?
 

FormatC

Distinguished
Apr 4, 2011
981
1
18,990
It is the same cooler, but the power consumption of the W8100 is a lot lower. This cooler type can handle up to roughly 190 watts more or less OK, but the R9 290(X) produces more heat due to its higher power consumption.
 

Cryio

Distinguished
Oct 6, 2010
881
0
19,160
"Nvidia's Quadro K5000 is quite a bit cheaper, but comes with half the memory, less 4K connectivity, and is generally slower."

That's almost an understatement. The K5000 is almost always around 50% slower than the W8100, with only a few cases at a 25% difference. For $700 more, the W8100 looks like a great buy.
 

Well, this doesn't prove that the cooler they used is superior. It might carry a higher TDP rating, but that doesn't mean it's better than a lower-rated one. We know how these ratings work; the number is by no means absolute. And we have seen in the past (especially with CPU coolers) higher-TDP-rated coolers lose to lower-rated ones for a lot of reasons (better quality, better tech, better materials, heatpipe placement, etc.).
I think the real reason might come from your review.
I don't believe in coincidences, but they decided to use it on a more expensive professional GPU, with great success.
How do we know that the cooler used on the W8100 wasn't rejected for the R9 290(X) because of its higher cost?
PS: Am I asking too much if I ask any reviewer at Tom's to test this cooler on an R9 290? (if it's compatible, of course...)
 
Gentlemen,

The focus on the FirePro W8100 and Quadro K5000 as direct competitors to be compared head to head is a bit misleading, and it distracts attention from the impressive features of the FirePro W8100.

The W8100 does outperform the Quadro K5000 in some important ways, but to be in marketing competition, the performance should be in the same general league. The W8100 is 56% more expensive; the price difference of $900 is more than enough to buy a K4000 (about $750).

On a marketing basis, a $60,000 car that is 50% faster is not a direct competitor to a $38,000 one. The use and the expectations of performance and quality are different. The logic is to say, "If you're thinking of buying a Quadro K5000, you should know that for 56% more you can have 25-50% higher performance in several important, but not all, categories." These purchases are most often budget-driven (how many have unlimited funds?), and the buyer of a $1,600 card will be a different person from someone with a $2,500 budget. The buyer's quest is more often based on how much performance is expected combined with how much is possible within the budget.

These cards may serve the same applications, but for the W8100 to be better value than a K5000, it should have a consistent 56% performance advantage. A better comparison would be, for example, the W7000 and Quadro K4000. Both are about $750, but the W7000 is 256-bit, with 4GB, 154GB/s of bandwidth, and 1,280 stream processors, against the K4000's 192-bit bus, 3GB, 134GB/s, and 768 CUDA cores. On Passmark Performance Test, a W7000 3D score near (but not at) the top is about 4300, with 2D at about 1000, while the K4000 scores near the top at about 3000 in 3D and 1100 in 2D. The news for AMD is even better when considering the $1,600 Quadro K5000 (double the W7000's cost, but also 4GB and 256-bit): its near-the-top 3D scores are about 4300, and its 2D scores about 900. For me, a better marketing strategy would be to compare the K5000 to the W7000, and the W8100 to a mythical "K5500" that would cost $2,800 (midway between 4 and 12GB and between $1,600 and $5,000).
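To make the value math explicit, here is a rough sketch in Python using the approximate prices and Passmark 3D scores quoted above (the W8100 and K5000 street prices are the ones assumed in this thread, and the scores are the ballpark figures from this post, not benchmarks I ran):

```python
# Rough value math using the approximate prices and Passmark 3D scores quoted
# above (illustrative figures from this post, not fresh benchmark results).
cards = {
    "FirePro W7000": {"price": 750,  "passmark_3d": 4300},
    "Quadro K4000":  {"price": 750,  "passmark_3d": 3000},
    "Quadro K5000":  {"price": 1600, "passmark_3d": 4300},
}

for name, c in cards.items():
    # 3D score per dollar: a crude way to compare value across price brackets
    print(f"{name}: {c['passmark_3d'] / c['price']:.2f} Passmark 3D points per dollar")

# The "56% more expensive" figure: W8100 at ~$2,500 vs. K5000 at ~$1,600.
w8100_price, k5000_price = 2500, 1600
ratio = w8100_price / k5000_price
print(f"W8100 vs. K5000 price ratio: {ratio:.2f}x ({(ratio - 1) * 100:.0f}% more expensive)")
```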


This means that the person looking for the best performance for $750 (and who uses the applications the W-series is good at) has an easy choice in the W7000.

Still, the features (especially the 512-bit bus and 8GB) plus the overall performance make the W8100 one to consider at the upper end of workstation cards. This should be a very good animation/film editing card. The comments about AMD being more forward-looking than NVIDIA may be correct, though the comments about the quality of Quadro drivers also seem true. This furthers the trend of GPUs concentrating on certain functions (the W8100 in OpenCL, for example), which means having to consider GPUs one by one according to the applications used. More and more, with complex 3D modeling and animation software, specific software drives graphics card choices, and except for the very top of the lines, the cards seem to be less the all-rounders they once were, not good at everything.

BambiBoom
 

nebun

Distinguished
Oct 20, 2008
2,840
0
20,810
Call me stupid, but how is a $2,600 GPU fair competition for a $1,600 GPU... am I missing something?... Also, the AMD GPU has more cores, so this is not a fair comparison... take an Nvidia card with the same amount of cores and we'll see who comes out on top... AMD HAS THE WORST DRIVERS
 

falchard

Distinguished
Jun 13, 2008
2,360
0
19,790
I would like to see these matched against their desktop counterparts in productivity tasks as well. Over the years we have seen a shift in architecture where the desktop part is pretty much the same as the workstation part, just using different drivers. Some of us need to weigh the benefits of workstation cards over desktop cards. A few years ago, in CAD-based programs, we would see a 400% or greater increase in performance compared to desktop chips, and knowing whether this is still the case is very important.
 

mapesdhs

Distinguished
Typo: "... our processor runs at a base close rate ... "

I assume that should be 'clock rate'.


Btw, how come the test suite has changed so that it no longer includes any app, such as AE, for which NVIDIA cards can be strong because of CUDA support?

Ian.

 

mapesdhs

Distinguished
A down-vote eh? I guess the proverbial NVIDIA-haters still lurk, unwilling to
present any rationale as usual. :D

And falchard is right: Viewperf tests showed enormous differences between pro and gamer cards in previous years, but it seems vendors are deliberately blurring the tech now, optimising for consumer APIs (i.e. not OGL), which means pro tests often run well on gamer cards. In which case, where is the rationale for the cost difference? Apart from support and supposedly better drivers, basic performance used to be a major factor in choosing a pro card and a sensible justification for the extra cost, but that no longer appears to be the case. Check Viewperf 11 scores for any gamer vs. pro card: the only test where a gamer card isn't massively slower is ENSIGHT-04. For MAYA-03, a Quadro 4000 is 3X faster than a GTX 580; for PROE-05, a Q4K is 10X faster; for TCVIS-02, a Q4K is 30X faster.

Today though, with Viewperf12, a 580 is faster than a K5000 for MAYA-04,
about the same for CREO-01, about the same for SHOWCASE-01 and
not that much slower for SW-03. Only for CATIA-04 and SNX-02 does the
expected difference persist.

Meanwhile we get OpenCL touted everywhere, even though there are plenty
of apps which can exploit CUDA, but little attempt to properly compare the
two when the option to use the latter is also available, eg. 3DS Max, Maya,
Cinema4D, AE, LW, SI, etc.

Ian.

PS. nebun, the core structure on these cards is completely different. The number of cores is a totally useless measure; it tells one nothing. One can't even compare between different cards from the same vendor, e.g. a GTX 770 has way more cores than a GTX 580, but a 580 hammers the 770 for CUDA. Indeed, a 580 beats all the 600-series cards for CUDA despite having far fewer cores (it's because the newer cards use a much lower core clock, have less bandwidth per core, etc.).
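To put rough numbers on the bandwidth-per-core point, here is a quick back-of-the-envelope sketch; the core counts and bandwidth figures are approximate reference specs I'm recalling, not values from the article, so treat the output as illustrative:

```python
# Back-of-the-envelope memory bandwidth per CUDA core (approximate reference
# specs; treat the output as illustrative, not authoritative).
cards = {
    "GTX 580": {"cores": 512,  "bandwidth_gb_s": 192.4},
    "GTX 770": {"cores": 1536, "bandwidth_gb_s": 224.3},
}

for name, spec in cards.items():
    per_core_mb_s = spec["bandwidth_gb_s"] / spec["cores"] * 1000
    print(f"{name}: {per_core_mb_s:.0f} MB/s of memory bandwidth per core")
# Roughly 376 vs. 146 MB/s per core: the 580 feeds each core about 2.5x more
# bandwidth, which is one reason raw core counts say little across architectures.
```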

 

ddpruitt

Honorable
Jun 4, 2012
1,109
0
11,360
Doesn't Tom's do copy editing? What's up with the chart with blanks at the bottom of page 13?

It's nice to see that AMD is starting to close the gap with its products. They seriously need to consider updating their cooling solutions and improving power efficiency. I would be interested to see if these workstation cards throttle as often as their desktop counterparts. In my experience, most of the current Hawaii chips are running higher voltages than needed, and they could save both power and heat by running them down a bit. It should allow the boards to stay stable and compete better in many workloads.
 

eodeo

Distinguished
May 29, 2007
717
0
19,010
I liked the review, but the "detailed" power consumption part left me wanting more.

It is known that AMD cards use more than double the power of Nvidia cards while idling in multi-monitor scenarios. Seeing how this is a professional GPU, chances are it will be used in a multi-monitor rather than a single-monitor environment. I'd like to know if the workstation-class cards address this problem better than their gaming cousins.
 

somebodyspecial

Honorable
Sep 20, 2012
1,459
0
11,310
As mapesdhs alluded to, why is CUDA not being pitted against OpenCL here? Premiere and AE both use CUDA.

https://blogs.adobe.com/premierepro/2011/02/cuda-mercury-playback-engine-and-adobe-premiere-pro.html
CUDA has been in both for years; NV built MPE (the Mercury Playback Engine) with Adobe. This hasn't changed. OpenCL has been added, but CUDA is still there.

http://www.dslrfilmnoob.com/2014/04/26/opencl-vs-cuda-adobe-premiere-cc-rendering-test/
An April 26th, 2014 test, CUDA vs. OpenCL, GTX 670 vs. 290X: roughly a tie. So I'm pretty sure a 780 Ti would smoke the 290X, but it does show, as he said, that they've been improving OpenCL. However, a 670 would be handily trounced by NV's 780 Ti here, so the same applies to the 290X.
http://www.anandtech.com/show/5818/nvidia-geforce-gtx-670-review-feat-evga
GTX 670 specs: 1344 CUDA cores, 3.5B transistors, 256-bit bus, 6GHz memory, etc.
http://www.anandtech.com/show/7492/the-geforce-gtx-780-ti-review
GTX 780 Ti specs: 2880 CUDA cores, 7.1B transistors, 384-bit bus, 7GHz memory, etc. Not even in the same league for CUDA testing.

Is this why Tom's Hardware avoids doing CUDA vs. OpenCL and acts as if CUDA doesn't exist here, even though you toss out a comment in this very article saying CUDA is good for this stuff (then why not show it)?

https://www.youtube.com/watch?v=XTIqzzTNag0&html5=1
It's not in my native language, but at 1:45 into the video you can see him selecting CUDA, OpenCL, or SOFTWARE. How difficult is it for you guys to click a radio button? I could dig further for an English video, but you get the point, and that took 15 seconds to find... LOL.
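For what it's worth, checking which back ends a card actually exposes is trivial to script. Here is a minimal sketch, assuming the optional pyopencl and pycuda Python packages are installed, of how a benchmark harness could enumerate both before picking a renderer (my own illustration, not anything from the article or the video):

```python
# Minimal sketch: enumerate the compute APIs a card exposes before deciding
# which renderer (CUDA, OpenCL, or software) to test against.

def list_opencl_devices():
    try:
        import pyopencl as cl
    except ImportError:
        return []
    devices = []
    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            devices.append(f"OpenCL: {platform.name} / {dev.name}")
    return devices

def list_cuda_devices():
    try:
        import pycuda.driver as cuda
    except ImportError:
        return []
    cuda.init()
    return [f"CUDA: {cuda.Device(i).name()}" for i in range(cuda.Device.count())]

if __name__ == "__main__":
    for entry in list_opencl_devices() + list_cuda_devices():
        print(entry)
    # A FirePro W8100 should show up under OpenCL only; a Quadro or GeForce
    # should show up under both, which is why a CUDA-vs-OpenCL comparison is
    # possible in apps like Premiere that expose both back ends.
```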

Fake articles need to stop, just like that Qcom S805 preview crap with no K1, when we already know it is TROUNCED by the K1, per Anandtech, Slashgear, Hexus, PCPer (both the Xiaomi MiPad and Shield Tablet reviews; the MiPad one was July 21st), etc., who all show the same numbers, easily compared across all the usual benchmarks.
 

folem

Distinguished
Oct 9, 2011
58
4
18,635
"The video shows that the AMD FirePro W8100 is bearable when it comes to maximum noise under load. This also demonstrates that a thermal solution originally designed for the Radeon HD 5800 (which hasn't changed much since) deals with the W8100’s nearly 190 W a lot better than the W9100's 250 W."
Wait a minute. If this kind of cooling is better than the one used on the R9 290, and they've had this technology since the HD 5800 series, then why in the hell didn't they use it on the R9 290 series instead of the crap cooler they shipped?

There are two possible reasons for using a different cooler on the Radeons. The most obvious is cost: the Hawaii GPU costs a lot to manufacture compared to previous generations, almost as much as the GK110 in the Titan. Less obvious may be that it was meant as a show of confidence in card manufacturers.
 

folem

Distinguished
Oct 9, 2011
58
4
18,635
Quick correction on page 1: 3840x2160 is UHD, not 4K. 4K is 4096x2160; the aspect-ratio difference is important, and the general use of "4K" to refer to UHD displays and content as well is harmful to the uninformed person.
 

folem

Distinguished
Oct 9, 2011
58
4
18,635
It seems to me that AMD and NVIDIA have both realized that the K4000/W7000 segment is the most profitable. Engineering and DCC companies will put one of those in 500 employees' workstations, whereas a much smaller group of companies will put W8100s or K5000s in just a handful of machines for big data processing. Then there are just a handful of companies like Pixar that will put K6000s in just one or two machines.
 