AMD FirePro V9800 4 GB: Eyefinity Meets Professional Graphics

Guest
In the first two pages of the review, I found two cases like this: "costs $500 bucks". Um. If you use "bucks", don't use the dollar sign. As written, the example reads "costs 500 dollars bucks". PLEASE don't do that to my poor abused brain.
 

jecastej

Distinguished
Apr 6, 2006
365
0
18,780
Well, AMD/ATI is back on top in Maya with this high-end card, but I want to see how that translates to the lower-end cards. It's a good sign from AMD, but I can't spend this much on a top professional GPU.

On the other hand, I really appreciate the effort behind this review. Keep them coming, and thanks! As many have suggested, try to include lower-end cards and possibly a viewport analysis with rotations of shaded, wireframe, textured, and illuminated scenes. I know we are asking a lot of a very demanding and small market.
 

demonhorde665

Distinguished
Jul 13, 2008
1,492
0
19,280
[citation][nom]Eugenester[/nom]Nvidia shines in the industrial/commercial/scientific market, with their driver team and CUDA/GPGPU tech. Too bad the V9800 fell short of expectations. Also, Nvidia cards are obviously going to have better results in Adobe Mercury since both companies worked together on hardware optimization. AMD needs to be more aggressive in working with software makers (including game developers!) to get a stronger hold on both the CPU and GPU markets. Overall, a good read.[/citation]

It's not that AMD doesn't try to work with game makers and so forth; it's the simple fact that Nvidia has more money to throw around. Dev teams are nearly always strapped for cash these days, as are publishers. Donations go a long way toward getting good optimizations for your hardware from the devs making a game. I'd bet that many TWIMTBP games were built in part with Nvidia money going to their devs.

It's a major loophole in monopoly laws, really, because in theory Nvidia, if they had the money, could throw it at nearly every dev and essentially wipe out any chance that the competition even competes. With rare exceptions, most TWIMTBP games run extra poorly on ATI hardware for no apparent reason (I've even seen much older Nvidia cards completely trounce much higher-end, newer ATI cards running the same game).
 

demonhorde665

Distinguished
Jul 13, 2008
1,492
0
19,280
[citation][nom]BetterInteraction[/nom]I'm with Cwize1 and eaclou. Nearly all of these benchmarks are not optimized to showcase a workstation graphics card, save for the ones built through collaboration between the hardware and software vendor (which is already biased). The reason people buy cards of this caliber is the large frame buffer and graphics power that allow real-time interaction with very big scenes/assemblies. As many have noted, Quadro is the de facto standard in this market, but I've heard anecdotal stories about how FirePro just feels more zippy. More than 80% of graphics work is interacting with and manipulating objects; better interaction between you and the computer means work gets done faster. Zippy does count for something...[/citation]

My thoughts exactly. It always seemed stupid of Tom's (yeah, I'm insulting Tom's Hardware, so go ahead and ban-hammer me for a week, you nerf herders) to show CPU-based tests on a workstation card. Really, measuring render times in 3ds Max? Get real: the renderer doesn't run on the video card, it runs on the CPU, so that test tells us nothing about the power of these cards. What you need is a poly-count-to-frame-rate test.


One good way to run that test is to first make sure all viewports are set to hardware rendering, then duplicate a bunch of basic spheres (converted to Editable Poly) until you have 100,000 tris and measure the viewport frame rate in Fraps. Then double those spheres to get 200k tris, double again to get 400k tris, and take frame-rate measurements in the viewport each time.

It's not really that hard or that time consuming. A basic 32-sided sphere comes out to 64 tris; duplicate it once, then duplicate both of those, then duplicate all four, and so on. I've done this test myself in less than an hour. Sure, that seems like a lot of time, but it really is the best way to measure video card performance in 3ds Max, since the renders are processed on the CPU and not the GPU.
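
If anyone would rather script the doubling than click through it, here's a rough sketch of the idea, assuming a 3ds Max release new enough to ship the pymxs Python wrapper (older versions would need the equivalent MAXScript). The per-sphere tri count and targets are placeholders; the viewport FPS itself still gets read off Fraps or a similar overlay.

[code]
# Rough sketch of the sphere-doubling viewport test described above.
# TRIS_PER_SPHERE and TARGET_TRIS are placeholders -- set them to whatever
# your test sphere actually tessellates to and to the poly budgets you want.
from pymxs import runtime as rt

TRIS_PER_SPHERE = 64        # assumed tri count of one converted test sphere
TARGET_TRIS = 100000        # rerun with 200000 and 400000 after noting the FPS

rt.resetMaxFile(rt.Name("noPrompt"))      # start from an empty scene

seed = rt.Sphere(radius=10.0, segs=32)    # the basic test sphere
rt.convertToPoly(seed)                    # convert it to an Editable Poly
spheres, total = [seed], TRIS_PER_SPHERE

while total < TARGET_TRIS:
    # duplicate everything built so far, doubling the scene's tri count
    clones = [rt.copy(s) for s in spheres]
    for i, c in enumerate(clones):
        # lay each pass of clones out on its own row so they all stay visible
        c.position = rt.Point3(25.0 * i, 25.0 * len(spheres), 0.0)
    spheres += clones
    total *= 2

rt.redrawViews()
print("Scene holds roughly %d tris -- orbit the viewport and read the FPS off Fraps" % total)
[/code]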
 

dallaswits

Distinguished
Sep 16, 2010
77
0
18,630
I wish there were a card that size (judging from the thumbnail) that could drive six video monitors...
I would sell them like hotcakes to customers....
 

eodeo

Distinguished
May 29, 2007
717
0
19,010
It seems I'm not the only one wishing to see real 3D benchmarks.

I would like to see actual viewport performance numbers in Max/Maya.

Maybe this could be as simple as a camera running around a high-poly Tom's Hardware logo with many light sources. That would be much more informative than running a CPU-based render that doesn't really care whether you have a discrete graphics card in your computer or not.

Which brings me to the inevitable: I'd like to see actual GeForce/Radeon cards with the same chips running head to head against their workstation-flavored counterparts under the aforementioned conditions. Sprinkle some low-end Quadro/FirePro cards into the mix, and we could have a really informative and relevant test. Very much unlike what SPEC is now.

I for one would love to help set up the mentioned test scene to put graphics cards to the test.
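
For the Maya side, something like the sketch below is roughly what I have in mind, using the maya.cmds Python layer. It assumes the heavy model is already loaded; ORBIT_RADIUS, STEPS, and the node names are placeholders, and forced refreshes are only a crude stand-in for a proper FPS counter, but at least they're repeatable from run to run.

[code]
# Minimal sketch of a camera fly-around viewport test via maya.cmds.
# The heavy test model is assumed to already be in the scene.
import time
import maya.cmds as cmds

ORBIT_RADIUS = 50.0   # assumed distance that frames the whole model
STEPS = 360           # one full revolution, one degree per redraw

# Rig: a camera parented under a pivot group at the origin. Rotating the
# group's Y axis orbits the camera while it keeps facing the pivot.
cam, cam_shape = cmds.camera()
cam = cmds.rename(cam, "benchCam")
cmds.setAttr(cam + ".translateZ", ORBIT_RADIUS)
pivot = cmds.group(cam, name="benchOrbit")
cmds.lookThru(cam)                          # look through the bench camera

start = time.time()
for step in range(STEPS):
    cmds.setAttr(pivot + ".rotateY", step)  # advance the orbit by one degree
    cmds.refresh(force=True)                # force a full viewport redraw
elapsed = time.time() - start

print("%d redraws in %.2f s -> roughly %.1f fps" % (STEPS, elapsed, STEPS / elapsed))
[/code]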
 

Burnsy9000

Distinguished
Dec 23, 2009
3
0
18,510
Totally agree with eaclou. I appreciate the article and what THW does, but I find the info here only sort of relevant to the real-world workstation card user. What would really help me compare cards is relevant tests: viewport interactivity, FPS while panning, zooming, and rotating large, complex scenes with lots of polys, in wireframe and shaded/textured modes, in common programs like 3ds Max, Maya, Pro/E, SolidWorks, etc. And not just two cards head to head, but a full chart with the workstation cards and the gaming cards in there too. I've always read that, because of drivers, workstation cards are the way to go, but I'd love to see concrete and current proof of that. For the price of one Quadro 5000 I could get a bunch of GTX 580s; maybe they would run a Max scene faster, who knows, but it would be awesome and so useful to see a comparison like that.

From my experience in 3ds Max, Quadro wins over FirePro by a big margin. Quadros have Max-specific drivers that put them way above the AMD cards. Even low-end Quadros beat mid-range FirePros when using the Max drivers.
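
For the wireframe-versus-shaded half of it, something like the sketch below would do in Maya, reusing a timed orbit like the one in eodeo's post above. The mode list and panel handling are just a guess at a workable setup; it assumes the focused panel is a model viewport.

[code]
# Hedged sketch: flip the active Maya viewport between wireframe, shaded,
# and shaded-with-textures before each timed pass.
import maya.cmds as cmds

MODES = [
    ("wireframe",       dict(displayAppearance="wireframe")),
    ("smooth shaded",   dict(displayAppearance="smoothShaded", displayTextures=False)),
    ("shaded+textures", dict(displayAppearance="smoothShaded", displayTextures=True)),
]

panel = cmds.getPanel(withFocus=True)        # viewport that currently has focus
for label, flags in MODES:
    cmds.modelEditor(panel, edit=True, **flags)
    print("Viewport set to %s -- run the timed orbit pass now" % label)
[/code]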
 

Guest
Does AMD need some help with its marketing strategy?

You guys and gals are shooting yourselves in the foot several times a month now...
 

Guest
Anonymous:
Benchmark with GPU-based render engines like mental images' iray or Chaos Group's V-Ray RT.

eaclou:
There's absolutely no need to include rendering in mental ray, which does not use a GPU at all. At this point in time, there are few GPU renderers that are as widely used or as flexible as the established CPU renderers....

mental images' mental ray is a CPU-based renderer.
mental images' iray is a CPU+GPU-based renderer.
 

blackwater11

Distinguished
Mar 22, 2008
75
0
18,630
This article means nothing. WTF, workstation GPU cards are all about viewport performance. Is everyone retarded? Rendering is CPU-bound. Viewport performance is GPU-bound.

Morons,
BW11
 

blackwater11

Distinguished
Mar 22, 2008
75
0
18,630
What's even funnier than this? I'll tell you:
Nvidia is intentionally throttling their GeForce cards at idle and making it so you can no longer change the core/memory/shader clocks at idle. This is so the cards don't perform well at all in professional 3D applications, which is designed to push you into spending $4,000 on a workstation card. Sneaky little m&fers. Anyway, I just bought my last Nvidia card six months ago. Here I come, Red... don't you disappoint me.
 

eodeo

Distinguished
May 29, 2007
717
0
19,010
bw11, watch your language. This is a civil place, after all.

Here I come Red.. don't you disappoint me.

They're as bad as the green team, only in a perpetual underdog position, so you don't hear as much about their underhanded ways.

With that said, I do own a 4850 and I'm rather happy with it. And with that said, I'm contemplating upgrading to a GTX 460/470 or 560/570 when they come out.

Take care, pan; fire, here I come.
 

Draven35

Distinguished
Nov 7, 2008
806
0
19,010
The Lightwave 9.6 OpenGL playback test exercises OpenGL on the card, and the Maya, Lightwave, and 3ds Max SPECapc tests all have portions that test the graphics card specifically, as well as general OpenGL interactivity within the applications. That's about as good as we're going to get right now, unless someone can suggest a repeatable way to record a ZBrush/Mudbox session. I'm looking at making some tests for Maya and Max 2011, but both seem to have problems digesting the logo model.
 

Guest
Pointless review (the performance part, at least). Most, if not all, of the benchmarks used in the comparisons relied heavily on CPU performance.

???
 

Guest
It'd be really nice to see more GPU interactive rendering benchmarks, like Cycles rendering in Blender, Octane Render, or mental ray's iray.
Unfortunately, a lot of GPU compute seems tied to CUDA, not OpenCL, so we can't get hardware comparisons for GPU-compute-enabled apps that only support CUDA, like 3D-Coat.
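
For reference, the sketch below is roughly all it takes to point Cycles at the GPU from Blender's Python console. The attribute paths assume a current Blender build (older releases expose the same settings under bpy.context.user_preferences), so treat it as a sketch rather than gospel.

[code]
# Sketch of enabling GPU compute for Cycles via Blender's Python API (bpy).
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"    # or "OPENCL"/"HIP" for AMD, depending on Blender version
prefs.get_devices()                   # refresh the list of detected devices
for dev in prefs.devices:
    dev.use = True                    # enable every detected compute device

bpy.context.scene.cycles.device = "GPU"    # render the active scene on the GPU
bpy.ops.render.render(write_still=True)    # kick off a still render to time
[/code]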
 