AMD FirePro V8700: High-End Workstation Graphics

[citation][nom]curnel_d[/nom]Ok, so basically, we'd have to get ahold of a FireGL BIOS and hack it onto a 3850. If there was an equivalent to a 3870, I'd do it myself. Who's brave enough?[/citation]
Find me the BIOS of a Quadro card that'll work on a 6600GT (PCIe version without SLI) and I'll try it out. I happen to have 28 of those spare at the moment.
 
Somewhat off topic here, and probably better off in the forums, but... I know Tom's has done a comparison of "upgrade CPU or GPU?". When it comes to apps like Maya, 3D Studio, and others, I'd like to see a similar comparison. When working with 3D rendering, I'm assuming that the drivers must offload some of the processing to the GPU if drivers alone make such big differences. So would I benefit more from spending, say, $330 for a Q9650 C2Q chip (upgrading from a low-end C2D) or spending over $1000 for what is essentially a software upgrade ("drivers")?
 
[citation][nom]Evolution2001[/nom]Somewhat off topic here, and probably better off in the forums, but... I know Tom's has done a comparison of "upgrade CPU or GPU?". When it comes to apps like Maya, 3D Studio, and others, I'd like to see a similar comparison. When working with 3D rendering, I'm assuming that the drivers must offload some of the processing to the GPU if drivers alone make such big differences. So would I benefit more from spending, say, $330 for a Q9650 C2Q chip (upgrading from a low-end C2D) or spending over $1000 for what is essentially a software upgrade ("drivers")?[/citation]

3D rendering programs feast on optimized stream processors.
That C2Q will help, but not even close to what an optimized card with drivers will do.
It's the same thing as using a faster processor to offset the performance of an integrated GPU versus even a midrange dedicated gaming card.
Only in this case, the performance variance is even higher.
Upgrade the CPU if you want, but don't do it for better 3D rendering performance.
You're better off spending a few hundred and either soft-modding or, if that's not your thing, purchasing a legit (if older) FireGL/Quadro.

As far as gaming on a workstation card goes, this is the rule of thumb (as I've understood it, and practiced it):
Gaming cards and workstation cards are identical in terms of hardware, but as someone correctly identified before, it's the BIOS and drivers that are the "special sauce."
Gaming cards are optimized for higher and steadier framerates at a cost of image quality.
Workstation cards are the exact opposite, with more thrown in for high-performance OGL work.
You can absolutely game with a workstation card (I am), but you will notice a framerate hit compared to its much cheaper gaming brother.
For example, my FireGL V7700 compared to an actual HD 3870.
Is it an enormous drop in performance? Not really; it still performs leaps and bounds better than its X1950 XTX predecessor.
Is it a leap in image quality (for games)? Impossible to say. Games may often be made on workstation cards, but they are optimized for games anyway.
My 3DMark scores are slightly worse, but my gaming benchmarks are right in line with a 3870 (so I've gleaned from reviews and others running similar cards), and there's the ability to limit the video quality in the drivers anyway.
All that said, I certainly wouldn't buy a workstation card for gaming under any circumstances.
However, if you obtain a sample card from customer-required tests that is no longer in use, and the company providing said sample couldn't care less what happens to it because they can't sell a used card, that's a different story.
 
Is it a leap in image quality (for games)? Impossible to say. Games may often be made on workstation cards, but they are optimized for games anyway.
To put that in a way that makes sense:
Games are often made on workstation cards, but the image quality is compromised when they optimize them for gaming cards.
... the price I pay for not being able to edit.
 
Thanks for the reply, Miribus. Given that nVidia cards are so inexpensive, the potential benefit of hacking the BIOS to run the Quadro drivers seems well worth it. I'd end up spending $200 on two cards (assuming I botch the first one and just stick with a stock replacement) or $500+ for an actual Quadro. It makes sense to try to hack the BIOS. I did some research on this a while back when trying to home-build a 'certified' Avid video editing system. I'll have to go back and revisit those sites.
 
An addendum to my previous posts... apparently CPUs can still make a difference (duh!) in rendering times. According to this particular TH chart, the difference can be as much as a 60% increase between high-end and low-end CPUs. Using this chart as a reference, what kind of times can be expected by using professional GPU drivers? This is why I get confused... 🙂
 
[citation][nom]swint144[/nom]Phoronix just published an interesting article comparing the FirePro V8700 to the FireGL V8600 and the consumer Radeon HD 8750 under Linux: http://www.phoronix.com/scan.php?p [...] 8700&num=1[/citation]
Sorry - that's the 4870, not 8750... need to wake up now.
 
[citation][nom]Evolution2001[/nom]Thanks for the reply, Miribus. Given that nVidia cards are so inexpensive, the potential benefit of hacking the BIOS to run the Quadro drivers seems well worth it.[/citation]For anyone who may care, I was able to successfully softmod my nVidia 8600GT into a Quadro FX 1700 using the free app RivaTuner. I haven't run any benchmarks yet. If anyone's interested in benchmarks, you can reply to this thread and I'll get notified of it.
 
I guess if you can run Viewperf and see how it compares to a normal 8600GT, that should reveal whether the mod is having the proper effect, re: things like antialiased line performance.

Ian.

 
[citation][nom]Evolution2001[/nom]For anyone who may care, I was able to successfully softmod my nVidia 8600GT into a Quadro FX 1700 using the free app RivaTuner. I haven't run any benchmarks yet. If anyone's interested in benchmarks, you can reply to this thread and I'll get notified of it.[/citation]
I've done the same with my 3870, and the benchies paint a similar picture to the one this article did.
My image quality in ZBrush and Maya has increased a ton too. Not to mention going from 38 fps in Maya up to a whopping 212 fps; Viewperf averaged its performance at 158, compared to the 28 it had before - roughly a 5.6x jump in both cases.
 
I'll download Viewperf when I get home and attempt to run it within the next couple of days. Unfortunately, I won't have a real baseline unless I undo the softmod back to the ol' 8600GT BIOS. As well, I've already popped in the C2Q Q9650 (up from the E2160 chip, which was OC'd to 3GHz). Too many variables to effectively check CPU upgrade vs. video drivers. Could I revert everything back? Yes. Will I, for the sake of benching? Not likely. But maybe I'll do just the softmod flip-flop again.
 
You have an error in the table listing the specs. It should say Shader Model 4.1 for the cards that support DX10.1.

Let's face it: if their extensive support were worth the huge price they charge, then it would not be necessary for them to disable certain acceleration features.
 
After spending close to a thousand dollars for this card, I simply can't get it registered on the ATI website. It keeps prompting me:
"A valid password consists of 6 to 8 digits with a combination of letters and numbers plus at least one special character."

even though I tried many combinations of digits/letters/special symbols, such as #ab@*21&. The registration site simply SUCKS! Maybe ATI should post a sample valid password as a reference. I just couldn't understand what's wrong with my password.

ATI promised dedicated hardware support for FirePro users - just a piece of crap when they don't even allow users to register in the first place!

 
[citation]After spending close to a thousand dollars for this card, I simply can't get it registered on the ATI website. It keeps prompting me: "A valid password consists of 6 to 8 digits with a combination of letters and numbers plus at least one special character." ... ATI promised dedicated hardware support for FirePro users - just a piece of crap when they don't even allow users to register in the first place![/citation]
Try -
one cap
one lowercase
one asterisk
fill in the rest with numbers

Something like Ab*12345 fits the stated rule; note that your #ab@*21& attempt has no capital letter. Should work - most passwords require at least one capital letter nowadays. Hope you enjoy your new card.
 
Selling nearly identical hardware (Radeon vs. FirePro) on the grounds that the driver is the differentiating factor is not terribly offensive to me. I am OK with putting a value on better algorithms, whether they are implemented in hardware or software. But I wonder what this means for developers.

Why are certain products such as Maya faster while other applications are unaffected? Is it that the Maya programmers are using secret OpenGL extensions, or is the driver optimized for Maya's calls rather than generically optimized for OpenGL? If I am writing my own applications on Linux, will they be able to get a performance boost from a FirePro? If only certain applications are optimized through the drivers, then the FirePro is of no use to me, since I write custom applications. If the FirePro is generically optimized, then what are the specs in terms of the OpenGL API calls and their profiles?

So in general this is a useful review for users of certain applications, but it leaves a lot of unanswered questions for users of all the other apps and for developers writing new code.
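One small sanity check a developer can do is ask the driver at runtime what it reports and exposes. A minimal sketch in C (freeglut assumed; the renderer and extension strings are whatever the driver chooses to advertise to every application, so this shows what is exposed, not which calls are optimized):

    #include <GL/glut.h>
    #include <stdio.h>

    /* Print the driver-reported identity and extension list.
       The renderer string shows whether a FireGL/FirePro or Quadro
       driver is active; the extension list is what all apps can see. */
    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutCreateWindow("gl-caps");  /* a GL context must exist before glGetString */
        printf("Vendor:     %s\n", (const char *)glGetString(GL_VENDOR));
        printf("Renderer:   %s\n", (const char *)glGetString(GL_RENDERER));
        printf("Version:    %s\n", (const char *)glGetString(GL_VERSION));
        printf("Extensions: %s\n", (const char *)glGetString(GL_EXTENSIONS));
        return 0;
    }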
 
For a long time now, drivers for pro-series PC gfx cards have been application-specific, e.g. one is usually expected to load a specific profile for the card which refers to the target application.

This is a far cry from SGI's original OGL gfx systems, which simply had very savvy drivers that accelerated everything to the best extent, especially if one used Performer for real-time apps.

Perhaps different apps reveal different types of bottlenecks in the gfx card? Or maybe there is just a lot more effort put into optimisations for apps like Maya. Impossible to say, really.

I doubt this will get any better until gfx vendors start including hardware performance feedback from within the GPU, so that an application can evaluate what is happening in the gfx pipeline. At the moment, nobody can ever say whether a particular application is using the gfx hw efficiently, which is why I'm not that impressed by Crysis: the low fps results it gives could easily be due to an inefficient engine rather than a game that has a lot more visual detail.
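For what it's worth, OpenGL's timer queries (ARB_timer_query) offer a rudimentary form of this kind of in-pipeline feedback. A minimal sketch in C, assuming a driver that exposes the extension; GLEW and freeglut are used here only for setup, and the glClear is a stand-in for real drawing:

    #include <GL/glew.h>
    #include <GL/glut.h>
    #include <stdio.h>

    /* Measure how long the GPU itself spends on a batch of work. */
    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutCreateWindow("gpu-timer");  /* context needed before glewInit */
        glewInit();

        GLuint query;
        GLuint64 ns;
        glGenQueries(1, &query);
        glBeginQuery(GL_TIME_ELAPSED, query);
        glClear(GL_COLOR_BUFFER_BIT);   /* stand-in for the app's drawing */
        glEndQuery(GL_TIME_ELAPSED);
        /* Blocks until the GPU has finished and the result is ready. */
        glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns);
        printf("GPU time: %.3f ms\n", ns / 1.0e6);
        glDeleteQueries(1, &query);
        return 0;
    }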

Ian.

 