Articles like this make sense to knowledgeable people, but they only add to the regular user's confusion! I am upgrading an older PII 3.2gb to either an i3 or i5 setup, but I don't need any gaming capabilities. So I have a brand-new, unopened GeForce GTX 550 Ti right now and am buying the other parts, such as a Z68 board, as I go. So, if I read between the lines, this new 550 Ti, which has 1024MB of GDDR5 memory compared to the 1024MB of GDDR3 on the test cards, should run as well as the workstation card or the low-end Quadro cards?
I should clarify: I do some CAD and small amounts of rendering, so I am under the impression anything I do at this time will be a significant improvement over my 7-year-old desktop? But I mainly want to improve the general use of my computer; I'm not returning to full-time graphics use! Any input is appreciated, especially as I am leaning toward the i3 2120 after reading the CPU forums here.
[citation][nom]djjoejoe[/nom]If a large difference between a workstation card and a gaming card can be the drivers, does this apply to gaming performance as well? Does the workstation GPU perform similarly to its desktop equivalent, or better, thanks to 'better' drivers? Or is it just a case of the drivers being optimized for things that don't apply to gaming, so any performance increase only shows up in non-gaming applications? Just curious[/citation]
Yes. That's why some people even make modded, unofficial drivers that provide even more performance.
In fact, a graphics card can even be crippled on purpose by the manufacturer's driver, for example to push consumers toward more expensive or more profitable cards.
But the difference isn't only in the drivers. Yes, the drivers are the main difference, but the hardware isn't identical either.
For the pricing, you have to understand the business model. You get extensive support and rock-solid drivers, so the price of the component isn't much of an issue. Just think of Avatar or some other computer-generated movie that requires lots of GPU power: the GPU's price is something like 0.0000001% of the total budget. If these cards cost twice as much, they would still sell in the same quantities.
These cards are not for the average user, nor even for enthusiast or high-end consumers. They are created specifically for a professional market. In fact, most PCs equipped with these cards will run just a single program for their entire life.
Nice review... now let's compare some mid- to top-end cards, and also do the same with gaming cards but test them against CAD-oriented cards... I would love to see how Nvidia compares with AMD's top-end cards... I am looking to build a new system with Nvidia's Quadro 6000 or Quadro Plex 7000... thanks in advance
I agree you need tons of memory for rendering, but an old dual-core Opteron? If I were doing 8-hour renders, I would definitely upgrade the CPU class. Even a single-socket Xeon 2687 would probably cut your time down to minutes. At the very least, you wouldn't have to go to sleep to find out you ran out of memory.
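To put a rough number on how much a CPU upgrade could shorten a long render, here is a hedged back-of-the-envelope sketch using Amdahl's law. All the inputs (core counts, per-core speedup, parallel fraction) are illustrative assumptions, not measurements from this thread:

```python
def estimated_render_time(base_hours, base_cores, new_cores,
                          per_core_speedup, parallel_fraction=0.95):
    """Amdahl's-law estimate of render time on a faster, wider CPU.

    Rendering is close to embarrassingly parallel, so parallel_fraction
    is high, but never quite 1.0 (scene load, I/O, etc. stay serial).
    """
    serial = 1.0 - parallel_fraction
    # Extra cores speed up only the parallel part; the per-core
    # speedup applies to the whole job.
    scaled = serial + parallel_fraction * (base_cores / new_cores)
    return base_hours * scaled / per_core_speedup

if __name__ == "__main__":
    # Hypothetical: an 8-hour render on an old dual-core Opteron moved to
    # an 8-core Xeon with roughly 2x per-core speed (illustrative only).
    hours = estimated_render_time(8.0, base_cores=2, new_cores=8,
                                  per_core_speedup=2.0)
    print(f"estimated: {hours:.2f} hours")
```

Under these assumed numbers the 8-hour job drops to a bit over an hour — a big win, though "minutes" would need a larger core count or a higher parallel fraction.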
Do not use professional cards for gaming. The driver optimisations give woeful performance
compared to proper gaming cards. Gamer cards are called as such for good reason. Likewise,
pro cards are best for pro tasks. The optimisations in the drivers are completely different, e.g.
2-sided textures in gaming scenarios vs. anti-aliased lines in pro tasks.
Note though that, for Viewperf specifically, excessive CPU/compute power is not required to
obtain good benchmark scores with a particular card. I've tested a Quadro 600 with a wide
range of systems, from a simple dual-core i3 up to a dual-Xeon X5570 Dell T7500. The best results,
for the Viewperf suite that is, were obtained with an overclocked i3 550 @ 4.7GHz. See:
Note that the scores are better than those given for the Quadro 600 in this article.
Impressive data from the V3900 though. AMD have done a good job there.
So, if you're operating on a budget, and doing tasks that mirror what Viewperf tests (it's
not realistic for all scenarios), then you don't necessarily need big compute power (a multi-
CPU workstation is a waste in that sense), but good RAM is wise and definitely use a pro
card, not a gamer card. I've tested various pro cards for gaming tasks and the results were
not pleasant (3DMark06 mainly). See:
What Viewperf does not test are tasks that place significant demands on both CPU and GPU
resources, and RAM as well, such as GIS, medical imaging and other tasks involving huge datasets.
As always, test with your intended task; don't rely too much on benchmark numbers alone.
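The "test with your intended task" advice is easy to put into practice with a tiny timing harness. This is a minimal sketch, assuming your real workload can be launched from the command line; the echo command below is a stand-in, not a real CAD job:

```python
import statistics
import subprocess
import time

def benchmark(cmd, runs=5):
    """Run a workload several times and report the median wall-clock time.

    The median is less sensitive to one-off hiccups (cold disk cache,
    background tasks) than the mean, so it better reflects a typical run.
    """
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, shell=True, check=True)
        times.append(time.perf_counter() - start)
    return statistics.median(times)

if __name__ == "__main__":
    # Placeholder workload; substitute your actual render/CAD command here.
    median_s = benchmark("echo rendering a test scene > /dev/null", runs=3)
    print(f"median: {median_s:.3f} s")
```

Running the same command on two different card/CPU combinations gives you a comparison grounded in your own workload rather than in a synthetic suite.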
For design concepts, you'll get the best general performance from an overclocked Clarkdale,
not the i3 2120, whose multiplier is locked. For additional multi-core rendering speed, use an oc'd
2500K or equivalent such as an i7 870 (the 2500K is cheaper now, though); but an i3 550 will cost 60%
less, which means additional budget can be devoted to a better GPU (the usual tradeoff to consider). I have
additional results here:
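That cheaper-CPU-plus-better-GPU tradeoff can be made concrete with a simple score-per-cost calculation. A hedged sketch follows; every price and score below is a made-up placeholder (plug in real quotes and your own Viewperf results):

```python
def score_per_unit_cost(viewperf_score, cpu_price, gpu_price):
    """Crude value metric: benchmark score per currency unit spent
    on the CPU + GPU pair."""
    return viewperf_score / (cpu_price + gpu_price)

if __name__ == "__main__":
    # Hypothetical builds: cheap oc'd CPU with a better pro card vs.
    # an expensive multi-socket CPU with an entry-level pro card.
    builds = {
        "i3-class CPU + mid-range pro card": score_per_unit_cost(40.0, 90, 400),
        "dual-Xeon + entry-level pro card":  score_per_unit_cost(42.0, 900, 150),
    }
    for name, value in sorted(builds.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {value:.4f} score per UKP")
```

With numbers like these, a near-identical benchmark score at a fraction of the cost is exactly the pattern the Viewperf results above suggest for GPU-bound tasks.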
We spend a lot on "professional" GPUs in our organization, but have trouble finding pertinent and up-to-date benchmarks. This has caused many arguments around the water cooler.
May I suggest:
- Run the same benchmarks (nice selection here)
- Add the complete lineup of current AMD and Nvidia offerings
- Add the best of the previous-generation cards so we can compare against the "older" benchmarks
- Add a couple of expensive gaming GPUs to see the gain/loss we get by going pro
- Add a couple of GPU-intensive games just for good comparison.
Doing this about once a year as a "Best Pro Graphics Card for the Money" piece would be an instant bookmark reference for us.
The image quality is especially interesting to me, as I'd like to get into doing some 3D art (it's really caught my interest lately, and looks so much cooler than the boring spool drawings I do). I assumed that the sheer power of a couple of gaming cards would make up for the lack of driver support, but the poor image quality is making me reconsider what I'm going to have to do.
To jaylimo84: these benchmarks don't paint the whole picture. I'd rather see Tom's take the time to measure some real-world, actual performance in Maya or similar than spend it running older cards through these SPECviewperf benchmarks. Especially if you're going to include gaming cards in the comparison.
Spleenbegone, just know that a lot of OpenGL-based pro apps, like Maya, don't get any benefit from SLI'd or CrossFired cards. 3ds Max would if run in DirectX mode, though, I believe.
spleenbegone, good point about image quality. Years ago when I first tested a GF4 Ti4600 vs. an
Octane2 V12, the GF4 was way quicker, but its image quality was garbage compared to SGI's V12.
Similar differences still apply today. Again, this is why pro cards are best for pro tasks.
Btw, there are plenty of Quadro cards available 2nd-hand. I obtained my 3rd Quadro 600 last week
for just 85 UKP, works fine (normal new price is about 140 to 150 UKP, though note there's one on
eBay UK atm for 95 BIN + 10 shipping). Also easy to obtain are FX 5500s, 4500s, 4600s, etc., though
of course newer cards like the 600 have a newer feature set even if they have less performance or
fewer other options (no SLI with the 600).
Nice review. All the questions about how this does in gaming are hardly relevant. This card belongs in a workstation, in an office, used for professional work. There really aren't many scenarios where you would do PROFESSIONAL CAD work AND serious gaming on the same workstation. But to answer the question, I'd expect the FirePro/Quadro to perform WORSE than (or at best, equal to) their gaming cousins.
I use AutoCAD for a living and my work PC has a Quadro 600 (along with 16GB RAM, an SSD, and a single quad-core Sandy Bridge Xeon). So this review is good for seeing how it compares to other cards. It's also nice to see it alongside the equivalent gaming cards, as many people are sceptical about the benefits of spending 4x the price on what is essentially the same hardware.
But I suppose the true test is to pit it against a gaming card of the same price, as opposed to one with the same hardware. That could be interesting!
DavC, that's a good point about budget, and it's why I mentioned the results I obtained using an
oc'd Clarkdale, which gave the best Viewperf scores. If one's task is akin to the Viewperf tests
(whichever one that may be), then spending a lot on a 4-core or 6-core CPU may be a waste,
especially the Xeon variants, which are so expensive. A cheaper CPU like a 550 may be just as good
if not better, especially once oc'd, and the spare budget then spent on a better pro card. Have you
run Viewperf 11 on your system? If so, please compare to my results, I'd be interested to know
what you observe; my data:
I have successfully run a pro GPU alongside dual gaming GPUs. In my case, I added a Quadro 600 to an 8800 GTX SLI setup on an Nvidia 680i motherboard (CPU: QX6700 @ 3.73 GHz, 4x2GB DDR2-800 4-4-4-12). This unlocked the professional options, including the 10-bit display pipeline through the Quadro's DisplayPort, confirming that the drivers were more or less unlocked by the presence of the Quadro.
Additionally, I tried playing Skyrim on the Dell U3011 I used to test the 10-bit capabilities (2560x1600, medium settings), and I can confirm there is significantly more lag when playing on the Quadro's display than on the 8800 GTX's DVI port. Upon further research, it seems it is possible to render on the gaming GPU and then copy the result to the professional GPU's framebuffer, but beyond reading that claim, my own experience did not deliver this "best of both worlds" scenario.
Also, I should mention that I have since returned the Quadro 600. I used both the 276.42 Quadro and 295.73 desktop drivers successfully, and I have no further interest at this time in continuing down this path. In all likelihood I won't revisit this article to answer any questions.
Great reading, always interesting, so thanks to Tom's for doing great reviews that include pro graphics hardware. Thanks also for confirming you "can't" easily flash a gaming card into a pro card these days; as I understand it, the same goes for flashing an Nvidia pro card, if it's possible at all. In all, it's not advisable and not worth the effort.
I run both a Quadro 600 and a GeForce 460 in the same machine. It works as expected in Maya, and I can play games at the fps figures I've read for the 460 on the internet, including at Tom's Hardware. The Quadro 600's manual even advises installing the professional drivers first; if you had older pro or consumer drivers, uninstall those before installing the new pro drivers.
I've also tried older, less demanding games on the Quadro 600 and they played well, but it doesn't do very well with new, demanding games.
As an example of my experience with both pro and gaming cards in the same machine: I use Maya 2011 with no issues, always stable, on a Dell 2410 monitor. I also have Batman: Arkham City and Lost Planet 2, among other games, and all run at the expected fps on the GeForce 460 on another 24" Dell.
If quality, comfort, and stability in professional apps is your goal, always try a pro GPU. Today there are reasonably powerful entry-level pro cards to choose from.
I started experimenting at the lower end, so I could build my knowledge at low cost, and because I had started working with game engines, gaming was as important to me as pro environments. I also wanted the convenience of having both environments on one machine. Ultimately, I installed a copy of W7 on a new Samsung SSD and isolated Maya on that drive to speed things up, so I can dual-boot my machine depending on what I'm going to do.
With everything installed in the same machine, I can say I am very happy, and my next machine is going to pair a mid-priced Quadro with a mid-priced GeForce. Next time I also hope to include a bigger SSD so I can install more apps, or maybe I'll use the SSD as a cache in front of a regular mechanical drive.
My machine: Asus mobo, 2600K CPU, 16GB Corsair memory, 64GB Samsung 830 drive, 1TB Western Digital, Quadro 600 and GeForce 460, Windows 7 Pro 64-bit, and a lot of 3D professional and gaming software.
When you're making 3D models, CAD, etc., and your life/income depends on these workstations and GPUs, you want support available, which makes them worth every last penny, even though for the same price you could get a Radeon 6870 or something that is obviously going to outclass this GPU in most situations.
SO TOMS, I have an assignment for you (or at least what I think is a great idea):
How about matching one of these GPUs up with a new entry-level Opteron 3200-series chip and seeing how it compares in workstation tasks against an Intel i3/i5/i7 and a GPU at the same price point, to see what architects/3D modeling users are better off investing in? I personally don't really care whether the 3DMark score is better, but as this article emphasizes: workstation tasks!