Gaming vs. Professional Graphics Cards

Status
Not open for further replies.
This is more of a question for my general knowledge, but I always wanted to know the difference between gaming and professional graphics cards. I used to work in the IT department of an HVAC company and ordered two new PCs for the CAD guys. They requested a certain graphics card that at the time cost $700+, while the gaming equivalent was around $300. I see there are "professional" equivalents of GTX cards that run twice the cost. Could someone elaborate on the difference?
 

cleeve

Illustrious
Same hardware, essentially. Professional cards are usually just consumer cards with a provision that allows them to use the pro drivers.

The increased price is supposed to reflect the cost of developing unique drivers specifically for CAD apps, and the much higher level of support that comes with these cards.
 

rodney_ws

Splendid
Dec 29, 2005
3,819
0
22,810


Never in a million years did I imagine I'd say this... but here goes...

Are you sure Cleeve?
 

soloman02

Distinguished
Oct 1, 2007
191
0
18,680
First, the OP asked what the difference was between professional cards and consumer cards. You did not really answer that with the link. Second, those benches are wrong.

The 3850, even the Extreme version, will not beat the 8800 GT. The only differences between the 3850 and 3870 are the type of RAM used (GDDR3 vs. GDDR4) and lower clocks, and the 3850 core can be overclocked to close to the 3870 core.

Also, that review loses further credibility because you cherry-picked your sources. The power graphs and the FPS graphs are not from the same place!

A more credible source is right here:
http://www.tomshardware.com/2007/11/15/amd_radeon_hd_3800/page16.html
and it clearly shows that while the 8800 GT beats the 3870 in Crysis, it does not do so by a huge margin (by huge I mean 30% or more).
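That 30% cutoff is easy to make concrete. A quick sketch, using illustrative frame rates rather than numbers from the linked review:

```python
def margin_pct(fps_a: float, fps_b: float) -> float:
    """Percentage by which card A leads card B in frames per second."""
    return (fps_a - fps_b) / fps_b * 100.0

# Illustrative numbers only: a 39 vs. 30 fps result is a 30% lead,
# sitting right at the "huge margin" cutoff described above.
print(margin_pct(39.0, 30.0))  # 30.0
```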


To answer the OP: professional cards are pretty much the same as their consumer counterparts. They cost more because some of their hardware is optimized to take advantage of programs like CAD. The drivers are also optimized to improve performance in those professional applications.
 
I guess I am not familiar enough with CAD and other graphics applications to know what added hardware/support these cards provide that makes them worth the extra price. Do they render certain images smoother, better, faster? If so, what type of images?

My thought is: would a consumer card work just as well as the professional models? Or am I just being ignorant on this subject?
 

cleeve

Illustrious


Heheh. Well, I'm pretty sure. Quadros are essentially GeForces, and FireGLs are essentially Radeons. Same GPUs.

Hell, in the old days we used to buy Radeon 9500s and mod them into FireGLs by soldering a bridge. Same kind of thing with turning GeForce FX cards into Quadros.

Unless something huge has happened in the past couple of years that I wasn't privy to, I'm pretty sure this is how it still works. But hey, I don't specialize in workstation cards. I could be mistaken.
 

cleeve

Illustrious


What benches, and what link? Did someone erase a post, or did you reply to the wrong post?
 

cleeve

Illustrious


The pro (Quadro, FireGL) drivers allow for more control over certain settings in CAD apps, and are geared to accelerate professional OpenGL apps instead of games. The last time I read a review pitting a gaming card against its pro counterpart, the pro drivers tended to run games slower than the consumer drivers, and to run CAD apps much faster.
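As a sketch of how software can tell the two product lines apart: applications sometimes branch on the OpenGL renderer string (the value returned by `glGetString(GL_RENDERER)`). The helper and sample strings below are hypothetical, just to illustrate that the same silicon identifies itself differently under pro drivers:

```python
# Hypothetical helper: classify a card as "pro" from its renderer string.
# The sample strings are illustrative, not captured from real hardware.
PRO_MARKERS = ("Quadro", "FireGL")

def is_pro_renderer(renderer: str) -> bool:
    """Return True if the renderer string names a workstation product line."""
    return any(marker in renderer for marker in PRO_MARKERS)

print(is_pro_renderer("NVIDIA Quadro FX 5500"))   # True
print(is_pro_renderer("NVIDIA GeForce 7900 GTX")) # False
```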
 

leo2kp

Distinguished
Yes, they're quite different, I think. They're designed for a different type of application: it takes a different kind of workload to create graphics than it does to play a game. But as far as the components go, they're not much different from a normal Joe Consumer card apart from what they're designed for. A professional card doesn't work as well for gaming, as far as I know. Maybe I'm wrong ;)
 

leo2kp

Distinguished
"Up to 1.5GB GDDR3 Frame Buffer with ultra fast memory bandwidth
Industry's first 1.5GB frame buffer and massive memory bandwidth up to 76.8GB/sec. delivers high throughput for interactive visualization of large models, high performance for real-time processing of large textures and frames, and enables the highest quality and resolution full-scene antialiasing (FSAA).

Dual Dual-Link Digital Display Connectors
Dual dual-link TMDS transmitters support ultra-high-resolution panels (up to 3840 x 2400 @ 24Hz on each panel) --which result in amazing image quality producing detailed photorealistic images.

Fast 3D Textures
Fast transfer and manipulation of 3D textures resulting in more interactive visualization of large volumetric dataset.

Jumbo 8K Textures Processing
Faster processing of very large textures resulting in higher performance when zooming and panning through high-resolution images.

Next-Generation Vertex and Pixel Programmability with Shader Model 4.0
Reference standard for shader model 4.0 enabling a higher level of performance and ultra-realistic effects for OpenGL and next generation DirectX 10 industry-leading professional applications. Investment protection for future Microsoft Vista release.

C Programming Environment for the GPU
The NVIDIA® CUDA™ GPU computing software provides a C language environment and tool suite that unleashes new capabilities to solve complex, visualization challenges such as real-time ray tracing and interactive volume rendering.

12-Bit Subpixel Precision
3x that of the nearest competitive workstation graphics, 12-bit sub-pixel precision delivers high geometric accuracy, eliminating sparkles, cracks, and other rasterization anomalies.

32-Bit Floating Point Precision
Sets new standards for image clarity and quality through 32-bit floating point capabilities in shading, filtering, texturing, and blending. Enables unprecedented rendered image quality for visual effects processing.

32-Bit Filtering and Blending
Enables unprecedented rendered image quality for visual effects processing.

Advanced Color Compression, Early Z-Cull
Improved pipeline color compression and early z-culling to increase effective bandwidth and improve rendering efficiency and performance.

Cg High-Level Graphics Shader Language
Cg—"C" for graphics—is a high-level, open-standard programming language for OpenGL that takes advantage of the power of programmable GPUs. NVIDIA Quadro FX programmable graphics pipelines leverage high-level shading languages to enable the creation and integration of real-time photorealistic effects into 3D models, scenes, and designs. This represents a major leap forward in ease and speed for the creation of real-time, realistic graphics within MCAD, DCC, and scientific applications.

Essential for Microsoft Windows® Vista
Offering an enriched 3D user interface, increased application performance, and the highest image quality, NVIDIA Quadro graphics boards and NVIDIA® OpenGL ICD drivers are optimized for 32- and 64-bit architectures to enable the best Windows® Vista™ experience.

Frame Synchronization (G-Sync Option)
Allows the display channels from multiple workstations to be synchronized, thus creating one large "virtual display" that can be driven by a multisystem cluster for performance scalability.

Full 128-Bit Precision Graphics Pipeline
Enables sophisticated mathematical computations to maintain high accuracy, resulting in unmatched visual quality. Full IEEE 32-bit floating-point precision per color component (RGBA) delivers millions of color variations with the broadest dynamic range.

Full-Scene Antialiasing (FSAA)
Up to 32x SLIAA and 16x FSAA dramatically reduces visual aliasing artifacts or "jaggies" at resolutions up to 1920x1200, resulting in highly realistic scenes.

Genlock/Frame Lock (G-Sync Option)
Also known as "house sync." Genlock allows the graphics output to be synchronized to an external source, typically for film and broadcast video applications.

Hardware 3D Window Clipping
Hardware accelerated clip regions (data transfer mechanism between a window and the frame buffer) which improve overall graphics performance by increasing transfer speed between color buffer and frame buffer.

Hardware-Accelerated Pixel Read-Back
Up to 2.4GB/sec pixel read-back performance delivers massive host throughput, more than 10x the performance of previous generation graphic systems.

Highest Workstation Application Performance
Next-generation architecture enables over 2x improvement in geometry and fill rates with the industry's highest performance for professional CAD, DCC, and scientific applications.

High-Performance Display Outputs
400MHz RAMDACs and up to two DVI digital connectors drive the highest resolution digital displays available on the market.

NVIDIA High Precision, High Dynamic-Range Technology
Sets new standards for image clarity and quality through floating point capabilities in shading, filtering, texturing, and blending. Enables unprecedented rendered image quality for visual effects processing.

NVIDIA PureVideo Technology
NVIDIA PureVideo™ technology is the combination of high-definition video processors and software that delivers unprecedented picture clarity, smooth video, accurate color, and precise image scaling for SD and HD video content. Features include high-quality scaling, spatial-temporal de-interlacing, inverse telecine, and high-quality HD video playback from DVD.

NVIDIA PureVideo HD Technology
The ultimate high-definition movie experience on your PC by combining high-definition movie decode acceleration and post-processing, HDCP circuitry, and integration with HD movie players. It delivers superb picture quality for all video formats, as well as stunning HD DVD and Blu-ray movies—with low CPU utilization and power consumption.

NVIDIA Quadro Unified Memory Architecture
Allows for superior memory management, which efficiently allocates and shares memory resources between concurrent graphics windows and applications.

NVIDIA SLI™ Technology
A revolutionary platform innovation that enables professional users to dynamically scale graphics performance, enhance image quality, and expand display real estate. Available on NVIDIA Quadro FX 5600, 5500, 4600, 4500, 3500, and 3450 GPUs.

NVIDIA Unified Architecture
Industry's first unified architecture designed to dynamically allocate compute, geometry, shading and pixel processing power to deliver optimized GPU performance.

nView Multi-Display Technology
The NVIDIA® nView® hardware and software technology combination delivers maximum flexibility for multi-display options, and provides unprecedented end-user control of the desktop experience. NVIDIA GPUs are designed to support multi-displays, but graphics cards vary. Please verify multi-display support in the graphics card before purchasing.

PCI Express Certified
PCI Express is a new Intel bus architecture that doubles the bandwidth of the AGP 8X bus, delivering greater than 2GB/sec. in both upstream and downstream data transfers.

Quad Buffered Stereo
Offers enhanced visual experience for professional applications that demand stereo viewing capability.

Rotated-Grid Full-Scene Antialiasing (RG FSAA)
The rotated grid FSAA sampling algorithm introduces far greater sophistication in the sampling pattern, significantly increasing color accuracy and visual quality for edges and lines, reducing "jaggies" while maintaining performance.

Unified Driver Architecture (UDA)
The NVIDIA UDA guarantees forward and backward compatibility with software drivers. Simplifies upgrading to a new NVIDIA product because all NVIDIA products work with the same driver software."



As far as I know, many of those above-mentioned features aren't available on a gaming card. I don't know if they're driver-based or hardware-based features (well, the 1.5GB of memory is obviously hardware), but I think it has to do with keeping things you don't need out of gaming cards, and vice versa. I just don't believe a professional card will play a game the same way.
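The 76.8GB/sec figure in that quote is plain arithmetic rather than anything exotic. A quick sanity check, assuming the specs of a Quadro FX 5600 (384-bit memory bus, 800MHz GDDR3 running at an effective 1600 MT/s — these numbers are my assumption, not stated in the quote):

```python
# Memory bandwidth = effective transfer rate x bus width in bytes.
bus_width_bits = 384      # assumed 384-bit memory bus
effective_rate = 1600e6   # assumed 800 MHz GDDR3; double data rate -> 1600 MT/s

bandwidth_gb_s = effective_rate * (bus_width_bits / 8) / 1e9
print(bandwidth_gb_s)  # 76.8, matching the quoted spec
```

The same formula explains why consumer cards of the era quoted different bandwidth numbers: they differ only in bus width and memory clock, not in how bandwidth is counted.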
 

stemnin

Distinguished
Dec 28, 2006
1,450
0
19,280
I always wondered the same. Seeing a Quadro FX at like $4,000, I thought it was a typo before I started doing CAD. I don't know anyone who uses such a card, and I doubt my company will ever look into it (we're using AGP FX 5200s; my boss has a 6600 in his, lol, sad... they run Quake 3 OK).
 
We have to be careful with our terminology here. In the old days, CAD users bought good CAD cards which also happened to be pretty decent gaming cards. These came with specific CAD drivers... but those were DOS drivers. Since Windows AutoCAD, and much more so since AutoCAD 2000, the use of special professional graphics cards for plain old vector-graphics AutoCAD is a waste of money.

I buy plain old gaming cards for all our CAD workstations. Where professional graphics cards begin to shine is in CAD rendering. When you take a basic CAD file and start doing ray tracing, applying "surfaces" to CAD objects, and applying gradient fills, then you start to benefit from professional cards. Plain "CAD" is just vector graphics and doesn't require much processing power. I don't have a single CAD file from any project we have worked on in the last 20 years that I can't open and manipulate on an old NT4 box with a Diamond FireGL 8MB graphics card.

Now, if I bought and used various AutoCAD add-ons to render some of these drawings with textured surfaces, gradients, light ray tracing, and shadows, just about any machine in my office would have me making a pot of coffee and reading a newspaper.
 

cleeve

Illustrious


All of the features - except maybe clock speeds and the amount of memory - are software. It's the same core GPU.

As an example, from an X-bit labs review:

"...the new Quadro FX5500 as well as its architectural peculiarities make it resemble the GeForce 7900 GTX gaming solution with slower graphics memory... it uses a new 90nm G71 GPU..."

http://www.xbitlabs.com/articles/video/display/quadrofx-vs-firegl_6.html#sect0


Same GPU. Sure, they add memory to hold more textures, and sometimes tweak the memory and core clocks, but it's the same architecture - same card, folks.

All that's changed is the driver: it's a driver that will run OpenGL more quickly and stably, because OpenGL is the primary API for CAD. And to support a driver team whose work ships on a relatively small number of cards, they have to charge a lot more per card. Hence the colossal price.

Like I said, I personally changed a Radeon 9500 into a FireGL back in the day... I could do that because it's the exact same card. This isn't theory, it's common knowledge. Before ATI and Nvidia consciously made it difficult to run pro drivers on consumer cards, people used to do it all the time.

Here, check the "Radeon to FireGL mod guide":

http://www.techarp.com/showarticle.aspx?artno=105&pgno=0


Like I said... same cards, different drivers. I know this for a fact because I did it.

So if anyone can offer direct knowledge and proof that ATI and Nvidia have changed this practice and now use unique hardware for Quadros and FireGLs, I'd like to hear it. Because I'm seeing a lot of speculation here but no direct experience.
 

cleeve

Illustrious


Depends on what you define as CAD. I'm defining it generally as computer-aided design - which includes 3D visualization tools like 3ds Max and Maya.

In these apps, the GPU makes a huge difference in viewport redraws, even in wireframe-only mode with complex meshes. And pro card drivers help a lot with viewport refreshes that include textures and transparency.

When doing a final CPU-based render to an image file, the graphics card doesn't matter; at that point it's all CPU and RAM. The graphics card can absolutely speed up viewport renders, though.
 

soloman02

Distinguished
Oct 1, 2007
191
0
18,680



The person must have somehow deleted his post.
He had pog in his name.
 

blashyrkh

Distinguished
Jul 4, 2007
350
0
18,780
What about that driver mod in RivaTuner that "unlocks professional capabilities" of the video card? I know that Quadro cards have the same cores as their gaming counterparts, but I always thought it was more like the gaming cards are "locked" in a sense, whereas the pro ones can act more like a GPGPU and offload graphics-intensive operations from the CPU. Do any of these hold?
 

Houndsteeth

Distinguished
Jul 14, 2006
514
3
19,015
The capabilities aren't necessarily locked in the hardware, but rather in the software. It's quite well established that Nvidia and ATI/AMD intentionally dumb down their consumer drivers with regard to OpenGL rendering just so they won't be competitive with their professional line of cards. What this usually means is downgraded performance in OpenGL games compared to DX numbers for the exact same engine. This issue has often caused problems for both Mac and Linux gaming, as they are completely dependent on OpenGL (DX is Microsoft-only).

Hence, Apple started writing their own drivers that don't hobble OpenGL rendering, with the simple caveat to Nvidia and ATI that they would use consumer cards in consumer machines and professional cards in workstation-level machines. Since Linux has no single entity that can make this promise to Nvidia and ATI, and the respective manufacturers are very unlikely to release the source code for their drivers, Unix/Linux OpenGL rendering (when it does work) is still hobbled by the drivers the manufacturers provide, unless you purchased a professional card and received the full-performance drivers from the manufacturer.
 

Norton72

Distinguished
Dec 29, 2009
130
0
18,680



Man, Cleeve, you don't know how much I appreciate this post. This is EXACTLY the information I have been searching for. So a good professional video card will speed up regens but is not necessarily needed for the final render. Basically, it allows you to work faster but has no effect on the final product. (Other than the ill effects of fatigue and frustration!)

So, would a professional workstation benefit from SLI or Crossfire? Would two cards be of any use?
 

J-Ro

Distinguished
Dec 30, 2009
5
0
18,520
Here's an article that shows exactly how to do the softmod to convert a GeForce into a Quadro. It also includes some benchmarking showing the performance increase in several applications.

http://www.techarp.com/showarticle.aspx?artno=539&pgno=0

Of particular interest (to me, at least) was the almost 300% increase in SolidWorks performance from the original card to the softmodded one. However, it gets better. On the last page, it compares an unmodded card's performance to a softmodded card, and then to an actual Quadro card. It still showed a 200% increase in SolidWorks performance between the softmodded card and an actual Quadro.

Other programs showed varying amounts of impact, some little or none, but since the system I'm building will mainly be used for SolidWorks, I'm getting a Quadro FX 1800 (~$400). Maya, OTOH, showed little or no improvement.
 

dpj007

Distinguished
Jul 14, 2009
2
0
18,510
Graphics cards: Pro vs Gaming

Basically, pro cards are optimized for accuracy, while gaming cards are optimized for frame-to-frame speed.

Without getting too technical (which a lot of folks might not need), the cards are basically the same and are tweaked in different directions to enhance performance.

This does make a big difference but that difference can be pretty subtle for the average user.

If a computer is doing mid- to high-end 3D CAD or rendering, a pro card is recommended, if not required.

For 2D or simple 3D, a gaming card will do the trick if you don't mind some occasional wavy lines or less detailed rendering.

A pro card used for playing games will be able to render the crap out of one single image, but could suffer moving from image to image and in pure frames per second.
 