Difference between "normal" GPUs and rendering GPUs?

CobaltImpurity

Reputable
Nov 16, 2014
58
0
4,630
I'm helping my friend build a gaming/editing/rendering rig, and I learned from one of my coworkers at my co-op that there are GPUs designed specifically for rendering. He told me what CUDA cores are/do (which was nice because I tried finding out in the past to no avail), and that the architecture is specially designed for rendering.

I know that rendering is traditionally done with the CPU and just want some clarification as to what makes a card specifically for rendering:

- Does a GPU with CUDA cores work in tandem with a CPU to render video, or can you only use one or the other?

- How can I tell which GPUs are specifically used, designed, or meant for rendering? Does the manufacturer tell you, "This line of GPUs is optimized for rendering videos"? Or do you just have to know what technology the manufacturer is using (like CUDA) to identify them?

- What are the drawbacks of using a GPU designed for rendering vs "normal" GPUs like the 980 and Titans and stuff for gaming? (Since he wants to do both with the same rig (duh), and I probably should have mentioned that he wants to be able to run the most graphically intensive games on Ultra, unless that's not possible for whatever reason, then High.)

- Linus (from LinusTechTips) first put Titan Blacks in all his employees' editing/rendering rigs, and then later switched to Titan Xs. If all they're doing is rendering and editing, then why not get rendering GPUs instead? Since they're some of the best cards out there, do they just do "everything" better, even if they're not optimized for rendering? Unless they are, in which case my mistake.

- And a side question: if there is a significant difference between the capabilities of the cards in terms of how high of a quality they can display (Ultra, High, blahblah), what would the difference be? I know that highly depends on which cards you're comparing, but if you could give me a general estimate that'd be nice. The card we originally had was a 980 Ti 6GB, core clock 1.10 (or something).


In conclusion, if you can give me any other information that is remotely important or good to know on this topic, that would be amazing.

Thanks in advance, society of Tom's! :^)
 
1) They work together. The CPU handles most of the easier stuff while the heavy number crunching (the actual video rendering) is passed off to the GPU.

2) Nvidia's Quadro lineup and AMD's FirePro lineup are considered their "workstation" cards that have drivers optimized specifically for rendering tasks, while the GTX/Radeon cards are optimized for games, but the flagship models (980 Ti, Titan X) still work very well for rendering. Nvidia calls their GPU cores CUDA cores while AMD calls theirs stream processors. They do essentially the same thing, but software like Photoshop that supports CUDA acceleration will perform noticeably better on Nvidia cards.

3) Don't get a workstation GPU if gaming is a concern, though. Sure, a Quadro K6000 will outperform a 980 Ti in certain rendering tasks (especially OpenGL), but you could get 4x GTX 980 Tis for the price of one Quadro K6000. The Quadro would not hold up in gaming at all. The Quadro series for rendering is a gimmick for regular consumers IMO.
 
There is a long-winded answer, which I would need references for, and a shorter answer. Yes and no. The CUDA cores or shaders/ALUs are not working with the CPU but alongside it, so to speak. Roughly, a CPU is designed for serial execution. Again roughly, shaders can work in parallel.
Now, some architectures perform better in professional editing tools; e.g. a Fermi card using 512 shaders clocked at 750MHz (pulling numbers out of my hat) will outperform a GTX 680 in Adobe AE CS6. And from what I've read and heard, Maxwell does not fare any better. If this does not make sense, or isn't even comprehensible as a few sentences, please let me know and I will try to correct it.
The Nvidia Developer Zone has some interesting info and applications for workstation PCs, or any Nvidia GPU that uses a unified shader model. It's a bit of a process signing up, but it does give you access to some of their CUDA SDKs.



More gaming related:
This topic has gotten quite interesting lately as DX12 and Vulkan mature as APIs.
Check out a video about asynchronous shaders or asynchronous compute if interested. Here is one of a few vids: https://www.youtube.com/watch?v=T9BYZ61Tfkw