Graphics card and processor for rendering


Oct 11, 2011
I am planning to get an i5 2500K processor and a GeForce GTX 260 896 MB graphics card for V-Ray and mental ray rendering in Max, and light rendering in Maya. Will this card support my requirements? I want to limit my budget; can anyone guide me on this?
The CPU is great, though a 2600 would be better. You don't need a K version; workstations usually aren't overclocked, for stability reasons. So instead of the 2500K, get a 2600 non-K, or get a 2500 non-K to save money, unless you want to OC for other reasons like gaming.

You will want a CUDA-enabled Nvidia card, and the 260 is one of those. Look at this list:

The 260 is on the list of compatible cards, but there are cards with a better compute capability: the 260 is 1.3, while there are cards at 2.1, like the GT 440 and GTS 450.
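As a minimal sketch of what that comparison means in practice, here is the lookup using only the compute-capability figures quoted above (the card names and numbers come from NVIDIA's CUDA GPUs chart as cited in this thread; nothing else is implied):

```python
# Compute-capability figures for the cards mentioned above, as quoted
# in this thread from NVIDIA's CUDA GPUs chart (circa 2011).
COMPUTE_CAPABILITY = {
    "GeForce GTX 260": 1.3,
    "GeForce GT 440": 2.1,
    "GeForce GTS 450": 2.1,
}

def better_card(card_a, card_b):
    """Return whichever card has the higher compute capability."""
    return max(card_a, card_b, key=COMPUTE_CAPABILITY.__getitem__)

print(better_card("GeForce GTX 260", "GeForce GTS 450"))
# prints "GeForce GTS 450"
```

A higher compute-capability number means the card supports a newer CUDA feature set; it is not a direct measure of rendering speed.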

What country are you in?

Do you already have the 260?

@greghome I was just going by the Nvidia charts showing compute capability, which, to be honest, I am not 100 percent sure what that means, LOL.

Really, though, the OP would be better off with a workstation card like the Quadro series. They will actually outperform a gaming card and are more "precise": gaming cards sacrifice accuracy for FPS, while a Quadro will render a more accurate output. It has to do with a different driver set and BIOS. There is a way to flash some desktop cards to their workstation equivalents, but it is not recommended for the inexperienced. You can get a Quadro 600 for $170 on US Newegg.

And remember, gaming cards don't offer the stability and precision a workstation card does; workstation cards also have features, needed for rendering, that gaming cards don't.
I have been tweaking AutoCAD workstations for my engineering consulting firm since 1993. OC'ing is done the day the box is built. The one I sit at each day is a 2600K at 4.6 GHz (runs low-to-mid-60s temps) and has an alternate gaming boot at 4.8 GHz without HT.
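For a rough sense of what that 4.6 GHz overclock buys on a CPU-bound renderer, here is a back-of-the-envelope estimate assuming render throughput scales linearly with core clock (the 3.4 GHz stock figure is the i7-2600K's base clock; real gains are usually smaller, since memory and I/O don't speed up):

```python
# Back-of-the-envelope estimate of the render-time gain from that
# overclock, assuming throughput scales linearly with core clock.
STOCK_GHZ = 3.4  # i7-2600K base clock
OC_GHZ = 4.6     # the overclock quoted above

speedup = OC_GHZ / STOCK_GHZ
print(f"at best about {(speedup - 1) * 100:.0f}% faster")
# prints "at best about 35% faster"
```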

Ya want HT. If not the 2600K, then wait for the new Intel SB-E CPUs.

You don't need to limit yaself to what nVidia "enables" CUDA on, ya can unlock CUDA yaself.

Workstation cards are great for rendering and, for the most part, are the same hardware as their gaming counterparts (firmware differences). But the realistic entry point is about $450.

If that's not in the cards, I'd stick w/ a consumer gaming card like the 560 Ti and unlock it.
I'd have to disagree with smp on both the CPU and GPU comments. A proper overclock will be completely stable and will speed up rendering quite a bit.

I work on Quadros, GeForces, and Radeons, and against comparable cards in the same price range, workstation cards get slaughtered. As jack said, you really shouldn't be looking at them until the higher price range. Workstation cards will handle the viewport much more nicely. As for precision, you will only see an issue when polys are on top of each other (the Quadro won't draw lines/hatching there), but if polys are on top of each other, then the model needs to be fixed. Rendered output looks the same, and for the price, GeForce and Radeon render much faster, as they will have more stream processors/CUDA cores.

As for stability or bugs, I have never encountered a hardware cause. Other than Adobe products, where I had to enable CUDA, I haven't had to change anything in Autodesk products.

V-Ray and iray are GPU accelerated; mental ray is purely CPU based, as are the default software renderers of Max and Maya.
"These questions are understandable given that GPUs like the ATI Radeon HD 4870 and the ATI FirePro v8750 appear to have the same GPU (RV770) and hardware configuration, but Alexis explained that there are several significant, but unapparent hardware-level differences.

First and foremost, workstation GPUs are different from desktop GPUs at the ASIC and board level. If you were to place a workstation ASIC (the actual GPU chip) in the equivalent consumer grade board, the card would exhibit different behavior. In other words, the GPU dies are not simply interchangeable.

Alexis continued by explaining that workstation hardware offers features that can’t be benchmarked, but really matter to users and cannot be had on desktop hardware. Such features include 30-bit color depth, framelock/genlock functionality, and hardware stereoscopic output."

source -

I like to source my information :)
I tend to read links people post the first time they post them, and while I did notice it's two years old, all of it is still true: hardware, features, support, drivers, everything. I'm fully aware. While the hardware is different at the board level, it's still based on Fermi (or whatever architecture it is) and gets similar performance.

As for features, yes, workstation cards offer other features, quite a few more; you can read the whole list on the Nvidia/AMD sites. But let's step back off the spec sheet and into the real world: unless you work in a professional studio using 64x AA and 16K x 16K textures, where accuracy needs to be spot on because any microscopic detail is noticeable, none of that really matters.

I stand by what I said from first-hand experience and testing: consumer is faster for the same price, with the same quality even at 1080p, and I've never had a reliability issue. It's similar to the all-too-common amateur mistake when people shop for a gaming GPU: they stare at a spec sheet to see which looks better. Wrong; it's all about real-world performance (and usually the best bang/buck for that individual's uses and resolution).
I knew that would get a response

The funny thing is I agree with you :)

What we need to know from the OP is:

Is this for amateur use or is this for professional use?

There is a big difference between time-critical projects with money at stake and amateur projects at home.

A serious workstation will NOT be OC'd, for reliability reasons; it will also use ECC RAM and enterprise-level storage solutions.

I still say a professional machine should not be OC'd. You buy the hardware that can get the job done at stock settings. If you have projects that have to meet deadlines, then you can't take a chance on an unstable system.

Also, OC'ing leads to more heat and power usage.

Would you disagree that a workstation needs:

1) a CPU with more cores and better thread handling (the more the better)

2) ECC RAM

3) enterprise-level storage, preferably 10K and 15K drives in RAID 1

4) workstation-level GPUs
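On point 3, for what it's worth, a two-drive RAID 1 mirror is a standard setup on a Linux box with mdadm; a minimal sketch, assuming hypothetical device names /dev/sdb and /dev/sdc (the thread doesn't specify an OS, so this is just one way to do it):

```shell
# Create a two-drive RAID 1 (mirror) array - device names are examples.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0           # format the mirror
mdadm --detail /dev/md0      # verify both drives show "active sync"
```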

Now, of course, if the OP is just doing small, non-time-critical side projects, then quality "home desktop" level hardware is okay.

In his other thread he was looking at a GT 330, and now in this thread he mentions a 260, so I think it's safe to say this is just small home projects.

If a company has a big enough budget, then yes, they should get Xeon/Opteron and Quadro/FirePro; as stated in my other post, if they "work in a professional studio." But they will have their own IT and will not come on Tom's for advice. We just get the indie people here, although a couple of rich ones appear sometimes who can afford Xeon/Quadro, which was suggested to them over consumer parts.
Personally, I would recommend a 550 Ti or a GTX 460 1 GB; on a budget, either would be a good card for rendering for the money. But personally, I am an ATI Stream user: I use CyberLink PowerDirector 9 for my (amateur) video editing with an HD 5670 GDDR5 512 MB, and on a C2D 3 GHz system I get decent video-rendering times using GPU acceleration.

But CUDA is so much more widely used that it is the choice of the pros.