Gigabyte GeForce GTX Titan Black: Do-It-Yourself Cooler Swap


bloodgigas

Honorable
Apr 9, 2013
5
0
10,510


If you bothered reading the first page you'd know why.

"Nvidia doesn’t allow its partners to sell the GeForce GTX Titan Black with proprietary cooling. However, Gigabyte now offers a GHz Edition of the card that comes bundled with its WindForce solution, which you can install on the overclocked board yourself."

This one, right? What's the difference between installing it yourself and Gigabyte taking the initiative to install it at the factory? Or would that void the warranty?
 

FormatC

Distinguished
Apr 4, 2011
981
1
18,990
It is one of Nvidia's funny rules.

Ok, for better understanding:
Nvidia doesn't allow its partners to sell the GeForce GTX Titan Black with factory-installed proprietary cooling.
 

wolverine96

Reputable
Mar 26, 2014
1,237
0
5,660
Very nice, Gigabyte! I almost wish I had bought one. I have one of those "out of stock ASUS cards from Newegg". I am not disappointed, though. The card handles 84 degrees Celsius just fine!

Igor Wallossek, I wonder if you could put up a graph for 3D rendering? If you use Blender's BMW scene by Mike Pan (a popular benchmark scene), make sure you properly set the tile size!
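(In case it's useful: the tile size and render device can be set from Blender's Python console. A minimal sketch against the 2.7x-era API; the property names are from that era and may differ in other versions:)

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'   # the BMW scene uses the Cycles renderer
    scene.cycles.device = 'GPU'      # render on the GPU instead of the CPU

    # Tile size matters a lot: GPUs generally like big tiles, CPUs small ones
    scene.render.tile_x = 256
    scene.render.tile_y = 256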
 

Damn_Rookie

Reputable
Feb 21, 2014
791
0
5,660

Silly question probably, but why does Nvidia allow only EVGA to break this rule with the Hydro Copper Signature edition you mentioned? Is it just because it's a water-cooled model? Do you think Nvidia specially signs off on the design?

I'm genuinely curious.
 

Gunbuster

Distinguished
Dec 31, 2007
17
0
18,520
Do the individual OEMs even make the reference cards, or does Nvidia just sell/ship them cards binned to their clock-speed specification from one central ODM factory, and the OEMs put them in their own boxes?
 

FormatC

Distinguished
Apr 4, 2011
981
1
18,990
You are right; this is shown in the temperature and clock-speed graphs. EVERY reference card from Nvidia throttles under longer load; this is the disadvantage of combining the temperature target with a quieter cooler profile. Nothing new, because it has been a "feature" since the GTX 780. You have to run the original fan at a fixed 65% RPM to keep the card away from its thermal limit. But this is really loud. :D
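If you want to watch this happen, log the temperature and the core clock during a longer load. A minimal sketch with the pynvml bindings for NVML; the device index 0 is an assumption for a single-card system:

    import time
    import pynvml  # NVML Python bindings (the nvidia-ml-py package)

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; adjust as needed

    # Poll once per second; the core clock drops when the temp target is reached
    for _ in range(600):
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
        print("%3d C  %4d MHz" % (temp, clock))
        time.sleep(1)

    pynvml.nvmlShutdown()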
 

mapesdhs

Distinguished
wolverine96 writes:
> Igor Wallossek, I wonder if you could put up a graph for 3D rendering? If you use
> Blender's BMW scene by Mike Pan (a popular benchmark scene), make sure you
> properly set the tile size!

Arion Bench 2.5.0 would be a better test, because it scales perfectly with
multiple GPUs.

Or the AE CUDA test my friend has created, but it's pretty intense, maybe
takes too long with just one card (about 20 minutes with a single 780Ti).

Ian.

 

wolverine96

Reputable
Mar 26, 2014
1,237
0
5,660


I agree. The BMW scene is not the best CUDA benchmark. I just didn't want them to mess it up if they decided to use it. I've heard some people complain about this benchmark, though I don't know whether they were right or wrong.

My Titan Black renders the BMW in just over 24 seconds! :D (Not including the post-process compositing, which uses the CPU. Tile size was set to 512x512.)
For comparison, an Intel Core 2 Duo @ 2.33 GHz took 16 minutes!

Have you run a Titan Black on that AE CUDA test? If so, I am curious to see the results!
 

mapesdhs

Distinguished
wolverine96 writes:
> I agree. The BMW scene is not the best CUDA benchmark. ...

I've tried it numerous times with various setups; it just seems to behave a
bit weird, IMO.


> My Titan Black renders the BMW in just over 24 seconds! :D ...

Main problem I find is I can't work out how to make it use all available GPUs.
Is that possible? One of my 580s does it in about 43s, but my system has 4 of
them, so it's a bit moot really. Mind you, I'm using an older version of
Blender (2.61), stuck with it to ensure consistent CPU-based testing.

And as you say, it also involves some CPU stuff (scene setup doesn't use the
GPU).


> Have you run a Titan Black on that AE CUDA test? If so, I am curious to see
> the results!

Alas no, atm I don't have access to anything newer than some top-end 3GB GTX
580s (MSI LX, 832MHz); my system has 4 of them. Final version of the test file
takes 14m 48s to render in AE using 16bpc and 8 samples (ie. just average
quality), so on a Titan Black I'm guessing it would take maybe 25 mins? Hard
to say. Would certainly be interesting to find out. Note the 'max' quality
setting would be 32bpc and 10 samples (likewise, for the full animation, avg
quality is 1080p @ 25Hz, max quality is 50Hz).

I'll sort out the test readme, download archive, web page, etc., next week,
but need to talk to C.A. first about some things. Anyway, here's the rendered
image in original Targa format (just one frame, the default test frame 96, the
last frame in the main animation sequence):

http://www.sgidepot.co.uk/misc/cuda.101_Frame96.tga

Here's the file converted to BMP and SGI IRIS/RGB:

http://www.sgidepot.co.uk/misc/cuda.101_Frame96.bmp
http://www.sgidepot.co.uk/misc/cuda.101_Frame96.rgb

and for those who don't mind losing a bit of quality, here's a 100% JPEG:

http://www.sgidepot.co.uk/misc/cuda.101_Frame96.jpg


The full 4 second animation takes hours to compute even at average quality and
is thus intended more as a stress test for those interested in checking that
their system can handle long renders or other GPU tasks without falling over
(I've seen many people asking for a test like this on forums). I suspect at max
quality the whole sequence would take about a week to crunch on my system. :D
Also interesting for exploring power consumption & energy cost issues for
different GPU configs (load draw on my system during the render is around 920W).
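As a rough worked example of the energy side (the electricity price here is purely an assumption; plug in your own tariff):

    # Week-long render at ~920W measured load draw
    power_kw = 0.92
    hours = 7 * 24                     # about a week of continuous crunching
    price_per_kwh = 0.15               # assumed tariff in UKP/kWh

    energy_kwh = power_kw * hours      # ~154.6 kWh
    cost = energy_kwh * price_per_kwh  # ~23 UKP at the assumed rate
    print("%.1f kWh, about %.2f UKP" % (energy_kwh, cost))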

Ian.

 

wolverine96

Reputable
Mar 26, 2014
1,237
0
5,660
Did you say you are having trouble getting multiple GPUs to work? I only use one GPU, but here's a very informative link. More specifically, see this section.

Your system with 4 GTX 580s is much faster than mine! (Two GTX 580s are about as fast as one GTX Titan Black.) I guess the only time mine would be faster is if the scene used more than 3GB of RAM. I actually was planning on getting 2 GTX 580s, but then I discovered the Titan Black.

Is that Cycles in those images you posted?

By the way, Blender 2.71 is coming out very soon. In the past 10 versions, there have been some major performance gains for Cycles. I think it's like 30-50% faster in some cases.
 

mapesdhs

Distinguished
wolverine96 writes:
> Did you say you are having trouble getting multiple GPUs to work? I only use one
> GPU, but here's a very informative link. More specifically, see this section.

Thanks!! My goof, looks like V2.61 doesn't have the Compute Panel. Will try
the newer 2.70a in a moment... (downloading now)
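For reference, the same setting can also be flipped from the Python console in 2.70. A minimal sketch; the device identifiers ('CUDA_MULTI_4' and so on) are generated per build, so treat the exact names as assumptions:

    import bpy

    system = bpy.context.user_preferences.system
    system.compute_device_type = 'CUDA'      # enable CUDA compute globally

    # Identifiers are build-dependent: 'CUDA_0', 'CUDA_1', ..., plus a
    # combined 'CUDA_MULTI_N' entry when several cards are installed
    system.compute_device = 'CUDA_MULTI_4'   # assumed name for a 4-card rig

    bpy.context.scene.cycles.device = 'GPU'  # the scene must use the GPU too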


> Your system with 4 GTX 580s is much faster than mine! ...

Yup, though I suspect your power bill is less likely to make your eyeballs explode. :D


> ... I guess the only time mine would be faster is if the scene used more than
> 3GB of RAM. ...

I had been hoping we'd see 6GB 780Tis, but seems like that's been killed off. Shame.


> I actually was planning on getting 2 GTX 580s, but then I discovered the Titan Black.

The real advantage of multiple 580s is just low upfront cost. Standard 580s are pretty
cheap (I have four 1.5GB 797MHz models which cost about 400 UKP total), if one's OK
with the VRAM limit. 3GB 580s cost a bit more, but not much more (I've bought/sold
nine Palit 3GB 580s in the past year). The MSI LXs, though, can be a tad pricey; it
depends on luck really, I guess. I got mine (five total) for good prices overall, and
they do overclock like crazy (1GHz+ is possible).


> Is that Cycles in those images you posted?

No, it's the RayTrace3D renderer within After Effects.


> By the way, Blender 2.71 is coming out very soon. In the past 10 versions,
> there have been some major performance gains for Cycles. I think it's like 30-50%
> faster in some cases.

Good that they keep boosting it, but a nightmare for benchmarking consistency. :D


Ok, download done, quick test...

Cycles does the BMW in 11.56s (blimey!), tile size 240x135. Just curious btw,
you mentioned using 512x512 tile size, but surely it'd be optimal to use an even
divisor of the image dimensions in both X and Y? What do you get if you try
a tile size of 240x135?
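
Here's the arithmetic I mean, taking the scene's default 960x540 output (1920x1080
at 50%) as an assumption:

    import math

    # How a tile size carves up a render of the given dimensions
    def tile_grid(width, height, tile_x, tile_y):
        cols = math.ceil(width / tile_x)
        rows = math.ceil(height / tile_y)
        leftover_x = cols * tile_x - width   # wasted width in the last column
        leftover_y = rows * tile_y - height  # wasted height in the last row
        return cols * rows, leftover_x, leftover_y

    print(tile_grid(960, 540, 240, 135))  # (16, 0, 0): an exact 4x4 grid
    print(tile_grid(960, 540, 512, 512))  # (4, 64, 484): ragged edge tiles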

Ian.

 

wolverine96

Reputable
Mar 26, 2014
1,237
0
5,660
Thanks! I tried 240x135, but that took 31 seconds. I doubled it to 480x270, and it rendered in just under 25 seconds with compositing turned on. So it's about a second quicker (4%).

The reason I used 512x512 is because it fits nicely into the graphics card. Graphics cards handle images best at resolutions with dimensions that are powers of two (128x64, 1024x1024, 16x16, etc.)

If I had multiple GPUs, I would see a greater gain in performance by switching to 480x270 (with one GPU, I'm only rendering one tile at a time anyway). I learned this while rendering on my 8-core CPU. It is an FX-8350, and it renders the scene in 1 minute and 53 seconds, only 4.5 times slower than one Titan Black!

The GTX 580s I wanted were 3GB, I think. They were refurbished for $450 each. I got my brand-spanking-new Titan Black for $1000, so that was $100 well spent!

Do you have a 780 Ti? I'm just wondering how it compares to the Titan Black. Or maybe I should just ask on BlenderArtists.org...
 

mapesdhs

Distinguished
wolverine96 writes:
> The reason I used 512x512 is because it fits nicely into the graphics card. Graphics cards handle
> images best at resolutions with dimensions that are powers of two (128x64, 1024x1024, 16x16, etc.)

Interesting; I found it was fastest when using a tile size that was an even divisor of the image size.
Otherwise it ends up having to render splinter pieces towards the end.


> The GTX 580s I wanted were 3GB, I think. They were refurbished for $450 each. ...

I bought about nine 3GB 580s in the last year, typically for around $220 each, mostly from eBay. ;)
Sold four of them for AE machine builds.


> Do you have a 780 Ti? ...

Not yet. Can't justify the cost atm.


> ... I'm just wondering how it compares to the Titan Black. ...

It'll be identical for anything where the 3GB RAM limit is not an issue and 64-bit FP performance doesn't matter.

Ian.

 

mapesdhs

Distinguished
wolverine96, a small followup: I tried the BMW with a more unusual setup just for a laugh:
a P55 config with an i7 875K at its default 3.2GHz (the board is an ASUS P7P55 WS
SuperComputer, 16GB RAM at only 1333 CL9), using four EVGA 1.5GB 580s at 797MHz
(somewhat slower than my MSIs). It completed the BMW test in 12.51s. :D It proves one
does not need a modern chipset or a crazy CPU to get good CUDA performance, though of
course in reality the 16GB max RAM could be limiting for some tasks. Total cost of these
four 580s was a little over 400 UKP, all bought about a year ago.

Ian.

 

wolverine96

Reputable
Mar 26, 2014
1,237
0
5,660


Interesting.

I guess it's okay that I didn't go for an Intel CPU on my $3000 PC, LOL! And yes, I have heard that RAM can affect render times, even while rendering on the GPU.
 