OpenCL And CUDA Are Go: GeForce GTX Titan, Tested In Pro Apps



We already know you get great gaming. The question here is whether you get a great workstation card too. There aren't many questions left about its gaming performance; it's assumed it plays everything in that regard. Workstation cards usually suffer in gaming due to lower clocks all around. So if this thing runs well in the workstation apps you plan on running, why buy a pure workstation card and suffer? Haven't you ever wanted to have your cake and eat it too? :)
 


Why buy a workstation card? Well, for starters you get much, much better support from the manufacturer, and that's part of the reason you pay more. If you don't run a business where time is money, then you can get away with paying less for a gamer card with far less support.
 
[citation][nom]ojas[/nom]Aha! Been waiting for an article like this. Thanks! I must ask, though: was the Titan using all the compute resources available to it? There was that setting in Nvidia's Control Panel that let you switch between more double-precision float performance or more gaming performance. Was it enabled? http://www.tomshardware.com/review [...] 438-3.html[/citation]

I would like a reply to this if possible. It would be great to have the tests run with both settings and be able to compare which settings work best for which applications as well.
 
For all applications using FP64, this special CUDA switch in the control panel was enabled. I've checked the difference in specific financial operations (Monte Carlo option pricing) and you can increase performance by 25% or more, depending on the application. This switch primarily disables GPU Boost, but it works.

Take a look at this and compare the Titan, the GTX 690 and 680:

[Chart: SiSoft Sandra OpenCL Monte Carlo option pricing, FP64]

(This is from the upcoming workstation special with 21 cards.)
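
For the curious, here's a minimal sketch of the kind of FP64 Monte Carlo option-pricing kernel a benchmark like this stresses. It's not SiSoft's actual code, just a self-contained CUDA example with made-up option parameters:

[code]
// montecarlo_fp64.cu -- minimal FP64 Monte Carlo European call pricer.
// Illustrative only; option parameters and path counts are arbitrary.
#include <cstdio>
#include <cmath>
#include <curand_kernel.h>

__global__ void priceCallFP64(double *partialSums, long pathsPerThread,
                              double S0, double K, double r, double sigma,
                              double T, unsigned long long seed)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState state;
    curand_init(seed, tid, 0, &state);

    double sum   = 0.0;
    double drift = (r - 0.5 * sigma * sigma) * T;   // risk-neutral drift
    double vol   = sigma * sqrt(T);

    for (long i = 0; i < pathsPerThread; ++i) {
        double z  = curand_normal_double(&state);   // FP64 Gaussian sample
        double ST = S0 * exp(drift + vol * z);      // terminal asset price
        sum += (ST > K) ? (ST - K) : 0.0;           // call payoff
    }
    partialSums[tid] = sum;
}

int main()
{
    const int  threads = 256, blocks = 64;
    const long pathsPerThread = 4096;
    const int  n = threads * blocks;

    double *d_sums;
    cudaMalloc(&d_sums, n * sizeof(double));

    priceCallFP64<<<blocks, threads>>>(d_sums, pathsPerThread,
                                       100.0, 100.0, 0.05, 0.2, 1.0, 1234ULL);

    double *h_sums = new double[n];
    cudaMemcpy(h_sums, d_sums, n * sizeof(double), cudaMemcpyDeviceToHost);

    double total = 0.0;
    for (int i = 0; i < n; ++i) total += h_sums[i];
    double price = exp(-0.05 * 1.0) * total / (double(n) * pathsPerThread);
    printf("Estimated call price: %f\n", price);

    cudaFree(d_sums);
    delete[] h_sums;
    return 0;
}
[/code]

Kernels like this are almost entirely double-precision math, which is why the control-panel switch shows up so clearly in this test.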
 
I wish online publications would go back to zero-start charts for all data instead of picking and choosing, making some charts visually deceptive by starting the axis higher to make their point rather than letting the data speak. They will plead otherwise, but look closely at the charts: they don't all follow the same setup, even with the same data.
 
Perfect. I'm using a Quadro K5000 with 4GB now, running 3ds Max with iRay, and I need additional rendering power.

What most reviewers forget to mention is that with CUDA rendering and iRay, the entire 3D scene needs to be loaded into on-board memory, which makes all the 2GB and 3GB cards useless. Even the GTX 690 splits its 4GB over two GPUs, so nothing is gained there either. 6GB is totally awesome, and the GTX Titan is definitely the card for me.
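
If you want to check whether a scene has a chance of fitting before you hit render, the CUDA runtime can report per-GPU memory. This is just a generic sketch, not part of iRay; note that a dual-GPU card like the GTX 690 shows up as two devices, each with its own memory:

[code]
// vram_check.cu -- print total and free on-board memory per CUDA device.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);

        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        size_t freeBytes = 0, totalBytes = 0;
        cudaMemGetInfo(&freeBytes, &totalBytes);   // per device, not summed across a dual-GPU card

        printf("GPU %d (%s): %.0f MB free of %.0f MB total\n",
               dev, prop.name,
               freeBytes  / (1024.0 * 1024.0),
               totalBytes / (1024.0 * 1024.0));
    }
    return 0;
}
[/code]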
 
I was right... again...

CUDA is on its last dying breath and nobody cares about Titan. It sucked when it was released and sucks even more so now that people realize they paid over $1,000 for a card that, not even a year later, has been surpassed by a cheaper card (the 780 Ti, for example).

Oh nVIDIA fans, they're so much like Apple fans. Always falling for the marketing.
 


I guess you haven't seen the AMD fans falling for the Mantle and FreeSync PR then? :lol:
 


I'm stunned how little you know about CUDA. 😀 A friend of mine writes CUDA code for accelerating financial transactions on stock exchanges, except they have to use cards with ECC RAM because errors are unacceptable (not an issue with gaming, where an error just means a pixel might be the wrong colour). Plus, the return path from GPU to host is only PCIe x1 for gamer cards, whereas Teslas have a full-speed return path. There are other differences too, such as how the cache system works; things which are highly simplified or removed for gamer cards.
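
For what it's worth, the ECC difference is visible straight from the CUDA runtime; a small sketch (nothing exotic, just a device-properties query):

[code]
// ecc_check.cu -- report whether ECC is enabled on each CUDA device.
// GeForce/Titan cards report it disabled; Tesla (and some Quadro) parts can enable it.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("GPU %d (%s): ECC %s\n", dev, prop.name,
               prop.ECCEnabled ? "enabled" : "disabled or unsupported");
    }
    return 0;
}
[/code]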

As for my other comments, I'm the one who's bought the hw and done the tests, as linked elsewhere.

Ian.

PS. Late reply I know; can't remember now why I stopped following this thread, but never mind.

 
I see what you mean. 😀 Funny thing is, the very guy I mentioned who does CUDA for financial stuff is most interested in Titan because the code he writes absolutely depends on 64-bit CUDA; thus, for him, gamer cards are not an option anyway. Although Titan doesn't have ECC or other Tesla features, its 1/3-rate FP64 mode is ideal for letting him test code at home on a card that's reasonably affordable (he could never buy a Tesla for personal use, even though he's been optimistically hunting for a used K20).
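
If he (or anyone else) wants to see that mode doing its thing, a rough sketch like the one below times identical float and double loops and prints the ratio. It's not a rigorous benchmark (the kernel and loop counts are arbitrary), but the FP64:FP32 gap should shrink noticeably on Titan with the control-panel switch enabled:

[code]
// fp64_ratio.cu -- crude estimate of a card's FP64:FP32 throughput ratio.
#include <cstdio>
#include <cuda_runtime.h>

template <typename T>
__global__ void fmaLoop(T *out, int iters)
{
    T a = (T)1.0001, b = (T)0.9999, c = (T)0.5;
    for (int i = 0; i < iters; ++i)
        c = a * c + b;                               // arithmetic-bound loop
    out[blockIdx.x * blockDim.x + threadIdx.x] = c;  // keep the compiler from removing the loop
}

template <typename T>
static float timeKernelMs(int iters)
{
    const int blocks = 256, threads = 256;
    T *d_out;
    cudaMalloc(&d_out, blocks * threads * sizeof(T));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    fmaLoop<T><<<blocks, threads>>>(d_out, iters);   // warm-up launch
    cudaEventRecord(start);
    fmaLoop<T><<<blocks, threads>>>(d_out, iters);   // timed launch
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_out);
    return ms;
}

int main()
{
    const int iters = 1 << 20;
    float msFloat  = timeKernelMs<float>(iters);
    float msDouble = timeKernelMs<double>(iters);
    printf("FP32: %.2f ms, FP64: %.2f ms, ratio ~%.1fx\n",
           msFloat, msDouble, msDouble / msFloat);
    return 0;
}
[/code]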

It's pretty obvious looking at the Titan design that NVIDIA carefully positioned its features so it would stand out at least for a while as a top gamer card, while also not stepping on the toes of the Tesla product line too much. In that sense, it's an ideal CUDA developer card, even if not suitable for end-system deployment due to its lack of ECC.

Thus, Titan continues to be an excellent option for those who want 64-bit CUDA but can't afford a Tesla, even if it's been overtaken in 3D performance by the 780 Ti (RAM capacity is another issue, of course; Titan still has that advantage). Presumably some gamers are still buying Titans because of the RAM size, so with the 780 Ti priced lower, maybe that's why there's no 780 Ti with 6GB yet: as long as Titan is still selling, why bother?

I can't afford either, which is why I bought a bucket load of used 3GB 580s. 😀

Ian.

 