OpenCL And CUDA Are Go: GeForce GTX Titan, Tested In Pro Apps


badtaylorx

Distinguished
Apr 7, 2011
[citation][nom]Stephen Bell[/nom]I think there is one significant point that is overlooked in the conclusion of this review: 6 GB of RAM! You benchmarked iRay but didn't mention how, with that 6 GB, the Titan can do scenes that most other cards can't touch and would simply reject. Then there is the issue of viewing massive scenes. To show what I am talking about, go down the page a little and click on the landscape/ocean scene, which kills a normal video card but would probably fit inside a Titan. The Titan is faster than its pro origins, a fraction of the price, and has enough RAM on it to do serious work. I think it is a "no-brainer" as a workstation card – just buy it.[/citation]

How is that not what Tom's is saying here???
 

ericjohn004

Honorable
Oct 26, 2012
This card wasn't meant for this. It was meant for gaming. If Nvidia had made this perform like a workstation card, it would be, one, worse for gaming, and two, harder to sell real workstation cards if the Titan performed like one. So if Nvidia had optimized the drivers for this stuff, it would be a lose/lose for them.

I do appreciate, as always, the way Tom's benchmarked these cards. Their benchmarks always seem to be accurate.
 

metroid359

Honorable
Apr 17, 2013
The graphs need a baseline of zero. The difference looks huge visually, but that's because the range of scores/seconds is so magnified.
 

dgould

Honorable
Oct 23, 2012
I have read that AMD desktop cards can be soft-modded to host the drivers from AMD's workstation cards. Is this still accurate? It seems to me that a 6 GB 7970 with the workstation drivers would blow everything else out of the water, considering its $600 price point.
 

sammual777

Distinguished
Oct 23, 2011
[citation][nom]cravin[/nom]Yeah I don't see why people can't just mod the workstation (i.e. Quadro) drivers to work with 680s and Titans and stuff.[/citation]

Yeah, that hasn't worked since the 8800 GTX days. You can softmod the BIOS easily enough, but it doesn't give you the Quadro performance gains. The GTX cards' double-precision performance is physically limited to something like an eighth of the Quadro's.

And this comparison is pointless without a Quadro benchmark.
 

Rob Burns

Distinguished
Oct 9, 2010
This article has the potential to be something great: PLEASE include current Quadro and FirePro cards (Quadro 4000, 5000, K5000, FirePro W7000, 8000, 9000) in the same tests. Without the pro cards providing some context and grounding all of this information, this article doesn't help me make any kind of informed decision about buying cards for professional use. Also, PLEASE include V-Ray in your tests. 3ds Max and V-Ray are the industry-standard combo in architecture; no one uses Iray for anything. V-Ray also renders with CUDA and OpenCL, so it would be good to see both modes covered. A comprehensive comparison of gaming and pro cards in professional apps, including V-Ray, would be the most important article I have ever read on TH, and one I've been waiting over eight years to see.
 

bit_user

Polypheme
Ambassador
[citation][nom]cmi86[/nom]Ya titan is also 150% more expensive than the 7970 which also puts them in "different leagues" as the writer said.[/citation]Going by current Newegg prices, the cheapest Titan card is over 2.2x as expensive as the cheapest HD 7970.

So, when you say 150% more, it's actually 122% more. But most people would say it's 222% of the price of an HD 7970.
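
To spell that arithmetic out, here's a minimal Python sketch; the prices are hypothetical, picked only to match the roughly 2.22x ratio above:

[code]
# "Percent more expensive" vs. "percent of the price" are different numbers.
# Prices below are hypothetical, chosen only to match the ~2.22x ratio quoted above.
titan_price = 999.0    # assumed GTX Titan street price, USD
hd7970_price = 449.0   # assumed HD 7970 street price, USD

ratio = titan_price / hd7970_price      # ~2.22
percent_of = ratio * 100                # ~222% of the 7970's price
percent_more = (ratio - 1.0) * 100      # ~122% more than the 7970's price

print(f"{percent_of:.0f}% of the 7970's price, i.e. {percent_more:.0f}% more")
[/code]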

And is it just me, or did high-end GFX card prices increase in recent months?
 

FormatC

Distinguished
Apr 4, 2011
If you're missing the workstation cards, please read the first page again :D

Today's story also serves as a preview for a big workstation graphics card round-up we have coming up with all of the new Kepler-based Quadro cards. We're going to use the same benchmarks (and a lot more) to compare two generations of Nvidia and AMD offerings. Right now, we're still sorting out some driver issues that show why it's so important for these companies to seek out certifications for their premium products. You'll see us add the results from these gaming cards to that piece, too.

Overall you will get 22 cards: the new Kepler Quadros, all of the Fermi Quadros, the complete FirePro W- and V-series, and seven consumer cards, with the GeForce GTX 690 and a mysterious Radeon HD 7xxxx included. 67 charts and over 180 hours of benchmarking (also with FP32, FP64, and F@H), but I'm just waiting for the last card :)

For all: please read the introduction

@Rob Burns:
Unfortunately I did not get a license for Vray and it is not my style to benchmark with cracked software. :(

 

falchard

Distinguished
Jun 13, 2008
[citation][nom]k1114[/nom]Why are there not workstation cards in the graphs?[/citation]
Because they would perform several orders of magnitude better than these consumer cards in professional design suites.
The GTX Titan was designed for consumers in the upcoming era of GPGPU calculation in consumer software like games.
 

wiyosaya

Distinguished
Apr 12, 2006
[citation][nom]DoDidDont[/nom]I use 3ds Max mainly for ultra-high-detail hard-surface 3D modelling, rigging and production rendering. These results prove that the Titan is more than twice as fast as the rest of the cards tested here when it comes to rendering in CUDA-based apps like iray, Blender and most likely V-Ray too. Even if the Titan had turned out to be on par with the older-gen GTX cards, which it didn't, the 6 GB of onboard memory for me is an absolute must. My current 3 GB GTX 580s are almost maxed out on VRAM because of the high level of detail I model at, and that's before texturing, so the alternative is buying an older-generation card like the Quadro 6000 or Tesla C2075 at over twice the price of the Titan, or spending more than three times the price of the Titan on the newer Tesla K20/X.

* Good viewport performance.
* Great gaming performance.
* More than twice as fast as the older GTX cards in CUDA-based production rendering.
* 6 GB of onboard memory for huge data sets, at less than half the price of the older 6 GB Quadro/Tesla cards.

This card is a win-win-win for the apps I use, so the "its price is just too high for the performance it offers in professional applications" remark is completely wrong. Would you rather spend £7600 on 4x older 6 GB Tesla cards in your render node, or spend £3300 on 4x Titans and get over twice the performance? Do the math.

I understand the advantages of Quadro/Tesla cards: optimised drivers, higher-yield chips, better stability and durability. But take the GTX 480 vs. Quadro 6000 as an example: up to 30% extra viewport performance for over 700% of the cost. From a business point of view the math just doesn't add up. I have owned Quadro cards in the past, and always ended up being disappointed by the very slight viewport performance increase over the desktop equivalent, feeling I had just wasted a lot of cash for nothing. One of the mechanical models I am working on has over 30 million polygons so far, and the GTX 580 throws it around the viewports with ease.

For gaming, yes, this card is overpriced and you are better off getting a cheaper SLI/CrossFire configuration, but for some professionals who need fast render times and work with large data sets, this card will be a much cheaper and faster option than spending a lot more cash on Quadro/Tesla cards. I already ordered two Titans this morning. I will order another two when my other kidney sells on eBay.[/citation]
At last, someone with a real perspective on what this card is capable of, and, I think, who is also well within the target market for this card. If I could +1 you infinitely, I would. LOL

I, too, once bought a pro card, and vowed to never, never, never again do so. IMHO, they are not worth what you get out of them vs the price for the card. As I see it, they are simply an excuse to charge exorbitant prices for what essentially amounts to the same hardware and drivers.

IMHO, this is NOT a gamer card or a desktop applications card; rather, it is a card aimed at situations where people want Tesla-like performance without breaking the bank. As I see it, another great use of this card is in HPC, where DP compute power is a requirement but the budget for Teslas does not exist.

If you are a gamer or a desktop application user, then spending so much on this card seems pointless to me.
 

wiyosaya

Distinguished
Apr 12, 2006
The SolidWorks benchmark sends a pretty obvious message: don't spend $1,000 on a Titan for your workstation if this app is important to you. Engineers making money with Dassault's software need to go about this the right way and snag a professional graphics card.
I really hate to put it this way, but do you actually have a clue what you are talking about?

This SW benchmark you are using is a rendering benchmark. In most cases, no engineer would ever use SolidWorks in a realistic rendering mode unless they are working on marketing material or a presentation for the boss.

SolidWorks, in case you are unaware, is also used for FEA and the like, and it is capable of producing data that will estimate the life of a part based on the possible loads the part will see while it is operating. It is also capable of fluid dynamics, magnetics, heat transfer, and other real-world simulations. And guess what? The part of SolidWorks that performs those calculations is CUDA-enabled. Do some research, man; find this out for yourself. So, by showing only a rendering benchmark, you have completely overlooked where this card has the potential to blow away many other cards when used with SolidWorks. Without an FEA benchmark of some sort, you guys are out of your league, and this review reminds me of one written by a child with a new toy.

My apologies for the rant, but you guys are far from experts in this.
 

mapesdhs

Distinguished
[citation][nom]zero2dash[/nom]I'm baffled as to why you did not include anything from Adobe CS6 and instead used 3 gamer benchmarks by Unigine. I thought this was a "pro apps" comparison?[/citation]

I agree, a CS6 AE CUDA test would have been very interesting.

Also, Igor, is there any chance you could at some point try running this test, please? If you could include it in the forthcoming pro-cards roundup, that'd be great!

http://forums.creativecow.net/thread/2/1019120

Source aep:

http://www.teddygage.com/AEBENCHCS6/CS6%2011.1%20RAYTRACE%20BENCHMARK.aep

Ian.

 

mapesdhs

Distinguished
[citation][nom]metroid359[/nom]The graphs need a baseline of zero. The difference looks huge visually, but that's because the range of scores/seconds is so magnified.[/citation]

I was going to mention this earlier. Not using an origin of 0 is very bad statistical practice. No matter how one might label a graph or include descriptive text, it's impossible to stop a viewer from inferring an initially incorrect conclusion from the visual graphic alone.

Please don't use graphs starting at non-0 origins.
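
To illustrate, here is a minimal matplotlib sketch (the scores are made up purely for illustration) showing how a truncated y-axis exaggerates a small difference, compared with the same data plotted from zero:

[code]
# Minimal sketch: the same two made-up scores plotted with a truncated
# y-axis (misleading) and with a zero baseline (in proportion).
import matplotlib.pyplot as plt

cards = ["Card A", "Card B"]
scores = [96, 100]  # hypothetical benchmark scores, ~4% apart

fig, (ax_bad, ax_good) = plt.subplots(1, 2, figsize=(8, 3))

ax_bad.bar(cards, scores)
ax_bad.set_ylim(95, 101)   # truncated axis: the ~4% gap looks enormous
ax_bad.set_title("Baseline at 95 (misleading)")

ax_good.bar(cards, scores)
ax_good.set_ylim(0, 110)   # zero baseline: the gap is shown in proportion
ax_good.set_title("Baseline at 0")

plt.tight_layout()
plt.show()
[/code]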

Ian.

 

mapesdhs

Distinguished
Btw, it's shocking how slow some of the results are compared to even just a simple Quadro 600. Note also that ProE is very CPU-limited; it runs better on a highly oc'd dual-core than on a typical stock quad-core. See my data:

http://www.sgidepot.co.uk/misc/viewperf.txt

Your best result for ProE is 3.9 with a 7970; I get 10.84 with a Quadro 600 and a cheap i3-550. :D

Those who say gamer cards are a good alternative are making a terrible generalisation. For some tasks, pro cards are far better because the driver differences are really important. Games don't need AA lines. Pro apps don't need 2-sided textures.

As for Tesla cards, they don't work the same way; e.g., the return path from the GPU to the host is much faster than on gamer cards.

Ian.

 

Aaron Sims

Honorable
Apr 5, 2013
I would also love to see a comparison with some Quadro, Tesla, and FirePro cards, especially the Nvidia Tesla K20 and Quadro 6000, as I understand they use the same GPU (or a very similar one), just with different architecture and drivers.

It's really hard to find an up-to-date comparison of workstation graphics cards, and if you do find one, it almost never compares consumer/gaming cards to pro-grade cards.
Cheers
 

pablo99

Honorable
Apr 18, 2013
Very confused about what that actually showed. Pro apps?? I run a creative media post house and we use Photoshop, After Effects, Premiere Pro, Cinema 4D, and color grading with Resolve. Where was the comparison of Adobe's Mercury engine? Why no C4D? Resolve is now the de facto coloring tool in the majority of post houses, and the Lite version is free; it's perfect for benchmarking.

Any chance you could do a round of benchmarks with the above in mind? That would be very helpful.
 

Aaron Sims

Honorable
Apr 5, 2013
[citation][nom]cravin[/nom]Yeah I don't see why people can't just mod the workstation (i.e. Quadro) drivers to work with 680s and Titans and stuff.[/citation]

Apparently you can; you just need to change a few resistors (a physical solder job).
People have been reporting hacking a GTX 690 to run as a Quadro K5000 or Tesla K10, and it seems like you could probably do the same with the Titan to turn it into a K20 or Quadro 6000.
 

FormatC

Distinguished
Apr 4, 2011

One piece of information for the expert:
You should know that SolidWorks 2013 does not work with consumer cards! I've tested all of the workstation cards with 2013 for the next article, but without certified drivers and hardware you can't use it.

 