Nvidia GeForce GTX Titan 6 GB: GK110 On A Gaming Card

[citation][nom]blazorthon[/nom]Tesla K20X double-precision specification: 1.31 TFLOPS. If Titan has 1.5 TFLOPS in double precision, then it might actually beat the Tesla K20X for whatever doesn't need the professional/enterprise features supported by the Tesla. For that sort of job, maybe it will be worth the money and then some, so long as it's a job that AMD doesn't excel at (for Titan's price, you can get two 7970s) or can't do (such as a program that supports CUDA but doesn't support DirectCompute or OpenCL, at least not as well as it supports CUDA). However, that makes it a very small niche product at best, doesn't it?[/citation]
Apparently Titan's boost clock is disabled while its full FP64 performance is enabled, so its DP performance is actually about the same as the K20X at ~1.3 TFLOPS, from what I've read on AnandTech. This doesn't exactly make sense to me, and it implies that, in addition to disabling boost, it also underclocks the base a bit, but I've heard no mention of that being the case.
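
As a sanity check on those numbers, here's a quick back-of-the-envelope sketch. The 1/3-rate FP64 and the clocks here are the commonly reported GK110 figures, so treat this as an estimate, not gospel:

[code]
// fp64_estimate.cpp -- rough theoretical FP64 throughput for GK110 cards.
// Peak FP64 FLOPS = FP64 units * 2 FLOPs per FMA * clock.
// GK110 reportedly exposes FP64 units at 1/3 the FP32 core count.
#include <cstdio>

double fp64_tflops(int fp32_cores, double clock_mhz) {
    double fp64_units = fp32_cores / 3.0;              // 2688 / 3 = 896 DP units
    return fp64_units * 2.0 * clock_mhz * 1e6 / 1e12;  // FMA = 2 FLOPs per cycle
}

int main() {
    printf("Titan @ 837 MHz base: %.2f TFLOPS\n", fp64_tflops(2688, 837.0)); // ~1.50
    printf("Titan @ ~725 MHz:     %.2f TFLOPS\n", fp64_tflops(2688, 725.0)); // ~1.30
    printf("K20X  @ 732 MHz:      %.2f TFLOPS\n", fp64_tflops(2688, 732.0)); // ~1.31
    return 0;
}
[/code]

If the reported clock drop in full-DP mode is real, that would reconcile this article's 1.5 figure (at the 837 MHz base clock) with AnandTech's ~1.3.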

As for GeForce Titan being a niche product, I can certainly imagine that being the case, but no more so than something like a Quadro card in the use cases I talked about. In fact, I think its target market could be wider than Quadro's, as none of the Quadro users I know personally use ECC on a regular basis (if at all), and in addition to offering a large frame buffer and excellent compute performance, it also excels at gaming (something that, I can tell you from personal experience, Quadros aren't especially good at). But then there are the drivers to consider; Quadro drivers are highly optimized for a different set of workloads than Titan's GeForce drivers, which can make a big difference from what I've seen. I don't know exactly how broad the appeal of Titan will be, but like I said before, I don't see much gaming-specific value in it. That's unfortunate, since it's being marketed primarily as a high-end gaming card.
 
Chris, could you clear up an FP64 performance discrepancy I've noticed between this article and one by AnandTech? According to AnandTech, Nvidia's "official compute performance figures" for Titan are 1.3 TFLOPS FP64, while this article says 1.5 TFLOPS. Could you confirm the correct figure?
 
Most of you are limited to 1080p gaming, which works for you and your budgets. There are a few of us who game at a lot higher resolutions than that, with multi-monitor setups at 2560x1600 and above, like myself. It's a yearly hobby to upgrade, and this card fits into that category for those who want the best in graphics and some bragging rights. I would say that even at 5760x1080 this card is overkill. This will be a nice upgrade from my two 680s in SLI at 2560x1600; I may get two more 30-inch displays just because of this card.
 
[citation][nom]dragonsqrrl[/nom]Apparently Titan's boost clock is disabled while its full FP64 performance is enabled, so its DP performance is actually about the same as the K20X at ~1.3 TFLOPS, from what I've read on AnandTech. This doesn't exactly make sense to me, and it implies that, in addition to disabling boost, it also underclocks the base a bit, but I've heard no mention of that being the case. As for GeForce Titan being a niche product, I can certainly imagine that being the case, but no more so than something like a Quadro card in the use cases I talked about. In fact, I think its target market could be wider than Quadro's, as none of the Quadro users I know personally use ECC on a regular basis (if at all), and in addition to offering a large frame buffer and excellent compute performance, it also excels at gaming (something that, I can tell you from personal experience, Quadros aren't especially good at). But then there are the drivers to consider; Quadro drivers are highly optimized for a different set of workloads than Titan's GeForce drivers, which can make a big difference from what I've seen. I don't know exactly how broad the appeal of Titan will be, but like I said before, I don't see much gaming-specific value in it. That's unfortunate, since it's being marketed primarily as a high-end gaming card.[/citation]

If that's true, then the number from this Tom's article would be wrong.

My only concern over how wide an applicable market Titan has is that CUDA seems to be losing a little traction to DirectCompute and OpenCL. Like I said earlier, two 7970s can be had for around the same price as Titan, and they have quite an advantage (at least in theoretical performance) so long as they're supported. If Titan is really only about 1.3 TFLOPS in double precision, then that gives the 7970s an even greater potential advantage. Even worse, the 7970s have far more gaming performance for the money, albeit at much higher power consumption for their advantages.
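
To put rough numbers on that theoretical advantage (a sketch, using Tahiti's commonly cited 1/4-rate FP64 and the 7970's 925 MHz reference clock):

[code]
// tahiti_estimate.cpp -- rough theoretical FP64 throughput for HD 7970 (Tahiti).
// Tahiti: 2048 stream processors, FP64 at 1/4 the FP32 rate.
#include <cstdio>

int main() {
    double tahiti = 2048 * 2.0 * 925e6 / 4.0 / 1e12;        // ~0.95 TFLOPS each
    printf("One HD 7970:  %.2f TFLOPS FP64\n", tahiti);
    printf("Two HD 7970s: %.2f TFLOPS FP64\n", 2 * tahiti); // ~1.89, vs Titan's ~1.3-1.5
    return 0;
}
[/code]

Of course, those peak numbers only matter where OpenCL or DirectCompute support actually exists, which is the whole point.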

My point is that although Titan will probably give people an *affordable* alternative to Quadro/Tesla for work that doesn't need the professional/enterprise features and drivers, I'm not sure if it can do that job better than AMD's Tahiti cards for most common relevant workloads now and in the future. Do you think that you can give a better perspective on this than I have?
 


Money is obviously no object in this post, and you probably know this already, but if you want to upgrade from two 680s to Titan, you'll have to get at least two Titans or it'd be a downgrade. $2,000 is a lot of money to buy in, considering how much more performance you'd really get.

Also, with a few games such as Crysis 3 being too much for a very smooth experience on even a single 680 or 7970 at 2560x1440 and 2560x1600, I think Titan may have a shot at being the first single-GPU card that's truly enough at those resolutions in any current, reasonable situation. Maybe. I wouldn't call a single Titan overkill for 5760x1080.
 
It's a luxury item if you really want the fastest card possible. Not bad, actually, but it's hard to see these flying off the shelves...
Most previews claim about 30% more speed than a 7970, but with extreme cooling it's possible to get much, much more from this! It's easy to see that with liquid nitrogen (LN2) this could achieve nice results... but that's not for normal home users...
 
[citation][nom]TheBigTroll[/nom]This GPU would have cost Nvidia a fortune to make from the start due to the huge die size. It wasn't meant to be cheap either way.[/citation]

It's only a little bigger than GF100, and it's not Nvidia's largest die (that would be the one used in the GTX 280, IIRC). I'm not saying that it couldn't have had problems; it undoubtedly does. Just that size is not the only factor.
 
Has anyone ever wondered why they put so much engineering into how a card looks, only to mount them upside down? With a card costing $1K, it seems like they would put a little thought into the part we always look at. I'm just wondering why they put so much effort into the side we never see. The people who would buy a $1K card have their water-cooled computers all kitted out with neon. I just think it would be nice if the underside wasn't so ugly. It makes me wonder if they realize we have to flip it over to install it. Has anyone else wondered the same thing? Oh, and why do they call them apartments when they're all stuck together? And why do you park in a driveway and drive on a parkway?
 
What I'd like to hear more about is how this card will do on CUDA workloads. It seems like having that whopping 6 GB of onboard memory will let you both run larger calculations and swap data between the CPU and GPU less often (i.e., you won't prematurely run out of GPU memory, which is a concern I have about the 690, where each GPU only sees an effective 2 GB of VRAM). It's my understanding that this memory swapping is a major bottleneck in GPU calculations. Thoughts?
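
To make the concern concrete, here's a minimal sketch of the kind of check and timing you'd do before committing to a big allocation. These are standard CUDA runtime calls; the 256 MB buffer size is arbitrary:

[code]
// vram_check.cu -- check free VRAM, then time a host->device copy.
// Compile with: nvcc vram_check.cu -o vram_check
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    size_t freeB = 0, totalB = 0;
    cudaMemGetInfo(&freeB, &totalB);   // how much VRAM is actually available
    printf("VRAM: %zu MB free of %zu MB\n", freeB >> 20, totalB >> 20);

    const size_t N = 256 << 20;        // 256 MB test buffer (arbitrary size)
    void *dev = nullptr;
    float *host = (float*)malloc(N);
    if (cudaMalloc(&dev, N) != cudaSuccess) { printf("out of VRAM\n"); return 1; }

    cudaEvent_t start, stop;
    cudaEventCreate(&start); cudaEventCreate(&stop);
    cudaEventRecord(start);
    cudaMemcpy(dev, host, N, cudaMemcpyHostToDevice); // the PCIe transfer in question
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Copied %zu MB in %.2f ms (~%.1f GB/s over PCIe)\n",
           N >> 20, ms, (N / 1e9) / (ms / 1e3));

    cudaFree(dev); free(host);
    return 0;
}
[/code]

The more of the working set that fits in VRAM, the less often you pay that copy cost, which is exactly where Titan's 6 GB should help versus the 690's effective 2 GB per GPU.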
 
To make things worse: Nvidia thinks this is a luxury item, and buyers are buying it as a collector's item, but this card does NOT come with a luxury warranty, with components built and guaranteed to last more than 5-7 years. What good is a collector's item that fails and can't be replaced 3 years later?

The GTX 690 does not have a 5-7 year warranty. Buying an extended warranty is not going to guarantee that the manufacturer will replace it with the same card later. They'll probably find a mainstream card of similar speed and replace it with that.
 
[citation][nom]tomfreak[/nom]To make things worse: Nvidia thinks this is a luxury item, and buyers are buying it as a collector's item, but this card does NOT come with a luxury warranty, with components built and guaranteed to last more than 5-7 years. What good is a collector's item that fails and can't be replaced 3 years later? The GTX 690 does not have a 5-7 year warranty. Buying an extended warranty is not going to guarantee that the manufacturer will replace it with the same card later. They'll probably find a mainstream card of similar speed and replace it with that.[/citation]

I'd rather get a new, similarly or better-performing card than another old one if my card fails. If I saw it as a collector's item, then I probably wouldn't be using it.
 
Hell no!!!! Back when Nvidia released GF110, they only charged $500+ for their top-of-the-line GPU... and now twice that?? GK104 was supposed to be a midrange chip, and now they're charging a high-end price for it... If they charged $600 to $700 it would be understandable, but $1K is totally unacceptable... If GK110 can be made into a dual-GPU card, how much do y'all think they'll charge for it?? $2K??? As a GTX 670 user, and a fanboy as well... I'm very, very disappointed with Nvidia this time... I would rather buy an HD 7970 instead if I didn't already have a 670 in my machine...
 


You do realize that the Fermi card had 3 billion transistors and GK104 has 3.5 billion, while Titan's GK110 has 7.1 billion. Meaning it's slightly plausible that you'd pay double the cash for double the transistor count.

Also, GK110 is the biggest chip (in terms of die size) that Nvidia has made, meaning flaws are going to cut down the number of chips suitable for Titan. In the end, it isn't cheap enough to put in the $500 price bracket. Ever.
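
For what it's worth, here's a rough dies-per-wafer and yield sketch. The die areas are the commonly reported approximations for GK104 (~294 mm²) and GK110 (~551 mm²), and the defect density is purely an illustrative assumption:

[code]
// die_yield.cpp -- why big dies cost more: fewer candidates per wafer AND worse yield.
// Yield model: simple Poisson, yield = exp(-defect_density * die_area).
#include <cstdio>
#include <cmath>

int main() {
    const double PI      = 3.14159265358979;
    const double wafer_d = 300.0;            // mm, standard wafer diameter
    const double d0      = 0.004;            // defects per mm^2 -- assumed, illustrative
    const double areas[] = { 294.0, 551.0 }; // ~GK104 vs ~GK110 die area (mm^2, approx.)

    for (double a : areas) {
        double r = wafer_d / 2.0;
        // Classic dies-per-wafer approximation:
        double dies  = PI * r * r / a - PI * wafer_d / sqrt(2.0 * a);
        double yield = exp(-d0 * a);         // fraction of dies with no fatal defect
        printf("%.0f mm^2: ~%.0f dies/wafer, ~%.0f%% yield -> ~%.0f good dies\n",
               a, dies, yield * 100.0, dies * yield);
    }
    return 0;
}
[/code]

Even with generous assumptions, the big die gets hit twice: you fit roughly half as many candidates on a wafer, and a much smaller fraction of them survive defects.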
 


GK110 is not Nvidia's biggest GPU. GT200a from the GTX 280 is Nvidia's biggest GPU.
 
Alright. After mulling this over, here is what I could come up with. (Benchmarks later this week will help a lot.)

Background: I have two GTX 670s in SLI, but I purchased late in the cycle. Both were purchased in mid-fall 2012 (yes, a few months ago), and I was looking to grab the new cards when they released, presumably the GTX 780. This is why I purchased the 670 vs. the 680 or a 690. I paid $800 total.

Let's play pretend a bit. All accounts pointed to the GTX 780 being a refresh: a 15-25% performance increase over the GTX 680, nothing amazing, but solid. Current guesstimates, with no benchmarks, put Titan at an 80% increase over the GTX 680. That is amazing. Those are the numbers I am going to use until confirmation from Tom's benchmarks comes in just a few short days.

Playing pretend, assuming a lot: let's say you upgrade video cards every year on the new model's release. You bought two GTX 780s because, in this pretend land, they just came out. You rock two in SLI until the next cards release (Maxwell, one year from now). Let's pretend the GTX 780s are $600 each; you have $1,200 wrapped up in cards. Let's say you sell both cards for $600 total after one year of use and purchase two Maxwell cards at $600 apiece. From today through Maxwell you are $2,400 deep on graphics cards, but you recouped $600 from selling your GTX 780s, for a net of $1,800 spent on your hobby (see the sketch below for the math). Let's say Maxwell is a whopping 50% increase over the GTX 780 (a big boost). How much faster would it be than GTX Titan? Hmmm...
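
Here's that math run straight through; every dollar figure is the scenario's assumption, just plugged in:

[code]
// upgrade_math.cpp -- the two purchase paths from the scenario above.
#include <cstdio>

int main() {
    double path_a = 2 * 600.0    // two GTX 780s at launch (assumed $600 each)
                  + 2 * 600.0    // two Maxwell cards a year later (assumed $600 each)
                  - 600.0;       // resale of both 780s (assumed $600 total)
    double path_b = 2 * 1000.0;  // two Titans now, kept through Maxwell

    printf("780 -> Maxwell path: $%.0f net\n", path_a);      // $1800
    printf("Titan-now path:      $%.0f net\n", path_b);      // $2000
    printf("Premium for Titan:   $%.0f\n", path_b - path_a); // $200
    return 0;
}
[/code]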

So, no more pretend time. GTX Titan, as of next week, is here. $2,000 gets you going with two of them, now. You can skip the GTX 780, as I can't imagine it will be faster than Titan; it will probably be 25% faster than the GTX 680 if it's based on the same chip. After all, Titan is using a monster GPU! The 780 will also cost less than Titan, and probably have less memory too.
So Maxwell comes out. If you are within 10% of Maxwell's performance, maybe you say, "bah, I don't need to upgrade" (sensibility kicks in for the first time in this whole process). Maxwell has to be 18-24 months out, with no delays, if the GTX 780 is slated for a summer/fall release. Is paying a premium now for two, maybe even three years of best-available performance that crazy? Especially if the premium is maybe $200 vs. the normal process of upgrading and selling cards when the new ones come out?

What if the GTX 780 is delayed longer? What if Maxwell is delayed further? What if AMD fails to do anything for a while to push Nvidia? What if this card sits at the top of the food chain (in multi-GPU setups) for a couple of years? It won't be so bad having such kick-ass performance for such a long time. Plus, if you bought two, you could always add a third if you needed or craved more performance and nothing substantially better was available (talking 2-3 years out).

Sure, comparing one GTX Titan to dual-GPU cards and multi-GPU setups doesn't make much sense. But put two in SLI, and make the giant assumption that 2x GTX Titan is 80% faster than 2x GTX 680, and that is huge! Big enough that, considering everything else, there might be a chance a pair of these bad boys sits at the top of the hill for two whole years. That is crazy talk!

I feel like we are getting a chance to buy future tech now. Sure, there is a premium. Yes, $2,000 is a lot for graphics. But it doesn't sound completely absurd. I'm still waiting on benchmarks myself, but I would rather gamble on an 80% increase over the GTX 680 now for $2K (assuming speed and price are 100% true) than guess when and what the next two flagship cards from Nvidia will be. Presumably the GTX 780 and GTX 880.

Anyone got any thoughts on my way-too-long assessment? Am I trying too hard to talk myself out of $2K and into two Titans? LOL
 
[citation][nom]phenom90[/nom]Hell no!!!! Back when Nvidia released GF110, they only charged $500+ for their top-of-the-line GPU... and now twice that?? GK104 was supposed to be a midrange chip, and now they're charging a high-end price for it... If they charged $600 to $700 it would be understandable, but $1K is totally unacceptable... If GK110 can be made into a dual-GPU card, how much do y'all think they'll charge for it?? $2K??? As a GTX 670 user, and a fanboy as well... I'm very, very disappointed with Nvidia this time... I would rather buy an HD 7970 instead if I didn't already have a 670 in my machine...[/citation]Not to mention that if you look at the VRM/PCB of the 480/580, you actually see similar components (like six VRM phases and vapor chambers) versus Titan, due to the similar TDPs. Sure, 28 nm is an expensive process, but I do NOT believe it costs an extra $500 vs. 40 nm back at the GTX 480's launch date (at that time, 40 nm was also new, and yields were probably worse).

 
People, the benchmarks are out; if you're going crazy, just look them up. Hours ago I Googled "Nvidia Titan" and there were well over 100 articles. Tom's is respectable and is going to wait, then publish.

But there is nothing to know, really. Titan uses approximately 30 percent more power (250 vs. 195 watts) than a GTX 680 and is a bit more efficient per watt. So there is your rough graphical calculation: it's not 80% more powerful (I wish it were 80%, or at least less extra $$, though). Also of note: it scores nowhere near 7107 in 3DMark 11's Extreme preset. LOL, what a joke that was.
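
A rough version of that calculation; the perf-per-watt gain here is an assumed figure, just to show the ballpark:

[code]
// speedup_estimate.cpp -- rough expected speedup from power and efficiency.
// If Titan draws ~28% more power and is somewhat more efficient per watt,
// expected speedup = power ratio * perf-per-watt ratio.
#include <cstdio>

int main() {
    double power_ratio = 250.0 / 195.0;  // ~1.28x the GTX 680's board power
    double eff_gain    = 1.10;           // assume ~10% better perf/watt (illustrative)
    printf("Expected speedup: ~%.0f%%\n", (power_ratio * eff_gain - 1.0) * 100.0);
    // ~41% -- in the same ballpark as early reviews, nowhere near 80%.
    return 0;
}
[/code]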
 