Nvidia GeForce GTX Titan 6 GB: GK110 On A Gaming Card

[citation][nom]blazorthon[/nom]If that's true, then the number from this Tom's article would be wrong. My only concern over how wide of an applicable market Titan has is that CUDA seems to be losing a little traction to DirectCompute and OpenCL. Like I said earlier, two 7970s can be had for around the same price as Titan, and they have quite an advantage (at least in theoretical performance) so long as they're supported. If Titan is really only about 1.3 TFLOPS for double precision, then that gives the 7970s an even greater potential advantage. Even worse, the 7970s have far more gaming performance for the money, albeit at much higher power consumption for their advantages. My point is that although Titan will probably give people an *affordable* alternative to Quadro/Tesla for work that doesn't need the professional/enterprise features and drivers, I'm not sure if it can do that job better than AMD's Tahiti cards for most common relevant workloads now and in the future. Do you think that you can give a better perspective on this than I have?[/citation]
The problem with multi-GPU setups is that many of the content creation applications out there don't take advantage of the second GPU for viewport rendering or acceleration, so generally a single powerful GPU is preferable. For instance, none of the software I use on a regular basis can utilize more than one GPU; SLI and CrossFire simply aren't supported. Personally, I've never really been a big fan of multi-GPU setups, even for gaming.
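For context, here's a minimal sketch of how a CUDA application even sees additional GPUs. Nothing in it is specific to any one package; whether an app then distributes work beyond device 0 is entirely up to its developers, which is exactly why so few content creation tools do it:

[code]
// Minimal sketch: enumerating CUDA devices. An application that only
// ever targets device 0 (the default) effectively ignores every other
// GPU in the system, SLI or not.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable devices found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, %d SMs, %.0f MB\n", i, prop.name,
               prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }
    // Using a second GPU requires explicitly switching to it:
    // cudaSetDevice(1);
    return 0;
}
[/code]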

AMD's Tahiti cards offer fantastic raw performance from what I've seen, but I think CUDA, and to a lesser extent driver support, have been some of the main obstacles to adoption. For instance, some applications, such as Premiere CS5 and 5.5, rely on CUDA for hardware acceleration. I think this changed to OpenCL in CS6, but I'm not certain. The software/driver ecosystem is also extremely important, and I think this is an area Nvidia has traditionally pursued much better than AMD. The push AMD is making with Tahiti and GCN looks very promising, and I think as more people realize they're becoming a viable alternative to Nvidia and CUDA, adoption rates will increase. Personally, I don't have a lot of experience running applications like Maya, Mudbox, or After Effects on AMD cards, but I know that during our last upgrade cycle my program chose to go with Quadro 5000s because they were the superior option at the time for the applications we use.
 
Not everything big is a success. The Titan represents a desperate attempt to claim the fastest single GPU and to sell off remaining expensive Tesla parts under the guise of a "high end" gaming GPU. Its price point is probably going to be the death knell that fells this Titan, or sinks it like the Titan-ic.

It only has one value, and that is in enthusiast gaming rigs for epeen status. Two Sapphire Vapor-X 6GB 7970 GE cards give better performance with over $250 kept on hand, not that 6GB cards have a market outside of extreme-resolution or Catleap setups. Even so, most high-end gamers will favor single or dual high-end GPUs, as the costs and diminishing returns don't justify spending more. The Titan's only calling is extreme resolutions and GPGPU, which is a limited market in its own right. If a gamer is running a single 1920x1080 or 2560x1440 display, this card represents very poor value for money.
 
[citation][nom]dragonsqrrl[/nom]The problem with multi-GPU setups is that many of the content creation applications out there don't take advantage of the second GPU for viewport rendering or acceleration, so generally a single powerful GPU is preferable. For instance, none of the software I use on a regular basis can utilize more than one GPU; SLI and CrossFire simply aren't supported. Personally, I've never really been a big fan of multi-GPU setups, even for gaming. AMD's Tahiti cards offer fantastic raw performance from what I've seen, but I think CUDA, and to a lesser extent driver support, have been some of the main obstacles to adoption. For instance, some applications, such as Premiere CS5 and 5.5, rely on CUDA for hardware acceleration. I think this changed to OpenCL in CS6, but I'm not certain. The software/driver ecosystem is also extremely important, and I think this is an area Nvidia has traditionally pursued much better than AMD. The push AMD is making with Tahiti and GCN looks very promising, and I think as more people realize they're becoming a viable alternative to Nvidia and CUDA, adoption rates will increase. Personally, I don't have a lot of experience running applications like Maya, Mudbox, or After Effects on AMD cards, but I know that during our last upgrade cycle my program chose to go with Quadro 5000s because they were the superior option at the time for the applications we use.[/citation]

With the next-gen consoles coming with a weak, downclocked Bulldozer + GCN combo, you are going to see games start putting DirectCompute and OpenCL to work. Suddenly, CUDA's future in the gaming world looks a lot worse.
 
Our beef is with its stratospheric price tag, which limits the Titan to small form factor gaming boxes

WHAT DOES THIS MEAN? Every "review" of this Titan has suggested putting the card into a mini-ITX case; I am baffled as to why everyone keeps suggesting this.
 
[citation][nom]Scotty99[/nom]WHAT DOES THIS MEAN? Every "review" of this Titan has suggested putting the card into a mini-ITX case; I am baffled as to why everyone keeps suggesting this.[/citation]

Because of its huge cost, it doesn't make sense in a larger form factor case that has room for multiple cheaper cards, such as two or three 670s. That is why you see it put in small form factor cases, where its performance at its power consumption can make a difference given the limited cooling capability of such builds.
 
It's just a marketing weapon and a state-of-the-art piece of engineering. Nvidia likes to carry the top-performance banner, and this chip will probably hold it for a long time.

In the world of reasonable PC building, this part could not exist 😛
 
[citation][nom]cypeq[/nom]It's just a marketing weapon and a state-of-the-art piece of engineering. Nvidia likes to carry the top-performance banner, and this chip will probably hold it for a long time. In the world of reasonable PC building, this part could not exist[/citation]

I did not even think of this (probably because I think SLI is silly anyway), but that is probably why Nvidia is suggesting reviewers focus on a single card: cheaper SLI configs can beat it.

Thanks.
 
Titan's price tag looks a bit unrealistic. The gaming performance will be nowhere near the 690, and the compute performance will only be a moderate boost over the 7970 GE, which costs less than half as much. The real benefit of Titan is that it's a single-GPU card with lower power consumption than the 690, and certainly lower than two 7970 GEs.

Tahiti looks inefficient, but it has significantly more shaders than GK104, and I can't imagine a portion of them sitting idle while the rest are busy; on top of that, Tahiti's compute performance is massively superior. I do wonder what the 7970's power consumption would have been like had it come out with 6970 levels of compute performance (or less).
 


I think that cooling the 690 is a little more difficult in very tight cases. Not only does it put out more heat than Titan, but about half of that heat is also sent right back into the case. Small form factor cases may struggle to get rid of it without being obnoxiously loud.
 


If I were to hazard a guess, I'd bet on GCN (especially Tahiti) using around 20% less power if it weren't so compute-focused. I think it's worth sacrificing some power consumption to get far more compute performance, especially with compute-accelerated gaming features becoming more common.

Other things to consider with AMD's power consumption are that stock voltages seem unnecessarily high and that, at least among the main performance rivals (i.e. GTX 670/680 vs. Radeon 7950/7970/7970 GHz Edition, and GTX 660/660 Ti vs. Radeon 7850/7870), AMD almost always has wider memory interfaces to worry about.
 
Meh, Chris just trolled me, no benchmarks! :lol:

But on to the GPU. I'm skipping the price debate; it's been discussed enough already.

I'll comment on GPU Boost: it looks like Nvidia saw Intel's Turbo Boost 2.0 and tried to mimic it, and the result seems almost exactly the same.

@Chris: Do you think you can test how enabling the additional DP floating-point cores compares to the 1/24 default ratio in terms of overclocking? I mean, GPU Boost should be disabled and all (from what you said), so how does that change things?
 
[citation][nom]The Stealthinator[/nom]I read that 3x Titan cards could run Battlefield 3 on ultra settings on three monitors.[/citation]

Depending on the resolution, I bet a Radeon 7750 could do that too. You're gonna need to be more specific 😛
 
[citation][nom]maxinexus[/nom]Once 4K monitors start to roll out... Titan might handle it just fine... but by that time there will be a Titan 2 for sure.[/citation]

I don't think that Titan would handle 4K gaming very well without reduced quality settings.
 
[citation][nom]bl1nds1de13[/nom]When compared to the GTX 690, I would have to differ on saying that "there's no real reason not to favor it over Titan"... Any SLI or CrossFire solution, including dual-board cards like the 690, will have microstutters when compared to a single-card setup. This has been thoroughly shown in several tests, and I have seen it myself. A single card will never have scaling issues or microstutters. BL1NDS1DE13[/citation]

While the issue isn't exactly the same as the "microstutter" of an SLI or Crossfire rig, single-card solutions do suffer from frame-time variations that produce a similar effect. It's not as severe, but it is still there. And microstutter itself has been addressed enough that it is not as bad as it used to be.

Still, it IS a consideration, I agree there.
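
To make that concrete, here's a small sketch with made-up frame times showing why average fps hides the problem: two runs can share the same average while one has far worse worst-case frame times.

[code]
// Sketch with hypothetical numbers: same average fps, very different feel.
// The 99th-percentile frame time is what captures stutter.
#include <algorithm>
#include <cstdio>
#include <vector>

double percentile(std::vector<double> ms, double p) {
    std::sort(ms.begin(), ms.end());
    return ms[static_cast<size_t>(p * (ms.size() - 1))];
}

int main() {
    // Both traces average ~16.7 ms per frame (~60 fps).
    std::vector<double> smooth(100, 16.7);
    std::vector<double> stutter;
    for (int i = 0; i < 100; ++i)
        stutter.push_back(i % 10 == 0 ? 40.0 : 14.1);  // periodic spike

    printf("smooth  99th percentile: %.1f ms\n", percentile(smooth, 0.99));   // 16.7
    printf("stutter 99th percentile: %.1f ms\n", percentile(stutter, 0.99));  // 40.0
    return 0;
}
[/code]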
 
These cards are supposed to pull in air from the side (the direction of the adjacent expansion slot). Looking at how the three cards are pressed against each other, I wonder how that affects the center and right cards' temperatures. I'm pretty sure they are not getting the ventilation they need.

 
I love Nvidia for releasing the Titan. I don't understand all the hate in this thread, though.

Nvidia knows there is virtually no market for it, but they released it anyway, just because there are some enthusiasts who will appreciate it. It doesn't affect any of the other products; it doesn't even try to compete, nor should it. It's not meant to. It's just pure awesome - at a price.

When you consider that the Tesla versions of this card are two to four times as costly, all of a sudden this card is priced amazingly low. I'm half tempted to buy it just so I can start doing some serious CUDA programming at home.
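
If you did take the plunge, the classic first CUDA program is a SAXPY kernel; a generic sketch (nothing Titan-specific) looks like this:

[code]
// SAXPY (y = a*x + y), the usual "hello world" of CUDA.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float* hx = new float[n];
    float* hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);  // y = 3*1 + 2
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f (expect 5.0)\n", hy[0]);
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}
[/code]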

Honestly, $1K isn't that much. Dropping $7K on a desktop that I use 8 hours a day is peanuts compared to the amount I spent on a car that I drive 20 minutes a day. If it's the best, and it's just $1K, why the hell not? I can easily afford it.
 
[citation][nom]blazorthon[/nom]Tesla K20X double precision specification: 1.31 TFLOPS. If Titan has 1.5 TFLOPS in double precision, then it might actually beat the Tesla K20X for whatever doesn't need the professional/enterprise features supported by the Tesla. For that sort of job, maybe it will be worth the money and then some, so long as it's a job that AMD doesn't excel at (for Titan's price, you can get two 7970s) or can't do (such as a program that supports CUDA but doesn't support DirectCompute or OpenCL, at least not as well as it supports CUDA).[/citation]
It looks like AnandTech clarified a portion of their article. 1.3 TFLOPS is more of a worst-case scenario for FP64 performance, although it's still listed as Nvidia's official compute figure for the card. Boost is disabled when full FP64 is enabled, limiting the card to 837 MHz max, and it can drop as low as 725 MHz in particularly TDP-constrained workloads, which is what Nvidia's figure is based on. So at 837 MHz, its theoretical DP performance should be around 1.5 TFLOPS.
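
The arithmetic behind those figures is straightforward if you take Nvidia's published GK110 numbers (2688 CUDA cores, 1/3 of them FP64-capable when the full-speed DP toggle is on, two flops per FMA per clock); a quick sketch:

[code]
// Back-of-envelope peak FP64 throughput for GTX Titan (GK110).
// 2688 CUDA cores, 1/3 FP64 rate when enabled -> 896 FP64 units,
// each retiring one FMA (2 flops) per clock.
#include <cstdio>

double dp_gflops(int fp64_units, double clock_ghz) {
    return fp64_units * 2 * clock_ghz;  // units * flops/FMA * GHz
}

int main() {
    const int fp64_units = 2688 / 3;  // 896
    printf("DP @ 837 MHz: %.0f GFLOPS\n", dp_gflops(fp64_units, 0.837));  // ~1500
    printf("DP @ 725 MHz: %.0f GFLOPS\n", dp_gflops(fp64_units, 0.725));  // ~1299
    // In the default 1/24-rate gaming mode, with boost up to 876 MHz:
    printf("DP default:   %.0f GFLOPS\n", 2688 * 2 * 0.876 / 24);         // ~196
    return 0;
}
[/code]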
 
Stereoscopic 3D / 120 Hz gaming across three monitors: this is the only single-GPU setup that could come close to supporting that.
I can't afford it, but if I were rich, I'd buy it. It's not overkill; it's just too expensive.
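
For scale, here's the rough pixel math behind that claim (assuming three 1080p panels; stereo 3D renders every frame twice):

[code]
// Rough pixel-throughput comparison: triple-screen 120 Hz stereo 3D
// versus a single 1080p display at 60 Hz.
#include <cstdio>

int main() {
    double surround = 5760.0 * 1080 * 120;  // three 1080p panels at 120 Hz
    double stereo   = surround * 2;         // stereo 3D: two views per frame
    double baseline = 1920.0 * 1080 * 60;   // single 1080p at 60 Hz

    printf("surround 120 Hz: %.0f Mpix/s\n", surround / 1e6);    // ~746
    printf("with stereo 3D:  %.0f Mpix/s\n", stereo / 1e6);      // ~1493
    printf("roughly %.0fx a 1080p60 load\n", stereo / baseline); // ~12x
    return 0;
}
[/code]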
 
$1000 for a card is ridiculous. IMO, a single GTX 680 will satisfy 90% of all serious gamers out there, and 2x GTX 680 will satisfy wealthy gamers with triple screens while still coming in just under $1000.
 