Nvidia GeForce GTX 780 Ti Review: GK110, Fully Unlocked

Wow, this is a heavy blow toward AMD. Though it's interesting that Nvidia would 'intentionally' cripple their flagship's 4K performance (via memory limitations) in an attempt to save costs. I guess few people game at 4K at the moment, so it's no big deal, but it's still a bit odd.

That aside, I can't help but feel that this is a bit of a kick in the b... to those who purchased the Titan earlier in the year, buying into Nvidia's claim that it was the unique GPU champion that wouldn't be beaten by a single generation change, thus justifying the $1k price tag...
 
@lp231

The press card is cherry-picked; both companies do it. Honestly, there is nothing wrong with this practice as long as it comes with a disclaimer saying this is the potential of the card. I'm sure when the retail Tis come out they will show some percentage of drop-off.

@mousemonkey I get your point, but can I add to that and say that loyalty to any company is kinda idiotic? They couldn't care less about you or me, just about our money! :)

AMD drivers haven't been bad in a very long time; that's leftover stigma, more or less. Yes, the card does run hot, but an aftermarket cooler will fix that, because who buys the stock card these days? Unless you are going to rip the cooler off and fit a better one, you're better off with an aftermarket card maker.
 


It's not about loyalty to any one company, it's about having something that works as advertised and does the job it's intended for. In that regard, the 7790 that I bought a few months ago has been a total disappointment, so it has been relegated to the spare parts pile and the 560 Ti has been put back in service. Now I can fold (an OpenCL app) and watch telly at the same time! :lol:
 
Things can't stay this way price-wise. At a $100 price difference between the Ti and the 290X, I can't imagine anyone is going to say, "Oh, I'll spend up to $600 but I won't go $700" .... especially once it starts setting in people's minds that the overclocked-to-the-wall $500 780 beats the overclocked-to-the-wall 290X by surprisingly significant margins.

http://www.youtube.com/watch?v=djvZaHHU4I8 (benchies at 8:30)

Also, if water cooling, will there be an extra cost for the additional rad area and fans you might need with two 2xx cards versus two 7xx cards? I have not yet seen how far either can go on water, but at stock speeds we're talking roughly 100 watts extra.

Again, I don't want to focus on who's winning or losing, but given what we have seen, I expect the 290X will now have to drop in price, pulling the 290 down with it, to which nVidia will again have to respond with reductions on the 770 and 780. Hang on to your wallets, folks .... methinks more price drops are coming.

Again, while most of the reviews I have read have come down pretty hard on the heat and noise of the 2xx series, some even offering a "no buy" recommendation, I am very concerned about the willingness of many to accept these noise levels as a trade-off for increased performance. I don't know if it's more a matter of brand loyalty or a surrender of sorts to the increasing cost of graphics cards, but I noticed above that CrossFired 290s were suggested as an alternative. The performance of that machine is immaterial if I have to wear ear protection or sit in the next room to use it comfortably. 2 x 59 dBA? Why not hook a vacuum cleaner to your PC to provide cooling?

This willingness to accept noise at this level, to my mind at least, is disturbing and a trend that started with the H100 (also at 60 dBA) and is one I hope soon gets nipped in the bud. Yes, it was important for AMD to put something competitive on the shelves for the holiday season but I wish they had waited a few more weeks and come up with a better / quieter cooling solution.
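As an aside on the arithmetic: two equal noise sources don't double the dBA number, they add roughly 3 dB, so a pair of 59 dBA cards lands near 62 dBA (still far too loud for my taste). A quick sanity check using the standard incoherent-source summation; nothing here is specific to these cards, it's just the textbook formula:

```python
import math

def combined_spl(levels_dba):
    """Sum incoherent noise sources: L = 10 * log10(sum of 10^(Li/10))."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_dba))

# Two cards at 59 dBA each -> about 62 dBA (a +3 dB increase, not 118 dBA).
print(round(combined_spl([59, 59]), 1))
```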

I still see a solid position for the 2xx cards in the 4K space at current prices, but unfortunately nVidia still has two cards that outperform the 290X when all are overclocked to their limits at 1920 x 1080 and 2560 x 1600. As of yet I have not been asked to do a 4K build, but I would recommend that a user consider the 290X if that were their desired resolution. However, this holiday season the predominant requests I have had have been for 144 Hz 1920 x 1080... and one for the new 240 Hz Eizo. Personally, I don't see myself going to larger resolutions until 120 Hz gets there.

 
It's funny you say that, bjaminnyc. I just googled "GeForce 780 Ti benchmarks" and so far five of the many review sites all said basically the same thing: the 780 Ti is today's fastest GPU. Things may change when the 290X gets better cooling, but not today.
 


Fair enough! I feel the same way about the stuff I buy; I would have sent that card back and gotten my money back :) . I have bought GPUs from both companies and have gotten bad apples from both, but I still think both put out a quality product; it's a matter of preference. I tell people electronics are like Russian roulette: sometimes you get the bullet when you pull the trigger. I just send them back if they fail and get something that works, and a lot can be attributed to the aftermarket companies that make the card, as opposed to Nvidia or AMD.
 
I am very unimpressed. Essentially it just beats the 290X while sacrificing memory and costing $250 more. Once the third party coolers come out for the 290X I could see them being of equal performance...
 


Well, Chris is investigating that at the moment for the 290X, but this isn't the case for the 780 Ti or any other AMD or Nvidia GPU. Do you mind showing me one other AMD or Nvidia GPU where stock-clocked performance differs between retail and press samples?


Battlefield 4 Crashing Fixed with Catalyst 13.11 Beta8 Driver
 
TH tested the 290X with an Accelero Xtreme III, which made it run faster with less noise. Any chance you could update the charts with those figures, for a comparison of what we might expect from an aftermarket cooler or a good third-party design versus the 780 Ti?
 


Well, I've used this TV card in conjunction with several Nvidia cards and it has always just worked, but with the first AMD card I use it with, the AMD driver keeps failing as soon as an OpenCL app is fired up. Strange, that.
 
Thanks for pointing out the discrepancy between the AMD retail and press boards. Can you do the same with the 780 Ti when you have the chance? Admittedly, I am an AMD owner. However, I think it's fair journalism to look at both companies. Maybe an article can be written based on further research into the issue. I think consumers would find it most valuable, and it would help diagnose how widespread the issue is.
 
780ti - great tech, great product, priced at a premium
290x - great GPU tech, crappy product, priced as such

Since when does a well-designed product build thermal limiting into its normal operating profile? It really is ridiculous. The only thing I can think of is that AMD is purposely holding back the 290X with the cheap-cooler fiasco in order to allow for a follow-up to the 780 Ti (both of these companies formulate strategy several moves in advance, like a game of chess). But I don't understand why they couldn't have just dialed back the clock. That would still leave headroom for the enthusiast through overclocking.

Based on what I've read so far, the 290 & 290x are great tech under the hood but overall are really shoddy products.
 


I should have clarified my statement to say that, if this is found to be true, I am sure both companies do it and the review should come with a disclaimer.

As for the BF4 issue, both AMD and Nvidia have had problems with the game crashing and have had to release driver updates to rectify them.
 


Submit a ticket to AMD and tell them to get that fixed! That is something I have always liked about Nvidia: if there is a bug or issue with something and you bring it to their attention, they will get on it and fix it ASAP. I am sure that with Raja Koduri now in charge of the AMD GPU department, he will strive for this kind of customer service.
 
From the AMA (Read the first 5 points, they're related to this):


There is no minimum clock, and that is the point of PowerTune. The board can dither clockspeed to any MHz value permitted by the user’s operating environment.

2. We've seen reports from Tom's Hardware that retail 290X cards are clocking much lower (someone posted a chart on this page above), and even a user on Tech Report claiming much lower clocks than review samples have.

Is this simply because the current PowerTune implementation is heavily dependent on cooling (which will be variable from card to card)?
This issue with the 290X is causing people to be cautious regarding the 290 as well.

Plain and simple, THG and Tech Report have faulty boards. You can tell because Sweclockers performed the same retail board test and got the expected results: performance is identical to the AMD-issued samples.

Every 290X should be running 2200 RPM in quiet mode, and every 290 should be running 2650 RPM. We will be releasing a driver today or tomorrow that corrects these rare and underperforming products, wherever they may exist.

3. In light of (2) and the fact that AnandTech went so far as to recommend AGAINST the 290 due to the noise it made (I think they measured over 55 dBA), wouldn't it have been a better idea to re-do the reference cooler? Maybe make it a dual-fan blower?

Having addressed #2, we’re comfortable with the performance of the reference cooler. While the dBA figure is hard science, user preference for that “noise” level is completely subjective. Hundreds of reviewers worldwide were comfortable giving both the 290 and 290X the nod, so I take contrary decisions in stride.

4. Partly because of (2) and (3), doesn't the 290 make the 290X pointless?

Hardly! The 290X has uber mode and a better bin for overclocking.

5. Wouldn't it have been a better idea to keep the 290 at a 40% fan limit (and thus be quieter) and allow partner boards to demonstrate Titan-class performance at $425-450?

No, because we’re very happy with every board beating Titan.

6.a.) Open? How? It's a low-level API, exclusive to GCN. How's it going to be compatible with Fermi/Kepler/Maxwell etc. or Intel's HD graphics? For that matter, will you be forced to maintain backwards compatibility with GCN in future?

You’re right, Mantle depends on the Graphics Core Next ISA. We hope that the design principles of Mantle will achieve broader adoption, and we intend to release an SDK in 2014. In the meantime, interested developers can contact us to begin a relationship of collaboration, working on the API together in its formative stages.

As for “backwards compatibility,” I think it’s a given that any graphics API is architected for forward-looking extensibility while being able to support devices of the past. Necessary by design?

6.b.) All we know from AMD as yet about Mantle is that it can provide up to 9x more draw calls. Draw calls on their own shouldn't mean too much, if the scenario is GPU bound. You suggest that it'll benefit CPU-bound and multi-GPU configs more (which already have 80%+ scaling).

That said, isn't Mantle more of a Trojan horse for better APU performance, and increased mixed APU-GPU performance? AMD's APUs are in a lot of cases CPU bottle-necked, and the mixed mode performance is barely up to the mark.

I suggested that it’ll benefit CPU bottlenecking and multi-GPU scaling as examples of what Mantle is capable of. Make no mistake, though, Mantle’s primary goal is to squeeze more performance out of a graphics card than you can otherwise extract today through traditional means.

6.c.) All said and done, will Mantle see any greater adoption than GPU accelerated PhysX? At least GPU PhysX is possible on non-Nvidia hardware, should they choose to allow it.
Wouldn't it have been better to release Mantle as various extensions to OpenGL (like Nvidia does), given the gradual rise of *nix gaming systems? And Microsoft's complete disinterest in Windows as a gaming platform...or heck, even in the PC itself.

It’s impossible to estimate the trajectory of a graphics API compared to a physics library. I think they’re operating on different planes of significance.

I will also say that API extensions are insufficient to achieve what Mantle achieves.

6.d.) Developers have said they'll "partner" with you, however the only games with confirmed (eventual) support are BF4 and Star Citizen. Unreal Engine 4 and idTech don't seem to support Mantle, nor do their creators seem inclined to do that in the near future.
Is that going to change? Are devs willing to maintain 5 code paths? It would make sense if they could use Mantle on consoles, but if they can't...

The work people are doing for consoles is already interoperable, or even reusable, with Mantle when those games come to the PC. People may have missed that it’s not just Battlefield 4 that supports Mantle, it’s the entire Frostbite 3 engine and any game that uses it. In the 6 weeks since its announcement, three more major studios have come to us with interest on Mantle, and the momentum is accelerating.

7. With TSMC's 20nm potentially unavailable till late next year, is AMD considering switching to Intel's 22nm or 14nm for its GPUs? Sounds like heresy, but ATI and Intel weren't competitors.

No.

8. Regarding G-Sync, what would be easier: licensing Nvidia's tech and eventually getting them to open it up, or creating an open alternative and asking them to contribute? There is, after all, more excitement about G-Sync than stuff like 4K.

We fundamentally disagree that there is more excitement about G-Sync than 4K. As to what would be easier with respect to NVIDIA’s technology, it’s probably best to wait for an NVIDIA AMA. 😛

9. Is AMD planning on making an OpenCL-based physics engine for games that could hopefully replace PhysX? Why not integrate it with Havok?

No, we are not making an OpenCL physics library to replace PhysX. What we are doing is acknowledging that the full dimension of GPU physics can be done with libraries like Havok and Bullet, using OpenCL across the CPU and GPU. We are supporting developers in these endeavors, in whatever shape they take.

10. We've seen that despite GCN having exemplary OpenCL performance in synthetic benchmarks, in real-world tests GCN cards are matched by Nvidia and Intel solutions. What's going on there?

You would need to show me examples. Compute is very architecturally-dependent, however. F@H has a long and storied history with NVIDIA, so the project understandably runs very well on NVIDIA hardware. Meanwhile, BitCoin runs exceptionally well on our own hardware. This is the power of software optimization, and tuning for one architecture over another. Ceteris paribus, our compute performance is exemplary and should give us the lead in any scenario.

11. Are the video encoding blocks present in the consoles (PS4, Xbone) also available to GCN 1.1 GPUs?

You would have to ask the console companies regarding the architecture of the hardware.

12. What is the official/internal AMD name for GCN 1.1? I believe it was Anand of AnandTech who called it that.

We do not have an official or internal name. It’s “graphics core next.”

13. I remember reading that GPU PhysX will be supported on the PS4. Does that mean PhysX support will be added to Catalyst drivers on the PC? Or rather, will Nvidia allow AMD GPUs to run PhysX stuff?
A lot of questions, but I've had them for a long time. Thanks!

No, it means NVIDIA extended the PhysX-on-CPU portion of their library to developers interested in integrating those libraries into console titles.

http://www.tomshardware.com/forum/id-1863987/official-amd-radeon-representatives/page-5.html#11884668
 

Too much discussion for one "if"...

But to be honest, I believe the problem could be AMD's newer PowerTune tech combined with the high temps. As shown in many reviews, the 290X's performance was falling after five minutes or less of gaming (and that's the true performance, because people usually game for hours); some people were even reporting clocks around 650 MHz :S.
That's why it doesn't affect any other AMD GPU (like the R9 280X) or Nvidia GPU.
But this is only an estimate too. We are waiting for Chris to reveal his work on the retail 290X.
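To put the mechanism in rough terms, here is a toy illustration of a PowerTune-style thermal governor (not AMD's actual algorithm, just a sketch): the controller keeps trimming the clock whenever the die sits at its temperature target, so a weak cooler drags the sustained clock well below the advertised boost.

```python
# Toy sketch of a temperature-driven clock governor (illustrative only,
# not AMD's real PowerTune implementation). The two public figures for
# Hawaii are used: a 95 C throttle target and an "up to 1000 MHz" boost.
TEMP_TARGET_C = 95
BOOST_MHZ = 1000
STEP_MHZ = 13          # arbitrary dither step chosen for this sketch
FLOOR_MHZ = 300        # arbitrary floor chosen for this sketch

def next_clock(current_mhz, die_temp_c):
    """Pick the clock for the next interval from the current die temperature."""
    if die_temp_c >= TEMP_TARGET_C:
        return max(current_mhz - STEP_MHZ, FLOOR_MHZ)   # back off while hot
    return min(current_mhz + STEP_MHZ, BOOST_MHZ)       # recover headroom

# Once a heavy game pins the die at the target after a few minutes, this loop
# converges on whatever clock the cooler can actually sustain -- which is how
# some retail 290X boards ended up being logged around 650 MHz.
```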

P.S. W1zzard from TechPowerUp revealed that his 780 Ti had only 75% ASIC quality, which is average and not by any means a golden review sample...
 
It's a nice card, and I'm sure I could get it faster than my liquid-cooled Titan, which is clocked at a consistent, stable 1185 MHz at full load with temps at 61-64°C. The 780 Ti liquid-cooled will probably be a beast, but I'm staying with my current setup. Way too much of a hassle to change the card out without at least a 25% gain, which this won't be. Looks like we won't see the really exciting gains till the 20nm node.
 
Chris, I see you're using the BMW scene with Blender - very cool!

Btw, how does one force that test to use multiple GPUs? Or must one use a newer version of Blender for that? (I've been testing with 2.61)

Ian.
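(For anyone else wondering: I don't believe 2.61 exposes multi-GPU device selection for Cycles at all; it arrived in later builds. In much newer Blender versions the Python API looks roughly like the sketch below. This is the 2.8x-era API and an assumption on my part, so treat it as illustrative rather than something that maps back to 2.61.)

```python
# Rough sketch using the Blender 2.8x-era Python API (run inside Blender's
# Python console); older releases such as 2.61 use a different, more limited
# preferences layout, so this is illustrative only.
import bpy

cycles_prefs = bpy.context.preferences.addons["cycles"].preferences
cycles_prefs.compute_device_type = "CUDA"   # or "OPENCL"/"OPTIX" where supported
cycles_prefs.get_devices()                  # refresh the detected device list

for device in cycles_prefs.devices:
    device.use = True                       # tick every detected GPU (and CPU)

bpy.context.scene.cycles.device = "GPU"     # render the scene on the GPU(s)
```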

 

From EVGA's site: [image: 780Ti_HC_650x313.png]

😀
 


You are right, that is a lot of discussion for an IF!
I think you're right about the PowerTune behavior on the AMD cards: with the temps being so high from that garbage cooler, it only hurts the performance of the card. Either they revamp PowerTune or they revamp the cooler; I'm sure this will be rectified when the aftermarket companies put their own hardware together for these cards. I'm curious why they don't just ask one of the partner companies to build them a "generic" cooler that is better than the ones they come up with?
 
Seems like NVIDIA needs some help cutting the cost of its otherwise great cards, like an Intel 14nm process. The latest AMD releases have radically altered the whole price/performance equation.

I love to see this competition -- can't wait for some BF sales to upgrade my very dated HD 5850.
 
It would be helpful to compare Tom's benchmark results with those from TechPowerUp. Personally, I find Tom's results to be less accurate. According to TPU, a stock GTX 780 Ti provides 13% and 10% more FPS than the 290 and 290X, respectively. Not enough to justify the cost, in my opinion.
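Taking those relative numbers at face value, and assuming I have the launch MSRPs right ($699 for the 780 Ti, $549 for the 290X, $399 for the 290), the quick perf-per-dollar math works out like this:

```python
# Relative FPS from the TPU figures quoted above (R9 290 = 1.00 baseline),
# paired with launch MSRPs in USD. The prices are my recollection of the
# launch pricing, so double-check them before quoting this anywhere.
cards = {
    "GTX 780 Ti": (1.13, 699),        # 13% faster than a 290
    "R9 290X":    (1.13 / 1.10, 549), # 780 Ti is 10% faster than the 290X
    "R9 290":     (1.00, 399),
}

for name, (rel_fps, price_usd) in cards.items():
    print(f"{name}: {rel_fps / price_usd * 100:.3f} relative FPS per $100")
```

On those assumptions the 290 is clearly the perf-per-dollar winner, which is basically the point being made above.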
 
"Thanks for pointing out the retail discrepancy between the AMD retail and press boards. Can you do the same with the 780ti when you have the chance?"

The answer is there without the hyperbole. Basically, it's the stock 780 performance that the 290X press sample demolished, versus the cherry-picked-chip 780 Ti. The review said the retail 780s have been getting the just-OK chips, while the best chips were held back for the 780 Ti. Until AMD gets deeper into the production runs of chips for the 290X and their yields and quality improve, there will be greater variability in the quality of their chips. AMD is struggling at this point just to get product out the door and is accepting chips from wafers that, once quality yields improve, would be scrapped.

This isn't an AMD issue but rather one common to all chip manufacturers. When manufacturers start a die shrink or a new product design, chips may be used from wafers where the yields are very low, say in the 30% range; later, after quality improves and process tweaking is done, yields improve to 80-90%, and then it's generally time to move to the next product or shrink. Obviously the quality of the chips on a wafer at the far higher yields is better than those from an initial production run.
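To put hypothetical numbers on that (purely illustrative; the die count and yields below are made up just to show the scale of the effect):

```python
# Made-up figures to illustrate why maturing yields matter for binning:
# the count of usable dice per wafer (and so how selective the binning
# can be) scales directly with yield.
DIE_CANDIDATES_PER_WAFER = 200   # hypothetical

for yield_pct in (30, 80, 90):
    usable = DIE_CANDIDATES_PER_WAFER * yield_pct // 100
    print(f"{yield_pct}% yield -> {usable} usable dice per wafer")
```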
 


Well, AMD has a new GF100 on their hands: it has great potential but is limited by its temperature. I believe a new Hawaii 2.0 GPU would be a better solution than a better cooler (the way GF110 was faster, quieter, and ran at lower temps), but that will take around 5 to 6 months (my best estimate, judging by how Nvidia did with GF100).
The problem is that today we have Hawaii 1.0, so we must rely on custom coolers from MSI, ASUS, etc.

So what would be the ideal for today, or let's say two weeks from now (by which time companies should have their custom coolers on the market)?
Sites like Tom's Hardware should do a nice HUGE round-up of custom cards based on the R9 290 and R9 290X along with custom cards based on the 780 and 780 Ti. That would require around 20-25 test cards (let's say 5-6 per GPU), which is a lot of time.
I would love to see one of those round-ups, though! :)
 