Report: Nvidia GeForce Titan Might Outperform GTX 690

[citation][nom]samuelspark[/nom]If it outperforms the 690, it'll outperform the 7990[/citation]

Not necessarily. The 7990 already outperforms the 690 overall, granted not by huge margins, so this might simply pull even with the 7990 at a somewhat lower price than the average 7990. Besides, the 7990 isn't even an official AMD card. There's not much profit to be had at these very high price ranges.
 
[citation][nom]wdmfiber[/nom]Nvidia wasn't actually fortunate. They had to hack the compute performance out of the GK104 to do it. I'm not sure what went wrong with the GK110; after all, TSMC makes the 28nm GPU parts for both Nvidia and AMD. But whatever the case, the yields are good now and the Titan is coming! And the GTX 780 and Radeon 8970 shouldn't be much of a worry. They're more of a refresh/name change; same 28nm tech. New GPUs won't be out till ~2014, at 20nm.[/citation]

AMD's and Nvidia's current generations are both TSMC 28nm, but they actually use different 28nm processes.
 
[citation][nom]wdmfiber[/nom]@Tomfreak, that is a brilliant deduction. Thanks for taking a closer look at the math. Benchmarks: http://www.tomshardware.com/review [...] 232-7.html So indeed, when taking TDP into account, the gigaflops per watt seem too high for 28nm. Maybe it's a prototype Maxwell GTX 880? lol... Unlikely; someone photoshopped it (GTX 680 TDP is 195 watts | 690 is 300 watts | Titan is 235 watts). My rough math says that to score 7100 in 3DMark 11, the TDP would have to be ~375 watts![/citation]

TDP doesn't work like that, and it is not equal to power consumption. GK110 is not the same architecture as the other Kepler GPUs, so it might be able to be much more power efficient. Using the same node doesn't mean that there can't be power efficiency differences; that has been proven time and time again. Heck, a very good example is how much more power efficient the Radeon 6800 cards are compared to the other VLIW5-based models. They have a tweaked version of the same architecture used in the Radeon 5000 and 6700/6600/6400 cards and the same fabrication process, yet they're significantly more power efficient.

Also, the GPU is not the only part of a graphics card that consumes a lot of power. This Titan might have memory built on smaller nodes and/or run at lower frequencies than the 680/690, and other components might be better too.
 
I have no need to upgrade my 5870 ATM; unless you play tech demos like Crysis, there's no need.

But since 1080p is and will be the sweet spot for quite some time, I may upgrade to this card, as it should produce a solid 60+ FPS in upcoming games... though I may wait for the Source 2 engine announcement first. I don't like the idea of dual video cards; it's a shame they still have micro-stuttering. The companies know this, because with triple SLI/CrossFire there is barely any, and that's why they only allow dual cards to have two links on a single card... pretty pathetic.

Actually, most very high-end dual-GPU setups handle stutter very well these days, and besides that, dual-GPU cards are built with only two GPUs because more is rarely feasible unless they use lower-end GPUs, which is more wasteful and more difficult to build into a single card.

Also, most dual-GPU cards support quad-GPU arrays with a second dual-GPU card, and AMD's modern dual-GPU cards all support this in addition to triple-GPU arrays that use a single-GPU card instead of a second dual-GPU card. Which cards can be mixed with which varies, but as a general rule, all but a very small portion of dual-GPU cards can be put in an array with a second dual-GPU card of the same model. AMD has the additional advantage of supporting more variation in which dual-GPU cards can be paired with which, as well as supporting mixing dual-GPU cards with single-GPU cards.

Also, AMD and Nvidia both have many single-GPU cards that can be mixed with three or four other single-GPU cards, so long as the motherboard supports that many.
 

ojas

Distinguished
Feb 25, 2011
[citation][nom]ojas[/nom]Looks more like it'll be sold as a Tesla card, unless they're doing away with dual GPUs because Titan is enough.[/citation]
Sorry, I meant Quadro, not Tesla.
 

wdmfiber

Honorable
Dec 7, 2012
[citation][nom]blazorthon[/nom]TDP doesn't work like that, and it is not equal to power consumption. GK110 is not the same architecture as the other Kepler GPUs, so it might be able to be much more power efficient. Using the same node doesn't mean that there can't be power efficiency differences; that has been proven time and time again. Heck, a very good example is how much more power efficient the Radeon 6800 cards are compared to the other VLIW5-based models. They have a tweaked version of the same architecture used in the Radeon 5000 and 6700/6600/6400 cards and the same fabrication process, yet they're significantly more power efficient. Also, the GPU is not the only part of a graphics card that consumes a lot of power. This Titan might have memory built on smaller nodes and/or run at lower frequencies than the 680/690, and other components might be better too.[/citation]
Efficiencies and architecture, lol! You don't understand.

If the Titan GPU scores 7100 in 3DMark 11, then it is ~22% more powerful than a GTX 690. That in itself is a pretty lofty claim. Now take into account that Titan is supposed to use 65 watts less than a GTX 690... and a score of 7100 is impossible. Likely not even possible for next-gen 20nm GPUs (2014).

Either the score of 7100 was photoshopped or Titan will have three 8-pin connectors on it. I'm guessing the former, and that Titan will score ~5100.
http://www.tomshardware.com/news/Titan-Nvidia-GK110-gpu-tesla,20614.html
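For anyone who wants to check the arithmetic, here's a minimal sketch of the linear performance-per-watt scaling implied above (my own illustration, not from the article; the GTX 690 score is backed out of the "~22% more powerful" figure, and the TDPs are the ones quoted earlier in the thread):

```python
# Back-of-envelope scaling, assuming 3DMark 11 score scales linearly with
# board power on the same 28nm node (a rough assumption, not a law).
GTX690_TDP = 300.0        # watts, quoted earlier in the thread
TITAN_TDP = 235.0         # watts, quoted earlier in the thread
claimed_score = 7100      # the rumored Titan 3DMark 11 score

gtx690_score = claimed_score / 1.22              # ~5820, implied by "~22% more powerful"

# TDP a Titan would need to reach 7100 under linear perf-per-watt scaling:
tdp_needed = GTX690_TDP * claimed_score / gtx690_score    # ~366 W, i.e. the "~375 watts" above

# Score a 235 W card would get under the same assumption:
scaled_score = gtx690_score * TITAN_TDP / GTX690_TDP      # ~4560

print(round(gtx690_score), round(tdp_needed), round(scaled_score))
```

Note that the ~5100 guess is already more generous than the pure linear estimate (~4560), so it allows for some architectural efficiency gain.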
 

hannibal

Distinguished
3DMark? It is quite easy to see that this Titan beats the 690 in the physics test, because it is much better optimized for computation, but it can be a little bit slower in normal gaming tests... Well, soon we'll see.
Also, the clocks are supposed to be really low, so it is possible that with enough cooling and power this can really fly. But that also remains to be seen.
 

BestJinjo

Distinguished
Feb 1, 2012
LOL @ all the people who actually believe NV can double the performance of a GTX 680 on the same 28nm node. No GPU company in history has ever doubled the performance of a last-gen flagship card on the same node. It's not possible. That 3DMark score is a fake, achieved on an overclocked GTX 690:

http://i.imgur.com/oGPVPHY.jpg

Considering that the K20X with 2688 CUDA cores at 732 MHz + 5.2 GHz GDDR5 already has a TDP of 235W, the only way for Titan to score more than double the points in 3DMark 11 is if it more or less doubles the shading, texture, and memory bandwidth of the GTX 680. Just do a simple calculation and you'll find that merely to double the stream processing power, Titan would need to be a 2880 CUDA core part clocked at more than 1.1 GHz. Yeah, that's a 300W card right there. It's pretty amazing that people eat rumors like this up and don't even realize the GTX 680 already uses about 180W of power.
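If you want to sanity-check that claim, here's a minimal sketch of the shader-throughput math (my own illustration; it uses the usual 2 FLOPs per CUDA core per clock for fused multiply-add, and the GTX 680's stock specs):

```python
# Peak single-precision throughput: 2 FLOPs (FMA) per CUDA core per clock.
def gflops(cores, clock_ghz):
    return 2 * cores * clock_ghz

gtx680 = gflops(1536, 1.006)     # ~3090 GFLOPS at the GTX 680's base clock
target = 2 * gtx680              # "double the GTX 680" => ~6180 GFLOPS

# Clock a hypothetical 2880-core part would need to hit that target on paper:
clock_needed = target / (2 * 2880)   # ~1.07 GHz; "more than 1.1 GHz" once you
                                     # allow for imperfect real-world scaling
print(round(gtx680), round(target), round(clock_needed, 2))
```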
 

kathiki

Distinguished
Apr 9, 2010
This is really AWESOME NEWS.

Nvidia will produce a new card that a) MIGHT outperform an existing model, b) PERHAPS will outperform an existing model, and c) is expected to outperform it by 10-20% (that is, roughly 12 fps).

I am waiting in anticipation...



 

dons20

Honorable
Feb 2, 2013
[citation][nom]samuelspark[/nom]GG AMD?[/citation]
GG :D
But some modern computers are still much faster than this... although the PS4 will be optimized for gaming, so it'll perform great.
 
Well, if one of my SLI 570s hadn't croaked last summer right during the 680 release, I'd have held out and skipped the 6-series. Unfortunately, I couldn't justify spending another $320 (the price of one at that time) on a last-gen card, so I bit the bullet and shelled out another two bills for a 680, knowing full well the next gen was six months away (I would have preferred waiting for the much better value 670, but couldn't). But if this new card outperforms a 670/680 SLI setup, which it sounds like it will, I'll likely be offloading the 680 and picking one up. Going to need it, moving to a 5760x1200 setup this spring.
 

kyuuketsuki

Distinguished
May 17, 2011
Correct me if I'm wrong, but doesn't 3DMark 11 take advantage of GPU compute for the physics test? It also supports "compute shaders." It seems like the score could be getting huge boosts in the couple of areas that take advantage of compute performance, given that the existing Kepler cards suck at compute.

That's assuming this is even legitimate, which it likely isn't, since rumors from anonymous sources cannot be verified.
 
[citation][nom]wdmfiber[/nom]Efficiencies and architecture, lol! You don't understand. If the Titan GPU scores 7100 in 3DMark 11, then it is ~22% more powerful than a GTX 690. That in itself is a pretty lofty claim. Now take into account that Titan is supposed to use 65 watts less than a GTX 690... and a score of 7100 is impossible. Likely not even possible for next-gen 20nm GPUs (2014). Either the score of 7100 was photoshopped or Titan will have three 8-pin connectors on it. I'm guessing the former, and that Titan will score ~5100. http://www.tomshardware.com/news/T [...] 20614.html[/citation]

You don't understand, not I. You completely disregarded my comment out of your unfounded belief that it is impossible for Nvidia to have improved that much, despite the fact that it is not only possible, but not even necessary to explain what we see. You seem to forget that 3DMark is a synthetic benchmark and can thus be thrown off between different architectures, especially due to the large number of things that it tests. Again, GK110 is a different variant of Kepler, and that could have mixed things up in comparing it to the GTX 690. So, no, the score is not even remotely impossible, and furthermore, it is possible without a ridiculous power connector arrangement such as is found on some Radeon 7990 and 7970X2 cards.

Now, whether or not it is true is a whole other story, since, as others have stated, it is an unverified claim.
 

Angry Bellic

Honorable
Dec 13, 2012
Currently the most powerful graphics card in the world is AMD's HD 7990, right? I will buy one more GTX 680 and run an SLI configuration.
 
[citation][nom]Angry Bellic[/nom]Currently the most powerful graphics card in the world is AMD's HD 7990, right? I will buy one more GTX 680 and run an SLI configuration.[/citation]

The Radeon 7990 does seem to be the most powerful graphics card on average, but it's technically not AMD's AFAIK as it's an unofficial card. However, that's not to say that I'm likely to recommend it to anyone.
 

dragonsqrrl

Distinguished
Nov 19, 2009
[citation][nom]BestJinjo[/nom]LOL @ all the people who actually believe NV can double the performance of a GTX680 on the same 28nm node. No GPU company in the history has ever doubled the performance of a flagship last gen card on the same node. It's not possible.[/citation]
Ever heard of the GeForce 8800?
 
6 GB of VRAM?

That suggests a dual-GPU part to me (2x3 GB). The GTX 680 has "only" 2 GB, so jumping to 6 GB, even for triple-monitor, is a little overkill.

Plus, don't Tesla cards have a slightly DIFFERENT architecture, optimized for their non-gaming tasks? Using the EXACT same GPU seems unlikely.

I wouldn't put much stock in this info. All I take from it is that there will be a new card.
 

keyston

Distinguished
Feb 14, 2012
It amazes me, reading these comments, that people think only about how they will run the latest games.
The GPU has many more uses than that; path-trace rendering, for one. The speed at which Iray, Octane, and the like can render an image makes CPU- and RAM-dependent ray tracers seem like dinosaurs, let alone the quality of the image.
The problem with current GPUs is that the gaming cards render faster than the Quadro cards, but the Quadros have more memory. Still, I can't see the logic in paying three or four thousand dollars for a card which you will undoubtedly replace a few years down the track when a GTX card can do the same job quicker.
And if these new cards have 6 GB of memory and can compete with the speed of a GTX 580 in rendering, then Nvidia will sell heaps of them to people who don't just play games. At the moment I'm rendering a scene with Mental Ray. It's been rendering for 5 hours and will still be going for at least another 2. If I could render it with my GTX 580 3 GB it would take only 10 minutes, but it's just a bit too much for the onboard memory. 6 GB will solve that problem. I hope Tom's benchmarks this Titan against, say, a Quadro 5000, a GTX 580 3 GB, and maybe even a GTX 680 4 GB in something like Iray or Octane.
Also, it won't be long before games are using this kind of tech, or Nvidia OptiX. I think Nvidia knows what it's doing and must see a market for this kind of card, or why would they bother?
 

jtenorj

Distinguished
@BestJinjo: building on what dragonsqrrl said, there have been a number of times performance has doubled (or more) going from one generation's single-GPU flagship to the next generation's single-GPU flagship on the same process: 8500 Pro to 9700 Pro (150 nm), FX 5950 to 6800 Ultra (130 nm), 7900 GTX to 8800 GTX (90 nm), and HD 3870 to HD 4870 (55 nm). Those are off the top of my head. 7900 GTX to 8800 GTX and HD 3870 to HD 4870 did use more power, though.
 