Report: Nvidia To Launch GK104-based GTX 660Ti in August


A Bad Day

Distinguished
Nov 25, 2011
[citation][nom]atikkur[/nom]If I were Nvidia, I'd just let the 500 series co-exist with the 600 series to serve the performance tiers under the 660 Ti, then focus on developing the 700 series for next year or the next iteration.[/citation]

They would have to drop the price even lower, as older cards tend to have lower power efficiency than newer models, and we're talking about Nvidia, which has a reputation for lower power efficiency than AMD.
 

gsxrme

Distinguished
Mar 31, 2009
[citation][nom]kristoffe[/nom]Checking scores, it's almost as if the CUDA cores are really just getting slammed in these 104s and not accessed properly in the design. If they were a sign of proper parallel architecture, they would KILL the 560 Ti, of which I have 2 x 2GB in each of my rendering systems. That is not the case. Nvidia is simply engineering marketing now to keep up with ATI's 'streams' when in fact the 560 Ti 2GB was killing it and the power draw was reasonable. 1344~1536 cores should show a parallel processing advantage of at least 4~5x that of the 560 Ti, and the scores on various sites are just pathetic. Hopefully someone comes out with a nice hack to enable or properly access the cores; otherwise, what is the point? And this new 660 Ti with only 1.5GB... what, they can't afford to put in 2GB? ORLY? 4GB for a great custom 680 (which I have read about but never seen IRL). Yawn.[/citation]


Nvidia's GPUs are currently better than AMD's when it comes to FPS per watt. The 7970 GHz Edition is a 300 W card and it is entirely on par with the GTX 680, which draws 50 W less.

Most likely, whatever intelligence Nvidia has on AMD found that AMD is about to launch something big, maybe a quiet HD 8000 series, and Nvidia needs to forget about the 600s and jump right to the 700s. This isn't like Nvidia, giving up on a line so fast. Something is clearly going on that we don't know about yet.
 
[citation][nom]gsxrme[/nom]Nvidia's GPUs are currently better than AMD's when it comes to FPS per watt. The 7970 GHz Edition is a 300 W card and it is entirely on par with the GTX 680, which draws 50 W less. Most likely, whatever intelligence Nvidia has on AMD found that AMD is about to launch something big, maybe a quiet HD 8000 series, and Nvidia needs to forget about the 600s and jump right to the 700s. This isn't like Nvidia, giving up on a line so fast. Something is clearly going on that we don't know about yet.[/citation]

Not entirely. Give it a higher-resolution benchmark and watch the 680 quickly fall behind. Triple 1080p is not too expensive a display setup, and the 7970 tends to beat the 680 given particularly heavy workloads at high resolutions with maxed-out texture quality, especially with high AA/AF on top. GK104 is a faster GPU than Tahiti for gaming, but its much lower VRAM bandwidth leaves it choked, so increasing the VRAM pressure lets the 7970 pull ahead because it has no such bottleneck.
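For a rough sense of the pixel load involved, here's a quick sketch (just arithmetic, not benchmark data):

```python
# Pixel-count comparison: one 1080p screen vs. a triple-1080p surround setup.
single = 1920 * 1080      # 2,073,600 pixels
triple = 3 * single       # 6,220,800 pixels

print(f"Triple 1080p pushes {triple / single:.0f}x the pixels of one screen")
# Every extra pixel adds framebuffer and texture traffic, which is why
# VRAM bandwidth matters more as resolution (and AA) climb.
```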

This is why AMD doesn't need monster dies to compete with Nvidia, the way Nvidia used monster dies in its previous generation to compete with and beat AMD's flagship single-GPU card in maximum performance. It is also why GK104-based cards can truly soar past the AMD Tahiti cards in workloads that aren't VRAM-bandwidth-heavy, such as low-resolution gaming. No amount of driver tweaking can change the fact that Tahiti has much lower gaming performance, but the same is true of how a driver tweak won't solve GK104's VRAM bandwidth deficiency.

A good example that we should all be aware of by now is Llano and Trinity. Trinity is even more VRAM-bottlenecked than Llano as far as I can tell, and we've seen how just increasing the RAM frequency from 1333MHz to 1866MHz gives a greater increase in performance than a more-than-50% overclock on the GPU, because the GPU is so RAM-bandwidth-starved.
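As a back-of-the-envelope check on why the RAM speed bump matters so much, here's a minimal sketch assuming the standard 128-bit dual-channel desktop configuration:

```python
# Peak theoretical bandwidth of a dual-channel (128-bit) DDR3 setup:
# transfers per second x bus width in bytes.
def ddr3_bandwidth_gbs(mt_per_s: int, bus_bits: int = 128) -> float:
    """Peak bandwidth in GB/s for a given DDR3 transfer rate."""
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

for rate in (1333, 1866):
    print(f"DDR3-{rate}: {ddr3_bandwidth_gbs(rate):.1f} GB/s")
# DDR3-1333: 21.3 GB/s
# DDR3-1866: 29.9 GB/s  -> roughly 40% more bandwidth for the shared IGP
```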
 
[citation][nom]confused1265[/nom]Just ordered a $299 HD 7870...should I cancel?[/citation]

You might want to, if you can wait until August to find out whether this is true and whether this card has any advantages over the 7870 (maybe it does, maybe it doesn't). However, it's your chance to take or not. You might end up just wasting several weeks of your time and having to get the 7870 anyway. Or you might have more money by then and be able to get a better card... or something else entirely. Regardless, it's your choice to make.
 

tomfreak

Distinguished
May 18, 2011
[citation][nom]confused1265[/nom]Just ordered a $299 HD 7870...should I cancel?[/citation]If that's the case, time to jump to the ATI 7700 series then. The GT 640 with GK107 does NOT beat the 7750 (you can take the GTX 660M as a benchmark), and it's being crushed by the 7770 while the GT 640 DDR3 is already consuming a 75 W TDP. Looks like AMD GCN is far more efficient everywhere, even in compute lol....
 
[citation][nom]Tomfreak[/nom]If that's the case, time to jump to the ATI 7700 series then. The GT 640 with GK107 does NOT beat the 7750 (you can take the GTX 660M as a benchmark), and it's being crushed by the 7770 while the GT 640 DDR3 is already consuming a 75 W TDP. Looks like AMD GCN is far more efficient everywhere, even in compute lol....[/citation]

AMD 7870, not ATI 7700 series. Also, the GT 640 DDR3 undoubtedly only loses to the 7770 because of its huge memory bottleneck. The GDDR5 version might beat the 7770 quite significantly. Kepler FP32 is a less power-hungry architecture than GCN, but Nvidia is giving their cards huge memory bottlenecks that hold them back, while letting the GPU consume as much power as it would if it had faster memory that let it stretch its performance out.

Note that this is for gaming performance, not compute. For compute performance, the Kepler FP32 architecture is only capable of single-precision work, and it does poorly at that compared to GCN. Kepler GPUs rely on a small number of Kepler FP64 cores for the double-precision math that many compute programs use. This is a huge part of why they perform so poorly compared to even Fermi, though architecturally, I'm not sure the FP64 hardware is as good at compute as GCN anyway. Point is, GCN is not more efficient in gaming compared to Kepler the way it is in compute.
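A rough sketch of the theoretical throughput behind that claim (counting an FMA as two ops; the 1/12 and 1/24 FP64 rates are the commonly cited figures for GF114 and GK104, so treat them as assumptions rather than anything from this thread):

```python
# Theoretical peak throughput: cores x shader clock (GHz) x 2 ops (FMA).
def gflops(cores: int, shader_ghz: float) -> float:
    return cores * shader_ghz * 2

# (cores, shader clock in GHz, FP64 rate as a fraction of FP32)
cards = {
    "GTX 560 Ti (GF114)": (384, 1.644, 1 / 12),
    "GTX 680 (GK104)": (1536, 1.006, 1 / 24),
}

for name, (cores, clk, fp64_rate) in cards.items():
    fp32 = gflops(cores, clk)
    print(f"{name}: {fp32:.0f} GFLOPS FP32, {fp32 * fp64_rate:.0f} GFLOPS FP64")
# GK104 is far ahead in single precision but lands in the same ballpark
# as Fermi in double precision -- hence the compute complaints.
```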
 

Marcus52

Distinguished
Jun 11, 2008
[citation][nom]blazorthon[/nom]i7 ~= i5 in desktop gaming performance in all modern games and that probably won't change any time soon... Even if it did, I don't see any way that you could max out the i5 with even two GTX 660 Tis in SLI, so you probably wouldn't get any benefit out of it.[/citation]

This is only true in a general way; not all modern games run as well on an i5 as on an i7. World of Warcraft, for example, takes advantage of Sandy Bridge-E in a way most FPS-type games don't. (And if you're thinking WoW isn't modern, its graphics engine was recently upgraded to DX11.)

As I always say, look at your applications and do some research; build to your needs, not what people say you should do.

;)
 
[citation][nom]Marcus52[/nom]This is only true in a general way; not all modern games run as well on an i5 as on an i7. World of Warcraft, for example, takes advantage of Sandy Bridge-E in a way most FPS-type games don't. (And if you're thinking WoW isn't modern, its graphics engine was recently upgraded to DX11.) As I always say, look at your applications and do some research; build to your needs, not what people say you should do.[/citation]

Hence the ~. Beyond that, there is no way that a GTX 660 Ti is more than an i5 can handle in WoW or any other game. An i5, maybe with overclocking, should be able to handle even two 660 Tis in SLI just as well as an i7 would, except maybe in a few games that can use more than four threads, and even then only in situations where the graphics load is intentionally kept low so that the CPU becomes the bottleneck. The point was that an i5 is enough for his computer, and the i7 is not going to make much of a difference, let alone a noticeable one, if any at all.

Another thing about the DX11 WoW... It is even less CPU-bound than WoW in DX9, at least from what I've seen.
 
Guest
Last GK104? What, no 650 Ti or 660? That sucks; I was hoping for something in that price range made on the lower-power 28nm process. I guess I could get a Radeon 7770 or a 7850, but I was hoping to get a GeForce for PhysX support in Borderlands 2. Guess I'm stuck getting a 560 or 560 Ti.
 

tomfreak

Distinguished
May 18, 2011
[citation][nom]blazorthon[/nom]AMD 7870, not ATI 7700 series. Also, the GT 640 DDR3 undoubtedly only loses to the 7770 because of its huge memory bottleneck. The GDDR5 version might beat the 7770 quite significantly. Kepler FP32 is a less power-hungry architecture than GCN, but Nvidia is giving their cards huge memory bottlenecks that hold them back, while letting the GPU consume as much power as it would if it had faster memory that let it stretch its performance out. Note that this is for gaming performance, not compute. For compute performance, the Kepler FP32 architecture is only capable of single-precision work, and it does poorly at that compared to GCN. Kepler GPUs rely on a small number of Kepler FP64 cores for the double-precision math that many compute programs use. This is a huge part of why they perform so poorly compared to even Fermi, though architecturally, I'm not sure the FP64 hardware is as good at compute as GCN anyway. Point is, GCN is not more efficient in gaming compared to Kepler the way it is in compute.[/citation]GCN is only inefficient on the 7900 series; the 7900s are ROP-bottlenecked. Did you actually check the benchmarks on the GT 640 GDDR5? The GTX 660M is actually a GT 640 with GDDR5, only with an 835MHz clock. It does NOT beat the 5770/7750, let alone the 7770. 900MHz on the desktop is not going to help MUCH, and it is already a 75 W TDP. So Nvidia needs a GK106 to kill the 7770. In other words, GCN is more efficient on the 7700 and 7800 series. Even TechPowerUp shows the 7870 delivering slightly more performance per watt than the GTX 680.
 

atticus14

Distinguished
Apr 14, 2009
Can't wait till the 670 and 680 are midrange parts in the 7xx series; they'll be the same cards we see today, just underclocked more. I really wanted to bite this gen, but my mind can't get over the depreciation that's about to happen within 6 months.
 

shin0bi272

Distinguished
Nov 20, 2007
So I guess the 685 will be the 780 after all. I'll have to grab a 780 when it comes out... it should last at least 3 or 4 years thanks to the console ports. And with them throwing in the towel, so to speak, on the 6 series, it might as well be an announcement that the 700 series is beginning production. That sort of makes the 600 series the new 400 series, doesn't it? LOL, obsolete in 9 months... sorry to everyone who bought a 680 or 690 the day they came out.
 
[citation][nom]Tomfreak[/nom]GCN is only inefficient on the 7900 series; the 7900s are ROP-bottlenecked. Did you actually check the benchmarks on the GT 640 GDDR5? The GTX 660M is actually a GT 640 with GDDR5, only with an 835MHz clock. It does NOT beat the 5770/7750, let alone the 7770. 900MHz on the desktop is not going to help MUCH, and it is already a 75 W TDP. So Nvidia needs a GK106 to kill the 7770. In other words, GCN is more efficient on the 7700 and 7800 series. Even TechPowerUp shows the 7870 delivering slightly more performance per watt than the GTX 680.[/citation]

The GTX 670 has much higher performance per watt than the GTX 680. That puts a hole in your theory, since it is also more power-efficient than the Radeon 7870, which uses a little less power but is significantly slower. Yes, the 7700 and 7800 series are more efficient than the 7900 series, and I never denied that. However, they still use the same architecture and are not really as efficient as Kepler can be. If Kepler cards weren't made with such large memory-bandwidth bottlenecks, they could be far more efficient than they already are.
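To make the metric concrete, here's the arithmetic (the TDPs are the official board powers; the relative-performance indices are placeholders I made up purely to show the calculation, not measurements):

```python
# Performance-per-watt arithmetic. TDPs are the official board powers;
# the performance indices are PLACEHOLDERS to show the calculation only.
cards = {
    # name: (hypothetical relative performance index, TDP in watts)
    "GTX 670": (100, 170),
    "GTX 680": (108, 195),
    "HD 7870": (85, 175),
}

for name, (perf, tdp) in cards.items():
    print(f"{name}: {perf / tdp:.2f} perf/W")
# Whatever real index you plug in, a card that is only slightly slower but
# draws much less power (the 670 here) comes out ahead on this metric.
```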

Also, you're completely wrong about the GTX 660M and the GT 640 GDDR5 being the same. The GT 640 GDDR5 has 80GB/s of VRAM bandwidth; the GTX 660M has only 64GB/s. The GT 640 GDDR5 has a 950MHz GPU clock; the GTX 660M has an 835MHz clock. So the GT 640 GDDR5 has a 14% advantage in GPU frequency and a very important 25% VRAM bandwidth advantage over the GTX 660M. That should amount to a roughly 20-30% gaming performance advantage over the GTX 660M. It is an excellent contender in performance against the slightly more power-hungry Radeon 7770 GHz Edition and has a good chance of slightly beating the 7770, especially at the same memory and GPU clock frequencies, where in a clock-for-clock comparison the GT 640 GDDR5 should win by 10-15%.
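Sanity-checking those percentages with the figures quoted above:

```python
# Sanity-checking the advantages quoted above (figures from the post).
gt640_bw, gtx660m_bw = 80, 64        # GB/s of VRAM bandwidth
gt640_clk, gtx660m_clk = 950, 835    # MHz GPU clock

bw_adv = (gt640_bw / gtx660m_bw - 1) * 100
clk_adv = (gt640_clk / gtx660m_clk - 1) * 100
print(f"VRAM bandwidth advantage: {bw_adv:.0f}%")  # 25%
print(f"GPU clock advantage: {clk_adv:.0f}%")      # ~14%
```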
 


You're the troll here if you think that, even in single precision, the 670 should be 4-5 times faster than a 560 Ti just because of the difference in core count, especially since the cores aren't even comparable. Alright then, I'll do the math for you, since instead of doing it yourself, you fail to realize that what I've said is correct.

GTX 560 Ti reference specifications (GPU cores and memory):
GPU
Core count- 384
Frequency- 822MHz GPU / 1644MHz shader frequency
Memory
Interface- 256-bit GDDR5
Frequency- 1336MHz

GTX 670 reference specifications (GPU cores and memory):
GPU
Core count- 1344
Frequency- 915MHz GPU and shader frequency + small Turbo
Memory
Interface- 256-bit GDDR5
Frequency- 1502MHz

First off, we can clearly see that each core in a Fermi card is worth approximately two Kepler cores at the same GPU frequency, because Fermi's shader cores run at twice the GPU clock (the hot clock) while Kepler's run at the GPU clock itself. So, to get a fair comparison against a Fermi GPU, treat the GTX 670 as having half its listed core count.

So, it's more like comparing 384 cores at 822MHz to 672 cores at 915MHz.
Simple math gets us 384 x 822 = 315,648 and 672 x 915 = 614,880. So, at best, the GTX 670 could only be almost twice as fast as the GTX 560 Ti in single-precision math (it doesn't even come close in double precision) unless other changes were made to compensate. However, sometimes simple math is too simple. Core-count increases do not scale perfectly (they scale worse and worse as the core count grows), so the GTX 670 can't even be twice as fast as the GTX 560 Ti. Heck, GTX 560 Ti SLI is almost on par with the GTX 670 on average.
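That arithmetic as a quick script (same numbers as above):

```python
# Mirror of the math above: Fermi cores run at 2x the GPU clock, so halve
# the Kepler core count to compare like with like.
fermi_cores, fermi_mhz = 384, 822      # GTX 560 Ti
kepler_cores, kepler_mhz = 1344, 915   # GTX 670

fermi_score = fermi_cores * fermi_mhz             # 315,648
kepler_score = (kepler_cores // 2) * kepler_mhz   # 672 x 915 = 614,880

print(f"Best-case ratio: {kepler_score / fermi_score:.2f}x")  # ~1.95x
# Even before imperfect core-count scaling, the ceiling is under 2x.
```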

EDIT: I forgot to add that although GTX 560 Ti SLI is about on par with a single GTX 670, keep in mind that GTX 560 Tis don't have the best scaling, although it is fairly good. I don't remember the exact average across games, but I think it's somewhere in the 75% to 85% range, close to (I think slightly ahead of) the scaling of the somewhat improved VLIW5 GPUs in the Radeon 6800 cards. The point is that the 670 is not even twice as fast as the 560 Ti, and if I had to name causes, the two major ones are the 670's inferior memory bandwidth relative to its GPU performance and the fact that core-count increases do not scale as linearly as clock-frequency increases. There is actually a well-studied law about this (Amdahl's law) with a decent wiki, for what that's worth.

Furthermore, just because CUDA supports single-precision math doesn't mean that a program using CUDA is doing single-precision math. Sometimes single precision just isn't good enough, and a CUDA-accelerated program must use double-precision math (or better, but that's not relevant to this conversation). In such workloads, the GTX 670 and 680 won't beat a GTX 560 Ti like they do in single precision. Funny that someone who works with CUDA doesn't know something so simple. Beyond that, I never said that CUDA only has double-precision tools anyway. Heck, you didn't even specify whether your application uses double or single precision, but considering that the GTX 670 has higher single-precision throughput, if the GTX 670 is beaten by a GTX 560 Ti, chances are the workload is double precision.

Regardless, even in single precision, there's no way for the GTX 670 to reasonably be much more than about double the performance of a GTX 560 Ti, and I'm including the GTX 670's Turbo in that number.
 
Guest
"Nvidia's GPU are currently better than AMD when it comes to FPS/Watt. The 7970 ghz Edition is 300watt and it is entirely on pair with the GTX680 that is 50watts less.

The real thing Nvidia spys in AMD most likely found AMD is about to launch some real shit maybe a quite HD8000 series and Nvidia needs to forget about the 600s and jump right to the 700s. This isn't like Nvidia to give up on a line so fast. Something is clearly going on that we don't know yet."


Tip: the GHz Edition never draws 300 watts, not even when clocked at 1300/7600... it only draws 247 W. 300 watts is an artificial number assumed from the fact that it has dual 8-pin connectors. Sure, there's headroom on the Sapphire Toxic cooler, but throwing any more power at it is just waste. You should think of it as a 250 W card with an over-the-top power connection that isn't needed.
 

pravania

Distinguished
Jun 18, 2011
My whole system's power consumption maxes out at 420 watts with 2x EVGA GTX 670 4GB cards at a max of 1250MHz core. That's what I call efficient. My old 460s in SLI used nearly the same power but gave 75% less performance.
 