Report: Nvidia To Launch GK104-based GTX 660Ti in August

Dangi

Honorable
Mar 30, 2012
192
0
10,690
Great news! With this we can expect the HD 7800 series to become cheaper, as occurred before with the HD 7900 series when the GTX 670 and GTX 680 appeared.
 
Hmm, a pair of these should run like a dream for any single-monitor needs. 7800 prices will finally drop a little. All in all, good news :)

Wonder how the 700 series will match up to AMD's 8000 series. If the 700 series is indeed GK110-based, then the folks at AMD will have a much better handle on what its performance might be like. Hopefully they use this to their advantage.
 

freggo

Distinguished
Nov 22, 2008
2,019
0
19,780
Just building a new system, so maybe an ATI price cut coming soon? Would be perfect timing.
Also, isn't there constantly something new or updated that gets followed by a price cut somewhere else?

God, I wish the car companies would do that too. I'd get myself an 'older' Mercedes convertible from the bargain bin :)
 
[citation][nom]esrever[/nom]wonder how many times they can cut that die down[/citation]

Nvidia often does it up to three times, and they could cut it down as far as they want to. Heck, if they had made a full GK100 die, they could have built every card from it and cut it down all the way to the bottom of the Kepler lineup. It would almost definitely be a very bad approach, but it can be done.
 
[citation][nom]john_4[/nom]Have used both AMD and Nvidia throughout the years in my gaming rig builds that goes for AMD CPUs too. I think allot of it comes down to who has what out at the time when you do your build and how much your willing to pay. I build my rigs on the higher end side without going extreme but still try to make sure to get as much bang for buck as possible. Usually spend around $300 - $350 each for the CPU and Video. Last time I checked the old Athlon 64x2 with a ATI card with 128Mb on-board (before AMD bought them out). Was a good system for the time and is still running strong, no breakage. If I were to build one right now this new Nvidia 660Ti would be on the top of my list for consideration paired with an intel i7 CPU.My Gigabyte 560Ti is still running strong with my older Q9650 CPU right now and until the new kiddie consoles release I see no reason to build a new rig.[/citation]

i7 ~= i5 in desktop gaming performance in all modern games, and that probably won't change any time soon... Even if it did, I don't see any way that you could max out the i5 with even two GTX 660 Tis in SLI, so you probably wouldn't get any benefit out of it.
 

rocknrollz

Distinguished
Nov 16, 2011
750
0
19,010
Nvidia really stood out this time, I must admit. They did great with the 670, but the 680 and 690 were nothing amazing (still awesome cards, though).

Now we hear that near the end of the year they are releasing their mid-range cards? IMO, they are too late, as AMD is already talking about the 8000 series. Hopefully Nvidia follows the same strategy as AMD and releases its cards on a month-by-month basis.
 

atikkur

Distinguished
Apr 27, 2010
327
0
18,790
Then next year (2013) will belong to the 700 series... I can wait a year. I just sense that this 600 series is still not the optimal version; Nvidia is only at its best on the second or third cycle of revisions.
 

atikkur

Distinguished
Apr 27, 2010
327
0
18,790
[citation][nom]matto17secs[/nom]No mention of the 650 Ti and 660 non-Ti also due to be released at the same time.[/citation]

If I were Nvidia, I would just let the 500 series co-exist with the 600 series to cover the performance tiers under the 660 Ti, then focus on developing the 700 series for next year or the next iteration.
 
[citation][nom]atikkur[/nom]if i were nvidia, i just let 500 series to co-exist with 600 series to serve the performance under 660ti. then focus to develop 700 series for next year or next iteration.[/citation]

That's not a very bad idea, but it means that Nvidia would leave older, much more power-hungry cards as their mid/low end lineup competing against AMD's comparatively power-sipping cards that also tend to be cheaper.
 

icemunk

Distinguished
Aug 1, 2009
628
0
18,990
[citation][nom]blazorthon[/nom]That's not a very bad idea, but it means that Nvidia would leave older, much more power-hungry cards as their mid/low end lineup competing against AMD's comparatively power-sipping cards that also tend to be cheaper.[/citation]

Nvidia makes nice cards but they really do need to work on their power and heat efficiencies. AMD has them beat hands-down in that area at the moment. I'm not a fan of having an extra air conditioner to cool my room, just because of my video card.
 

kristoffe

Distinguished
Jul 15, 2010
153
12
18,695
Checking scores, it's almost as if the CUDA cores are really just getting crammed into these GK104s and not accessed properly by the design. If this were proper parallel architecture, they would KILL the 560 Ti (I have two 2GB 560 Tis in each of my rendering systems), and that is not the case. Nvidia is simply engineering marketing now to keep up with ATI's 'streams', when in fact the 560 Ti 2GB was killing it and the power draw was reasonable.

1344~1536 cores should show a parallel processing advantage of at least 4~5x over the 560 Ti, and the scores on various sites are just pathetic. Hopefully someone comes out with a nice hack to enable or properly access the cores; otherwise, what is the point?

And this new 660 Ti with only 1.5GB? What, they can't afford to put 2GB on it? Really? There's supposedly a 4GB custom 680 out there (which I have read about but never seen in real life).

yawn
 
These most likely are not a different, smaller die; they are probably the same die as the 680.
Nvidia just had to wait long enough to accumulate enough defective chips to launch the 660 Ti model.
All chip fabricators do this to sell defective chips that could not otherwise be sold.
 
Hopefully TSMC is able to sort out its fabrication issues and figure out why it's having so many problems with 28nm transistors. That's really the bottleneck for both GPU companies.
 
[citation][nom]kristoffe[/nom]Checking scores, it's almost as if the cuda cores are really just getting slammed in these 104's and not accessed properly in design. If they were a sign of proper parallel architecture, they would KILL the 560Ti, which I have 2 x 2gb in each of my rendering systems. It is not the case. Nvidia is simply engineering marketing now to keep up with ati's 'streams' when in fact the 560Ti 2gb was killing it and the power draw was reasonable.1344~1536 should show a parallel processing advantage of at least 4~5x that of the 560Ti and the scores on various sites are just pathetic. Hopefully someone comes out with a nice hack to enable or properly access the cores, otherwise, what is the point? And this new 660Ti with only 1.5gb, what they can't afford to put in parallel 2gb? ORLY? 4gb for a great custom 680 (which I have read about but never seen IRL)yawn[/citation]

That's not how it works. First off, these cores are not the same as the cores in the 560 Ti. They are optimized for single-precision math and aren't even capable of double-precision math. They are also only about half as fast per core as the older ones (although much more power-efficient, and not only because of the die shrink) due to the abandonment of the inefficient hot-clocking scheme used previously. The double-precision capability of the GK104 comes only from a small number of 64-bit Kepler CUDA cores that don't do single-precision math. Since games run on single-precision math, double precision was not prioritized. This is why the Kepler cards are somewhat more power-efficient than AMD's GCN-based Radeon 7000 cards: they are designed purely for gaming performance, and that is what they excel at, as long as their VRAM's limited bandwidth doesn't cause too severe a problem.
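
For a rough sense of the gap, here is a back-of-the-envelope throughput sketch. The core counts and clocks are the commonly published reference specs (the 660 Ti figures are the rumoured ones), not numbers from this article:

[code]
# Back-of-the-envelope FP32 throughput: cores * clock * 2 FLOPs/cycle (one FMA).
# Assumed reference specs, not figures from the article; the 660 Ti row is the rumour.
def peak_gflops(cores, clock_mhz):
    return cores * clock_mhz * 2 / 1000.0

cards = [("GTX 560 Ti", 384, 1645),           # Fermi cores run at the 1645 MHz hot clock
         ("GTX 680", 1536, 1006),             # Kepler cores run at the ~1006 MHz base clock
         ("GTX 660 Ti (rumoured)", 1344, 915)]

for name, cores, mhz in cards:
    print(f"{name:24s} ~{peak_gflops(cores, mhz):.0f} GFLOPS")
[/code]

On paper that's roughly 2x the 560 Ti for the rumoured 660 Ti, not the 3.5x a naive core-count comparison suggests, because each Kepler core does its work at the base clock rather than a doubled hot clock.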

Furthermore, there is 1.5GB because it has a 192-bit bus instead of a 256-bit bus... GDDR5 chips have 32-bit interfaces, so do the math on how many chips a narrower bus supports: that's right, six. Six chips times 256MiB per chip means 1.5GB of VRAM. 512MiB chips are much more expensive than 256MiB chips. For example, 8GB DDR3 memory modules use 512MiB chips, and although their prices have improved substantially in the last few months, they are still often much more expensive than a similar 2x4GiB kit. Also, there is a 4GiB GTX 670 at Newegg, not that it matters, because in any gaming situation where you can actually use that much VRAM, the memory bandwidth holds the card back so badly that you'd hate to compare it to a multi-7950 OC or multi-7970 setup... There might be a 4GB GTX 680 out by now, but I don't really care to check, and like I said, it doesn't really matter.
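
As a quick sketch of that arithmetic (the bus widths and chip sizes are the usual GDDR5 configurations, assumed here for illustration rather than taken from the article):

[code]
# VRAM arithmetic for GDDR5: each chip exposes a 32-bit interface, so
# bus width / 32 = chip count, and chip count * per-chip capacity = total VRAM.
def vram_config(bus_bits, chip_mib=256, chip_bus_bits=32):
    chips = bus_bits // chip_bus_bits
    return chips, chips * chip_mib / 1024.0

for bus in (192, 256, 384):
    chips, gib = vram_config(bus)
    print(f"{bus}-bit bus -> {chips} chips x 256 MiB = {gib:.1f} GiB")
# 192-bit ->  6 chips -> 1.5 GiB (the rumoured 660 Ti)
# 256-bit ->  8 chips -> 2.0 GiB (GTX 670/680)
# 384-bit -> 12 chips -> 3.0 GiB (HD 7900 series)
[/code]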

So, there's not a problem with CUDA cores being improperly accessed... The problem is that you don't know the situation. Beyond that, you ignore the other factors in performance... I guess you didn't know that increasing the core count does not give a linear increase in performance, and that there are other limits, such as memory bandwidth. Heck, that's all ignoring any CPU bottlenecks and other bottlenecks not directly related to the graphics card that can hold back performance.
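
To illustrate that last point, here is a toy roofline-style model; the figures are round, made-up numbers, not benchmarks. Once a workload is limited by memory bandwidth, adding compute gains nothing:

[code]
# Toy roofline model: attainable throughput is the lesser of peak compute and
# memory bandwidth * arithmetic intensity. All figures are made up for illustration.
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

peak, bandwidth, intensity = 2500.0, 144.0, 8.0   # hypothetical GPU and workload
print(attainable_gflops(peak, bandwidth, intensity))       # 1152.0 -> memory-bound
print(attainable_gflops(peak * 2, bandwidth, intensity))   # 1152.0 -> doubling cores gains nothing
[/code]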
 
[citation][nom]boiler1990[/nom]Hopefully TSMC is able to stave off their fabrication issues and figure out why they're having so many issues with 28nm transistors. That's really the bottleneck for both GPU companies.[/citation]

The process node number refers to the smallest feature size on the process (roughly the half-pitch or gate length), not to the size of every transistor. Transistor sizes vary between different types of transistors, even on the same process technology and node.

[citation][nom]icemunk[/nom]Nvidia makes nice cards but they really do need to work on their power and heat efficiencies. AMD has them beat hands-down in that area at the moment. I'm not a fan of having an extra air conditioner to cool my room, just because of my video card.[/citation]

Actually, Nvidia seems to have the advantage in power efficiency when it's Kepler versus GCN. It's only the many cards still using previous-generation GPU architectures that lose to AMD in power efficiency. Although I'm sure it wasn't the only reason behind Nvidia optimizing the Kepler FP32 GPUs as they did, those GPUs really are a leap ahead of Nvidia's prior GPUs in power consumption and efficiency, to the point where they beat AMD's more compute-oriented GCN GPUs in gaming power efficiency.
 


If it does use GK104 as the article suggests, then yes, it is a die-harvested GK104: a chip that is more defective, and/or bins worse on voltage, than the die-harvested GK104s that go into the GTX 670, which in turn didn't make it into a GTX 680 for the same reasons.
 