[citation][nom]BestJinjo[/nom]@ blazorthon,NV can release a 1Ghz GK110 part with 12-13 SMX clusters OR a downclocked 2880 SP GK110, but they cannot do both at the same time. Otherwise, you end up with a 550mm^2 1Ghz part that will use 250-275W of power. I doubt they want to see a repeat of Fermi. GTX680 already uses about 180-185W of power under gaming. You cannot almost double the size of the chip on the same 28nm node and maintain the same 1Ghz clocks, while adding GPGPU/double precision/dynamic scheduler functionality and only grow your power consumption 70W. It's just not possible. Even when you look at GTX660Ti vs. GTX680, by chopping off 25% memory bandwidth and ROPs, and still reducing the shader count 14% from 1536, it dropped power consumption 40W from the 680: http://www.techpowerup.com/reviews [...] am/26.html Now think about a chip with 1Ghz clocks, full GPGPU capability, 48 ROPs, 2880 SPs, 240 TMUs, and 384-bit memory bus? It's going to be using 275-300W of power unless 28nm node has some trick up its sleeve.[/citation]
Your 660 Ti versus 680 example is deeply flawed. The 670 performs roughly on par with the 680 while using far less power, despite having the exact same memory configuration. In your own link, the 660 Ti and the 670 draw very similar amounts of power (the 670 slightly more). The power drop isn't from the narrower VRAM interface, especially since the 660 Ti and the 670 both carry eight RAM ICs; the drop from the 680 to the 660 Ti comes almost entirely from the GPU itself. You're also quoting peak power consumption, meaning power under stress tests, not gaming power. The 680 draws much less while gaming than at peak, whereas the 660 Ti changes far less between the two. In average gaming power, the 680 would only use about 20-25% more than the 660 Ti, and that's almost exactly the difference in GPU performance.
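The back-of-envelope comparison can be sketched like this. Note the wattage and relative-performance figures here are rough assumptions pulled from the numbers thrown around in this thread, not measured values:

```python
# Rough check: does the 680's extra gaming power roughly match its extra
# performance over the 660 Ti? Figures below are assumed, not measured.
cards = {
    "GTX 660 Ti": {"gaming_watts": 150.0, "rel_perf": 1.00},
    "GTX 680":    {"gaming_watts": 185.0, "rel_perf": 1.22},  # assume ~22% faster
}

base = cards["GTX 660 Ti"]
for name, c in cards.items():
    extra_power = c["gaming_watts"] / base["gaming_watts"] - 1.0
    extra_perf = c["rel_perf"] / base["rel_perf"] - 1.0
    print(f"{name}: +{extra_power:.0%} power, +{extra_perf:.0%} performance")
```

With those assumed numbers the 680 comes out around +23% power for +22% performance, which is the point: under gaming loads, perf per watt barely moves between the two chips.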
The *685* would use more power; that much is without a doubt. However, it would not need to use as much as you claim. If Nvidia doesn't want a repeat of Fermi, the answer is obvious: give the *685* whatever combination of performance and power consumption they want it to have. They could make a card that sits between the 680 and the 690 in both power consumption and performance, and a *Big Kepler* chip would be the way to do it if they don't want a dual-GPU GK106 card.
Also, if Nvidia wanted to, they could build a chip with fewer SMXs than GK110, run it at a lower frequency, or both. Which option they'd pick isn't my choice to make; I'm only pointing out that the choices are on the table. Nvidia would have to make that call if they release a faster single-GPU card than the 680 within the GTX 600 series.
However, like I said before, I doubt that they'll do this; the market for it simply isn't thriving. My point is that they could do it, not that they will.