[citation][nom]mosu[/nom]simple math: 512 CUDA cores=250watts 1536CUDA cores=750Watts, assuming that 28nm tech gives them a 40% reduction on power usage, will consume at least 500 watts...not feasible.[/citation]
That isn't how it works. Kepler's shaders aren't hot-clocked, so each one runs at the base GPU clock instead of double it: roughly half the per-shader throughput, but well under half the power. Dynamic power scales with frequency and with the square of voltage, and since lower clocks also allow lower voltages, cutting the clock rate reduces power consumption much faster than linearly.
Then there's the die shrink, which you mentioned, but we don't know exactly how much it reduces power consumption. It's pretty much impossible to say how much power these cards will actually draw, but they will undoubtedly be far more efficient than Fermi.
On top of all that, there may be other architectural improvements as well.
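To make the scaling argument concrete, here's a rough back-of-the-envelope sketch using the standard simplified dynamic-power model (P ≈ C·V²·f). All the voltage, clock, and node-savings numbers are illustrative assumptions, not actual Fermi or Kepler figures:

[code]
def dynamic_power(capacitance, voltage, frequency):
    """Simplified CMOS dynamic power model: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

# Hypothetical hot-clocked Fermi-style shader: runs at 2x the GPU clock
# and needs a higher voltage to sustain it. Numbers are made up.
fermi_shader = dynamic_power(capacitance=1.0, voltage=1.05, frequency=1544e6)

# Hypothetical Kepler-style shader: no hot clock, so it runs at the base
# GPU clock and can get by with a lower voltage. Numbers are made up.
kepler_shader = dynamic_power(capacitance=1.0, voltage=0.95, frequency=772e6)

print(f"relative power per shader: {kepler_shader / fermi_shader:.2f}")
# -> roughly 0.41, i.e. well under half the power for half the throughput,
#    because power falls with frequency AND with the square of voltage.

# Layer on an assumed ~40% power reduction from the 40nm -> 28nm shrink:
kepler_28nm = kepler_shader * 0.6
print(f"with assumed node savings: {kepler_28nm / fermi_shader:.2f}")
# -> roughly 0.25 of the per-shader power in this toy model.
[/code]

So even before counting any other architectural improvements, tripling the shader count does not mean tripling the power in this kind of model.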
[citation][nom]4745454b[/nom]So seeing as we've all seemed to have missed the obvious...GF104/114 doesn't have 512 cores. That's the big one (GF100/110) that has that many. 1536 might be the top end for Kepler, but GK104 isn't (or rather shouldn't be unless nvidia has changed their naming scheme.) the top end but the mid range. This seems to have some mixing of info.[/citation]
There seems to be some confusion about this. I don't think this is the top-end Kepler card; Nvidia wouldn't concede the single-GPU performance crown to AMD after holding it for so long. This is more likely the replacement for the GTX 560, 560 Ti, or GTX 570. Given the gap between the 6970 and the 7970, that doesn't seem far-fetched, especially since Nvidia made more changes than just a die shrink; dropping the hot clock makes a real difference, as AMD's designs have shown.
Nvidia will probably build a GPU with 2048 or 2560 shaders, or some similar count, as the GK100. I expect Nvidia's top single-GPU card to beat the 7970 at some point, but by then the 7970 and other cards will either have already come down in price or will drop when it launches.
AMD would need to move to larger dies to compete with Nvidia for the top spot, although AMD seems to get slightly better performance per square millimeter of die with the 6000 series than Nvidia does with the 500 series. I expect that to change with Kepler, though, and think the two will end up fairly similar.
Now, will Nvidia pay proper attention to the low end and lower mid-range? If not, AMD will continue to dominate there.