BigMack70 brought up a good point about the VRAM capacity being a cause for concern. Before I say anything about the 680, look at this.
Nvidia has historically had less VRAM than AMD at the same performance level fairly often, and sometimes even at higher performance levels. Let's just look at the GTX 570, 580, and the Radeon 6970 for now.
The 580 and 570 came out around two years ago. Back then, 1.5GB was more than enough at their performance level, even for SLI setups. Even today, it is still enough for single and even dual GPU setups of the 580. However, three 580s (at a resolution, quality settings, and AA/AF appropriate to their performance) can run into a huge VRAM capacity bottleneck. In many cases now, two 580s need the AA/AF turned down because of the VRAM problem (all of this assumes 1.5GB 580s, not 3GB 580s).
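Just to put a rough number on how fast high resolutions and AA eat VRAM, here's a quick back-of-envelope sketch. The resolution, buffer count, and bytes-per-pixel are assumptions I picked for illustration, not measurements from any real game:

```python
# Rough back-of-envelope estimate of render-target VRAM usage.
# Buffer sizes and counts are illustrative assumptions, not measured from a real game.

def render_target_mb(width, height, bytes_per_pixel, msaa_samples=1):
    """Size of one render target in MB at the given MSAA level."""
    return width * height * bytes_per_pixel * msaa_samples / (1024 ** 2)

width, height = 2560, 1600        # a typical 30" monitor resolution for a 580 SLI/tri-SLI rig
msaa = 8                          # 8x MSAA multiplies per-pixel storage

# Hypothetical buffers: color, depth, and two extra render targets at 4 bytes/pixel each.
buffers = [4, 4, 4, 4]
total = sum(render_target_mb(width, height, bpp, msaa) for bpp in buffers)
print(f"Render targets alone: ~{total:.0f} MB")   # ~500 MB before any textures or geometry
```

Crank the resolution or add more render targets and a 1.5GB card doesn't leave much room for textures.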
Like I said, though, this simply was not the case when they first came out, or for a while after that. Even today, the 6970 has enough VRAM for dual and triple GPU setups at resolutions, quality settings, and AA/AF appropriate to their performance. The 6970s, and even the 580s, have clearly had enough VRAM to stay great for quite a while now and still have time left in them, probably enough to wait for the Radeon 8000 and perhaps GTX 700 cards (assuming those are the next series).
So the 1.5GB 580 was not a VRAM-capacity-bottlenecked card, and it still isn't today in most situations. The same can be said about most of Nvidia's cards. Going further back, we can look at the GTX 285 and 295. The 295 has less VRAM capacity than its competitor, the Radeon 4870X2, but that didn't stop it from being great for years to come, did it? Even now it's a decent card for anything up to 1080p, if you're willing to skip DX11 support and accept that it's about half as efficient as cards with similar performance today, such as the Radeon 7850. I think the claims of Nvidia having VRAM-bottlenecked cards were way over-hyped and exaggerated, at least until Kepler.
Back to the 680 and 690. These cards have the same amount of VRAM per GPU as the 6970, and the problem is that they are almost twice as fast per GPU as the 6970. They also have roughly the VRAM bandwidth of the GTX 580, but that is not as big of a deal. It probably means a minor to moderate bandwidth bottleneck, but I don't think it's worth worrying about much; it could be demonstrated by checking whether increasing VRAM bandwidth scales performance better than it does on most other cards.
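To be clear about what I mean by "could be demonstrated": overclock just the memory and compare the fps gain to the clock gain. Here's a minimal sketch of that test, with made-up benchmark numbers standing in for real results:

```python
# Sketch of the memory-overclock test mentioned above, using made-up example numbers.
# If the fps gain tracks the memory-clock gain closely, the card is likely bandwidth-limited.

def bandwidth_scaling(base_clock, oc_clock, base_fps, oc_fps):
    """Fraction of the memory-clock increase that shows up as an fps increase (near 1.0 = bandwidth-bound)."""
    clock_gain = oc_clock / base_clock - 1
    fps_gain = oc_fps / base_fps - 1
    return fps_gain / clock_gain

# Hypothetical result: +10% effective memory clock gives +6% average fps.
print(f"Bandwidth scaling factor: {bandwidth_scaling(6000, 6600, 60.0, 63.6):.2f}")  # ~0.60
```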
The problem is that they are twice as fast as those 6970s but have the same VRAM capacity. I do not think the 680 has the proper VRAM capacity for its GPU performance, and that can and will hamper its longevity as a high end card. If I were to consider buying a 680, it would be a 4GB 680. If I can't get one, I'll compromise by getting a Radeon 7970 or overclocking the hell out of a 7950. An overclocked 7950 seems to land very close to a 7970 even when both are overclocked, probably because the only difference between them is that the 7950 has a block of shaders and such disabled, and increasing the core count does not scale performance very well.
This shows up to a lesser extent with the 6950 and 6970, even without flashing the 6950. The scaling problem gets worse at higher core counts, showing diminishing returns that bear an alarming resemblance to the diminishing returns seen between increasing clock frequency and power usage. Nvidia seems to know this and is actually working around it in their next or second-to-next architecture (there was a recent article about Nvidia patenting the basis for an architecture that should stave off this multi-core parallelism scaling problem).
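Here's a rough illustration of that diminishing-returns effect using an Amdahl's-law style formula. This is only an analogy; the 95% parallel fraction and the unit counts are assumptions for illustration, not measured GPU figures:

```python
# Amdahl's-law style illustration of why adding more parallel units gives diminishing returns.
# The 95% parallel fraction is an assumed example value, not a measured GPU figure.

def speedup(parallel_fraction, units):
    """Ideal speedup over one unit when only part of the workload scales with unit count."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / units)

p = 0.95
for units in (512, 1024, 1536, 2048):   # core counts roughly in the range of these cards
    print(f"{units:5d} units -> {speedup(p, units):5.2f}x")
# Output climbs from ~19.3x to only ~19.8x: each doubling of units adds less and less,
# much like pushing clock frequency versus power draw.
```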
Point is, I'd take an overclocked 3GB Radeon 7950 that performs close to a stock 2GB 680 over the 2GB 680, even if it means considerably higher thermals, just because I don't want to risk needing an early upgrade due to the 680 running out of VRAM well before its GPU is fully loaded. I hope 4GB-per-GPU GTX 680s and 690s are made and reasonably priced, because if not, I'd have to compromise. I'm certain the electricity bill hike would be cheaper than upgrading such a high end graphics card more often than I should have to.
While I agree with BigMack70 on the VRAM issue, I do disagree with his/her opinion of Nvidia's turbo boosted GPUs. I would simply like Nvidia to give us greater control over the boost, so that the people who want it can use it and the people who don't aren't forced to use it if they buy the card. It would also give enthusiasts another thing to play with, and I think many of us would like that 😉