People seem to be really confused about memory bandwidth here, when it's a NON-ISSUE.
Points:
1) The GTX900 series uses memory compression (delta color compression), so it doesn't need as much raw memory bandwidth to move the same data.
2) If the memory weren't being accessed at full speed, the GPU wouldn't be able to operate at 100%; it would sit idle waiting on data.
3) *There's no relationship between memory bandwidth and the AMOUNT of memory. A GPU only talks to a small portion of its memory at a time, not all of it at once. Modern games are starting to use more memory; you won't see much benefit right now, but you probably will in the near future (rough numbers in the sketch after this list).
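To put rough numbers on points 1 and 3, here's a back-of-the-envelope sketch in Python. The bus width and data rate are the GTX960's public specs; the 25% compression saving is NVIDIA's quoted average for Maxwell, treated here as an assumption rather than a measurement. Notice the AMOUNT of VRAM never enters the math.

```python
# Back-of-the-envelope GTX 960 memory bandwidth.
BUS_WIDTH_BITS = 128      # GTX 960 memory bus width (spec)
DATA_RATE_GBPS = 7.0      # effective GDDR5 data rate per pin (spec)

raw = BUS_WIDTH_BITS / 8 * DATA_RATE_GBPS          # GB/s across the bus
print(f"Raw bandwidth:       {raw:.1f} GB/s")      # ~112 GB/s

# Assumed average saving from Maxwell's delta color compression.
COMPRESSION_SAVINGS = 0.25
effective = raw / (1 - COMPRESSION_SAVINGS)
print(f"Effective bandwidth: {effective:.1f} GB/s")  # ~150 GB/s

# Point 3: nothing above depends on the AMOUNT of VRAM.
# A 2GB and a 4GB GTX 960 have identical bandwidth.
```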
*In Watch Dogs (game quality aside), the stutter happened because VRAM would slowly fill up as you drove around. It didn't take very long before more than 2GB was in use, and on a 2GB card that caused severe stutter as the driver started swapping currently unused data back to SYSTEM memory.
So at the time of launch a GTX960 4GB would have run Watch Dogs quite smoothly, whereas a GTX960 2GB would not.
**Why was Watch Dogs coded like this? Because it was designed for the new CONSOLES first, which allow over 5GB of (shared) memory, so it could easily use more than 3GB at times for video content. When they ported it back to PC, they discovered it was very difficult to manage the swapping of video memory to keep usage low (there's a toy model of that swapping cost below).
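To make the swapping cost concrete, here's a toy model. Every number in it is an illustrative assumption, not a measurement from the game; the point is just that PCIe is roughly an order of magnitude slower than on-card GDDR5, so once the streamed working set outgrows VRAM, frame times climb fast.

```python
# Toy model: frame time as the streamed working set outgrows a 2GB card.
# All figures are illustrative assumptions, not measurements.
VRAM_GB = 2.0               # VRAM budget of a 2GB card
PCIE_GBPS = 15.75           # PCIe 3.0 x16 theoretical peak
BASE_FRAME_MS = 16.0        # assumed frame time while everything fits
FRAME_TRAFFIC_GB = 1.0      # assumed data the GPU touches per frame

working_set = 1.5           # GB in use; grows as you drive around
for _ in range(6):
    overflow = max(0.0, working_set - VRAM_GB)
    # Assume misses are proportional to the overflowing fraction of
    # the working set, so that share of traffic crosses PCIe instead.
    miss_gb = FRAME_TRAFFIC_GB * overflow / working_set
    penalty_ms = miss_gb / PCIE_GBPS * 1000.0
    print(f"{working_set:.1f} GB in use -> ~{BASE_FRAME_MS + penalty_ms:.0f} ms/frame")
    working_set += 0.2      # streaming keeps loading more assets
```

On a 4GB card the overflow term stays zero for this entire run, which is the whole 2GB-vs-4GB argument in one line.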
Other:
These cards aren't really comparable, as the GTX960 is more expensive, though it is a much better card. Do I think the 4GB will be of benefit in the near future? Yes. Yes, I do. Unless you think game developers have learned their lesson and from now on we'll always get well-optimized PC versions of games designed to run on the new consoles...
*Frankly, ignore things like "memory bandwidth"; I know how it works and I simply don't care, because all that matters for performance is BENCHMARKS. If I get good performance and the card runs on jelly beans instead of electrons, who cares?