endeavour37a :
So I don't think memory is holding these cores back nowadays anyway.
You need to ask yourself how the newer AMD and Nvidia GPUs managed to get the same performance out of a narrower memory bus, and the answer is new texture compression algorithms. But that sort of gain is a one-time thing; it does not scale with processing power, so you are back to having to add raw memory bandwidth to push performance further. On graphics settings that stress memory bandwidth more heavily, the older models with wider buses often pull ahead. Being clever to reduce costs is nice, but it is not always sufficient to overcome what can be achieved through brute force.
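To make the "one-time gain" point concrete, here is a toy sketch of a delta-style lossless compression scheme of the general kind these GPUs apply to colour data. It is not either vendor's actual algorithm, and the tile size and delta width are made-up assumptions; it just shows where the bandwidth saving comes from and why the ratio is bounded, so it cannot keep growing alongside shader throughput.

```python
# Toy sketch of delta-style lossless colour compression. NOT AMD's or
# Nvidia's real algorithm; block size and delta width are assumptions,
# chosen only to show where the gain comes from and why it is bounded.

def tile_bytes_on_bus(pixels, delta_bits=4):
    """Bytes needed to move one single-channel tile over the memory bus."""
    raw = len(pixels)                        # 1 byte per pixel, uncompressed
    base = pixels[0]
    max_delta = max(abs(p - base) for p in pixels)
    if max_delta < (1 << (delta_bits - 1)):  # deltas fit -> base + packed deltas
        return 1 + len(pixels) * delta_bits // 8
    return raw                               # fallback: send the tile raw

smooth = [100 + (i % 4) for i in range(64)]   # slowly varying tile, compresses
noisy  = [(i * 97) % 256 for i in range(64)]  # effectively random tile, does not

print("smooth tile:", tile_bytes_on_bus(smooth), "bytes vs 64 raw")  # ~2:1
print("noisy tile: ", tile_bytes_on_bus(noisy), "bytes vs 64 raw")   # no gain
```

However many shaders you bolt on, the smooth tile never gets cheaper than that base-plus-deltas size, which is why the saving is a fixed factor rather than something that scales with compute.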
The other problem with compression algorithms is that they cost transistors, die area, power, and latency, and they add scheduling complexity, potential failure points, and so on. If memory bandwidth were not a critical bottleneck, neither AMD nor Nvidia would throw transistors, and everything that goes with them, at something that would just be unnecessary overhead. They had to spend transistor budget on texture compression precisely because memory bandwidth was a bottleneck to producing lower-cost, higher-performance GPUs.
With the 14/16nm shrink, they will be able to cram more than twice as much processing power into the same die area and power budget, but to sustain that processing power they will need twice as much effective bandwidth. You cannot double bandwidth by being clever alone, especially not when the most cost-effective clever tricks are already behind you.
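Rough numbers to show the scale of the problem. Every figure below is an assumption for illustration, not the spec of any real card: hold bytes-per-FLOP constant, double the compute, and the raw bus has to come along for the ride because the compression factor is already baked in.

```python
# Back-of-envelope sketch of the scaling argument above. All numbers
# are illustrative assumptions, not measurements of any real GPU.

raw_bw_gbs  = 224     # raw memory bandwidth, e.g. a 256-bit bus of 7 Gbps GDDR5
comp_factor = 1.25    # assumed average effective gain from colour compression
compute_tf  = 5.0     # assumed shader throughput of the current part, TFLOPS

effective_bw   = raw_bw_gbs * comp_factor             # 280 GB/s
bytes_per_flop = effective_bw / (compute_tf * 1e3)    # ~0.056 B per FLOP

# 14/16nm part: roughly double the compute in the same area and power budget.
next_compute_tf     = 2 * compute_tf
needed_effective_bw = bytes_per_flop * next_compute_tf * 1e3   # 560 GB/s

# Compression is already baked into the 1.25x factor, so the raw bus
# has to make up the rest on its own.
needed_raw_bw = needed_effective_bw / comp_factor               # 448 GB/s

print(f"effective bandwidth today : {effective_bw:.0f} GB/s")
print(f"needed effective next gen : {needed_effective_bw:.0f} GB/s")
print(f"needed raw bandwidth      : {needed_raw_bw:.0f} GB/s "
      f"(vs {raw_bw_gbs} GB/s today)")
```

Under those assumptions the raw bus still has to roughly double; the compression factor only shifts the starting point, it does not grow with the shader count.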