Some people do need to realize that games are only part of the world, and not even the largest piece of the cake. Our world keeps spinning because we have the power of computers. Even phones have turned into mini computers.
If gamers have no reason to upgrade, well, blame the consoles and their manufacturers. All of the "current generation" consoles arrived already outdated. Remember the "hack" or "mod" to unlock all of Watch Dogs' potential?
The reason a lot of us, and me specifically, are "upset" by the current CPU development is that it dictates what the average level of performance will be. Intel launches Sandy Bridge and then Sandy Bridge-E and -EP, Ivy Bridge and then Ivy Bridge-E and -EP, Haswell, etc. The moment the mainstream line launches, you know what to expect. Anyone here who has had a Mental Ray render (Mental Ray is all I have at home) run for 28 hours on a single frame will get where I'm coming from. And performance never improves linearly with extra cores. The more cores you add, the more time each pixel takes to render. Pixar's RenderMan is the best example: 4 cores render faster than 24. But even in the best-threaded render engine, if you have 2 render nodes with an i7 each running at 3 GHz, they will render 30% faster than a single 8-core Xeon at 3 GHz, and you can get those 2 render nodes for less than the price of the Xeon by itself. And if you get even more performance out of your cheap render nodes, good for you.
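To make the scaling point concrete, here is a rough back-of-the-envelope sketch in C. The serial fraction and the per-core scheduling overhead are numbers I made up for illustration; they are not measured values from RenderMan or any other engine, but they show how speedup can flatten out and even reverse:

```c
/* Rough Amdahl's-law-style sketch of why render speedup is never
 * linear and can even go backwards at high core counts. The serial
 * fraction (s) and per-core overhead (o) are assumed illustration
 * numbers, not measurements. Compile: cc amdahl.c -o amdahl
 */
#include <stdio.h>

int main(void) {
    const double s = 0.10;   /* assumed serial fraction of a frame   */
    const double o = 0.01;   /* assumed per-core scheduling overhead */
    const int cores[] = {1, 2, 4, 8, 16, 24};

    for (size_t i = 0; i < sizeof cores / sizeof cores[0]; i++) {
        int n = cores[i];
        /* frame time normalized to 1.0 on one core, plus overhead
           that grows with every extra core you have to coordinate */
        double t = s + (1.0 - s) / n + o * (n - 1);
        printf("%2d cores: relative frame time %.3f (speedup %.2fx)\n",
               n, t, 1.0 / t);
    }
    return 0;
}
```

With those made-up numbers, 4 cores come out slightly faster than 24, which is exactly the kind of behavior I mean.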
If Broadwell's performance is 5% on top of Haswell, then Broadwell-E and -EP will be 5% on top of Haswell-E and -EP. Everything is linked to the mainstream part: Intel launches it, then adjusts core counts and TDP for the enthusiast/professional line. The moment this Broadwell article came out, we already knew what is in store for the next 3 years.
And if some people think "just get a 12-core Xeon or something", that is not always possible or smart. A lot of software scales badly after 8 cores/threads. Half the functions you use in a workflow are single-threaded (diffuse-to-object baking, modeling tools, deformers, conversions, etc.). A lot of animation studios give their animators high-clocked, low-core-count machines for exactly these reasons. Simply throwing more cores at it does not work: my home Sandy Bridge at 4.5 GHz renders only 10-20% slower than the Xeon E5-2650 v2 at work.
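Here is the same kind of sketch for the clock-vs-cores trade-off. The 50/50 single-threaded split and the two example machines are assumptions picked to resemble the chips I mentioned, not benchmarks:

```c
/* Sketch of why animators often get high-clock, low-core machines.
 * Models a workflow where half the wall time is single-threaded
 * (modeling, baking, deformers) and half scales across all cores.
 * The 50/50 split and both machine configs are assumptions for
 * illustration only. Compile: cc clocks.c -o clocks
 */
#include <stdio.h>

static double wall_time(double ghz, int cores, double threaded_fraction) {
    /* total work normalized to 1.0 unit at 1 GHz on 1 core */
    double serial   = (1.0 - threaded_fraction) / ghz;
    double parallel = threaded_fraction / (ghz * cores);
    return serial + parallel;
}

int main(void) {
    double f = 0.5; /* assumed: half the workflow is single-threaded */
    double a = wall_time(4.5, 4, f);  /* e.g. overclocked quad-core i7   */
    double b = wall_time(2.6, 8, f);  /* e.g. 8-core Xeon at stock clock */
    printf("4 cores @ 4.5 GHz: %.4f\n", a);
    printf("8 cores @ 2.6 GHz: %.4f\n", b);
    return 0;
}
```

Under that assumed mix, the quad at 4.5 GHz finishes the day's work noticeably sooner than the 8-core at 2.6 GHz, even though the Xeon wins a pure render.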
The software is too far behind. If in the late 1990s and the beginning of the 2000s the hardware was holding back the software, ever since the late 2000s, and specifically since 2010, the software has been lagging. V-Ray 3.0, which is available for 3ds Max and soon to be out for Maya, was rebuilt from the ground up to use AVX. At a V-Ray 3.0 presentation I attended, render speeds were about 30% better than the older version. And this is happening in 2014 and 2015. And AVX is? Technology from 2008, first implemented in hardware in 2010/2011.
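For anyone wondering what "rebuilt to use AVX" actually buys you, here is a minimal illustration in C. This is a generic shading-style multiply-add on eight floats at once, not V-Ray code or anything from their presentation:

```c
/* Minimal illustration of what "using AVX" means: one 256-bit
 * instruction processes eight floats at a time instead of one.
 * Generic gain/bias example, not actual renderer code.
 * Compile with: cc -mavx avx_demo.c -o avx_demo
 */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float color[8] = {0.1f, 0.2f, 0.3f, 0.4f, 0.5f, 0.6f, 0.7f, 0.8f};
    float gain[8]  = {2.0f, 2.0f, 2.0f, 2.0f, 2.0f, 2.0f, 2.0f, 2.0f};
    float bias[8]  = {0.05f, 0.05f, 0.05f, 0.05f, 0.05f, 0.05f, 0.05f, 0.05f};
    float out[8];

    __m256 c = _mm256_loadu_ps(color);                /* load 8 floats        */
    __m256 g = _mm256_loadu_ps(gain);
    __m256 b = _mm256_loadu_ps(bias);
    __m256 r = _mm256_add_ps(_mm256_mul_ps(c, g), b); /* 8 mul-adds in 2 ops  */
    _mm256_storeu_ps(out, r);                         /* store 8 results      */

    for (int i = 0; i < 8; i++)
        printf("%.2f ", out[i]);
    printf("\n");
    return 0;
}
```

Doing the same eight operations one float at a time takes eight times as many instructions, which is why a renderer rewritten around this can pick up that kind of speedup on the exact same CPU.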
And now imagine all those i7s/i5s, or their Xeon versions (mainstream-socket Xeons perform exactly the same as their i-series counterparts), arriving in cheaper workstations or in Macs. This is the main hardware of the workforce. Not only Adobe products, 3D packages, and SDKs, but also a lot of software developers and scientists sit on this hardware. And all the indie studios.
If you guys need something to blame for having no reason to upgrade, blame the software. Don't blame Intel. Blame the console manufacturers for their outdated consoles. Don't blame AMD. Blame all those lazy programmers who either can't, won't, or don't have the resources to program for a multi-threaded environment and are stuck in 1-2-threaded functions. And also blame the world: if you all bought fewer phones and tablets and more high-performance PCs, the market interest and innovation would have been different. It is the mainstream that defines the performance increase for enthusiasts. Cheers.