I used to upgrade every 2 or 3 years. I had a 486, P100, PMMX 233, PII 300, Athlon X2 3600, and i7 920.
I still run the i7 920. A decade later, it is 50% as fast as the fastest new CPU on a single thread, and that's comparing a decade-old, budget i7 against a high-end new CPU. I won't upgrade until I can get 16 cores for around $250, because multithreaded performance scales less than proportionally with core count.
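To put rough numbers on that last point (a sketch assuming something like Amdahl's law is what limits scaling, and a 90% parallel fraction picked purely for illustration):

    # Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), with p the parallel fraction.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # Assuming a 90% parallel workload (illustrative, not measured):
    for cores in (4, 8, 16):
        print(cores, "cores ->", round(speedup(0.9, cores), 1), "x")
        # 4 -> ~3.1x, 8 -> ~4.7x, 16 -> ~6.4x, nowhere near 16x

Doubling from 8 to 16 cores buys less than 1.4x under that assumption, which is why the core count alone isn't enough to make me pay.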
After the 920, Intel started offering 5% improvement per generation while raising prices (and requiring an entirely new motherboard and memory). Too much cost for too little performance.
I have an i7 7700 at work, and I don't perceive any difference compared to the 920.
All my older CPU upgrades were night and day.
The brains at Intel could have sold me many CPUs if they had let me keep my old motherboard and memory, offered more cores instead of wasting transistors on integrated video, and priced their processors according to the meager performance improvements they offer.
A 15% price cut doesn't move the needle when you consider the total cost of an upgrade, the performance lost to hardwired security bugs, the loss of hyperthreading, the artificially locked overclocking, and the fact that even future generations of Intel CPUs will be basically the same.
Since I don't see the benefit of gaming at 4K, the old chip is still enough: the last game I played on that i7 920 ran at 120 Hz, ray traced.
My high-performance computing tasks run on GPU-powered TensorFlow, so the CPU doesn't matter much there.
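For what it's worth, a minimal sketch of what I mean (assuming a stock TensorFlow 2.x install; the device names depend on the machine): the GPU does the heavy lifting and the CPU mostly stages data.

    import tensorflow as tf

    # See whether TensorFlow can use a GPU at all (list is empty on CPU-only boxes).
    gpus = tf.config.list_physical_devices('GPU')
    print("GPUs visible:", gpus)

    # Pin a big matrix multiply to the GPU when one is present; the CPU only feeds it.
    with tf.device('/GPU:0' if gpus else '/CPU:0'):
        a = tf.random.normal((4096, 4096))
        b = tf.random.normal((4096, 4096))
        c = tf.matmul(a, b)
    print(c.shape)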
I use AutoCAD daily, and it is still a single-core application, a decade and a half after the introduction of multicore. That's on an i7 7700, which is close to the fastest single-thread performance available today. My AutoCAD files have become so bloated that it painfully crawls. I dread every time I have to use it, and I would kill for a faster CPU, but there is nothing on sale with significantly faster performance, at any price. I can't go to IT and ask for a faster processor that would only be 5-10% faster.
When I run the same files on my i7 920, it is basically the same experience. 50% slower single-thread performance doesn't change the experience.
Given that Moore's law is basically dead, Intel should go the opposite way of integration and make modular designs, where each component is specialized for the highest performance possible. Take the GPU out of the processor; stop wasting transistors that raise heat and limit clocks. Maybe move the northbridge out of the processor. Design a CPU that can be cooled from both sides. Integrate a Peltier cooler for key areas of the CPU. Run only the bottlenecked parts of the CPU at higher clocks; maybe if only the floating-point transistors are needed at a given moment, they could run at, say, 10 GHz. Maybe the cache could be produced on a different process node and 3D-integrated as a layer, to raise yields of the key parts and reduce prices. Maybe yields could be raised by adding tiny floating-point coprocessors made at 5 nm, with the rest of the CPU made at 7 nm.
Maybe the SSEx circuits could be duplicated, so they could be rapidly switched out each time they overheat.