The rate of CPU speed improvement has fallen to about 3% per year lately. So barring a technological breakthrough in CPUs, any i5 or i7 you buy today will be good for the next 15 years (at which point a "modern" CPU should only be about 56% faster at the same clock speed). I'm still on a Sandy Bridge i5 and have no plans to upgrade.
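For anyone wondering where that number comes from, it's just 3% compounded over 15 years:

```python
# Compound a ~3% yearly speed improvement over 15 years.
gain = 1.03 ** 15
print(f"{gain:.2f}x, i.e. about {(gain - 1) * 100:.0f}% faster")  # ~1.56x, ~56%
```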
The limiting factor is actually which other computer features the CPU supports, like USB 3.0/3.1 or an M.2 slot for an SSD. Or whether Intel swings a deal with Hollywood that makes 4K video decryption dependent on a feature only found on Kaby Lake CPUs.
While an i7 gets you 8 threads via hyperthreading, the reality is that those 4 extra threads add very little computing capability in most cases. Hyperthreading works by letting a second thread use the portions of a core that the first thread is leaving idle. But if your CPU is the bottleneck, it's usually because you're running the same kind of task on all physical cores, saturating the same execution units. Any extra threads then need the parts of the cores which are already in use, so hyperthreading doesn't help at all (and in fact can actually make things slower, as time-sensitive tasks get assigned to a thread which ends up CPU-starved). So a program requires a very eclectic set of processing requirements (doing lots of different kinds of work at the same time) to really take advantage of hyperthreading.
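If you want to see this on your own machine, here's a minimal sketch (Python, using processes rather than threads to sidestep the GIL). On a 4-core/8-thread i7 you'd expect throughput to scale almost linearly up to 4 workers and then nearly flatline from 4 to 8; exact numbers will vary by CPU and workload:

```python
import multiprocessing as mp
import time

def spin(n):
    # Pure ALU-bound loop: every worker competes for the same
    # execution units, which is the worst case for hyperthreading.
    total = 0
    for i in range(n):
        total += i * i
    return total

def bench(workers, work=5_000_000):
    start = time.perf_counter()
    with mp.Pool(workers) as pool:
        pool.map(spin, [work] * workers)
    elapsed = time.perf_counter() - start
    # Throughput in jobs per second, so higher is better.
    return workers / elapsed

if __name__ == "__main__":
    for w in (1, 2, 4, 8):  # 4 physical cores, 8 logical on the i7 above
        print(f"{w} workers: {bench(w):.2f} jobs/sec")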
In real-life use, the few tasks that do this and really benefit from hyperthreading are video encoding, encryption, and data compression. Those typically see about a 30% speedup from hyperthreading (so the 8 virtual cores on the i7 actually run more like 5.2 cores). Most tasks only see about a 5% speedup, so your 8 virtual cores run more like 4.2 cores, which is barely different from the i5. If you're going to go for an i7, do it because it comes with an extra 1 MB of L3 cache (usually 4 MB vs 3 MB), although even that only helps much when you're processing large amounts of data (again: video encoding, encryption, and data compression).
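The "effective cores" arithmetic, if you want to plug in a hyperthreading gain you've measured for your own workload:

```python
def effective_cores(physical, ht_speedup):
    # A 4-core chip with a 30% hyperthreading gain behaves like
    # 4 * 1.30 = 5.2 cores; with a 5% gain, like 4.2 cores.
    return physical * (1 + ht_speedup)

print(effective_cores(4, 0.30))  # 5.2 -> best case (encoding, compression)
print(effective_cores(4, 0.05))  # 4.2 -> the typical case
```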
Basically, most tasks are either limited by single-threaded performance (so clock speed matters most) or can be massively parallelized, like bitcoin mining (in which case offloading them to a GPU is better). Unless you're doing lots of video encoding, encryption, data compression, scientific data crunching, or running a large multi-user server, the benefit of adding more than 4 cores (physical or virtual) is extremely limited. Intel knows this, which is why they also give the i7 extra cache and higher clock speeds, so you can't do an apples-to-apples comparison.
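One standard way to formalize that split (not from the post above, just the usual model) is Amdahl's law: if a fraction p of the work can run in parallel, the best possible speedup on n cores is 1 / ((1 - p) + p / n):

```python
def amdahl_speedup(p, n):
    # Amdahl's law: the serial fraction (1 - p) caps the benefit of extra cores.
    return 1 / ((1 - p) + p / n)

# A task that's only 50% parallelizable barely benefits past 4 cores:
print(f"{amdahl_speedup(0.50, 4):.2f}x on 4 cores")        # ~1.60x
print(f"{amdahl_speedup(0.50, 64):.2f}x on 64 cores")      # ~1.97x

# A 99% parallel task (mining-style work) keeps scaling, which is
# exactly the kind of job worth handing to a GPU's thousands of cores:
print(f"{amdahl_speedup(0.99, 4):.2f}x on 4 cores")        # ~3.88x
print(f"{amdahl_speedup(0.99, 2048):.2f}x on 2048 cores")  # ~95x
```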