Answering some of the questions and requests from the comments:
If you want more data on i3s with Hyper-Threading enabled versus disabled, check out some of the reviews that compare i3s to Pentiums. That's very close to the same comparison once you adjust for frequency differences where they exist. You'll find that in games that scale well across as many threads as the CPU supplies, Hyper-Threading nets you somewhere around 25% more performance per core. The graph from the Witcher 3 test in the article demonstrates this quite well, though I should point out that the ~25% figure has been known for many years now from many previous tests. The SBMs are great resources for this.
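If you want to do that frequency adjustment yourself, here's a back-of-the-envelope sketch. The FPS and clock numbers in it are made up for illustration, not pulled from any review:

```cpp
// Normalizing benchmark FPS by clock speed so an i3 (with Hyper-Threading)
// and a Pentium (without) can be compared per-clock.
// All figures below are hypothetical; substitute the review's actual numbers.
#include <cstdio>

int main() {
    double pentium_fps = 40.0, pentium_ghz = 3.5; // 2C/2T, no Hyper-Threading
    double i3_fps      = 53.0, i3_ghz      = 3.7; // 2C/4T, Hyper-Threading on

    double pentium_per_ghz = pentium_fps / pentium_ghz;
    double i3_per_ghz      = i3_fps / i3_ghz;

    // Whatever is left after removing the clock difference is (mostly) HT.
    double ht_gain = (i3_per_ghz / pentium_per_ghz - 1.0) * 100.0;
    printf("Clock-adjusted Hyper-Threading gain: ~%.1f%%\n", ht_gain);
    return 0;
}
```

With those made-up numbers, the clock-adjusted gain works out to about 25%, which is the ballpark the reviews keep landing in.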
Ignoring the drops to 0 FPS (they're basically just terrible stutter), you can see around a 25% improvement from the purple line to the blue line. Of course, as some people mentioned when comparing i5s and i7s, Hyper-Threading can "smooth" things out a bit even when the FPS numbers don't go up, and fixing stutter like that demonstrates exactly that. Still, I'd argue that for the most part, the i5s aren't so much worse that it matters in current games, nor will they be for the next few years.
If anyone wants an i7 anyway, I'd recommend going all the way to the cheaper six core models. If you're going to argue that an i5 isn't enough for you, the jump in platform cost isn't high enough to pass up the 50% increase in cores. Of course, anything beyond the six core i7s is just silly: prices skyrocket while performance has already plateaued for several years to come. Keep in mind that with Hyper-Threading enabled, the differences between 4, 6, 8, and 10 core i7s shrink even further, because games just don't use the performance of the larger i7s.
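Amdahl's law shows why those differences shrink. Here's a quick sketch; the 70% parallel fraction is purely an assumption picked for illustration, not a measured figure for any real engine:

```cpp
// Amdahl's law: overall speedup when only a fraction p of the work parallelizes.
#include <cstdio>

double speedup(double p, int cores) {
    return 1.0 / ((1.0 - p) + p / cores);
}

int main() {
    const double p = 0.70; // assumed parallel fraction; pure illustration
    const int core_counts[] = {4, 6, 8, 10};
    for (int cores : core_counts)
        printf("%2d cores: %.2fx over one core\n", cores, speedup(p, cores));
    return 0;
}
```

Even with that fairly generous assumption, going from 6 cores to 10 buys barely 12%, and real games usually scale worse than that.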
Also, for Braddsctt's question about the i7-5775C:
Skylake's architectural improvements are so small that the i7-5775C's eDRAM can often actually make it faster than the i7-6700K when the two run at the same frequency. Check out the tests from when the i7-6700K first came out; that 128MB of eDRAM basically acts like an L4 cache, though keep in mind we're typically still talking less than 5% differences. Those really aren't noticeable.
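If you're curious how an "L4" like that shows up in practice, a pointer-chasing microbenchmark is the classic way to see it. This is just a rough sketch (the timing method and the random-permutation setup are simplifications):

```cpp
// Chase pointers through working sets of increasing size and time the loads.
// Without eDRAM, ns/load jumps to full DRAM latency once the set outgrows
// the 6-8MB L3; with the 5775C's eDRAM, sets up to ~128MB should land in between.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    // Working sets from 1MB to 256MB.
    for (size_t bytes = 1 << 20; bytes <= (256u << 20); bytes <<= 1) {
        size_t n = bytes / sizeof(size_t);
        std::vector<size_t> next(n);
        std::iota(next.begin(), next.end(), size_t{0});
        // Random order defeats the prefetchers. (A stricter version would build
        // one big cycle with Sattolo's algorithm; a shuffle is close enough here.)
        std::shuffle(next.begin(), next.end(), rng);

        auto t0 = std::chrono::steady_clock::now();
        size_t idx = 0;
        for (int i = 0; i < 10000000; ++i) idx = next[idx]; // dependent loads: latency-bound
        auto t1 = std::chrono::steady_clock::now();

        double ns_per_load =
            std::chrono::duration<double, std::nano>(t1 - t0).count() / 1e7;
        printf("%4zu MB: %6.1f ns/load (idx=%zu)\n", bytes >> 20, ns_per_load, idx);
    }
    return 0;
}
```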
For everyone asking about putting i5s in the analysis, keep in mind that an i7 with Hyper-Threading disabled behaves like an i5. The only difference (other than stock frequency, which is irrelevant here) is an 8MB L3 instead of a 6MB L3, and that really doesn't matter. Even the i3's little 3MB L3 doesn't handicap it much.
This article is probably the most comprehensive evidence supporting much of what a lot of us have been saying for a while now: i3s are now the minimum for a good experience in most of today's big games; i5s are a decent step up; quad core i7s are a small step up at best outside of specific circumstances; 6 core i7s follow similarly over the quads; and anything above 6 cores won't show noticeable gains in realistic gaming situations. The 8 and 10 core CPUs aren't even more practical for multi-tasking while gaming unless you're trying to run a render in the background.
DX12 alone won't change that, because even if it delivers on improved CPU scaling later on, it's also supposed to reduce overall CPU load, to the point where the 8 and 10 core CPUs become even less relevant. Other than pushing player counts even higher (FPS games with large player counts, like BF3/BF4's 64 player maps, already scale the CPU load of player simulation well across six or eight cores; what's next, 128 or 256?) or throwing more CPU physics processing into the mix, there isn't a whole lot you can do to utilize that much CPU performance. Maybe they'll shrink game sizes with more compression and on-the-fly decompression of game resources? That's probably still not going to raise CPU usage much, although it could easily run on cores that the main engine isn't heavily utilizing, for slightly better scaling.
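That last idea is easy to picture in code. Here's a minimal sketch of streaming decompression on a spare core; decompress_chunk() is a placeholder I made up, standing in for a real codec like zlib or LZ4:

```cpp
// The render loop keeps its core busy while a worker inflates the next chunk.
#include <cstdio>
#include <future>
#include <vector>

// Placeholder for a real codec call; invented for this sketch.
std::vector<char> decompress_chunk(const std::vector<char>& packed) {
    return std::vector<char>(packed.size() * 4, 'y'); // pretend 4:1 ratio
}

int main() {
    std::vector<char> packed(1 << 20, 'x'); // pretend compressed asset chunk

    // Hand decompression to whatever core the OS scheduler finds idle...
    auto pending = std::async(std::launch::async, decompress_chunk, std::cref(packed));

    // ...while the "main engine" keeps its own cores busy with frame work.
    for (int frame = 0; frame < 3; ++frame)
        printf("simulating frame %d\n", frame);

    std::vector<char> asset = pending.get(); // block only when the asset is needed
    printf("asset streamed in: %zu bytes\n", asset.size());
    return 0;
}
```

The point is just that the decompression work never competes with the engine's hot threads, which is exactly why it wouldn't make the 8 and 10 core parts any more necessary.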