As a side note: measuring only throughput/framerate is not the right metric for gaming. Framerate stability/smoothness matters just as much. For example, a higher-clocked i5 can deliver a higher average framerate while a lower-clocked i7 delivers a more even framerate, depending on the machine configuration, of course.
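To make that concrete, here is a minimal sketch (with made-up frame times, not real benchmark data) of how an average-FPS number can hide stutter that a "1% low" metric catches:

```python
# Sketch: why average FPS alone can mislead (illustrative numbers only).
# Two hypothetical runs, given as per-frame render times in milliseconds.
smooth = [18.0] * 100            # perfectly steady frames (~55.6 FPS)
spiky  = [12.0] * 99 + [400.0]   # mostly fast frames plus one huge stutter

def avg_fps(frame_times_ms):
    # Average FPS = frames rendered / total elapsed time.
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def one_percent_low_fps(frame_times_ms):
    # "1% low": the FPS implied by the slowest 1% of frames.
    worst = sorted(frame_times_ms)[-max(1, len(frame_times_ms) // 100):]
    return 1000.0 * len(worst) / sum(worst)

print(avg_fps(smooth), one_percent_low_fps(smooth))  # ~55.6 and ~55.6
print(avg_fps(spiky), one_percent_low_fps(spiky))    # ~63.0 and 2.5
```

The spiky run "wins" on average FPS yet feels far worse, which is exactly why stability deserves equal weight.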
It's technically correct, but nobody is going to consider gaming on a single core, so using that as the baseline is ridiculous. Secondly, this isn't like a render that could take either 10 minutes or 5 minutes, where you just decide whether the money is worth saving that extra 5 minutes. What we're talking about is up to 2x the throughput. So, if a game is CPU-bottlenecked, doubling the core count could mean up to a 2x framerate improvement.

It's also worth considering this on a purely theoretical basis. If you have eight cores and the work parallelized perfectly, you could reduce your processing time to 12.5% of the single-core time, a saving of 87.5%. But adding another eight cores only takes you down to 6.25%, a further saving of just 6.25 percentage points. In fact, the biggest savings come from the first few cores you add, because there will always be work for them to do.
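The diminishing-returns arithmetic above can be tabulated directly. This sketch assumes the idealized case where work parallelizes perfectly, so time with n cores is 1/n of the single-core time:

```python
# Diminishing returns of added cores, assuming perfect parallel scaling:
# processing time with n cores = 1/n of the single-core time.
def remaining_fraction(cores):
    return 1.0 / cores

prev = 1.0
for n in (1, 2, 4, 8, 16):
    frac = remaining_fraction(n)
    print(f"{n:2d} cores: {frac:7.2%} of single-core time "
          f"(extra saving vs previous row: {prev - frac:.2%})")
    prev = frac
# The step from 1 to 2 cores saves 50% of the total time;
# the step from 8 to 16 cores saves only a further 6.25%.
```

Even in this best case, doubling from 8 to 16 cores buys almost nothing, while the first few cores do most of the work; real games, with their serial portions, fall off even faster.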
Those actually suggest lock contention or races involving lock-free data structures. Either way, I'd chalk it up to deficiencies in the software's design. That might also go some way towards explaining why enabling HT caused the average framerate to increase quite so much. Remember those huge pauses that plagued the i3? They're ironed out when The Witcher 3 has four threads to work with.
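For illustration, here is a minimal, hypothetical sketch (not The Witcher 3's actual code) of the lock-contention failure mode: when worker threads all funnel through one coarse lock, at most one of them makes progress at a time, no matter how many cores are available:

```python
# Hypothetical sketch: one coarse lock serializes otherwise-parallel workers.
import threading

lock = threading.Lock()   # the contended "engine" lock
inside = 0                # threads currently in the critical section
max_inside = 0            # high-water mark of concurrent threads

def worker():
    global inside, max_inside
    for _ in range(1000):
        with lock:
            inside += 1
            max_inside = max(max_inside, inside)
            # ...work on shared state would go here...
            inside -= 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(max_inside)  # 1: the lock admits only one thread at a time
```

Four threads, but the observed concurrency inside the critical section never exceeds one; with heavy enough contention, threads spend their time queueing rather than working, which shows up as exactly the kind of pauses described above.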