Like most of these CPU articles, they can't just say it makes no difference in real-world game usage; nobody would read their articles and no advertiser would pay either. Even though a lot of the 14th-gen reviews come very close to saying exactly that.
If we ignore the hardcore Counter-Strike player who runs on low settings, nobody who buys a $1600 video card is going to play at 1080p. And for people who can only afford to play at 1080p, the cheapest path to better performance is likely not going to be the CPU.
This means almost everyone building a machine for gaming is going to be GPU-bound, and they will see no difference even between the newest chips and older chips like the 5800X3D.
The problem is that until GPUs catch up, mostly in affordability, the summary for future CPUs will likely be similar: they are all way overpowered compared to the video cards they are commonly paired with.
Have you ever taken a logic course? Because yours is faulty. I'll explain the problem:
1) They need to review the CPU, and part of that review is determining its "power/speed," whatever you want to call it, because clock speed alone means nothing.
2) In order to determine the speed/power of one part in a complex machine, you have to control for the influence of the other parts. Otherwise, instead of measuring the part you want to test, you end up measuring the slowest part of the machine.
- A dead giveaway that you're not actually testing the one part you're changing in your test rig is when the results don't change; it means you're not measuring that part at all. It's like trying to find the fastest runner in a group by only watching the person who finishes last. That tells you nothing.
3) The solution is to eliminate the slow runners: use fast RAM, fast GPUs, and fast SSDs, make sure you have plenty of power and cooling, and use the same parts for every CPU you're testing.
4) Finally, you have to make sure you're not benchmarking the GPU, so you run at low resolution; that way the bottleneck (the slowest part) is the CPU being tested. See the sketch below.
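To make the bottleneck logic concrete, here's a toy sketch in Python. All the numbers and the CPU/GPU labels are made up for illustration; the only real assumption is that per frame, whichever component takes longer sets the frame time:

```python
# Toy model of the bottleneck logic (all numbers are made up for illustration).
# Per frame, the slower of the CPU and GPU determines the frame time:
#   frame_time = max(cpu_ms_per_frame, gpu_ms_per_frame)

def fps(cpu_ms_per_frame: float, gpu_ms_per_frame: float) -> float:
    """Frames per second, limited by whichever component is slower."""
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# Hypothetical CPUs: milliseconds of work per frame (lower is faster).
cpus = {"old_cpu": 8.0, "new_cpu": 5.0}

# Hypothetical GPU cost per frame at two resolutions.
gpu_ms = {"1080p": 4.0, "4K": 16.0}

for res, g in gpu_ms.items():
    for name, c in cpus.items():
        print(f"{res} {name}: {fps(c, g):.0f} fps")

# At 1080p the GPU (4 ms) is faster than both CPUs, so the CPU is the
# bottleneck and the two CPUs show different results (125 vs 200 fps).
# At 4K the GPU (16 ms) is slower than both CPUs, so both show an
# identical ~62 fps: the benchmark is now measuring the GPU, not the CPU.
```

Read the same model the other way and you get the point above: pair a mid-range GPU with high settings and the GPU term dominates, so CPU differences disappear from the results.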
They are testing these the right way, because they don't know what YOU will use the CPU for, and knowing which CPU is the strongest for gaming helps inform buying decisions.
-----
What you're asking for is something else: you want to know about your particular use case. This is the old AMD fanboy argument from back when Piledriver was on offer. The argument went like this: I'm gaming at 1080p or 1440p, GPUs get maxed out before the Piledriver CPU does at those resolutions, so why buy an i5-3570K when I can buy an FX-8350 and get the same user experience at 1440p?
Well, there is a point to that AMD argument. It's true that even though the i5 had almost 45% better IPC, at 1440p there was no graphics card on the market that would make the FX-8350 feel slower than the i5. But how many people were on 1440p in 2013? What about people who were doing more than just gaming? What about a few years down the road when the GTX 1080 Ti launched? If you had gone with the FX-8350, you'd need to replace the whole system in order to actually benefit from upgrading to a 1080 Ti.
So publishing the test results which show the i5 is 45% faster DOES matter, just as it matters to show that this chip is basically identical to the 13th-gen parts, only it runs so hot it's almost impossible to cool, throttles even on a 360mm rad, and offers almost no performance uplift over the previous generation. If you want the best gaming chips on the market, you'll still be buying AMD. That's why this testing is needed: even if it doesn't show exactly what you want to see, it shows everything a consumer in the market needs to know to make an informed decision.