Extra cache helps a ton in light scenes (looking at the sky) where the working set fits into the cache. In harder scenes whose data doesn't fit, it hits a brick wall.
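That "fits vs doesn't fit" cliff is easy to reproduce outside of any game with a dependent pointer chase. This is just an illustrative sketch, not Factorio's actual workload: walk a random cyclic permutation so every load depends on the previous one, and the time per hop jumps once the working set outgrows the cache (CPython's interpreter overhead blunts the jump, but doesn't hide it; in C it's much starker).

```python
import array
import random
import time

def make_cycle(n, seed=0):
    """Random cyclic permutation: slot i stores the index of the next slot,
    so each load depends on the previous one and can't be prefetched."""
    order = list(range(n))
    random.Random(seed).shuffle(order)
    nxt = array.array("q", [0] * n)  # "q" = 8-byte slots on every platform
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b
    return nxt

def chase(nxt, steps):
    """Follow the chain for `steps` hops; return average ns per hop."""
    i = 0
    t0 = time.perf_counter_ns()
    for _ in range(steps):
        i = nxt[i]
    return (time.perf_counter_ns() - t0) / steps

if __name__ == "__main__":
    # 4096 slots ~ 32 KiB (sits in L1/L2) vs 4M slots ~ 32 MiB
    # (bigger than many desktop L3 caches; sizes are illustrative).
    for n in (4096, 1 << 22):
        nxt = make_cycle(n)
        kib = n * nxt.itemsize / 1024
        print(f"{kib:>8.0f} KiB working set: {chase(nxt, 200_000):.1f} ns/hop")
```

The per-hop time for the small array is roughly flat however often you rerun it; the large one slows down because most hops miss the last-level cache, which is the same wall the big maps run into.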
Factorio (and MSFS) are prime examples, since Factorio is the game that benefits from cache the most. Bench the small 10k maps and holy cow, the 7800X3D is over twice as fast as a 14900K. Over TWICE. That's insane. Problem is, both are already spitting out hundreds of fps: around 300-350 for the 14900K and around 600-700 for the 7800X3D.
Then you move to the actual big, complicated Factorio maps and... the X3D is even losing to Alder Lake, and this time it actually matters because it barely stays above 60. So yeah, what's the point of a huge lead in light scenes, where every CPU spits out plenty of fps, if it completely crumbles in the harder scenes where you actually need the performance?
Especially since the game is basically capped at 60 UPS, the goal here is to stay above 60 with as big a map as possible. The X3D chips fall flat on their faces here. But then you watch Hardware Unboxed's review and you're supposed to pretend they're 100% faster than the competition in this game, lol
https://factoriobox.1au.us/results?...9f36f221ca8ef1c45ba72be8620b&vl=&vh=&sort=ups
The same happens in e.g. Cyberpunk and MSFS. Digital Foundry made a video with live footage comparing the CPUs in the actually hard scenes of those games, and yeah, the results were nowhere near what you'd expect. Reviews have the 7800X3D hitting over 200 fps in Cyberpunk; actual footage from a hard area (Tom's Diner) has it dropping to the low 60s... literally 3 fps higher than the 9950X, a CPU that's supposedly unsuitable for gaming according to the experts here