The game is largely GPU bound, no one here is arguing that, but from the chart below the CPU is not inconsequential to the overall frame rate: there is a 20% difference between the top and bottom results, and about an 8% difference if you factor out the 5800X3D at the top and the 5800X result at the bottom. It's also clear from these stats (and others) that this game is largely written to utilize a single core on the CPU side. If you take that evidence, plus the evidence that the FPS difference in non-Rosetta 2 games is about 10%, then it's easy to suspect that Rosetta 2 is causing some additional loss in FPS, likely 2 to 5 FPS, making the overall difference closer to 4 to 8. Not Earth-shattering, but it shouldn't be thrown out when doing comparisons either.
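A minimal sketch of that arithmetic, assuming a frame rate in the 25-30fps range, the ~10% native gap, and a 2-5 FPS Rosetta 2 penalty as stated above (all of these figures are the post's estimates, not measurements of this game):

```python
# Rough sketch of the estimate above. All figures here are the post's
# assumptions (a ~10% FPS gap in native, non-Rosetta 2 titles, plus a
# suspected 2-5 fps loss from Rosetta 2), not measured values for this game.

for baseline_fps in (25.0, 30.0):
    native_gap = baseline_fps * 0.10      # ~10% gap seen in non-Rosetta 2 titles
    low = native_gap + 2.0                # plus the low end of the Rosetta 2 penalty
    high = native_gap + 5.0               # plus the high end of the Rosetta 2 penalty
    print(f"At ~{baseline_fps:.0f} fps: overall gap ~{low:.1f} to {high:.1f} fps")

# Prints roughly 4.5-7.5 fps at 25 fps and 5-8 fps at 30 fps, i.e. in the
# ballpark of the "4 to 8" range described above.
```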
But as that chart shows, they are running the game at 720p on very high-end graphics cards to prevent the graphics hardware from limiting CPU performance, in order to highlight any potential differences. At a more real-world resolution for those cards, like 1440p or 4K, performance would be effectively capped by the number of frames the GPU can render each second, making the results of all CPUs virtually identical. Notice too that the frame rates the game is pushing on all of those CPUs are over 200fps, with even the 1% lows above 100fps. So with the much slower integrated graphics of these laptops capping performance at around 30fps, the CPU cores should have no trouble keeping up with the graphics hardware, with performance to spare whether the code is emulated or not.
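As a minimal sketch of the bottleneck logic being described here (the specific numbers are illustrative placeholders loosely based on the figures above, not measurements):

```python
# Minimal model of the argument above: the delivered frame rate is capped by
# whichever side (CPU or GPU) can prepare/render the fewest frames per second.
# The numbers below are placeholders for illustration, not measurements.

def delivered_fps(cpu_limit_fps: float, gpu_limit_fps: float) -> float:
    """Frame rate is set by the slower of the two limits."""
    return min(cpu_limit_fps, gpu_limit_fps)

# 720p on a high-end desktop GPU: the GPU limit is so high that CPU
# differences are what show up in the chart.
print(delivered_fps(cpu_limit_fps=210, gpu_limit_fps=500))   # 210 -> CPU-bound

# Integrated laptop graphics capping the game at ~30fps: even a CPU slowed
# by emulation (with 100+ fps of headroom, per the 1% lows) is never the limit.
print(delivered_fps(cpu_limit_fps=100, gpu_limit_fps=30))    # 30 -> GPU-bound
```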
Take, for example, this desktop i5-12400F review, where the CPUs are similarly paired with a high-end RTX 3080. At 720p, the average frame rates show a fairly large spread between the benchmarked CPUs, with the fastest CPU in the chart pushing around a 38% higher frame rate than the slowest in that game (scroll to the bottom for Shadow of the Tomb Raider)...
https://www.techpowerup.com/review/intel-core-i5-12400f/15.html
But all tested CPUs, even the nearly seven-year-old i7-6700K, managed to average over 250fps in that game under their test conditions at 720p. Move up to 1080p, though, and despite the graphics card still being overkill for that resolution, performance is very similar across all processors, effectively capped at around 250fps. The 6700K does drop a little to 236fps, but the fastest CPUs are now less than 7% faster...
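A quick check of that percentage, using the approximate figures quoted from the 1080p chart (the ~250fps cap for the fastest CPUs and 236fps for the 6700K):

```python
# Quick check of the "less than 7%" figure above, using the approximate
# numbers quoted from the 1080p chart: ~250fps cap vs 236fps for the 6700K.
fastest = 250.0
slowest = 236.0
spread = (fastest - slowest) / slowest * 100
print(f"Fastest CPU is ~{spread:.1f}% faster")   # ~5.9%, i.e. "less than 7%"
```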
https://www.techpowerup.com/review/intel-core-i5-12400f/16.html
Move up to 1440p, and the graphics hardware is limiting frame rates to around 180fps for all tested CPUs, with less than a 1% difference between them, well within the margin of error, as the ordering becomes a lot more random compared to the prior charts. All of the tested CPUs have enough performance to keep up with the graphics card at this frame rate, so that 38% performance difference at 720p has completely evaporated...
https://www.techpowerup.com/review/intel-core-i5-12400f/17.html
And of course that continues at 4K, with the graphics hardware limiting performance to 98fps for all CPUs. If a CPU's performance characteristics make no tangible difference when the graphics hardware limits the game to 180fps, then there certainly shouldn't be any difference when much slower graphics hardware limits it to around 30fps, even if they happen to be testing a more demanding area of the game. So this should strictly be a graphics performance comparison, with the CPU cores not pushed to their limits and in turn unlikely to make any measurable difference to performance.