News: Apple's M2 Beats AMD's Ryzen 7 6800U in Shadow of the Tomb Raider

Again, you just have to look at the FPS increase from the 5800X to the 5800X3D; it should be obvious that differences in opcode handling can have a big impact.
Sure. But not with those "weak" iGPUs. Pair a 5800X and a 5800X3D with a 680M and DDR5 and you won't see any difference in most games either. Even if you get CPU spikes of well over 20%, it doesn't change anything; something like 99% of the time the GPU is still the limiting factor in the gaming loop.

And if you haven't seen it, Igor uses a 6900 XT / 3090 Ti for his comparisons, at lower resolutions. That tells us exactly nothing about differences with iGPUs at FHD.
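
A toy way to picture the "GPU is still the limiting factor" point is to treat the delivered frame rate as the minimum of what the CPU side and the GPU side can each sustain. A minimal sketch, with made-up numbers purely for illustration:

```python
# Toy bottleneck model: delivered FPS is capped by the slower of the two sides.
# All figures below are made up for illustration, not measurements.

def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Frame rate the player actually sees."""
    return min(cpu_fps, gpu_fps)

igpu_fps = 35.0       # hypothetical 680M limit at FHD in a demanding title
cpu_5800x = 220.0     # hypothetical CPU-side rate of a 5800X when nothing else limits it
cpu_5800x3d = 270.0   # hypothetical 5800X3D, roughly 20% faster on the CPU side

print(delivered_fps(cpu_5800x, igpu_fps))    # 35.0
print(delivered_fps(cpu_5800x3d, igpu_fps))  # 35.0 -- the CPU advantage is invisible
```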

With the AOT argument you're assuming that Apple put a massive effort into optimizing those AOT-translated instructions to minimize the overhead. The odds are they didn't. I'm sure they did some, but this isn't Java or .NET, where nobody will use your platform if performance isn't well optimized. In Rosetta's case it's just a stopgap solution; it doesn't need to be tuned for peak performance and likely isn't.
Actually, I think Apple is putting a lot of effort into Rosetta. It helps them now with the transition from Intel to their own hardware, which might take a while. And it might help them in the future with RISC-V designs; Apple is not focused only on ARM.
 
The game is largely GPU bound, no one here is arguing that, but from the chart below the CPU is not inconsequential to the overall frame rate: there is a 20% difference between the top and bottom results, and an 8% difference if you factor out the 5800X3D and the bottom 5800X result. It's also clear from these stats (and others) that this game is largely written to utilize a single core on the CPU side. Take that together with the evidence that the FPS difference for non-Rosetta 2 games is about 10%, and it's easy to suspect that Rosetta 2 is causing some loss in FPS, likely 2 to 5 FPS, making the overall difference closer to 4 to 8. Not earth-shattering, but it shouldn't be thrown out when doing comparisons either.



[Chart: Shadow of the Tomb Raider FPS at 1280 x 720 pixels]
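
Spelling out the arithmetic in the quoted post, with placeholder values roughly in the ballpark of that 720p chart rather than exact readings:

```python
# Back-of-the-envelope version of the spread described above.
# Chart values are approximations for illustration, not exact figures.
top_fps = 205.0           # roughly the fastest result in the 720p chart
bottom_fps = 171.0        # roughly the slowest result
top_without_x3d = 185.0   # roughly the fastest result with the 5800X3D excluded

print((top_fps - bottom_fps) / bottom_fps)          # ~0.20 -> the quoted ~20% spread
print((top_without_x3d - bottom_fps) / bottom_fps)  # ~0.08 -> the quoted ~8% spread

# If translation costs ~10% of throughput and the laptops land near 30 fps,
# the loss attributed to Rosetta 2 in the post works out to a few frames per second.
laptop_fps = 30.0
assumed_rosetta_overhead = 0.10
print(laptop_fps * assumed_rosetta_overhead)        # ~3 fps, in the quoted 2-5 fps range
```
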
But as that chart shows, they are running the game at 720p on very high-end graphics cards precisely to prevent the graphics hardware from limiting CPU performance and to highlight any potential differences. At a more real-world resolution for those cards, like 1440p or 4K, performance would be effectively capped by the number of frames the GPU can render each second, making the results of all CPUs virtually identical. Note that the frame rates the game is pushing on all of those CPUs are over 200fps, with even the 1% lows over 100fps. So with the much slower integrated graphics of these laptops capping performance at around 30fps, the CPU cores should have no trouble keeping up with the graphics hardware, with performance to spare whether the code is emulated or not.
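
Put in frame-time terms, the headroom argument looks like this; a rough sketch with assumed round numbers, not measured values:

```python
# Frame-time budget check. All numbers are assumptions for illustration.
gpu_cap_fps = 30.0        # roughly what the laptop iGPUs render in this scene
cpu_capable_fps = 200.0   # roughly what the CPU cores could drive if unconstrained

frame_budget_ms = 1000.0 / gpu_cap_fps   # ~33.3 ms available per frame
cpu_time_ms = 1000.0 / cpu_capable_fps   # ~5.0 ms the CPU actually needs

assumed_emulation_penalty = 0.30         # even a generous 30% Rosetta 2 overhead
cpu_time_emulated_ms = cpu_time_ms * (1 + assumed_emulation_penalty)  # ~6.5 ms

print(frame_budget_ms, cpu_time_ms, cpu_time_emulated_ms)
# The CPU finishes its work long before the GPU is ready for the next frame,
# so the translation overhead never shows up in the delivered frame rate.
```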

Take, for example, this desktop i5-12400F review, similarly paired with a high-end RTX 3080. At 720p resolution, average frame rates result in a fairly large difference in performance between the benchmarked hardware, with the fastest CPU included in the chart pushing around a 38% higher frame rate than the slowest in that game (scroll to the bottom for Shadow of the Tomb Raider)...
https://www.techpowerup.com/review/intel-core-i5-12400f/15.html

But all tested CPUs, even the nearly seven-year-old i7-6700K, managed to average over 250fps in that game under their test conditions at 720p. Move up to 1080p, though, and despite the graphics card still being overkill for that resolution, performance is very similar between all processors, effectively capped at around 250fps. The 6700K does drop a little to 236fps, but the fastest CPUs are now less than 7% faster...
https://www.techpowerup.com/review/intel-core-i5-12400f/16.html

Move up to 1440p, and the graphics hardware limits frame rates to around 180fps for all tested CPUs, with less than a 1% difference between them, well within the margin of error, as the ordering becomes a lot more random compared to the prior charts. All of the tested CPUs have enough performance to keep up with the graphics card at this frame rate, so that 38% performance difference at 720p has completely evaporated...
https://www.techpowerup.com/review/intel-core-i5-12400f/17.html

And of course that continues at 4K, with the graphics hardware limiting performance to 98fps for all CPUs. If a CPU's performance characteristics make no tangible difference when the graphics hardware limits the game to 180fps, then certainly there shouldn't be any difference when slower graphics hardware limits it to around 30fps, even if they happen to be testing a more demanding area of the game. So, this should strictly be a graphics performance comparison, with the CPU cores not pushed to their limits and in turn not likely making any measurable difference to performance.
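
The same min(CPU, GPU) picture, fed with rounded numbers in the vicinity of those TechPowerUp charts (illustrative, not the site's exact figures), shows how the 38% gap at 720p collapses once the GPU ceiling comes down:

```python
# How a CPU-side gap disappears as the GPU ceiling drops with resolution.
# Numbers are rounded illustrations of the linked results, not exact values,
# and the model is deliberately simplified (real charts keep small residual gaps).

def delivered(cpu_fps: float, gpu_ceiling_fps: float) -> float:
    return min(cpu_fps, gpu_ceiling_fps)

fast_cpu, slow_cpu = 350.0, 253.0   # ~38% apart when nothing else limits them

gpu_ceilings = {
    "720p": 1000.0,   # effectively no GPU limit for these CPUs
    "1080p": 250.0,
    "1440p": 180.0,
    "4K": 98.0,
}

for res, ceiling in gpu_ceilings.items():
    fast = delivered(fast_cpu, ceiling)
    slow = delivered(slow_cpu, ceiling)
    gap_pct = (fast - slow) / slow * 100
    print(f"{res}: {fast:.0f} vs {slow:.0f} fps -> {gap_pct:.0f}% difference")
# 720p shows the full ~38% gap; from 1080p on, both CPUs sit at the GPU ceiling
# and the measured difference reads as ~0%.
```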