Microsoft Flight Simulator 2024 PC performance testing and settings analysis — we tested 23 GPUs, the game is even more demanding than its predecessor

I'm currently running on an i5-10600K and a 2070 Super and it's a'ight, but the graphics in-game are nothing like the graphics shown during the 15-minute load time! :)

The one question I had is about support for multiple monitors. For example, if you had three 1080p monitors for a wider field of view, or perhaps dedicated instruments, etc., would the performance scale linearly? i.e., take the 1080p numbers and simply divide by 3? I feel there are a lot of MSFS pilots that run multiple-monitor setups. Thanks in advance!
 
Performance is rarely linear when it comes to graphics, so it's always best to compare the closest resolution/number of pixels when trying to guesstimate performance. 4K is 4x the number of pixels of 1080p, but performance is pretty much never going to be 1/4 of the 1080p result.

In the case of 3x 1080p displays you'd be looking at a 5760x1080 resolution, or 6,220,800 pixels. Reviewers most often test video cards at 1080p/1440p/2160p. 1440p is 3,686,400 pixels and 2160p is 8,294,400 pixels, so your 3x 1080p displays would be a bit closer to 4K than to 1440p. If you were trying to get an idea of performance, splitting the difference between the two would be about as close as you can get without the exact resolution being tested.
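To make the comparison concrete, here's a quick bit of Python that just does the pixel-count arithmetic above (purely illustrative numbers, not benchmark data):

```python
# Pixel counts for common benchmark resolutions vs. a triple-1080p setup.
resolutions = {
    "1080p (1920x1080)":    1920 * 1080,   # 2,073,600
    "1440p (2560x1440)":    2560 * 1440,   # 3,686,400
    "3x 1080p (5760x1080)": 5760 * 1080,   # 6,220,800
    "4K (3840x2160)":       3840 * 2160,   # 8,294,400
}

for name, pixels in resolutions.items():
    print(f"{name}: {pixels:,} pixels")
```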
 
Thanks for the advice. This current machine is a build I did for my daughter back in 2020, and it's driving one 4K monitor now... again, in a very low video mode, but the frame rate is good, so the game is totally playable. I'm thinking it might be time for a new build, however! Every time I'm done playing, I need to move my yoke/quadrant controllers out of the way. I think a sim needs a more "dedicated table". Unfortunately, it appears I'm going to need one of the most powerful GPUs if I'm going to do this build correctly and get some resolution that doesn't look like 1980's Battlezone. I've got a ton of extra monitors and I'm really thinking the 3+ monitor setup is the way to go. Cheers
 
Yeah, what thestryker said. Generally speaking, 3x 1080p will be less demanding than 4K but more demanding than 1440p. And of course, 3x 1440p (7680x1440) would be more demanding than 4K. My rough ballpark estimate is that doubling the number of pixels typically causes roughly a 30% loss of performance, give or take.

So, as an example, 4K is four times the pixels of 1080p, so it causes a 30% loss on top of a 30% loss; combined, it usually ends up being about half the performance. But it still varies by game and GPU, because going to such a high resolution also puts a bigger strain on memory capacity and bandwidth. If you don't run out of bandwidth and capacity, half the performance at 4K compared to 1080p is a pretty reasonable expectation, but that's still only an estimate.

Besides MSFS24, we can also look at my recent Stalker 2 testing. RTX 4070 Ti Super got ~80 FPS at 1080p Epic, and 41 FPS at 4K Epic. So that's perfectly in line with my "half the perf" estimate. But the 4090 only ran about 33% slower (CPU limits), and the 4070 was 55% slower (started running out of VRAM at 4K).
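For anyone who wants to play with that rule of thumb, here's a minimal Python sketch; the 30%-per-doubling factor is just the ballpark estimate from above, not a measured constant, and VRAM or CPU limits can throw it off as noted:

```python
import math

def estimate_fps(base_fps, base_pixels, target_pixels, loss_per_doubling=0.30):
    """Rough FPS estimate: apply a ~30% loss each time the pixel count doubles."""
    doublings = math.log2(target_pixels / base_pixels)
    return base_fps * (1.0 - loss_per_doubling) ** doublings

# Sanity check against the Stalker 2 numbers quoted above (~80 FPS at 1080p Epic):
p_1080 = 1920 * 1080
p_4k = 3840 * 2160
print(round(estimate_fps(80, p_1080, p_4k), 1))  # ~39.2, close to the measured 41 FPS
```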

There are also some idiosyncrasies with triple monitors that can sometimes reduce performance a bit, IIRC. I don't normally try to run more than one monitor, just because, while dual displays are popular, they're still niche relative to a single display for gaming purposes, and there are usually dozens of other equally viable scenarios I could try to benchmark.
 
Hi, appreciate your article.
I have one question... you didn't test on a really low-end CPU.
I actually play FS2020 on an i7-8700K (6 cores), 32 GB RAM, and an RTX 4070 Ti OC. It is smooth enough (50-75 fps) until I fly over a big city, which is a stutter festival, even if I reduce all the CPU-demanding features, like object detail, field of view, etc. I'm playing on a 5100x1400px screen, but I think that impacts the GPU, not the CPU. The stutters are due to the demand being too high on a single core.

I'm wondering about the results I could see with FS2024. My GPU should be enough (I can reduce details and resolution), my CPU not so much, but 2024 should be better parallelized and multi-threaded.
Do you have any experience with FS2024 on an i7-7xxx or 8xxx, or another 6-core CPU?
 
It doesn't need to be a fringe scenario. If the CPU is fully hogged (try testing this with a lower-end CPU and any GPU), then AMD should come out on top more often than not because of the CPU scheduler in the nVidia driver. That's the theory, at least.

That being said, I do agree it's way more testing and, perhaps (not sure), a hill not many want to die on, since if you're using a low-end CPU and a low-ish end GPU, blaming the nVidia scheduler for lower FPS is kind of moot.

I guess the question here would be: which games and/or scenarios would load a mid-range CPU to the point it would really, and noticeably, affect the performance of an nVidia GPU that makes sense in the pairing?

Like an i3/i5 with a 4060/4060 Ti vs an RX 6600/RX 7600? I'm sure there are still plenty of people using 8700Ks and 9700Ks, no?

Regards.
Apologies for the thread necromancy, but I thought it would be interesting to bring this up given the context:

https://www.youtube.com/watch?v=npIpWFSfmv4
https://www.youtube.com/watch?v=3dF_xJytE7g
https://www.youtube.com/watch?v=00GmwHIJuJY
These go to the point I was raising about AMD having less driver overhead vs nVidia, with Intel now saying "hold my beer, fellas". That is the theory at least, but it illustrates that this can happen on any GPU given a certain performance tier and pairing. Perhaps randomly investigating this from time to time would be nice?

Regards.
 
What I'd be curious about is testing across all three over a wide selection of games, because the Arc overhead seems really arbitrary (Spider-Man isn't particularly CPU-heavy, but has some of the biggest and most linear drops), whereas nvidia's seemed directly linked to CPU usage.
 
Yep. There is something in there, and it could be memory/cache related, because the V-Cache CPUs and CPUs with bigger L2 and L3 caches work slightly better.

Regards.