Microsoft Flight Simulator 2024 PC performance testing and settings analysis — we tested 23 GPUs, the game is even more demanding than its predecessor

Thanks a lot for this testing Jarred. As always, it's interesting to see how the cards stack up against one another in MS-FS.

I'll have to pick you up on this one, though: "This isn't all that unusual. We've seen plenty of games where, if you're CPU limited at lower resolutions, AMD's drivers and GPUs still hit higher levels of performance. This is reputedly thanks to better threading optimizations, and of course, we're also looking at somewhat silly test settings — $700 to $2,000 GPUs running on a top-tier gaming PC at 1080p medium isn't normally the intended workload."

Sorry, but that feels a bit dishonest. Everyone in the industry knows nVidia doesn't have a hardware scheduler; or at least, not in the same vein as AMD. AMD has implemented a full hardware scheduler for the command queue inside the GPU, while nVidia has their implementation in the driver (which, all things considered, is very good), but with the massive pitfall that when the CPU is hogged, your command issuing will suffer. The result of that is exactly the graph you have at 1080p. This became a real, tangible problem when games started using more threads and MS introduced HAGS, as I'm sure most remember how nVidia couldn't even use it effectively until a few years back.

Happy to be corrected on the deeper technicalities of this, but again, that paragraph felt more like a Userbenchmark justification than just giving credit to AMD for having their scheduler inside the GPU design instead of leeching off the CPU.

Not to mention how you're throwing everyone that wants infinite FPS at lower resolutions under the bus :)

Regards.
 
The amount of data the game downloads is pretty crazy. At the low end of 250 MB (2000 Mb) an hour, it would be 120 Gb (15 GB) per month at 2 hours of gameplay per day. At the high end of 575 MB (4600 Mb) an hour, it would be 276 Gb (34.5 GB) per month with the same 2 hours per day usage. Hopefully, if a player sticks around the same general regions, the data downloads are far less. Otherwise, those with data caps might take a hit.
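If anyone wants to plug in their own hours or rates, the arithmetic is trivial to script; a quick sketch (Python, with the low/high per-hour rates from above as the only inputs):

```python
# Back-of-the-envelope monthly data usage for MSFS 2024 world streaming.
# The low/high per-hour rates are the estimates quoted above; adjust to taste.
HOURS_PER_DAY = 2
DAYS_PER_MONTH = 30

for label, mb_per_hour in [("low", 250), ("high", 575)]:
    gb_per_month = mb_per_hour * HOURS_PER_DAY * DAYS_PER_MONTH / 1000  # gigabytes
    print(f"{label}: {gb_per_month:.1f} GB ({gb_per_month * 8:.0f} Gb) per month")

# Prints: low: 15.0 GB (120 Gb) per month / high: 34.5 GB (276 Gb) per month
```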
 
Thanks a lot for this testing Jarred. As always, it's interesting to see how the cards stack up against one another in MS-FS.

I'll have to pick you up on this one, though: "This isn't all that unusual. We've seen plenty of games where, if you're CPU limited at lower resolutions, AMD's drivers and GPUs still hit higher levels of performance. This is reputedly thanks to better threading optimizations, and of course, we're also looking at somewhat silly test settings — $700 to $2,000 GPUs running on a top-tier gaming PC at 1080p medium isn't normally the intended workload."

Sorry, but that feels a bit dishonest. Everyone in the industry knows nVidia doesn't have a hardware scheduler; or at least, not in the same vein as AMD. AMD has implemented a full hardware scheduler for the command queue inside the GPU, while nVidia has their implementation in the driver (which, all things considered, is very good), but with the massive pitfall that when the CPU is hogged, your command issuing will suffer. The result of that is exactly the graph you have at 1080p. This became a real, tangible problem when games started using more threads and MS introduced HAGS, as I'm sure most remember how nVidia couldn't even use it effectively until a few years back.

Happy to be corrected on the deeper technicalities of this, but again, that paragraph felt more like a Userbenchmark justification than just giving credit to AMD for having their scheduler inside the GPU design instead of leeching off the CPU.

Not to mention how you're throwing everyone that wants infinite FPS at lower resolutions under the bus :)

Regards.
I don't honestly know for sure what the difference is at 1080p medium. It could be as you say, and it's a reasonably regular occurrence at lower resolutions and lower settings. There are lots of details that neither AMD nor Nvidia really provide, and I don't want to make blanket assumptions about the root cause. So I do a handwavy "differences in drivers and architectures" and move along, because the amount of time required to properly analyze this stuff isn't small and doesn't add enough to an article to justify my trying to figure it out.

Ultimately, the performance is what it is. AMD does better with its top-tier GPUs at settings that don't really matter that much for such expensive hardware. It's in the same category as "winning" at 4K ultra performance on a budget GPU that does, say, 15 FPS while the competition only does 10 FPS. Technically? Yes, that's 50% faster. But it just doesn't matter because no one is using budget GPUs to run 4K ultra. Just like almost no one is using extreme GPUs to play at 1080p medium.

(And don't say eSports because that's a tiny market overall, and if you're running 1080p medium or 1080p low in an eSports game you're going to hit a similar limit on a $500 GPU as on a $2000 GPU. It's basically a fringe scenario, in other words. Plus I don't test or write about eSports games.)
 
Wow, no love for the 3090?
Nope. And no love for the 3080 Ti, 3070 Ti, 3070, 3060 Ti, 6950 XT, 6900 XT, 6800 XT, 6800, 6750 XT, 6700 XT, 6700, 6650 XT, 6600 XT, 6600, 6500 XT, and 6400 either. Or most GPUs from the RTX 20-series and RX 5000-series and earlier. Because while the numbers are potentially interesting, gathering all that data can consume multiple days.

In general, the 4070 Ti Super and 4070 Super should bracket the 3090. In this particular game, based on the 3080 10GB results, it looks like Ampere generation cards may do better vs Ada than in some other games. So, I test all of the latest generation (mostly) and a few previous-generation cards that can be used as points of reference.
 
Optimizations will come soon.
Don't forget that this is not really a "game" but a simulator, and it gets updated continuously, like production software, over the years.
This is just v1.0 for now.
 
Hi Jarred,

With MSFS 2024 being VR capable, do you/Tom's ever hope to cover more games that incorporate that element? I guess it's a whole lot of testing to add to the already heavy load. But it would be interesting to see.

I know there probably isn't enough uptake in the VR world to warrant the effort, so I can understand why it's not really relevant.

As a matter of interest, do you test the VR parts of any games you've reviewed just to see how they play?
 
If you're looking at that and wondering what happened to the RTX 4060 Ti and RTX 4060, the above discussion about running out of VRAM applies here as well. Probably, framegen would work fine at 1440p and 1080p, but at 4K ultra, the 8GB cards run short of memory, and you get a choppy mess. Wasn't it a great decision by Nvidia to not go with a 192-bit interface and 12GB of VRAM as the baseline, like it did with the (original) RTX 3060? Now, anyone who bought a 4060 or 4060 Ti gets to pay the price.
What a bizarre interjection.
Not only are the 4060 non-Ti and 4060 Ti 8GB not present in ANY chart (let alone the framegen one that paragraph sits under), but even if they were, then going by the 4060 Ti 16GB's performance they would be getting sub-30 FPS at 4K Ultra regardless of whether framegen was on or off. In other words, those cards would be going from unplayable to also unplayable.
As the 4060 Ti 16GB results show (going from 31-34 FPS to 24-37 FPS with framegen), more memory would have done nothing of value; the performance bottleneck is elsewhere, likely the sheer lack of hardware available for both shading and upscaling.
 
I don't honestly know for sure what the difference is at 1080p medium. It could be as you say, and it's a reasonably regular occurrence at lower resolutions and lower settings. There are lots of details that neither AMD nor Nvidia really provide, and I don't want to make blanket assumptions about the root cause. So I do a handwavy "differences in drivers and architectures" and move along, because the amount of time required to properly analyze this stuff isn't small and doesn't add enough to an article to justify my trying to figure it out.

Ultimately, the performance is what it is. AMD does better with its top-tier GPUs at settings that don't really matter that much for such expensive hardware. It's in the same category as "winning" at 4K ultra performance on a budget GPU that does, say, 15 FPS while the competition only does 10 FPS. Technically? Yes, that's 50% faster. But it just doesn't matter because no one is using budget GPUs to run 4K ultra. Just like almost no one is using extreme GPUs to play at 1080p medium.

(And don't say eSports because that's a tiny market overall, and if you're running 1080p medium or 1080p low in an eSports game you're going to hit a similar limit on a $500 GPU as on a $2000 GPU. It's basically a fringe scenario, in other words. Plus I don't test or write about eSports games.)
It doesn't need to be a fringe scenario. If the CPU is fully hogged (try testing this with a lower-end CPU and any GPU), then AMD should come out on top more often than not because of the CPU scheduler in the nVidia driver. That's the theory at least.

This being said, I do agree it's way more testing and, perhaps (not sure), a hill not many want to die on, since if you're using a low-end CPU and low-ish end GPU, blaming the nVidia scheduler for lower FPS is kind of moot.

I guess the question here would be: which games and/or scenarios would load a mid-range CPU to the point where it really, and noticeably, affects the performance of an nVidia GPU that makes sense in the pairing?

Like an i3/i5 with a 4060/4060 Ti vs an RX 6600/RX 7600? I'm sure there are still plenty of people using 8700Ks and 9700Ks, no?

Regards.
 
"Microsoft's Flight Simulator 2020 already had hefty requirements for max settings, and the new 2024 release ups the ante. It also ditches DirectX 11 support and leverages DirectX 12 for all systems" - Jarred Walton

Lol, Microsoft had DX12 on the market for about five years by the time FS2020 was released. As for FS2024, it's nice that they're using their own latest graphics API... *cough* nearly 10 years later *cough*.

To be fair, I know game engines don't always benefit significantly in performance terms from going from DX11 to DX12 -- a lot of it depends on draw calls, multi-threading efficiency, and of course the hardware itself. That said, additional enhancements and changes since DX12's original launch, like DXR (DirectX Raytracing), are value-adds.
 
Also, it's interesting that AMD came out that far ahead. Additionally, Jarred tested with the October (24.10.1) AMD drivers (they didn't release new GA ones in November), while yesterday AMD released their December drivers (24.12.1) with release notes that mention official game support for FS2024.
https://www.amd.com/en/resources/support-articles/release-notes/RN-RAD-WIN-24-12-1.html

So, sorry Jarred, but you have to retest all the AMD GPUs!! :weary:
(lol j/k of course)

Guess I'll go with the hardware scheduler theory as well.
 
Optimizations will come soon.
Don't forget that this is not really a "game" but a simulator, and it gets updated continuously, like production software, over the years.
This is just v1.0 for now.
I find your optimism endearing, but misplaced: you're dealing with M$ here!

I can't remember significant updates other than outright bug fixes for FS2020. And those Asobo may have had to fix on a budget already paid by M$.

And given that this is contract work and not a studio selling its own wares, somebody at M$ would have to sign off on this additional work: that company only takes your data for free; they aren't into wasting effort on happy customers.

The only "improvements" I've seen on FS2020 over its life-time was content, which M$ again evidently could just buy on the 3rd party market and make available for free via a marketing budget. Pretty sure that was extremely cheap for the creating the illusion of continous improvements.

Given that my main performance pains remained exactly the same between the two releases (stuttering in VR independent of graphics quality settings, with plenty of CPU and GPU resources remaining available), I'm very doubtful this will be fixed in this release.

Evidently VR is too much of a niche to get QA attention.
 
Hi Jarred,

With MSFS 2024 being VR capable, do you/Tom's ever hope to cover more games that incorporate that element? I guess it's a whole lot of testing to add to the already heavy load. But it would be interesting to see.

I know there probably isn't enough uptake in the VR world to warrant the effort, so I can understand why it's not really relevant.

As a matter of interest, do you test the VR parts of any games you've reviewed just to see how they play?
I don't do VR, at all. I theoretically could, but it requires a bunch of extra stuff and I've never even tried to get there. It's not my niche, basically. I've used VR before and I fog up the glasses every time. I guess I sweat / run hot or whatever from too much overclocking?
 
Also, it's interesting that AMD came out that far ahead. Additionally, Jarred tested with the October (24.10.1) AMD drivers (they didn't release new GA ones in November), while yesterday AMD released their December drivers (24.12.1) with release notes that mention official game support for FS2024.
https://www.amd.com/en/resources/support-articles/release-notes/RN-RAD-WIN-24-12-1.html

So, sorry Jarred, but you have to retest all the AMD GPUs!! :weary:
(lol j/k of course)

Guess I'll go with the hardware scheduler theory as well.
I'm testing GPUs for a massive GPU hierarchy update (that will take months...) so MSFS24 will get some retesting love there. I have to say that in general, a lot of "Game Ready" driver releases don't actually seem to matter much. I guess I'll have to see if things improve with the 24.12.1 now, though. Might impact the Stalker 2 results as well!
 
"Microsoft's Flight Simulator 2020 already had hefty requirements for max settings, and the new 2024 release ups the ante. It also ditches DirectX 11 support and leverages DirectX 12 for all systems" - Jarred Walton

Lol, Microsoft had DX12 on the market for about five years by the time FS2020 was released. As for FS2024, it's nice that they're using their own latest graphics API... *cough* nearly 10 years later *cough*.

To be fair, I know game engines don't always benefit significantly in performance terms from going from DX11 to DX12 -- a lot of it depends on draw calls, multi-threading efficiency, and of course the hardware itself. That said, additional enhancements and changes since DX12's original launch, like DXR (DirectX Raytracing), are value-adds.
The only reason for ditching DX11 is saving money. Having to support and validate distinct graphics APIs is costly, and Asobo isn't using anybody else's game engine to hide that cost.

Since DX12 is ubiquitous, if only through emulation, maintaining DX11 in the game engine would bring zero vendor benefit.

FS2020 and 2024 remain exceptionally bad when it comes to turning those resources into visual quality.

I've thrown my 7950X3D and an RTX 4090 at FS2020 and FS2024 and still get stutters and a digital twin which bears no resemblance to ground truth.

What I can get from an Orange Pi 5+ at 4k via Chromium and Google 3D terrain maps not only bears resemblance to what's actually there, it also shows just how much visual performance truly optimized software can tease out of a fraction of the compute resources and 100x less power consumption.

FS2020 and FS2024 remain horrendously badly optimized pieces of software where most of the effort goes into marketing, not code and data.
 
I don't do VR, at all. I theoretically could, but it requires a bunch of extra stuff and I've never even tried to get there. It's not my niche, basically. I've used VR before and I fog up the glasses every time. I guess I sweat / run hot or whatever from too much overclocking?
I not only need glasses, but ever since I turned half a century old, I need different ones for screen work and anything a little more forward-looking... which includes VR.

So trying to get VR working didn't just involve putting on a headset and removing it, but often enough swapping glasses, too. And no, just leaving them off was not an option because, of course, both eyes vary in their degree of short-sightedness, and for any illusion of 3D I need both to be sharp!

I just had to do VR when the DK1 was proposed for funding; as a technology lead in the company, some sacrifices were part of the job description, even if I paid for them from my bonus! And I kept on with the DK2 and the CV1, until the HP Reverb Pro for the last few years. As with all things gaming hardware, my 20/20-vision kids perhaps profit a little more... and that's ok.

My eye issues mean that for VR I typically have to resort to contacts, dual-focus contacts to be precise, so I can still read the screen dead on while the outer areas allow full focus in the long distance: pretty cool stuff that glasses can't do! But nothing my eyes tolerate as a default.

That eliminates a lot of the hassle and some fogging, but it means I have to clear my schedule to go into VR. And then it has to stay clear, too.

With dear old M$ ditching WMR, my favorite HP Reverb Pro headset is getting tough love, so I looked around and finally conceded to going with Oculus again, using a Quest 3, after making sure it indeed didn't require Facebook, "just" a Meta account.

And while it meant I had to upgrade my Wifi to 7, the visual quality is very much the same 2k by 2k.
And €500 for Wifi 7 later, I'd say Wifi 6 probably wasn't the reason for the flight sim stutters: Alas...

Most importantly, the Quest 3 see-through cameras allow me to get things done around me, and even on my 4k monitors, without removing the headset or the long-distance glasses, which fit much better under the Quest than they did on the Reverb.

Long story short: even if I never wanted to move to the Quest as long as the Reverb could do the job, it's turned out to be a vast improvement in terms of spontaneous use. Not only can I just use my normal long-distance glasses for everything I do there (they also work just fine with the virtual desktop), but evidently the very discreet fans built into the Quest 3 manage to keep fogging at bay (didn't try them in summer).

Not having to bother about cables is a big benefit too, and the extra battery pack at the back does wonders to keep the headset balanced, with enough battery life to make it useful... perhaps not for a full transatlantic flight, but at that point what little realism flight sims have to offer is enough to convince me that being a pilot would not have been the career for me.
 
Hello Jarred, and thanks for the review; I understand that the MSFS review was very challenging.
That said, I would appreciate it if you could further deepen the CPU analysis, specifically regarding multithreading, core loads, ideal core count, and memory bandwidth. Also, the lack of the Ultra 285 and 9950 is very disappointing.
Thanks in advance.
 
Odd, because I'm pretty much getting the same performance, if not a little better, vs MSFS 2020. I can now run MSFS in 4K @ 30FPS with Ultra settings, and it is a lot smoother landing at airports. Large US airports are about the same, a little choppy, which is to be expected. I compensate by lowering airport air and ground traffic, but that was also the same with MSFS 2020.

I'm using an HP Omen 30L: i9-10850K, 32GB of 3200MHz DDR4, RTX 3080.
 
Odd, because I'm pretty much getting the same performance, if not a little better, vs MSFS 2020. I can now run MSFS in 4K @ 30FPS with Ultra settings, and it is a lot smoother landing at airports. Large US airports are about the same, a little choppy, which is to be expected. I compensate by lowering airport air and ground traffic, but that was also the same with MSFS 2020.

I'm using an HP Omen 30L: i9-10850K, 32GB of 3200MHz DDR4, RTX 3080.
That makes sense, according to statements made by the developers, which affirm that multithreading is greatly improved.
 
See HTC Vive XR elite with adjustable diopter dials
Thanks for the hint: that does indeed look good. Somehow those dials never popped up in any feature list: they must be too unique, or I just wasn't paying attention...

I believe the Oculus DK2 came with three sets of lenses, which compensated for my different eyes well enough, but that was basically a single Samsung Note display at THD for both eyes and had giant pixels...

Anyhow, nothing new until the Quest 3 breaks or is made obsolete, too.
 
Hello Jarred, and thanks for the review; I understand that the MSFS review was very challenging.
That said, I would appreciate it if you could further deepen the CPU analysis, specifically regarding multithreading, core loads, ideal core count, and memory bandwidth. Also, the lack of the Ultra 285 and 9950 is very disappointing.
Thanks in advance.
I've done some testing using Lasso, trying to pinpoint where the stuttering in VR came from...

Of course, I only had a 7950X3D to play with and the RTX 4090 at 4k with up to 144Hz.

I always fly the same route from Frankfurt airport over my home to Wiesbaden airbase for testing. And I typically use the AeroElvira Optica because it offers the best view of the outside and is super easy to fly. On earlier FS versions it was the ultralight airplane, for the same reason: I'm not training to become a pilot, but having fun exploring the planet.

I could not see any significant difference between exclusively using CCD0 (V-cache), CCD1 (higher clocks), or just both, neither at 4k ultra with DLSS nor in VR with the primary screen dialed down to essentially THD low settings while the headset is set to highest quality. Overall, a Ryzen 9 is just bored to death on all cores.

Astonishingly enough, for the VR stuttering it also made very little difference what the primary monitor was running at: setting that to 4k ultra alongside the top VR quality settings, essentially rendering two 4k screens instead of one, didn't bother the 4090 much, but it also didn't solve the stuttering outside world, while the rendering of the cockpit inside VR is always extremely fluid.

Changing the VR refresh rate from 72 to 120Hz, with all the intermediates supported, made no difference to the world render stutter, but evidently there is a small difference between 72 and 90 Hz in how the cockpit renders on rapid head movement.

Basically, the GPU is just as bored as the CPU: it never goes near any limit. Utilization, memory bandwidth, memory size, whatever resources HWinfo can list, there is plenty left over.

Changing the world update rate from 30 to something higher mostly seems to make stuttering somewhat more erratic and causes some extra CPU consumption. I don't see it going anywhere near the maximum of 90 Hz I tried; I guess stuttering is mostly the world update rate going below the 30 Hz default when VR is active. Without VR the world is usually rather smooth, but still probably not near 90 Hz if that's what I set. I have no way of measuring that, unfortunately: there is no effective rate or skipped-frame counter for the internal world updates.

Something responsible for world rendering seems to be single-threaded, and if, because of VR, more than one angle of the world needs to be rendered, it evidently can't simply use another core for that, or has to work around semaphores or locks with code sections so big that it causes those stutters. When that single thread is spread around all those cores, the CPU looks very bored, even if a single program thread is using 100% of a core. And no, there is no thread consistently showing 100%, just because I can hear you asking: I looked for it and didn't find it.

My personal impression is that that's because of a single-thread limit inside the time-critical part of the flight simulator. Same old, same old: M$ flight sim has never made good use of extra cores, and that architectural limitation still hasn't been removed.

My current error theory is that modern CPUs may have moved it to a point where it's not a concern most of the time, because even a single CPU core isn't overloaded by peak simulator load. But with VR, and perhaps extra angles on extra monitors or windows active, that single thread will peak above what a single physical core can deliver and then cause world render stutter that's otherwise less noticeable. You can't see it on the CPU graphs because that loaded thread gets migrated between cores by the Windows scheduler much faster than any sampling graph could show.
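If anyone wants to repeat that hunt for the hidden hot thread, per-thread CPU time is the way around the sampling problem: sum each thread's CPU seconds over an interval, and a thread eating one full core shows up as ~100% no matter how often the scheduler migrates it. A rough sketch in Python with psutil; the process name is my assumption, so adjust it to whatever Task Manager shows for the sim:

```python
# Sketch: find threads using (close to) a full core, even when the Windows
# scheduler migrates them between cores too fast for per-core graphs to show.
# Requires psutil; the process name below is an assumption -- adjust it.
import time
import psutil

PROC_NAME = "FlightSimulator2024.exe"  # hypothetical name, check Task Manager
INTERVAL = 5.0  # seconds between the two snapshots

proc = next(p for p in psutil.process_iter(["name"]) if p.info["name"] == PROC_NAME)

# Two snapshots of per-thread CPU time (user + system), then diff.
before = {t.id: t.user_time + t.system_time for t in proc.threads()}
time.sleep(INTERVAL)
after = {t.id: t.user_time + t.system_time for t in proc.threads()}

# Busy fraction per thread: 1.0 means one full core's worth of work.
busiest = sorted(((after[tid] - before.get(tid, 0.0)) / INTERVAL, tid) for tid in after)
for frac, tid in busiest[-5:]:
    print(f"thread {tid}: {frac:.0%} of one core")
```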

So far I only tried to test just how important extra cores were by using Lasso to keep FS2024 on different core sets. As I already noted above, V-cache or not didn't make much of a difference; I also tried disabling SMT with CCD0 and CCD1, because that's a "Game Mode" BIOS option today.

Nothing really happened, and then I just took more cores away, again with little impact. At two cores (no SMT) I started noticing some extra lag; FS2024 runs almost as well (or as badly) on a single core as with 16.
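For reference, that Lasso experiment can also be scripted; a minimal sketch with psutil's cpu_affinity(), under the assumption that logical CPUs 0-15 map to CCD0 (V-cache) and 16-31 to CCD1 on a 7950X3D with SMT on, which is worth double-checking in HWinfo before trusting any numbers:

```python
# Sketch: restrict the sim to chosen logical CPUs, like Process Lasso does.
# The process name and the CCD-to-CPU mapping are assumptions -- verify both.
import psutil

PROC_NAME = "FlightSimulator2024.exe"  # hypothetical name, check Task Manager

proc = next(p for p in psutil.process_iter(["name"]) if p.info["name"] == PROC_NAME)

proc.cpu_affinity(list(range(0, 16)))  # CCD0 only (assumed mapping)
# proc.cpu_affinity([0, 2])            # e.g. two physical cores, skipping SMT siblings
# proc.cpu_affinity([])                # empty list = all CPUs again (psutil >= 2.2)
print("pinned to:", proc.cpu_affinity())
```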

It just cannot spread the crucial bits, and unless you use VR, a single fast-enough core will carry most of everything else, too. Even at 4k Ultra.

I feel safe to say that FS2024 would run just as well on a 4-core machine, even with an RTX 5090 at ultra, especially if at least one core on that CPU ran at 6 GHz or beyond with good IPC.

FS2024 doesn't consume significant amounts of VRAM; with dual 4k screens and the VR headset involved, I never saw it use more than around 11 of 24GB of GPU memory.

I didn't see it use any significant part of the 96GB of DDR5-6000, nor does HWinfo show any significant RAM bandwidth being used. Basically, I used HWinfo and Lasso on the 2nd 4k screen to search for anything that might be a bottleneck, and the only thing I could identify is that the single core it uses for everything critical would need to run at 10GHz or more.

In summary: FS2024, pretty much like FS2020, is limited almost solely by the speed of the fastest CPU core. Most of the time that's good for better than 60Hz, unless you do VR: then the simulated world stutters in ways it never would in a real plane, which kills immersion faster than a crash would kill the pilot. No currently available CPU will give you 240 Hz, perhaps not even with an RTX 4090 at 720p. Throwing extra hardware at it just doesn't help, because the issue is software.

As far as I can tell, under the hood the FS2024 core isn't that much changed from the 2002 edition, which M$ sells every few years as something completely new.

The code that invents the outside world, streets, vegetation, houses, cars, etc. where there is only 2D data in Microsoft's map data has not changed much in terms of quality and realism since Flight Simulator X.

It has not changed at all since FS2020, even if it has moved to the cloud, because the rather stately 1830 villa where I live in the servants' quarters under the roof looks like the exact same generic cottage on FS2024 as it does on FS2020. And the low barn on the other side of the field is still rendered as a rather substantial multi-floor Hamburg-harbour-style storage house, the likes of which FS likes to sprinkle generously across all of Germany for some reason. Everything down to the color of each individual house on my street is exactly the same on FS2020 and FS2024, and apart from a rough outline it has zero similarity to what's actually on the ground.

Paint a house that red in Germany and you'd be put into prison before you're half finished, but Microsoft doesn't know that, not even with OpenAI.

And real cars don't drive through rivers, across fields, or straight into each other on four-lane highways. Again, it should only take a few tens of thousands of weights to teach that to a machine, and that model would easily run on the tiny CPU inside the TPM, perhaps even on the old 80486 in PCHs, much less powerful than all those idling E-cores.

M$ Flight Simulator is so pathetic it must be inspiring: so many words to prove it, sorry!
 
I'd be curious to see if bumping up the memory to 64GB makes any difference in performance or data usage. PC Gamer's hardware writer saw some differences, but they were also using a 9900X, which I imagine isn't ideal due to the 6-core CCDs.
 
Thank you for the extensive testing, @JarredWaltonGPU, even bringing in different CPUs.
Do you think the game is badly optimised and there's much more potential for performance increases on today's hardware, or is this more or less what we have to expect?