News Microsoft Flight Simulator Performance and Benchmarks: Your PC May Need an Upgrade

For anyone following this thread, I just added a bunch of memory testing results. What a pain! I thought capacity would be a big deal, but it's not really -- at least, not with an 11GB RTX 2080 Ti. RAM speed, on the other hand, made a rather huge difference, up to 25% (compared to 'JEDEC DDR4-2133', which absolutely no one should be using on a gaming PC).
 
Why in ultra settings the game limit is 50 fps? I don't see this problem on lower settings.
 
More objects to draw means more CPU computational power is required. That's why low gets to ~95 fps in my testing -- far fewer objects and draw calls. Medium increases the complexity and max fps with a 9900K ends up around 80 fps. Then high adds even more complexity and the 9900K drops to 65 fps, and then finally 50 fps at ultra quality.
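Those caps line up neatly if you convert them to per-frame CPU time -- a quick sketch (my own framing and helper name, nothing from the game itself):

```python
# A CPU-limited fps cap just means the CPU needs that many milliseconds to
# prepare each frame (draw calls, object updates), no matter how fast the GPU is.

def frame_time_ms(fps: float) -> float:
    """Milliseconds of CPU work per frame implied by a CPU-limited fps cap."""
    return 1000.0 / fps

for preset, fps_cap in [("low", 95), ("medium", 80), ("high", 65), ("ultra", 50)]:
    print(f"{preset:>6}: {frame_time_ms(fps_cap):.1f} ms/frame of CPU work")
```

So going from low to ultra roughly doubles the CPU work per frame, from about 10.5 ms to 20 ms.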
 
Thank you.

I hope people won't get confused and think their GPUs are the problem when they see the 50 fps cap.
Not everyone will realize something is wrong when an RTX 2060 Super performs the same as an RTX 2080 Ti.
 
You should also test various RAM configurations: capacity, speed, dual-channel versus quad-channel, etc.

I pulled almost 26GB utilized yesterday while flying around Dubai, which is the most I have ever seen any game use. I'm about to run a few of my own tests to see how much of my 32GB of RAM it can actually use.

I've seen lots of people with 16GB complaining about performance tanking, and I suspect it has to do with online data cache.

I've seen my RAM utilization as high as 29-ish GB.
 
OMG, I dropped the sim like a hot potato back then, after the twin towers fell. Our fellow Americans' bravery in dispatching the reprobate homicidal maniacs who had locked themselves inside was a bookend to the horrific scenes seared into my brain, taking place in Manhattan.
 
I've seen my RAM utilization as high as 29-ish GB.
My testing with memory suggests that the game will use more RAM just for caching purposes, but it doesn't need the additional memory. I'd still be hesitant to try and play it on an 8GB PC, but 16GB should be fine. Of course, with future mods and such it could use more memory than the base game.

Generally speaking, though, all the data (textures and geometry) in system RAM basically needs to fit in VRAM for optimal performance. If the game were actually consistently using 24GB as an example, an 11GB GPU would spend a huge amount of time (relatively) swapping data over the PCIe bus. I'd assume much of the RAM use is for textures that might be used (ie, stuff around the plane that's not currently visible, but stays in RAM just in case it's needed).
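To put numbers on that swapping penalty, here's a back-of-the-envelope sketch (the 24GB working set is hypothetical, and the ~16 GB/s PCIe figure is my own assumption for a 3.0 x16 slot):

```python
# Rough estimate of data movement cost when textures/geometry overflow VRAM
# and have to come across the PCIe bus instead of sitting in local memory.

PCIE3_X16_GBPS = 16.0   # assumed practical PCIe 3.0 x16 bandwidth, GB/s
GDDR6_GBPS = 616.0      # RTX 2080 Ti local memory bandwidth, GB/s
VRAM_GB = 11.0          # RTX 2080 Ti VRAM capacity

def ms_to_move(gigabytes: float, bandwidth_gbps: float) -> float:
    """Milliseconds needed to transfer `gigabytes` at `bandwidth_gbps` GB/s."""
    return gigabytes / bandwidth_gbps * 1000.0

working_set_gb = 24.0                    # hypothetical 'game really needs 24GB' case
overflow_gb = working_set_gb - VRAM_GB   # the 13GB that can't fit in VRAM

print(f"Overflow via PCIe: {ms_to_move(overflow_gb, PCIE3_X16_GBPS):.1f} ms")
print(f"Same data in VRAM: {ms_to_move(overflow_gb, GDDR6_GBPS):.1f} ms")
```

Even touching a fraction of that overflow every frame would crater frame rates, which is why the cached-but-not-currently-needed explanation fits the data better.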
 
This chart is a GPU killing field à la Arma III. Maybe, just MAYBE, Microsoft should try incorporating multi-GPU support for FS2020 because, just like a certain X-Men villain, no one can stand against it alone.

(in my best X-Men TAS Apocalypse voice...)
"I AM THE PROGRAM THAT WILL BRING YOUR RIG TO IT'S KNEES! I HAVE BEEN KNOWN BY MANY NAMES LIKE CRYSIS, VISTA AND ARMA III. LOOK UPON ME, YE MIGHTY GAMING BUILDS AND DESPAIR, FOR I AM FS2020, AND I HAVE COME FOR YOUR SOULS!"
 
It's sooo weird, the wide range of results between what I've seen in the community and my own experimentation with the sim. This article seems to show that the clouds are the most taxing element, but on my system, changing from Ultra to Low clouds only gave me a 2 to 3 fps increase at most, and in many situations had no effect at all. I run a 1440p monitor, but even dropping the sim resolution to 1080p made no difference in frame rates either. Seriously.

i7-6700K
RTX 2080
32GB RAM
 
So a few things: first, clear skies have fewer clouds. The test was done with typical western WA weather, which means a fair amount of clouds.
Second, you are almost certainly CPU bottlenecked with the 6700K. Clouds are very likely a GPU limit, and RTX 2080 is quite potent.
You can see in my testing, even at 1080p ultra the RTX 2070 Super is hanging with the RTX 2080 Ti (and your RTX 2080 would be slightly faster than 2070 Super).

I've attached my test clip. Enable all assists and let the autopilot take it in for a landing. I use a 90 second capture period, and start about 8 seconds into the landing (right as the 'marker' appears on the right of the screen and the blue rectangle outline approach vector goes off the right of the screen). Run that on your PC and see how your numbers compare to my 2070 Super numbers.

Extract the files to [UserAppData]\Local\Packages\Microsoft.FlightSimulator_8wekyb3d8bbwe\LocalState
 


Microsoft Flight Simulator is the new kid on the block! The latest release is the most advanced and best-looking version yet. It's believed that folks who are fond of playing such a demanding sim won't mind spending to upgrade their PCs.

It's been predicted that sales of hardware upgrades driven by the game may reach $2.6 billion over the coming 3 years. The reason: it's a high-end game that needs best-in-class hardware to support the gameplay, benefiting from a large display, high resolution, and even virtual reality.

It makes sense to upgrade your PC, as this high-end game demands serious processing capability, so get ready to upgrade your current CPU.
 
Hi,

I've been testing the game on my computer for about a week on the standard ultra settings (rendering at 100, level of detail 200) at 1440p, and I'm a bit disappointed by the performance on my setup (see below).

AMD Ryzen 5 3600x
GIGABYTE GeForce RTX 2070 SUPER WINDFORCE OC 3X 8G
GIGABYTE B550 AORUS ELITE, socket AM4
Kingston A2000, 500 GB SSD
2 x Corsair 16 GB DDR4-3200

  • It seems that only flying above cities at 5,000+ ft gets me above 30 fps.
  • Flying low over mountain areas and nature performs much better, at around 25-30 fps.
  • High altitude (30,000+ ft) easily stays above 30 fps.
  • Flying over NYC, Paris, or L.A. under 1,000 ft is around 20 fps!
  • Landing and taxiing at L.A. International Airport or Paris Charles de Gaulle is nearly impossible at around 10-20 fps.
Any advice on how I can achieve better performance at low altitudes while keeping great visual results? Anything from lowering the settings to the article's suggested 'tuned settings' to updating my hardware setup?

Update: applying the 'tuned settings' gives me 23-28 fps at low altitudes, and 32 fps at 10,000 ft.

Maybe a better CPU would change a lot? Although the performance results make it clear that an AMD Ryzen 9 doesn't affect the FPS at all... Maybe AMD will soon release a CPU that reaches the Intel Core i9-9900K?

Thanks!

 
Your PC should be doing way better than that. A few thoughts:

  1. Verify your RAM is truly running at DDR4-3200 (enable XMP or A-XMP or whatever it's called in the BIOS). You can use CPU-Z to check your RAM speed in Windows as well.
  2. Verify GPU drivers are properly updated (run DDU and wipe out all AMD/Nvidia GPU drivers, reboot, reinstall latest Nvidia drivers)
  3. Verify AMD chipset drivers are installed (may need to uninstall them first and then reinstall them)
  4. Just on an off chance, run Command Prompt as admin and type "bcdedit" and look for "useplatformclock yes" -- if this is present, type "bcdedit /deletevalue useplatformclock" and reboot. (And don't ever use x264 HD Benchmark again -- this useplatformclock option is known to cause serious lag/issues with some applications, and x264 HD is the only thing I've ever encountered that requires it. Long story...)
  5. Start shutting down (temporarily) all extraneous applications, services, and processes running on your PC. Maybe something is interfering with MSFS and causing poor performance. You should be able to get 35-40 fps closer to the ground I would think.

#1 seems the most likely to me, maybe #2 as well.
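For step 4, here's a hedged sketch of the check as code -- it scans pasted `bcdedit` output rather than shelling out, and the sample text below is made up for illustration:

```python
# Looks for the legacy 'useplatformclock Yes' entry in bcdedit output.
# If present, remove it with: bcdedit /deletevalue useplatformclock (then reboot).

def has_platform_clock(bcdedit_output: str) -> bool:
    """True if any line sets useplatformclock to Yes."""
    for line in bcdedit_output.lower().splitlines():
        if "useplatformclock" in line and "yes" in line:
            return True
    return False

sample = """\
Windows Boot Loader
-------------------
identifier              {current}
useplatformclock        Yes
"""
print(has_platform_clock(sample))  # True -> the value is set and should be deleted
```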
 
Hi,

Thanks for your swift reply

  1. I already had XMP selected in the BIOS and verified in CPU-Z: frequency is at 1596.8 MHz
  2. Done
  3. Uninstalled the chipset drivers and redownloaded them. The Gigabyte website had a more recent update (2.07.21.306, from Aug 11th) than the AMD website (2.07.14.327, from July 21st), so I installed the newer one. When installing the chipset drivers I saw 2 components that weren't listed in the uninstall window, namely: AMD GPIO driver and AMD GPIO driver (for Promontory)
  4. not found in prompt
  5. Done
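One note on the CPU-Z figure in step 1, in case it worries anyone else: DDR means double data rate, so CPU-Z reporting ~1600 MHz is exactly what a DDR4-3200 kit should show. A tiny sketch (the helper name is mine):

```python
def effective_rate(base_clock_mhz: float) -> float:
    """DDR transfers twice per clock, so the effective rate is double the base clock."""
    return base_clock_mhz * 2.0

print(effective_rate(1596.8))  # 3193.6 -> effectively DDR4-3200, so XMP is working
```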

-> Performance improved by around ±10 fps. Curious what the problem was :)

Thanks!
 
Probably the GPU drivers had some old cruft if you've updated through a bunch of versions. I've had that happen before, even with only one or two version upgrades. It's why I routinely run DDU now to clean out old driver junk. The chipset drivers might have helped, but if so I'm not sure what specifically would have been 'fixed' -- the default Windows chipset drivers are usually fine for general performance, and it's only the last 2-3% of CPU performance that improves. Glad it's working better now, regardless.
 
Thank you so much for making this comprehensive guide so quickly after the FS 2020 release. The only one on the internet right now! After reading this thread a few times, it seems the way to go for a new build is Intel, as DX11 is a limiting factor for GPU performance. If the game is updated to support DX12, I would guess AMD would be the better route, as it can push more to the GPU at a lower price point. Is that a fair assessment? Although there is no way to know for certain if and when Asobo will go the DX12 path, and how much of a performance boost that will yield.
 
Generally speaking, for a high-end GPU, AMD's CPUs are still slower than Intel's -- in DX12, Vulkan, and DX11 games. It varies, and a few games make better use of multithreading, so something like a Ryzen 9 3900X can nearly match a Core i9-9900K. But most likely it will take Zen 3 to surpass Intel in gaming performance (and maybe not even then). If you don't have an extreme GPU, though, the CPU doesn't matter as much.
 
Hi Jarred,
Thanks for such a comprehensive, fact-based review. I have a question around memory speed.

I'm building a new PC around an i9-10850K with an MSI MAG Z490 Tomahawk MOBO. To future-proof it (my last build was 8 years ago + upgrades along the way and is still running strong for Cubase production), I plan to include an RTX 3090 (or another brand version of it), an AIO cooler, and OC the i9 to 5GHz. The old PC will become a slave for Vienna Ensemble Pro for music production with this new PC as the master, and also built for MSFT Flight Sim'20. I'm not a gamer otherwise and don't care about RGB colors, etc.

Your data suggests that getting the fastest memory supported would yield performance benefits (and presumably future-proofing). However, using PCPartPicker, the '4800 RAM produces this warning: "The Corsair Vengeance LPX 16 GB (2 x 8 GB) DDR4-4800 CL18 Memory operating voltage of 1.5 V exceeds the Intel Comet Lake CPU recommended maximum of 1.35 V+5% (1.417 V). This memory module may run at a reduced clock rate to meet the 1.35 V voltage recommendation, or may require running at a voltage greater than the Intel recommended maximum."

Is the '4800 memory speed really not supported, or is this simply a matter of enabling the higher voltage? Or, do I need a different brand memory that delivers the speed at a lower voltage?

Thanks again!
 
Hi Neil,

So one thing to note is that everything beyond about DDR4-3200 represents a significant memory overclock. Intel platforms are generally good up to at least DDR4-4000, but beyond that the quality of the motherboard (and firmware) and other factors can potentially limit your maximum RAM speed. I tested at DDR4-4000, and performance had mostly topped out compared to DDR4-3600. DDR4-4800 will often mean much higher (worse) memory timings, which often negates the boost in clock speed. It will also mean higher voltages and higher prices. Consider the following from Newegg (I strongly recommend 2x16GB as well if you're hoping for 'future-proofing'):

DDR4-4000 CL19-19-19 for $260: https://www.newegg.com/g-skill-32gb-288-pin-ddr4-sdram/p/N82E16820232669
DDR4-4000 CL17-18-18 for $277: https://www.newegg.com/g-skill-32gb-288-pin-ddr4-sdram/p/N82E16820374018
DDR4-4000 CL17-18-18 for $280: https://www.newegg.com/g-skill-32gb-288-pin-ddr4-sdram/p/N82E16820374009
DDR4-4000 CL18-22-22 for $300: https://www.newegg.com/corsair-32gb-288-pin-ddr4-sdram/p/N82E16820236675
DDR4-4000 CL19-23-23 for $330: https://www.newegg.com/corsair-32gb-288-pin-ddr4-sdram/p/N82E16820236238
DDR4-4000 CL18-19-19 for $340: https://www.newegg.com/ballistix-32gb-288-pin-ddr4-sdram/p/N82E16820164165
DDR4-4133 CL19-25-25 for $510: https://www.newegg.com/corsair-32gb-288-pin-ddr4-sdram/p/N82E16820236379
DDR4-5000 CL18-26-26 for $1326: https://www.newegg.com/corsair-32gb-288-pin-ddr4-sdram/p/N82E16820236656

Now, first let me just say that no one should buy a $500+ kit of 2x16GB memory. In fact, I'd forget about anything beyond $300. But you can see the costs escalate massively at the top of the performance stack.

The other thing to note is how much the timings vary. CL is the most important, but the next two timings (tRCD, tRP) often affect a lot of other subtimings, and lower values are still better. Based on this, it's pretty obvious that if you want 2x16GB of DDR4-4000 memory, which is the maximum I'd recommend for an Intel platform right now, the best option by far would be the second or third kit listed above.

G.Skill is a reputable brand, the timings are better than anything else, and the price is generally lower than anything else. You'll need to push more voltage through a memory kit to reliably hit higher clocks and timings, which is why these are rated for 1.4V, but that should still be reasonably safe. Long-term, I'd be nervous about a 1.5V kit -- I've had memory controllers on CPUs fail after a few months at higher clocks (though I admit it's been about ten years since that happened).

For MS Flight Sim, RAM capacity might not appear to matter as much as speed, but instead of DDR4-4800 (I don't even have a kit rated for that speed that I can test with), I'd start closer to DDR4-4000 and search for RAM that's rated for 1.35V with the lowest possible timings.

DDR4-4000 17-17-17 for $170: https://www.newegg.com/g-skill-16gb-288-pin-ddr4-sdram/p/N82E16820232674
DDR4-4400 18-19-19 for $280: https://www.newegg.com/g-skill-16gb-288-pin-ddr4-sdram/p/N82E16820232776

You can see how the price takes a massive jump just to get that extra 400MHz, which won't matter that much -- it's at least partially offset by the higher latencies. DDR4-4400 CL18 is a CAS Latency of 8.2ns, while DDR4-4000 CL17 is a CAS Latency of 8.5ns. So the 'faster' RAM is 10% higher clocks, but only 3.7% better latency. And because of caching and the memory hierarchy, you'll typically only get a real-world benefit of about one third of the theoretical improvement, meaning the faster RAM in this case is probably at best 1% faster than the slower RAM.
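That latency arithmetic, spelled out as a sketch (standard DDR math; the helper name is mine):

```python
def cas_latency_ns(transfer_rate_mts: float, cl: int) -> float:
    """First-word CAS latency in ns for a DDR kit at the given transfer rate."""
    clock_mhz = transfer_rate_mts / 2.0   # DDR: two transfers per memory clock
    return cl / clock_mhz * 1000.0        # CL cycles * nanoseconds per cycle

print(round(cas_latency_ns(4400, 18), 1))  # 8.2 -> DDR4-4400 CL18
print(round(cas_latency_ns(4000, 17), 1))  # 8.5 -> DDR4-4000 CL17
```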
 
Hey Jarred,

All I can say is "Wow!" Thanks so much for such a thoughtful and complete response... and on a holiday no less! Your advice is exactly what I was looking for - thank you!
 
I am just wondering, as I have seen it posted somewhere else: is this stuck on DirectX 11 so people on Win 7 can still play it? Will it even run on Win 7? Let's not consider Vista or ME (Millilumen Edition).
 
I believe it requires Windows 10, even though it's using DX11 -- that's what the Steam page says.

A DX12 patch is supposedly in the works.
 
It would be interesting to see if anyone has tried it on Windoze 7 - 64 bit.

I don't think even DX12 will help M$ FS 2020 at this point. And we know the RTX 3080 is only twice as fast as the RTX 2080 Ti if you are playing Minecraft -- maybe 30 to 70% faster otherwise. Have you had a chance to go back and try this unoptimized monster with an RTX 30#0 yet?
 
Check the bonus benchmarks of the RTX 3080 review. It got 40 fps at 4K ultra (vs. 33 fps on the 2080 Ti). Not great, but better. Needs a reworking of the engine and code still.