AMD Radeon RX 7900 XTX and XT Review: Shooting for the Top


Deleted member 14196

Guest
wonders if Mandark is ever happy with anything :)
Not at these prices lol 😂

I used to game on pc but it got too $$$

I’ve got a kid to get thru college so no gaming pc for me. I tell ya. Universities have price gouging down to a science.
 

Colif

Win 11 Master
Moderator
My last 2 GPUs cost AUD $1,000 each, so I may not be the right person to feel pity. The GTX 970 & RTX 2070 were both about the same price, so I knew the next one wouldn't be that cheap, especially seeing prices overseas.

Where their prices fall determines which I will buy. There's only a certain price range that makes sense; only the water-cooled cards like the ASRock Aqua and Sapphire Toxic should have prices similar to what the 4080 costs here. So between $1,500 and $2,000 would be a "sane" range after currency conversion and tax for most of the third-party cards.

Online stores here, when asked about the cards, didn't know anything... so I assume I'll have to wait until next year to even see them in stores. At least I get to watch reviews before I buy. I originally thought I'd have to pick a card before today's reviews... oh well.
 
I don't think so. I mean, new architecture with 67% more raw bandwidth, supposedly even more if you look at the total Infinity Cache bandwidth, and theoretically 160% more compute. And in practice it performs as if a lot of that bandwidth and compute isn't realized in the real world. Having to go over the extra Infinity Fabric to get to L3 cache could be a big part of this.

Put another way:
RX 6950 XT has 59% of the theoretical compute of the RTX 3090 Ti and 57% of the raw bandwidth. At 4K (rasterization) on the updated testbed, it delivers 88% of the performance.
RX 7900 XTX has 26% more theoretical compute than the RTX 4080 and 34% more raw bandwidth. At 4K (rasterization), it delivers 4% more performance.
So due to architectural changes plus chiplets, AMD has gone from delivering close to Nvidia performance with substantially lower paper specs, to now needing more paper specs to deliver comparable performance.
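
As a quick back-of-the-envelope check on those ratios (a minimal sketch; the TFLOPS and GB/s figures are taken from public spec sheets, not from this review's data):

```python
# Rough spec-sheet numbers (assumed): peak FP32 TFLOPS and raw memory bandwidth in GB/s.
specs = {
    "RX 6950 XT":  {"tflops": 23.7, "bandwidth": 576},
    "RTX 3090 Ti": {"tflops": 40.0, "bandwidth": 1008},
    "RX 7900 XTX": {"tflops": 61.4, "bandwidth": 960},
    "RTX 4080":    {"tflops": 48.7, "bandwidth": 717},
}

def ratio(card_a, card_b, key):
    """Return card_a's spec as a fraction of card_b's."""
    return specs[card_a][key] / specs[card_b][key]

# Last generation: big paper-spec deficit, modest performance deficit (~88% at 4K).
print(f"{ratio('RX 6950 XT', 'RTX 3090 Ti', 'tflops'):.0%} of the compute")      # ~59%
print(f"{ratio('RX 6950 XT', 'RTX 3090 Ti', 'bandwidth'):.0%} of the bandwidth") # ~57%

# This generation: paper-spec surplus, near-parity performance (~104% at 4K).
print(f"{ratio('RX 7900 XTX', 'RTX 4080', 'tflops') - 1:+.0%} compute")          # ~+26%
print(f"{ratio('RX 7900 XTX', 'RTX 4080', 'bandwidth') - 1:+.0%} bandwidth")     # ~+34%
```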

It's also worth looking at relative chip size and performance. AMD has a 300mm^2 GCD plus 220mm^2 of MCDs. Some of that size is due to the Infinity Fabric linking the chiplets together. Nvidia meanwhile has a 379mm^2 die that has a lot of extra features (DLSS and DXR stuff). I'd wager the RTX 4080 actually costs less to manufacture than Navi 31, factoring in everything.

AMD is going to need to prove with RDNA 4 that chiplet GPUs can continue to scale to even higher levels of performance without sacrificing features. They certainly haven't done that with RDNA 3. A monolithic Navi 31 without chiplets probably would have been in the 400mm^2 range and offered even higher performance. That's just my rough estimate, and we can't know for certain, but I'd love to know how much AMD actually saved at the end of the day by doing chiplets.
I believe it was Ryan Smith at AnandTech who talked about the change to a dual-issue CU design, a choice that AMD never really explained as far as I'm aware. I imagine that likely accounts for a large chunk of the lost performance potential, rather than the chiplet decision. I certainly agree AMD still has a lot to prove with its design choices, and I hope its lower-priced cards stand out better.
 

Phaaze88

Titan
Ambassador
Nvidia has no real reason to lower prices with this kind of showing from AMD. The first impression is what most will see. The 'fine wine' comes too late.
Sorry to the hopefuls looking to get cheaper Ada cards.

https://www.techpowerup.com/review/nvidia-geforce-rtx-4090-founders-edition/34.html
The 2080 Ti, 3090 Ti, and 4090 (not THE Ada halo, but the current one) take darn near the same performance hit (%) in five out of eight titles on that page. The few outliers are only a very minor improvement beyond that.
AMD isn't any better between its two gens, for that matter.
RT has progressed along nicely. /S


Affordable gaming is dead. Long live overpriced gaming hardware! 😂
Vote with your $$$...
Folks are voting, though: game-focused PCs are already a luxury item, but they stand to become a super(?) luxury as folks get priced down to lower tiers or priced out entirely, not wanting to compromise on a lesser GPU.

It seems like, if there are enough big spenders for the high-margin products, those individuals can make up for the majority who either don't buy or wait for the less profitable GeForces, and thus the elevated pricing can be sustained.
Then there's the nasty system of games designed around macrotransactions, where 'a few' spending big can keep the train rolling.
 

zecoeco

Prominent
BANNED
I don't think so. I mean, new architecture with 67% more raw bandwidth, supposedly even more if you look at the total Infinity Cache bandwidth, and theoretically 160% more compute. And in practice it performs as if a lot of that bandwidth and compute isn't realized in the real world. Having to go over the extra Infinity Fabric to get to L3 cache could be a big part of this.

Put another way:
RX 6950 XT has 59% of the theoretical compute of the RTX 3090 Ti and 57% of the raw bandwidth. At 4K (rasterization) on the updated testbed, it delivers 88% of the performance.
RX 7900 XTX has 26% more theoretical compute than the RTX 4080 and 34% more raw bandwidth. At 4K (rasterization), it delivers 4% more performance.
So due to architectural changes plus chiplets, AMD has gone from delivering close to Nvidia performance with substantially lower paper specs, to now needing more paper specs to deliver comparable performance.

It's also worth looking at relative chip size and performance. AMD has a 300mm^2 GCD plus 220mm^2 of MCDs. Some of that size is due to the Infinity Fabric linking the chiplets together. Nvidia meanwhile has a 379mm^2 die that has a lot of extra features (DLSS and DXR stuff). I'd wager the RTX 4080 actually costs less to manufacture than Navi 31, factoring in everything.

AMD is going to need to prove with RDNA 4 that chiplet GPUs can continue to scale to even higher levels of performance without sacrificing features. They certainly haven't done that with RDNA 3. A monolithic Navi 31 without chiplets probably would have been in the 400mm^2 range and offered even higher performance. That's just my rough estimate, and we can't know for certain, but I'd love to know how much AMD actually saved at the end of the day by doing chiplets.

A monolithic 7900 XTX would have been priced similarly to the 4080 at $1,200 or even more, but for what? Slightly more performance?
This is the first generation of a "true chiplet GPU," so you would expect some problems and things that need to be improved.
Cutting costs is now a priority, and it's for the benefit of end users/customers first. Tech companies will gladly sell you a monolithic chip, but don't expect 2018-2019 prices.
I do understand that there is a performance hit, but this chiplet technology is still not mature and is still being improved.
This is where the industry is heading: chiplets. Wafer costs are skyrocketing and not getting cheaper anytime soon.
Maybe Intel's "tiles" approach will solve most of the chiplet problems... who knows, but one thing is for sure: chiplets are the future.
 
I don't do a ton of power testing scenarios, so I'd have to look into that more... and I really need to go to sleep. As for "being competitive," AMD is pretty much on par with Nvidia's best in rasterization (similar to the 6000-series), and it has at least narrowed the gap in ray tracing. Or maybe that's just my perception? Anyway, basically since Pascal, it's felt like AMD GPUs have been well behind Nvidia. Nvidia offers more performance and more features, at an admittedly higher price.
Thank you for the hard work.
 
This is actually a bug that was already reported to AMD, and they're working on a fix.

Previous generations exhibited a similar bug when connected to more than one monitor. Access to multiple monitors caused memory access issues, which required a clock bump until a more efficient method was found. I believe that instead of rendering two separate monitors at two separate rates, they rendered them as one big monitor and then accessed that single mapped space. The memory chiplets, I believe, are acting in an independent-access manner, which would cause a similar issue. When in full desktop mode they would need to deactivate the MCDs, but that's hard to do because even desktops are rendered in DX mode. They need to check GPU usage and deactivate the MCD chiplets with lower usage (I'm betting). This is similar to pinning CPU cores.
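
Purely to illustrate that last idea (a toy sketch of the guess above; the MCD abstraction, utilization numbers, and threshold are invented, and this is not anything AMD has documented):

```python
# Toy sketch of "park the low-usage memory chiplets" from the post above.
# Everything here (threshold, utilization figures) is made up for illustration.
def park_idle_mcds(mcd_utilization, threshold=0.10, min_active=1):
    """Return the indices of MCDs to keep active, given per-MCD utilization (0-1)."""
    active = [i for i, u in enumerate(mcd_utilization) if u >= threshold]
    if len(active) < min_active:
        # Always keep the busiest MCD powered so the desktop can still render.
        active = [max(range(len(mcd_utilization)), key=lambda i: mcd_utilization[i])]
    return set(active)

# Example: a light desktop load that mostly touches one chiplet.
print(park_idle_mcds([0.42, 0.03, 0.01, 0.02, 0.05, 0.04]))  # -> {0}
```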
 

Heat_Fan89

Reputable
One thing I have accepted is that, yes, PC gaming has gotten exponentially more expensive. Sure, you can get a budget or midrange PC to play games, but I want eye candy, and that costs serious dough. I was pricing a new Alienware PC with a 20% discount coupon and it was still $3,700 with tax out the door, spec'd with a 13900K, 4090, 1TB Gen 4 NVMe, 32GB of DDR5 @ 5,200MHz, Wi-Fi 6E, and Alienware's Cryo-Tech cooler.

But here's the thing. This PC would be for Microsoft Flight Simulator 2020, so yeah, I could probably get 120 FPS in 4K with DLSS 3 with every detail dialed to the MAX.

I also have an Xbox Series X that looks so close to my current rig running an RTX 3080 @ 30 FPS with MAX detail that it's really hard to tell them apart. My Series X cost $499 and my Omen 30L set me back $2,000. The other thing is that this is no longer the '90s or early 2000s, when games were designed first for the PC and then ported to the consoles unless they were console exclusives.

Nowadays it's just the opposite: games are built and designed for the consoles first, then get ported to the PC. I was thinking of jumping on the Alienware deal because of the 4090, but then I saw how smooth and nice MSFS 2020 looks on the Series X, and I felt $3,700 for a gaming PC is better spent elsewhere.
 
"Chiplets don't actually improve performance (and may hurt it)"
How on earth is this even a con? who said chiplets are for performance? chiplets are for cost saving.
But what did y'all expect ? You just can't complain for this price point.. there you go, chiplets saved you $200 bucks + gave you 24GB of VRAM as bonus (versus 16GB on 4080)
It is meant for GAMING so don't expect productivity performance, for many reasons including nvidia's cuda cores that has every major software optimized for it.
RDNA is going the right direction with chiplets.. in an industry of increasing costs year after year.
Chiplet design is a solution, and not a new groundbreaking feature that's meant to boost performance.
Sadly, instead of working on the problem, nvidia decided to give excuses such as "Moore's law is dead".
This is a long technical discussion, but cheaper manufacturing is just one reason for chiplets. Another is uniform binning across the cores.

At first I thought AMD would divide up the rendering backend, so I was surprised when they broke up the memory controllers instead. But it makes sense, and it's why they have so much memory. These MCs can act independently or as one, but when doing draw calls on scene elements I believe they act independently (or asynchronously). Since the driver determines when a call is returned, multiple calls can be processed at once. It also gives each draw call a dedicated 4GB of memory (which is huge for a draw call). This is why we're seeing increased CPU overhead: the driver has to track it all when running async. So instead of one really big backend with a ton of resources doing a single draw call as fast as it can, you have multiple parallel calls rendering into the same backend resources.

This is similar to how hyper-threading works on a CPU: you have less efficiency per thread as internal resources dwindle, but you get increased throughput overall.

As GPUs are memory intensive, and memory access is the biggest of the performance killers, it makes sense to put smaller elements into dedicated MC and memory space. The disadvantage is that you will have duplicate resources in different sections of memory (one per draw task), which is why more memory was necessary. I'm not 100% sure how AMD reports free memory via its driver, but I suspect it's not per draw call, but rather memory usage for the frame once all draw calls are composited. By that point, duplicate resources are eliminated, leading to lower overall memory use.
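
To make that duplicate-resources point concrete, here's a toy model (purely illustrative: the thread pool, chunk names, and sizes are invented, and this is the poster's hypothesis rather than a documented AMD mechanism):

```python
# Toy model: draw calls run asynchronously, each holding its own copy of shared
# resources, and the duplicates disappear once the frame is composited.
from concurrent.futures import ThreadPoolExecutor

SHARED_TEXTURES_MB = 512  # resources each in-flight draw call keeps its own copy of

def draw_call(scene_chunk):
    """Pretend to render one chunk; report the memory it held while in flight."""
    return {"chunk": scene_chunk, "resident_mb": SHARED_TEXTURES_MB}

chunks = ["terrain", "characters", "particles", "ui"]
with ThreadPoolExecutor(max_workers=len(chunks)) as pool:
    in_flight = list(pool.map(draw_call, chunks))

peak_mb = sum(call["resident_mb"] for call in in_flight)  # duplicates while async
composited_mb = SHARED_TEXTURES_MB                        # one copy after compositing

print(f"peak during async draws: {peak_mb} MB, after composite: {composited_mb} MB")
```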
 
Possibly because "value" is a subjective term and the reviewer didn't want to venture into anything other than pure factual numbers at this time. What do you value in a card over what I value in a card? Pure performance? Energy efficiency? Ray Tracing? Sure you can come up with a formula to derive a number for "card value" that appears strictly mathematical and non-subjective; but there is still some subjectivity in that formula that hides behind the number.

It needs two separate metrics now:

Frames/dollar without RT

Frames/dollar with RT

If I were to give an overall number, it would be weighted by the percentage of games that use RT; something like the sketch below.
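
For instance (a minimal sketch with made-up FPS, price, and weight numbers, just to show the blending):

```python
# Blend raster and RT frames-per-dollar, weighted by the assumed share of
# games (rt_weight) that actually use ray tracing. All numbers are hypothetical.
def value_score(fps_raster, fps_rt, price, rt_weight=0.25):
    return ((1 - rt_weight) * fps_raster + rt_weight * fps_rt) / price

print(f"{value_score(fps_raster=100, fps_rt=45, price=1000):.4f} fps/$")  # hypothetical card A
print(f"{value_score(fps_raster=96,  fps_rt=63, price=1200):.4f} fps/$")  # hypothetical card B
```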
 

Loadedaxe

Distinguished
"Chiplets don't actually improve performance (and may hurt it)"
How on earth is this even a con? who said chiplets are for performance? chiplets are for cost saving.
But what did y'all expect ? You just can't complain for this price point.. there you go, chiplets saved you $200 bucks + gave you 24GB of VRAM as bonus (versus 16GB on 4080)
It is meant for GAMING so don't expect productivity performance, for many reasons including nvidia's cuda cores that has every major software optimized for it.
RDNA is going the right direction with chiplets.. in an industry of increasing costs year after year.
Chiplet design is a solution, and not a new groundbreaking feature that's meant to boost performance.
Sadly, instead of working on the problem, nvidia decided to give excuses such as "Moore's law is dead".

You obviously missed the point.
1. Chiplets being a cost saving isn't the issue, as long as it doesn't impede performance. Remember, AMD is the one that said it increased performance, not Jarred.

2. We can complain. AMD hyped these cards up, and they are worse than the 4090 and barely keep up with the 4080, and that's the XTX; the performance difference with the XT isn't worth the $100 savings.

3. As far as "It is meant for GAMING, so don't expect productivity performance, for many reasons, including Nvidia's CUDA cores having every major software package optimized for them" goes, you contradict yourself here, so I will leave this alone lol

Obviously you are loyal to AMD. Try being neutral; you will save money and always get the most for your money. :p
 
Stupid question here: why don't we have dedicated ray tracing cards, like 15 years ago when we had the PhysX add-in card, and get that out of the GPU?

Too much info and latency to shuffle between CPU/GPU?

Calculating model behavior (PhysX) is independent of lighting.

Ray casting requires the geometry to be calculated first, then deciding how the ambient light/reflections interact with the rasterized surface. So some geometry and pre-lighting work has to be done before an accurate hit test and render can take place.

Nvidia kind of made a big deal with the 30-series because they didn't have to wait for the raster pass to complete before RT took hold. Well, that was only partially true. You can precalculate the ray for a one-bounce hit and then apply it using a dot product (or whatever custom shader) in parallel; more than one bounce, though, becomes problematic. Since the vast majority of RT implementations have a bounce depth of 1, it's not a huge issue.
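
As a tiny illustration of that one-bounce, dot-product idea (a generic Lambertian term, not any particular engine's shader):

```python
# Minimal one-bounce diffuse term: brightness scales with the angle between the
# surface normal and the direction to the light (the classic N·L dot product).
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def one_bounce_diffuse(normal, to_light, light_intensity=1.0):
    """Lambertian shading for a single precomputed light ray hitting a surface."""
    n, l = normalize(normal), normalize(to_light)
    return light_intensity * max(dot(n, l), 0.0)  # clamp: light behind the surface adds nothing

# Surface facing up, light 45 degrees overhead -> about 0.707 of full intensity.
print(one_bounce_diffuse(normal=(0, 1, 0), to_light=(1, 1, 0)))
```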
 
One thing I have accepted is that, yes, PC gaming has gotten exponentially more expensive. Sure, you can get a budget or midrange PC to play games, but I want eye candy, and that costs serious dough. I was pricing a new Alienware PC with a 20% discount coupon and it was still $3,700 with tax out the door, spec'd with a 13900K, 4090, 1TB Gen 4 NVMe, 32GB of DDR5 @ 5,200MHz, Wi-Fi 6E, and Alienware's Cryo-Tech cooler.

But here's the thing. This PC would be for Microsoft Flight Simulator 2020, so yeah, I could probably get 120 FPS in 4K with DLSS 3 with every detail dialed to the MAX.

I also have an Xbox Series X that looks so close to my current rig running an RTX 3080 @ 30 FPS with MAX detail that it's really hard to tell them apart. My Series X cost $499 and my Omen 30L set me back $2,000. The other thing is that this is no longer the '90s or early 2000s, when games were designed first for the PC and then ported to the consoles unless they were console exclusives.

Nowadays it's just the opposite: games are built and designed for the consoles first, then get ported to the PC. I was thinking of jumping on the Alienware deal because of the 4090, but then I saw how smooth and nice MSFS 2020 looks on the Series X, and I felt $3,700 for a gaming PC is better spent elsewhere.

If you really want eye candy, especially for such a great-looking game as MSFS 2020, you should not look at DLSS 3 for now. Nvidia will probably make it better with time, but it's not there yet for me.

On the other hand, if you are happy with the graphic detail the Series X shows in MSFS 2020, then I guess some sort of DLSS will be okay too.

You could get a slightly cheaper price for a PC like the one you linked, but you would have to do the hard work of putting it together, setting BIOS options to keep the Core i9 on a leash, and installing Windows, drivers, and everything else.

Just an example:
PCPartPicker Part List

CPU: Intel Core i9-13900K 3 GHz 24-Core Processor ($599.99 @ B&H)
CPU Cooler: ARCTIC Liquid Freezer II 280 72.8 CFM Liquid CPU Cooler ($119.99 @ Amazon)
Motherboard: MSI PRO Z790-P WIFI ATX LGA1700 Motherboard ($239.99 @ B&H)
Memory: G.Skill Trident Z5 RGB 32 GB (2 x 16 GB) DDR5-6000 CL40 Memory ($159.99 @ Newegg)
Storage: Samsung 980 Pro 1 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive ($99.99 @ Amazon)
Video Card: Asus TUF GAMING OC GeForce RTX 4090 24 GB Video Card ($1799.99 @ ASUS)
Case: Corsair 4000D Airflow ATX Mid Tower Case ($104.99 @ Amazon)
Power Supply: Corsair HX1200 Platinum 1200 W 80+ Platinum Certified Fully Modular ATX Power Supply ($263.98 @ Newegg)
Total: $3388.91
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2022-12-13 07:15 EST-0500
 

zecoeco

Prominent
BANNED
You obviously missed the point.
1. Chiplets being a cost saving isn't the issue, as long as it doesn't impede performance. Remember, AMD is the one that said it increased performance, not Jarred.

2. We can complain. AMD hyped these cards up, and they are worse than the 4090 and barely keep up with the 4080, and that's the XTX; the performance difference with the XT isn't worth the $100 savings.

3. As far as "It is meant for GAMING, so don't expect productivity performance, for many reasons, including Nvidia's CUDA cores having every major software package optimized for them" goes, you contradict yourself here, so I will leave this alone lol

Obviously you are loyal to AMD. Try being neutral; you will save money and always get the most for your money. :p

Well, I own an Nvidia GPU, and that tells you a lot. I'm not that hardcore a fan; all I look at is performance/value. That's the measure, whether it's AMD, Nvidia, or Intel; I don't care.
And yes, you're right, everyone expected more, but let's not forget that this is the first chiplet GPU ever.
You would expect some problems and a performance hit. I'm not trying to defend anyone here, but this is the nature of how technology evolves.
We all want absolute top performance at a cheaper price, and that's what I'm trying to point out here.
 

salgado18

Distinguished
One thing I have accepted is that, yes, PC gaming has gotten exponentially more expensive. Sure, you can get a budget or midrange PC to play games, but I want eye candy, and that costs serious dough. I was pricing a new Alienware PC with a 20% discount coupon and it was still $3,700 with tax out the door, spec'd with a 13900K, 4090, 1TB Gen 4 NVMe, 32GB of DDR5 @ 5,200MHz, Wi-Fi 6E, and Alienware's Cryo-Tech cooler.

But here's the thing. This PC would be for Microsoft Flight Simulator 2020, so yeah, I could probably get 120 FPS in 4K with DLSS 3 with every detail dialed to the MAX.

I also have an Xbox Series X that looks so close to my current rig running an RTX 3080 @ 30 FPS with MAX detail that it's really hard to tell them apart. My Series X cost $499 and my Omen 30L set me back $2,000. The other thing is that this is no longer the '90s or early 2000s, when games were designed first for the PC and then ported to the consoles unless they were console exclusives.

Nowadays it's just the opposite: games are built and designed for the consoles first, then get ported to the PC. I was thinking of jumping on the Alienware deal because of the 4090, but then I saw how smooth and nice MSFS 2020 looks on the Series X, and I felt $3,700 for a gaming PC is better spent elsewhere.
You said the 120 fps PC looks as good as the 30 fps console, yet costs 7 times more. Well, that's the price for 120 fps, right? And because you want fps to go over 60, you hit diminishing returns on hardware.

When they launched, the PS5 and Series X were a better deal than any PC. Now they can run the games, but, as you said, at 4K 30 fps. More than that and you either need a refreshed console (Pro, etc.) or a stronger PC. You can't run away from the fact that consoles are fixed hardware, while PCs evolve over time.

Also, you could just swap the 3080 for a 4090 and save a lot of money (but wait for prices to get more reasonable).
 

mjbn1977

Distinguished
What on EARTH are you talking about? All the reviews I've seen put the 7900 XTX's ray tracing at 3090 Ti performance, and about 10% down from the 4080, which is consistent with Nvidia's naming scheme (x090 in one generation becomes x080 in the next generation).

Well, it depends which game and ray tracing implementation you are looking at. Take Cyberpunk 2077, which is probably the most impressive ray tracing game out there, using it for shadows, reflections, and lighting. Compare the 7900 XTX with the 4080 at 1440p, everything on Ultra and highest settings (ray tracing on Psycho), NO UPSCALING (DLSS). You get the following:

RTX 4080: 60 fps average (with DLSS on "quality" it is 103 fps)
7900 XTX: 40 fps average

That is 33% (!!!) down (or the 4080 is 50% up from the 7900 XTX, however you prefer it) and unplayable without upscaling; nowhere near your 10% down. That ray tracing performance, in combination with the much lower gaming power draw of the 4080, is, to me at least, worth the extra $200. If you're shopping in the $1,000 range, why not go with the more well-rounded product? Neither card is a budget card... they are really stupidly expensive.
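
For the percentage math, that's just the arithmetic on the two averages above:

```python
# Quick check of the percentage claims from the two averages quoted above.
fps_4080, fps_7900xtx = 60, 40
print(f"7900 XTX deficit:  {1 - fps_7900xtx / fps_4080:.0%}")  # ~33% down
print(f"4080 advantage:    {fps_4080 / fps_7900xtx - 1:.0%}")  # 50% up
```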
 

salgado18

Distinguished
My short opinion:

  • competitive, priced right against the competition, but still expensive;
  • raytracing performance is disappointing;
  • no DLSS is not a con, since FSR 2+ is improving and will become just as good and just as widely supported over time;
  • chiplets should be used like on Ryzen, splitting the shader cores into chiplets and using as many as needed for each card (that would allow a huge scale-up, and they'd only need one chip instead of three or four), but it's a start;
  • because of chiplets, drivers could have a big impact on performance, so the cards should be reevaluated one or two months from now;
  • the market will be exactly as it was with the 3000 vs. 6000 series: if you want ray tracing, pay a bit more and get a GeForce; otherwise get a Radeon.
 

Heat_Fan89

Reputable
If you really want eye candy, especially for such a great-looking game as MSFS 2020, you should not look at DLSS 3 for now. Nvidia will probably make it better with time, but it's not there yet for me.

On the other hand, if you are happy with the graphic detail the Series X shows in MSFS 2020, then I guess some sort of DLSS will be okay too.

You could get a slightly cheaper price for a PC like the one you linked, but you would have to do the hard work of putting it together, setting BIOS options to keep the Core i9 on a leash, and installing Windows, drivers, and everything else.

Just an example:
PCPartPicker Part List

CPU: Intel Core i9-13900K 3 GHz 24-Core Processor ($599.99 @ B&H)
CPU Cooler: ARCTIC Liquid Freezer II 280 72.8 CFM Liquid CPU Cooler ($119.99 @ Amazon)
Motherboard: MSI PRO Z790-P WIFI ATX LGA1700 Motherboard ($239.99 @ B&H)
Memory: G.Skill Trident Z5 RGB 32 GB (2 x 16 GB) DDR5-6000 CL40 Memory ($159.99 @ Newegg)
Storage: Samsung 980 Pro 1 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive ($99.99 @ Amazon)
Video Card: Asus TUF GAMING OC GeForce RTX 4090 24 GB Video Card ($1799.99 @ ASUS)
Case: Corsair 4000D Airflow ATX Mid Tower Case ($104.99 @ Amazon)
Power Supply: Corsair HX1200 Platinum 1200 W 80+ Platinum Certified Fully Modular ATX Power Supply ($263.98 @ Newegg)
Total: $3388.91
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2022-12-13 07:15 EST-0500
The 4090 is still going to be tricky to get. I just clicked on the ASUS link, and even though it said in stock, the link turns into "Notify Me." It will be a while before they are readily available. The other reason the Alienware works better for me is the 12-month no-interest financing I qualify for. That said, I am staying with what I've got.
 

Heat_Fan89

Reputable
You said the 120 fps PC looks as good as the 30 fps console, yet costs 7 times more. Well, that's the price for 120 fps, right? And because you want fps to go over 60, you hit diminishing returns on hardware.

When they launched, the PS5 and Series X were a better deal than any PC. Now they can run the games, but, as you said, at 4K 30 fps. More than that and you either need a refreshed console (Pro, etc.) or a stronger PC. You can't run away from the fact that consoles are fixed hardware, while PCs evolve over time.

Also, you could just swap the 3080 for a 4090 and save a lot of money (but wait for prices to get more reasonable).
No, no, what I was trying to say was that MSFS 2020 looks just as good on the Series X, which runs @ 30 FPS, as it does on my Omen 30L, which also runs at 30 FPS with every detail MAXED out on the PC. It's hard for me to tell them apart whether I'm doing low-level or high-level flying.

The Omen 30L can produce roughly 60 FPS by enabling DLSS. I was throwing out 120 FPS as really the only difference-maker of going with a 4090 setup.

The thing with refreshed consoles is that it's a gamble whether a particular game is going to take advantage of the refreshed hardware. I purchased a PS4, then a PS4 Pro, and the few games that supported it really weren't worth the purchase, at least for me. My favorite game of that era was Bloodborne, and the PS4 Pro really didn't do all that much for that game's performance.

The other thing is that even with a locked console, developers in some cases can tweak things to still give the user a pleasant gaming experience. With the PC, it's easier to just throw more brute force at it.

Yes, going with a 4090 in my current rig is a consideration, but the thing about general aviation flight sims is that you get a smooth experience at 30 FPS. Jumping to 60 FPS doesn't really make much of a difference unless you're flying over heavily populated cities like NYC. Where the 4090 would come in handy is for shooters or racing titles.
 

mjbn1977

Distinguished
No, no, what I was trying to say was that MSFS 2020 looks just as good on the Series X, which runs @ 30 FPS, as it does on my Omen 30L, which also runs at 30 FPS with every detail MAXED out on the PC. It's hard for me to tell them apart whether I'm doing low-level or high-level flying.

The Omen 30L can produce roughly 60 FPS by enabling DLSS. I was throwing out 120 FPS as really the only difference-maker of going with a 4090 setup.

I fly a lot in MSFS 2020 (fancy 3rd-party airliners). The video card isn't the problem with that game; it's mostly CPU bottlenecked. I got a lot of micro stutters with the 8700K on high-end settings, especially low to the ground when approaching and landing (the most annoying time for micro stutters). It improved a lot when I upgraded to the 13700K; almost no micro stutter. But DLSS 3.0 frame generation helps a lot. Doubling FPS with a mouse click... nice to have. Input lag is not an issue in that game.
 

Heat_Fan89

Reputable
I fly a lot in MSFS 2020 (fancy 3rd-party airliners). The video card isn't the problem with that game; it's mostly CPU bottlenecked. I got a lot of micro stutters with the 8700K on high-end settings, especially low to the ground when approaching and landing (the most annoying time for micro stutters). It improved a lot when I upgraded to the 13700K; almost no micro stutter. But DLSS 3.0 frame generation helps a lot. Doubling FPS with a mouse click... nice to have. Input lag is not an issue in that game.
Agreed on the CPU part, unless you do 4K, which then requires a good GPU; and if the CPU is bottlenecking the GPU, you will get micro stutters because the CPU is not fast enough. Anything running at 1080p is going to depend more on the CPU, 1440p will require some of both, and 4K will primarily require a strong GPU with a CPU fast enough to keep up with its demands. My i9-10850K is still good enough to handle 4K, but going from 30 to 60 FPS requires at least a 3090 or 6900 XT.