We tested an early version of Far Cry 6 on dozens of graphics cards to see how it performs.
Far Cry 6 Benchmarks and Performance : Read more
Incredibly lame that they make an engine that goes out of its way to nerf Nvidia performance just because it's an AMD-sponsored title, so the AMD cards can be on top despite the clear difference in RT and rasterization performance between the cards.
I'm pretty sure there's either a driver issue, or my 1080 Ti is having problems. I tested it multiple times and couldn't get a "clean" run -- there were periodic large dips every 5 seconds or so, which I didn't see on the 1060 or any other GPU. I may go back and rerun those tests after doing a driver wipe today, just to verify they're still happening. I'll toy with the fan settings and other stuff as well, and make sure some throttle wasn't kicking in.
I'm not sure how much of it is intentionally hurting Nvidia performance versus just using DX12 code that's more optimized for AMD hardware. We've seen that a lot in the past, where generic DX12 code seems to favor AMD GPUs and Nvidia needs a lot more fine tuning.
I'm not so sure the engine "nerfs" Nvidia GPUs. You know, nerf is such a big word. Perhaps they did keep RT to a minimum to make AMD GPUs look better. Maybe they even kept it that way so it's "easy" on PS5 and Xbox X/S hardware.
Remember that PS5 and Xbox X/S don't have FSR or RT support, though maybe that will come with a patch.
The engine is the one from Watch Dogs Legion (it's actually much older, but has been improved over time). Watch Dogs Legion, though, is an Nvidia-sponsored title: it has RTX and DLSS and no FSR, so no extra performance for Radeon, and guess what? Nvidia has all the advantages there and wins vs AMD.
I'm pretty confident a patch will come later to enable RT in FC6 on the consoles. It's very doable at 1080p upscaled, as Insomniac did with their games. Ubisoft would be really stupid not to do that after they add the necessary improvement patches.
Wait, I'm not a console guy, so are you talking about RT support for this game, or in general? Because as far as I know both consoles support RT and there are many titles with it already, right?
FSR is a different thing.
RT and FSR in Far Cry 6. They're absent on the latest consoles for some reason. I wrote this last week, though the headline we workshopped maybe ended up too inflammatory. LOL
https://www.google.com/amp/s/www.tomshardware.com/amp/news/far-cry-6-no-ray-tracing-on-consoles
If it's something you have and isn't too much of a hassle to throw in, could you get one of the GeForce 16 cards in there? If I go by Nvidia's blog, the GeForce 16 GPUs just lack the RT and Tensor cores of the 20 series. I'm curious to see if there's some hint the game is taking advantage of "new" hardware features.
Yeah, I'm going to see about testing the GTX 1660 Super and 1650 Super at least — that should provide reasonable insight into how the 16-series stacks up against the other cards. I just ran out of time yesterday. LOL
The discrepancy between the RTX 2060 and GTX 1080 Ti is interesting to me, and it got me thinking: I wonder if Far Cry 6 uses hardware features introduced in Turing. Another curious case I found where Turing utterly wrecks Pascal was AnandTech's benchmarks of Wolfenstein II. The RTX 2080 and the GTX 1080 Ti are supposedly on par with each other, but for some reason the RTX 2080 commands a 50% lead over the 1080 Ti in that game. I'm also pretty certain Wolfenstein II was an AMD-sponsored title as well.
But seeing an RTX 2060 dominate a 1080 Ti raises even more questions, especially considering RDNA1 also dominates the 1080 Ti, even though most of the features introduced with Turing didn't show up on the AMD side until RDNA2.
Being a pure DX12 game, I think this game simply uses async compute heavily. I don't think it uses Turing-specific features other than DXR. That's why it runs so well on AMD GPUs, even older ones like Vega.
Ah yes... an AMD-sponsored title: the fewest DXR effects and no DLSS, plus heavy CPU usage, just so AMD looks good on the benchmark graphs.
FYI, the charts have been updated with the GTX 1650 Super and 1660 Super now. Performance lands between the 1060 6GB and 1080 Ti, with the GTX 1660 Super coming relatively close to the latter. I suspect it will be around the same level of performance as the GTX 1080, which means that the Turing architecture definitely helps in Far Cry 6, and Pascal doesn't perform as well. Note also that the 1650 Super handily beats the 1060 6GB, which isn't normally the case. In our GPU hierarchy, the 1650 Super beats the GTX 1060 6GB by about 7%. In Far Cry 6, it's 20-25% faster. That's probably partly thanks to Turing handling DX12 and async compute better, and partly thanks to the concurrent FP+INT capability in Turing.
Thanks for getting those benchmarks in!
So... I couldn't believe my card (RTX 3090) would be in the middle of those graphs, so I checked a few reviews/benchmarks. On TweakTown and Hardware Unboxed, for example, the 3090 shows up at the top of the graphs at 1080p, 1440p, and 4K. How exactly did the 3090 end up in the middle, below the AMD GPUs, on your graphs?? Something doesn't add up.....
What settings, what CPU, what motherboard, what RAM, what drivers, what OS version, etc.? I tried to make it abundantly clear that my test system was hitting CPU bottlenecks. When that happens, things can go a bit screwy.
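To illustrate why a CPU bottleneck can scramble the usual GPU rankings, here's a toy C++ calculation with made-up millisecond figures (not measured Far Cry 6 numbers): the slower of the CPU and GPU cost sets the frame time, so once the CPU limit kicks in, cards with very different raw throughput chart almost identically.

```cpp
#include <algorithm>
#include <cstdio>

// Toy model: each frame needs both CPU work and GPU work, and the slower of
// the two sets the frame time. The millisecond figures are invented purely
// for illustration -- they are not measured Far Cry 6 numbers.
int main() {
    const double cpu_ms  = 12.0;                        // hypothetical CPU cost per frame
    const double gpu_ms[] = {8.0, 10.0, 14.0};          // hypothetical GPU costs: fast, mid, slow card
    const char*  name[]   = {"fast GPU", "mid GPU", "slow GPU"};

    for (int i = 0; i < 3; ++i) {
        const double frame_ms = std::max(cpu_ms, gpu_ms[i]); // the bottleneck sets the pace
        std::printf("%-8s: %.1f fps\n", name[i], 1000.0 / frame_ms);
    }
    // With a 12 ms CPU limit, the "fast" and "mid" GPUs both land at ~83 fps,
    // so two cards that differ a lot in raw throughput can look identical on a chart.
    return 0;
}
```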
Async compute has been a non-issue with NVIDIA GPUs since Pascal.
EDIT: By "non-issue" I mean there is a measurable performance difference if you run a benchmark with async compute on and off (https://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/9)
However, the thing with DX12 is that you need to fine-tune the rendering engine per GPU family because of the level of control you now have.
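For anyone curious what "scheduling work from multiple queues" looks like at the API level, here's a minimal D3D12 sketch (not code from Far Cry 6 or Ubisoft's engine, just an illustration with the usual device-setup assumptions): it creates a dedicated compute queue next to the normal direct/graphics queue and synchronizes the two with a fence. How much of that work actually overlaps on the GPU is up to the hardware and driver, which is part of why the same code can land differently on AMD and Nvidia.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Minimal sketch: one direct (graphics) queue plus one async compute queue,
// with a fence so the graphics queue can wait on compute results.
// 'device' is assumed to have been created elsewhere via D3D12CreateDevice;
// error handling is omitted for brevity.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue,
                  ComPtr<ID3D12Fence>& computeFence)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;     // compute + copy only
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));

    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&computeFence));
}

// Conceptual per-frame submission: kick off compute work, signal the fence,
// and have the graphics queue wait (GPU-side; the CPU never blocks) before
// it consumes the compute output.
void SubmitFrame(ID3D12CommandQueue* graphicsQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12Fence* computeFence,
                 ID3D12CommandList* computeWork,
                 ID3D12CommandList* graphicsWork,
                 UINT64 fenceValue)
{
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(computeFence, fenceValue);

    graphicsQueue->Wait(computeFence, fenceValue);
    graphicsQueue->ExecuteCommandLists(1, &graphicsWork);
}
```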
I think part of the problem is MS did not dictate how exactly async compute should be done.
This isn't a problem. Microsoft shouldn't define how something should be done. It should only define what needs to be done. If you are making something a standard, defining how to do it limits innovation. If someone comes up with something better, they can't use it, because they won't be in compliance since the standard said it had to be done a certain way. AMD's solution is fine. NVIDIA's solution is fine. There is no right or wrong answer here.
A basic requirement for asynchronous shading is the ability of the GPU to schedule work from multiple queues of different types across the available processing resources.
They didn't define anything beyond that basic requirement; they never said, for instance, that it had to be done using hardware schedulers. And that is how things should be defined.
I still remember that Nvidia initially wanted to support DX12 on Fermi, because they said async compute is something their architectures have supported since Fermi. In the end they ditched that idea, probably because developers want just one way to use async compute rather than having to support various ways of doing it.
The execution modes in Kepler and Fermi don't allow mixed workloads, so it was only ever going to support it "in name." That's similar to how Intel's GMA 900 iGPUs were compliant with DX9 even though they lacked vertex shader units.
That really shows the issue with doing things at a low level: developers have to do the various optimizations themselves. An Ashes of the Singularity developer once said that per-architecture optimization is something they were not going to do, even if a low-level API such as DX12 or Vulkan allows them to. In the end, per-architecture optimization still needs to be done by Nvidia/AMD themselves inside their drivers.
And this is the tradeoff when you're trying to make something that's "low level." You're talking more directly with the hardware, and since everyone does everything differently, you're going to have to deal with whatever quirks come with it.
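To make the "per-architecture tuning" point concrete, here's a minimal sketch of the kind of vendor check an engine could do at startup, using DXGI to read the adapter's vendor ID and then picking different scheduling knobs. The RendererTuning struct, its fields, and the chosen values are hypothetical and purely for illustration; nothing here is Ubisoft's, AMD's, or Nvidia's actual tuning logic.

```cpp
#include <windows.h>
#include <dxgi.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Hypothetical per-vendor tuning knobs an engine might flip at startup.
// Field names and values are illustrative only.
struct RendererTuning {
    bool useAsyncComputeForPostFX = true;  // overlap post-processing with other GPU work
    int  maxInFlightComputeBatches = 2;    // how aggressively to feed the compute queue
};

RendererTuning PickTuningForPrimaryAdapter()
{
    RendererTuning tuning;  // sensible defaults if detection fails

    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return tuning;

    ComPtr<IDXGIAdapter1> adapter;
    if (factory->EnumAdapters1(0, &adapter) == DXGI_ERROR_NOT_FOUND)  // adapter 0 = primary
        return tuning;

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    switch (desc.VendorId) {
    case 0x1002:  // AMD: lean harder on async compute (hypothetical choice)
        tuning.maxInFlightComputeBatches = 4;
        break;
    case 0x10DE:  // NVIDIA: schedule compute more conservatively (hypothetical choice)
        tuning.maxInFlightComputeBatches = 2;
        break;
    default:      // Intel and others: play it safe
        tuning.useAsyncComputeForPostFX = false;
        break;
    }
    return tuning;
}
```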