News: Far Cry 6 Benchmarks and Performance


husker

Distinguished
Oct 2, 2009
1,202
220
19,670
Nice article, but I'm sad to see my Vega 64 no longer warrants a run on the test bench. I imagine there are a lot of Vega owners like me who had planned on upgrading but, due to cost/supply issues, are still hanging on. Perhaps it would be good to include some guidance on which tested cards can serve as a proxy for cards not included, for instance: "Vega 64 owners should expect performance slightly below the RX 6600 XT," or something like that. Although the Vega 64 has "only" 8GB of memory, which may hamper it even more.
 
I can run Vega 64. I assume it will be slower than the 5700 XT but faster than the 5600 XT, probably landing right around the same level as the RX 6600 XT. Mostly, I test with the latest generation of GPUs from AMD and Nvidia, then add in a few key points from the previous generation, and then maybe one or two cards from two generations back. I tossed in the GTX 980 as well just for kicks, even though it's now seven years old, because I wanted to see if Pascal had any major advantages over the Maxwell architecture in this game. But at some point I just have to stop testing and get the article out the door. Anyway, give me a bit and I'll see about adding in the RX Vega 64. It probably matches the GTX 1080 Ti in this case, maybe even a bit faster due to the game's DX12 nature and the AMD optimizations / promotion.
 

bwohl

Distinguished
Apr 21, 2008
48
0
18,530
Due to current prices, I'm still running an 8GB RX 5500 XT. Jarred, where would you guess it falls at 1920x1080 ultra settings? Might pick up the title, might not. ;) Thanks for your guess!
 
The RX 5500 XT 8GB tends to be pretty close to the RX 580 8GB (or RX 590) in performance. It might vary a bit, depending on whether the game benefits more from the RDNA vs. GCN architecture, but usually it's within 5%. The 5500 XT 4GB tends to be slightly faster than the RX 570 4GB overall. See: https://www.tomshardware.com/reviews/gpu-hierarchy,4388.html

Also, for husker, I updated the charts with the Vega 64 now. It ends up being closer to the RX 5600 XT than the 5700 XT, and even loses to the RX 6600 XT. I will say my Vega 64 card is an original reference model and does seem to underperform, though I'm not sure exactly why, so other Vega 64 cards might end up being up to 10% faster. That's another reason I generally avoid benchmarking the Vega cards: the reference designs did not age well in my experience. Still, it does look to be about 65% faster than the RX 580 8GB, which jibes pretty well with the specs and its performance elsewhere. It looks like it's possible RDNA does outperform GCN in this particular game by a relatively large margin.
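(For anyone who wants to sanity-check that 65% figure against the paper specs, here's a rough back-of-the-envelope comparison. It assumes the reference boost clocks, and since GCN's theoretical throughput never translated 1:1 into games, an in-game gap landing well below the ~2x paper ratio is about what you'd expect.)

```cpp
#include <cstdio>

// Rough theoretical FP32 throughput: 2 FLOPs per shader per clock (FMA).
// 2 * shaders * GHz = GFLOPS; divide by 1000 for TFLOPS.
double tflops(int shaders, double boost_ghz) {
    return 2.0 * shaders * boost_ghz / 1000.0;
}

int main() {
    double vega64 = tflops(4096, 1.546);  // RX Vega 64, reference boost ~1546 MHz
    double rx580  = tflops(2304, 1.340);  // RX 580, reference boost ~1340 MHz
    std::printf("Vega 64: %.1f TFLOPS, RX 580: %.1f TFLOPS, ratio: %.2fx\n",
                vega64, rx580, vega64 / rx580);
    // Roughly 12.7 vs 6.2 TFLOPS, or ~2.05x on paper; real-world gains are
    // typically well below that, so a ~65% in-game lead is plausible.
    return 0;
}
```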
 

DougMcC

Commendable
Sep 16, 2021
115
79
1,660
I'm not so sure the engine "nerfs" Nvidia GPUs. You know, nerf is such a big word. Perhaps they did keep RT to a minimum to make AMD GPUs look better. Maybe they even kept it that way so it's "easy" on PS5 and Xbox Series X/S hardware.

The reality in the industry is that a launch co-branding like this means the devs spent time optimizing performance on AMD GPUs. Nothing more than that, really. No need to imagine they tried to make Nvidia look bad on purpose; they just tried to make AMD look good.
 

BeedooX

Reputable
Apr 27, 2020
70
51
4,620
The reality in the industry is that a launch co-branding like this means the devs spent time optimizing performance on AMD GPUs. Nothing more than that, really. No need to imagine they tried to make Nvidia look bad on purpose; they just tried to make AMD look good.
If anything, it should demonstrate to the grumpy conspiracy theorists that different architectures are... different. If, without intentionally gimping the competition with alternate code paths, an AMD-optimised title doesn't look good enough on an Nvidia GPU, well, hard luck I guess. Nvidia have been pulling this crap for years, and I've bought almost every one of their cards up to the 2080 series.

I heard a rumour that if very large BVH structures were put into Infinity Cache, it could cause quite a few issues for Nvidia cards with RT, so I guess we're lucky AMD didn't ask for some underhanded trick like that with FC6, but what do I know, I don't write games....
 
They would likely need that anyway considering that RDNA2 is missing a component that NVIDIA's RT cores have.

EDIT: Went to find out what it was. According to https://www.hardwaretimes.com/why-a...on-rx-6000-a-comparison-of-ampere-and-rdna-2/ , RDNA2 doesn't have hardware-accelerated BVH traversal and has to rely on the compute shaders for that.
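To give a rough idea of what "relying on the compute shaders" means in practice, below is a minimal, hypothetical sketch of the stack-based traversal loop that ray tracing runs over a BVH. Everything here (the node layout, the any-hit simplification, the names) is made up for illustration; real traversal kernels handle triangles, closest hits, and a lot more. The point is that without a dedicated traversal unit, all of these box tests and stack operations run as ordinary shader instructions and compete with everything else for the compute units.

```cpp
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

// Hypothetical, heavily simplified BVH node layout for illustration only.
struct BvhNode {
    float bmin[3], bmax[3];
    int left, right;   // child indices; both -1 means this node is a leaf
};

struct Ray {
    float origin[3], dir_inv[3]; // dir_inv = 1 / direction, precomputed
    float t_max;
};

// Ray/AABB slab test: the kind of operation dedicated RT hardware evaluates in
// fixed-function units, but which otherwise runs as ordinary shader math.
static bool hit_aabb(const Ray& r, const BvhNode& n) {
    float t0 = 0.0f, t1 = r.t_max;
    for (int a = 0; a < 3; ++a) {
        float tn = (n.bmin[a] - r.origin[a]) * r.dir_inv[a];
        float tf = (n.bmax[a] - r.origin[a]) * r.dir_inv[a];
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
        if (t0 > t1) return false;
    }
    return true;
}

// Stack-based "any hit" traversal (think shadow/occlusion ray). Without a
// hardware traversal unit, this whole loop -- box tests, stack pushes/pops,
// divergent branching -- occupies the shader cores instead.
static bool traverse(const std::vector<BvhNode>& nodes, const Ray& ray) {
    int stack[64];
    int sp = 0;
    stack[sp++] = 0;                      // start at the root node
    while (sp > 0) {
        const BvhNode& node = nodes[stack[--sp]];
        if (!hit_aabb(ray, node)) continue;
        if (node.left < 0) return true;   // leaf box hit
        stack[sp++] = node.left;          // interior node: descend into children
        stack[sp++] = node.right;
    }
    return false;
}

int main() {
    // Tiny two-leaf BVH: root box spanning both children.
    std::vector<BvhNode> bvh = {
        {{-2, -1, -1}, {2, 1, 1},  1,  2},  // root
        {{-2, -1, -1}, {0, 1, 1}, -1, -1},  // left leaf
        {{ 0, -1, -1}, {2, 1, 1}, -1, -1},  // right leaf
    };
    // Ray from x = -5 along +X; large dir_inv y/z values stand in for 1/0.
    Ray ray{{-5, 0, 0}, {1.0f, 1e8f, 1e8f}, 100.0f};
    std::printf("hit: %s\n", traverse(bvh, ray) ? "yes" : "no");
    return 0;
}
```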
 

HyperMatrix

Distinguished
May 23, 2015
118
115
18,760
The reality in the industry is that a launch co-branding like this means the devs spent time optimizing performance on AMD GPUs. Nothing more than that, really. No need to imagine they tried to make Nvidia look bad on purpose; they just tried to make AMD look good.

One could argue that putting in optimizations only for a specific card brand that accounts for less than 20% of the market, including specific use of limited DXR features and the inclusion of only image sharpening, or "FSR" as AMD calls it, is intentionally designed to make Nvidia look bad. It's no different than when Crysis 2 had a ton of unnecessary tessellation, which Nvidia cards handled well but which dropped performance on AMD. Regardless of how you phrase it...these kinds of behaviors are in effect reducing the performance of a certain set of cards. So "nerfing" would be apt.

And we know Ubisoft does this sort of thing all the time. For example, AC: Valhalla had no RT or DLSS, despite having terrible performance and needing it, while Watch Dogs: Legion had both RT and DLSS because it was the Nvidia promo game. So they're taking bribes from one manufacturer or another to implement features, and now the same thing is happening with Far Cry 6. It's really inexcusable for a AAA game from such a large publisher, which does have experience working with both card brands. Both DLSS and FSR should have been offered, since from all I've read...it barely takes any time to implement. So what's the reason for not doing it, other than taking money from one side to hobble performance on the other side? This applies to other games as well. "Nvidia sponsored" or "AMD sponsored" shouldn't mean "paid for" lower performance on competing cards.
 
  • Like
Reactions: JarredWaltonGPU
Regardless of how you phrase it...these kinds of behaviors are in effect reducing the performance of a certain set of cards. So "nerfing" would be apt.
"Nerfing" would imply a deliberate attempt to butcher the performance of the competitor. Just because they spent more time optimizing for one side than the other doesn't mean they were deliberately butchering the performance of said part. Game developers are on a time constraint to deliver a product. If their time budget only allows for one side or the other, whoever gives them a better deal is going to win.

"Nerfing" would be more like what Intel has done in the past, where if the CPUID doesn't report it's an Intel CPU, Intel's compilers would skip over using instructions that sped up certain things even though said processor supported the feature. (related, sort of: https://arstechnica.com/gadgets/2008/07/atom-nano-review/6/)

There's a reason why Futuremark refuses to make vendor-specific optimizations in its benchmarks (or rather, why those involved on the GPU side said not to), even though DX12 requires such optimization to achieve maximum performance.
 

husker

Distinguished
Oct 2, 2009
1,202
220
19,670
Thanks for the updated Vega 64 results!!! Although you found it "underperformed," I found the results quite encouraging for such an old card in a cutting-edge game, and I have realistic expectations. Also, given that your card is an original reference model and potentially a worst-case scenario, that gives me a good "minimum baseline" to look at for my closed-loop OC card. #JarredRocks!