Battlefield 1 Performance In DirectX 12: 29 Cards Tested

Nice article. It would have been interesting to see the 1080 Ti and the Ryzen 1800X mixed in there somewhere. I have a 7700K and a 980 Ti, so it would be good info to get some direction on where to take my hardware next. I'm sure other people might find that interesting too.
 
Good job, just remember that these "GPU showdowns" don't tell the whole story because the cards are running at stock, and there are GPUs that can take huge overclocks and thus perform significantly better.

Case in point: the GTX 780 Ti.

The 780 Ti featured here runs at stock, which was an 875 MHz base clock and a 928 MHz boost clock, whereas third-party cards ran at 1150 MHz and boosted up to 1250-1300 MHz. We are talking about 30-35% more performance here for this card, which you aren't seeing here at all.
 
Great write-up, just a shame you didn't use any i5 CPUs. I would have really liked to see how an i5-6600K with its 4 cores competes against the Hyper-Threaded i7s.
 
And then you run in DX11 mode and it's faster than DX12 across the board. Thanks for the effort you put into this, but it's rather pointless since DX12 has been nothing but a pile of crap.
 
Fascinating stuff! Love that you are still including the older models in your benchmarks; it makes for great info for a budget gamer like myself! In fact, this may help me determine what goes in the budget build I'm working on right now, which I was going to base around dual 290Xs (preferably 8GB if I can find them), but now it might be something else.
 
It's funny how at 4K the Fury and Fury X are able to match 980 Ti speeds even though the game is utilizing way more than 4GB of VRAM. HBM doing its wonders.

Also, what happened to Kepler? RIP 780 Ti vs. 290X.
 
Like I wrote above, the GTX 780 Ti they have here is running at stock, which was 875/928 MHz. A third-party GTX 780 Ti such as the Gigabyte GTX 780 Ti GHz Edition, which boosts to 1240 MHz, scores 13540 in the 3DMark Fire Strike graphics test, which is only 20 points or so behind the R9 390X, and significantly faster than the R9 290X, RX 470, RX 480, and GTX 1060 6GB.
http://imgur.com/KIP0MRt — 3DMark Fire Strike results here: https://www.futuremark.com/hardware/gpu
 
Awesome work, folks! The data!!
The only thing I felt was missing was, as Xizel mentioned, an i5 in the Scaling: CPU Core Count chart. All of the i7s perform similarly, with only one i3 outlier among the data points. It would have been nice to see a middle-of-the-road offering in one of the i5s.
 
I would have liked to see some actual i5s in the mix. While I get that disabling Hyper-Threading emulates them to a degree, I can't tell you how many posts in the forums I have responded to from i5 owners claiming, or troubleshooting, what turned out to be a CPU bottleneck in this game, especially in multiplayer. Folks running good GPUs (everything from RX 480s to GTX 1080s) are seeing 100% CPU usage or close to it (again, the worst of it is in multiplayer). Ultimately, I feel like this article indirectly says 4C/4T is enough, when everyday posts in the forums say the opposite.

While I know you could never get a fully repeatable benchmark in multiplayer, I would still like to see an article on core scaling in multiplayer. It would have to lean more on the reviewer's impression of how smooth gameplay is, but some benchmarks covering CPU utilization and frame variance/frame rate would be useful in helping those with i5s (or any 4C/4T CPU) figure out whether their experience is being hindered or is typical compared to other 4C/4T users. This article had a ton of info, but I feel it only scratched the surface of what gamers are dealing with in real life.
 
Wow, a lot of work went into this, very well done! Go AMD, I like how they get better with age, lol. Looks like my next card will be one of the new 580s coming out.
 
Does increasing the quality settings help the color banding in the gradients of the smoke, fog, etc.? On XB1 the banding is horrible in some cases, and I'm curious whether that issue is the same on PC.
 


I agree. My Core i5-4690K at 4.8GHz sits between 95 and 100% usage in multiplayer.

 
Just a question: why didn't you try the RX 480 in CrossFire, since the price of two RX 480s is about the same as one 1080 Ti? That would be interesting, and I think that's the scenario AMD had in mind when they launched their RX family.
 


The performance difference isn't nearly that great. I had a look at the GTX 780Ti "GHz Edition" reviews, and benchmarks showed it performing around 15% faster than a stock 780Ti when it wasn't CPU limited. 30% higher clocks does not necessarily equal 30% more performance. Assuming the cards used in these benchmarks were at stock clocks, then the best you could expect from the GHz Edition would be right around the GTX 970's performance level at anything above "low" settings.
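For what it's worth, the clock-ratio arithmetic behind this exchange is easy to sanity-check. A minimal sketch, using the clock figures quoted in the comments above and the ~15% real-world gain the GHz Edition reviews reported (not numbers measured here):

```python
# Clock figures quoted in the comments above (assumed, not measured here)
stock_boost_mhz = 928         # reference GTX 780 Ti boost clock
ghz_edition_boost_mhz = 1240  # Gigabyte GHz Edition boost clock

# Nominal clock uplift of the overclocked card
clock_uplift = ghz_edition_boost_mhz / stock_boost_mhz - 1
print(f"Clock uplift: {clock_uplift:.1%}")  # roughly 34%

# Reviews cited above put the real-world gain closer to 15%, so the
# effective performance-per-clock scaling is well under 1.0 — frame
# rates are also bounded by memory bandwidth, CPU, etc.
observed_perf_gain = 0.15
scaling_efficiency = observed_perf_gain / clock_uplift
print(f"Scaling efficiency: {scaling_efficiency:.2f}")  # well below 1.0
```

In other words, even taking the highest-clocked 780 Ti at face value, the claimed 30-35% figure describes clocks, not frames per second.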

Also, it should be pointed out that most 780 Tis didn't run anywhere near those clocks. You can't take the highest-OCed version of a graphics card and imply that was the norm for third-party cards. And if we take into account overclocked versions of the other cards, then the overall standings probably wouldn't change much. The 780Ti likely just isn't able to handle DX12 as well as these newer cards, particularly AMD's.

It might have been nice if this performance comparison also tested DX11 mode, since I know Nvidia's cards took a performance hit in DX12 back at the game's launch. I was also a bit surprised to see how poorly Nvidia's 2GB cards fared here, while AMD's seemed to handle the lack of VRAM more gracefully. The 2GB GTX 1050 dropped below 30fps for an extended length of time even at medium settings, and all of Nvidia's 2GB cards plummeted to single digits at anything higher than that. Meanwhile, the 2GB Radeons stayed above 30fps even at high settings. It kind of makes me wonder how these 3GB GTX 1060s will fare a year or so from now, especially when you consider that the RX 480s and even 470s all come equipped with at least 4GB.
 


With Fury, AMD needed to make special VRAM optimizations so usage didn't exceed the 4GB capacity. Something similar can also be done with GDDR5 memory if you want to. I have a GTX 660 and a GTX 960 (both 2GB models); at the same graphics settings, the 660 uses much less VRAM than the 960. That's because the GTX 660 has a weird memory configuration, and Nvidia for their part tries to fit data into the first 1.5GB portion first. That's why AMD created HBCC with Vega, so they no longer need to tweak VRAM usage on a per-game basis.

As for what happened to the 780 Ti vs. the 290X, that's what happens when you win the majority of the console hardware contracts. But Nvidia's Kepler is still often very competitive with its respective Radeon counterparts in titles that are PC-exclusive or come out on PC first and consoles second. Take these for example:

http://gamegpu.com/rts-/-%D1%81%D1%82%D1%80%D0%B0%D1%82%D0%B5%D0%B3%D0%B8%D0%B8/sid-meier-s-civilization-vi-test-gpu

http://gamegpu.com/action-/-fps-/-tps/shadow-warrior-2-test-gpu

Nvidia, to a certain extent, tried to fix Kepler's issues with Maxwell, but they knew that wouldn't be enough as long as AMD keeps steering more and more game development toward its hardware's core strengths through new console hardware wins. That's why Nvidia is back in the console business with the Nintendo Switch.
 