What aspects of a graphics card does Fortnite mostly utilize?

polis_putrus

Prominent
Apr 24, 2018
Which aspect of a graphics card does Fortnite mostly utilize: memory bandwidth, texture rate, pixel rate, shader processing units (CUDA cores), or memory speed? Does anyone know? I know the CPU matters too, but Fortnite is more GPU intensive.
 
Solution
Most of the numbers on the spec sheet are useless for comparison. Things like memory bandwidth, texture rate and pixel rate are just calculated from other specs (bus width times memory clock, or cores times clock speed). That means little in the real world, where the architecture and how the card manages memory determine actual performance. The number of cores isn't meaningful the way it is on a CPU, where a program uses a certain number of threads, because a GPU is far more parallel. You also can't look at clock rates, memory speed or bus width on their own; each is only one part of total bandwidth and throughput, so they all have to be compared together.
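To make that concrete, here's a rough sketch (in Python, with made-up values for an imaginary card, not any real model) of how those headline figures are derived:

# Rough sketch of how spec-sheet "performance" figures are computed.
# All values are hypothetical, not taken from any real card.

bus_width_bits = 256       # memory interface width
mem_clock_eff_ghz = 7.0    # effective (data-rate) memory clock in GHz

# Theoretical memory bandwidth = bus width in bytes x effective memory clock
bandwidth_gb_s = (bus_width_bits / 8) * mem_clock_eff_ghz
print(f"Memory bandwidth: {bandwidth_gb_s:.0f} GB/s")         # 224 GB/s

tmus = 80                  # texture mapping units
rops = 32                  # render output units
core_clock_ghz = 1.5       # boost clock in GHz

# Texture rate = TMUs x clock; pixel rate = ROPs x clock
print(f"Texture rate: {tmus * core_clock_ghz:.0f} GTexel/s")  # 120 GTexel/s
print(f"Pixel rate: {rops * core_clock_ghz:.0f} GPixel/s")    # 48 GPixel/s

Nothing in that arithmetic says how well the architecture actually keeps those units fed, which is why two cards with similar headline figures can perform very differently.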

Specs can't be compared across different architectures, which makes comparing them pointless when a higher model tends to be better in every spec anyway. Even where it isn't, I'd bet it still gets better fps than the lower model. You also don't have any say over a GPU's specs beyond the clocks, or beyond picking a model with a different amount of VRAM. All in all, you can throw the spec sheet out the window because it doesn't help. Game benchmarks tell the whole story and get you the information you want in a much simpler, more direct way.
I'd like to help, but your question makes it difficult. All these aspects are linked together in various ways. For instance, you aren't going to see many 384-bit memory bus setups on a video card with 96 CUDA cores and DDR3 memory.

That doesn't even take into account that something like the number of CUDA cores can't be directly compared across generations. A lot of CUDA cores in a GTX 5xx card may not be as good as fewer CUDA cores in a GTX 1xxx series card.

You're better off deciding on the framerate, resolution and settings you want to use. Then you can figure out which cards can do what you need.
 