Modern titles load all of an area's textures into graphics memory up front. The entire texture resource has to be in GPU memory before the game even knows which of its sizes it's going to render with.
If they don't use Tiled Resources (first introduced in DX11.2 and as vendor extensions in OpenGL), then yes. Tiled Resources enable sparse textures and more.
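The core idea behind Tiled Resources / sparse textures can be sketched with a toy residency model (this is my illustration in Python, not a real graphics API: real implementations commit physical pages via e.g. `glTexPageCommitmentARB` or `UpdateTileMappings`, and tile sizes vary by format; I assume 128x128 texel tiles here):

```python
# Toy model of sparse-texture residency: the whole mip chain is reserved
# as address space, but only the mip levels the renderer currently
# samples get physical memory committed behind them.

TILE = 128  # assumed tile edge in texels (e.g. 128x128 for 32-bit formats in D3D11.2)

def tiles_for_level(size, level):
    """Number of TILE x TILE tiles needed to back one mip level of a square texture."""
    dim = max(size >> level, 1)
    per_axis = (dim + TILE - 1) // TILE  # round up to whole tiles
    return per_axis * per_axis

def committed_tiles(size, resident_levels):
    """Total tiles backed if only `resident_levels` are committed."""
    return sum(tiles_for_level(size, lv) for lv in resident_levels)

# A 4096x4096 texture has 13 mip levels (0..12). Committing only mips
# 2..12 (nothing sharper than 1024x1024) backs a small fraction of the chain.
full = committed_tiles(4096, range(13))
partial = committed_tiles(4096, range(2, 13))
print(full, partial)  # the two finest mips dominate the footprint
```

The takeaway matches the thread: without sparse residency you pay for the whole chain; with it, you only pay for the detail levels you can actually see.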
Those textures didn't just magically appear there; they had to be loaded from disk into system memory, and then from system memory into GPU memory, across the PCIe bus and through the GPU's memory controller.
True. Accounting for isotropic MIP Map overhead, that'd amount to 64 MiB for a 4k x 4k texture.
However, I think current texture compression methods typically deliver about 6.75:1, reducing the actual footprint to a mere
9.5 MiB. That would take 2.3 ms to send over a PCIe 3.0 x4 interface (i.e. reading from SSD) and 0.58 ms to send over a PCIe 3.0 x16 interface. Halve those for PCIe 4.0, but then we should also account for some contention.
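The arithmetic above can be sanity-checked. My assumptions to reproduce the 64 MiB figure: 3 bytes/texel uncompressed with the full mip chain adding about 1/3, and roughly 0.985 GB/s of usable bandwidth per PCIe 3.0 lane:

```python
# Back-of-envelope check of the texture footprint and transfer times.
# Assumptions (mine): 4096x4096 texture, 3 bytes/texel uncompressed,
# full mip chain adds ~1/3, PCIe 3.0 ~= 0.985 GB/s usable per lane.

MiB = 1024 ** 2
base = 4096 * 4096 * 3          # 48 MiB uncompressed base level
with_mips = base * 4 / 3        # 64 MiB with the full mip chain
compressed = with_mips / 6.75   # ~9.5 MiB at an assumed 6.75:1 ratio

lane_bps = 0.985e9              # per-lane throughput after encoding overhead
for lanes in (4, 16):
    ms = compressed / (lane_bps * lanes) * 1e3
    print(f"x{lanes}: {compressed / MiB:.1f} MiB in {ms:.2f} ms")
```

The exact milliseconds shift slightly depending on whether you count protocol overhead, but they land in the same low-single-digit range quoted above.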
It's not nothing, but it's also not as bad as someone might expect for a 4k texture. Maybe that's why game devs have gotten complacent about using them?
Honestly, I expected the massive shader execution capacity of modern GPUs would render most static textures obsolete. That doesn't mean you don't still need plenty of GPU memory to compute things like shadow and reflection maps, though. But, if you look at the compute-to-bandwidth ratio of something like an RTX 4070 Ti, you can do 79.6 floating point operations per
byte of GDDR memory it can read or write. That's not accounting for cache hits, but GPUs traditionally don't rely on cache for much other than batching semi-coherent reads or writes. Granted, you have to use some of that for lighting and any multi-texture compositing.
Another way of looking at it is 4.83 MFLOPS/pix @ 3840x2160. If your target framerate is 120 Hz, that still leaves 40.3 k floating point ops per pixel per frame, at 4K @ 120 Hz. Truly staggering, IMO. Makes me wonder just how expensive typical game-grade procedural textures really are.
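The ratios above follow directly from the published specs. Assuming roughly 40.1 TFLOPS FP32 and 504 GB/s of GDDR6X bandwidth for the 4070 Ti (approximate public figures):

```python
# Compute-to-bandwidth arithmetic for an RTX 4070 Ti.
# Assumed specs: ~40.1 TFLOPS FP32, ~504 GB/s memory bandwidth.
tflops = 40.1e12
mem_bw = 504e9

flops_per_byte = tflops / mem_bw            # ~79.6 FLOPS per byte of DRAM traffic
print(f"{flops_per_byte:.1f} FLOPS/byte")

pixels = 3840 * 2160
per_pixel_per_sec = tflops / pixels         # ~4.83 MFLOPS per pixel
per_pixel_per_frame = per_pixel_per_sec / 120
print(f"{per_pixel_per_frame / 1e3:.1f}k FP ops per pixel per frame at 4K @ 120 Hz")
```

The caveat from the thread still applies: that budget is shared with lighting, compositing, and everything else the frame does, so a procedural texture only gets a slice of it.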
There is a reason it has an obscene memory footprint.
I take it you've compared the memory footprint across different texture resolution settings, then? Because, for an open-world game, there's a lot of geometry to manage and potentially other state to track.
A 3840x2160 screen has only 3840 horizontal pixels and 2160 vertical pixels, making a 4096x4096 texture larger than the entire screen.
The issue is just that if you're standing next to a wall, looking down its length, you wouldn't want the part closest to you to appear blurry. That doesn't mean you're seeing every pixel of the entire texture, but just the part closest to you. I know it's a silly example, but it illustrates the point that it's actually not hard to find the limits of lower-resolution textures without having to walk perpendicularly into a wall, etc.
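The wall example is really about per-pixel mip selection: the GPU picks a level from how many texels one screen pixel covers, roughly lod = log2(texels per pixel). A toy illustration (the texels-per-pixel figures below are made up to show the gradient, not taken from any engine):

```python
import math

# Toy mip selection for the "looking down a wall" case: near the camera
# one screen pixel covers ~1 texel (needs mip 0, full resolution); far
# down the wall it covers many texels, so coarser mips suffice.

def mip_level(texels_per_pixel):
    """Approximate hardware LOD selection: log2 of the texel footprint."""
    return max(0.0, math.log2(texels_per_pixel))

for tpp in (1, 2, 8, 64):  # assumed footprints at increasing wall distance
    print(f"{tpp:>3} texels/pixel -> mip {mip_level(tpp):.0f}")
```

So only the nearest slice of the wall ever samples mip 0, which is exactly why the blurriness shows up there first when the top-resolution mip is missing.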