That is my point. We have stopped working hard on image compression for game tech, and so we end up with these massive, ridiculous install sizes. We need to get back to working on all fronts to make software good, not to mention getting it out the door in reasonable shape instead of shipping extremely broken software and fixing it for a year after release. At the sizes we are talking about, the problem is actually mostly the textures, and that is not going to get help from H.266. We need new texture compression tech that allows 4K and 8K textures to not blow up our SSDs.
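To put rough numbers on "blow up our SSDs" (this is just back-of-envelope arithmetic, assuming uncompressed 32-bit RGBA, i.e. 4 bytes per texel):

```python
def uncompressed_mib(width, height, bytes_per_texel=4):
    """Size of one uncompressed mip level of an RGBA8 texture, in MiB."""
    return width * height * bytes_per_texel / (1024 * 1024)

print(uncompressed_mib(4096, 4096))  # 64.0  -> one 4K texture is 64 MiB raw
print(uncompressed_mib(8192, 8192))  # 256.0 -> one 8K texture is 256 MiB raw
```

Multiply by hundreds or thousands of textures per game (plus ~33% more for mipmaps) and the install sizes follow directly.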
Texture compression has different requirements than, say, the image format your phone's camera uses to store photos for later viewing. In particular it has to be fast, it has to be predictable, and it has to support random access.
The speed requirement is obvious: the codec can't spend a lot of time doing its work, because decompression happens at render time. Usually GPUs implement the decompression algorithm directly in hardware.
Predictability is necessary because the rendering pipeline can't determine a color from the compressed data directly; it has to decompress the texture first. So not only does the decompression time need to be predictable, but the compression ratio needs to be predictable too, so you know how much memory pressure you'll be putting on the GPU. Basically, if you give the codec an image, it will compress to the same size regardless of content, unlike, say, JPEG or PNG, which compress to different sizes depending on the content.
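A concrete example of this fixed-rate property is BC1/DXT1, one of the common GPU formats: every 4x4 block of texels compresses to exactly 8 bytes no matter what the image looks like, so the compressed size is pure arithmetic:

```python
def bc1_size_bytes(width, height):
    """Compressed size of a BC1/DXT1 texture: 8 bytes per 4x4 block,
    independent of image content."""
    blocks_x = (width + 3) // 4  # blocks per row (rounded up)
    blocks_y = (height + 3) // 4  # block rows (rounded up)
    return blocks_x * blocks_y * 8

# A 4096x4096 texture always occupies exactly the same GPU memory:
print(bc1_size_bytes(4096, 4096))  # 8388608 bytes = 8 MiB, vs 64 MiB for raw RGBA8
```

That 8:1 ratio (against 32-bit RGBA) is guaranteed up front, which is exactly the predictability the GPU memory budget needs.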
Lastly, random access is important because it allows the GPU to decompress only what's necessary. Without random access support, the entire texture would have to be decompressed, because it's very hard, if not impossible, to determine where in a variable-rate stream you could actually start decompressing.
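Fixed-size blocks are what make this random access trivial: the byte offset of the block containing any texel is simple arithmetic, so the sampler never has to decode anything that comes before it. A minimal sketch for a BC1-style layout (8-byte blocks stored in row-major order; real GPUs typically use swizzled/tiled layouts instead, but the principle is the same):

```python
BLOCK_BYTES = 8  # BC1: each 4x4 block is 8 bytes

def block_offset(x, y, width):
    """Byte offset of the compressed block containing texel (x, y)."""
    blocks_per_row = (width + 3) // 4
    return ((y // 4) * blocks_per_row + (x // 4)) * BLOCK_BYTES

print(block_offset(0, 0, 4096))     # 0    -> first block
print(block_offset(4095, 0, 4096))  # 8184 -> last block of the first block row
```

With a variable-rate format like JPEG, no such formula exists, which is why those formats would force decompressing the whole image.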
I'm sure there are plenty of people working on better compression algorithms, but the requirements for textures don't make it easy.