One thing I would like to see is more user control over how the video card manages and reports VRAM. The usefulness for gaming would be questionable, since game devs can choose whether or not to use shared memory: if they choose not to, a game can either crash or fail to properly load some textures while maintaining good frame rates; if they do use it, the game can take a performance hit of around 50%.
The reason I want more control is that there are video cards such as the GTX 970 where, even though the secondary 512MB pool is slow (around 30GB/s), a game that is set not to use shared memory can still use that extra 512MB pool in a way that is largely transparent to the game.
With that in mind, what if a video card maker added a driver feature that took that concept further and tricked an application into thinking the card had an extra 16, 32, or even 64GB of dedicated VRAM, by allocating system RAM and having it masquerade as dedicated VRAM, managed the way the NVIDIA drivers manage the two memory pools of the GTX 970?
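To make the idea concrete, here's a toy sketch of such a two-tier pool (not real driver code; the class name, pool sizes, and spill policy are all illustrative assumptions): the application sees one combined pool, while the manager quietly fills the fast on-card memory first and spills overflow allocations into the slower backing pool.

```python
# Toy model of a two-tier "VRAM" pool, loosely inspired by how the
# GTX 970's fast/slow memory split is hidden from applications.
# Sizes and the whole-allocation spill policy are illustrative only.

class TieredVramPool:
    def __init__(self, fast_mb, slow_mb):
        self.fast_free = fast_mb   # fast on-card VRAM (MB)
        self.slow_free = slow_mb   # slower backing pool (MB), e.g. system RAM
        self.allocations = {}      # handle -> (size_mb, tier)
        self.next_handle = 0

    def reported_free(self):
        # The application only ever sees one combined number.
        return self.fast_free + self.slow_free

    def alloc(self, size_mb):
        # Fill the fast pool first; spill an allocation to the slow
        # pool when the fast pool can no longer hold it.
        if size_mb <= self.fast_free:
            self.fast_free -= size_mb
            tier = "fast"
        elif size_mb <= self.slow_free:
            self.slow_free -= size_mb
            tier = "slow"
        else:
            raise MemoryError("out of (virtual) VRAM")
        handle = self.next_handle
        self.next_handle += 1
        self.allocations[handle] = (size_mb, tier)
        return handle

pool = TieredVramPool(fast_mb=3584, slow_mb=512)  # GTX 970-style split
a = pool.alloc(3200)   # fits in the fast pool
b = pool.alloc(500)    # no longer fits in fast -> spills to slow
print(pool.allocations[a][1])  # fast
print(pool.allocations[b][1])  # slow
print(pool.reported_free())    # 396 -- the app just sees one shrinking pool
```

The point of the sketch is only that the spill decision lives entirely in the manager, so the application never has to know (or choose) which tier it landed in.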
While it would not help much for games, it would help greatly with other GPU compute tasks, as well as AI tasks that are set not to use system RAM for performance reasons.
The benefit of such a feature shows up in tasks like AI frame interpolation for simulated super slow motion: some current interpolators require over 20GB of VRAM to process a 4K video, so on a card like an RTX 3070 or 3080, 4K frame interpolation would simply fail, and you would be forced to use lower-res footage and output. But if you don't mind the process taking significantly longer, forcing it to use system memory as additional VRAM (against the wishes of the developer) could allow those cards to complete such tasks slowly instead of not completing them at all.