The test methodology was fundamentally flawed because it rests on the premise that frame buffer resolution is what determines memory usage on dGPUs; this is false. Frame buffer size (i.e. screen resolution) mattered back when we had 8 and 16 MB graphics cards, but it's no longer relevant now that cards ship with ridiculous amounts of memory.

Most graphics memory is used by the drivers for managing texture resources and processing. The big memory eaters are AA and AF, since those are applied to each stored texture, with the exception of FSAA, which is applied to the screen frame buffer itself. FSAA is so expensive processing-wise that you won't be running it above 4x, so it won't be a huge memory hog. SSAA, MSAA, and the other flavors of selective AA do their magic on the textures instead and store the anti-aliased texture in memory; the more textures you have, the more storage is needed for that anti-aliased data. You can have many times more texture data than what is actually present on the screen, and the graphics drivers keep all of it in the GPU's memory for quick local access.
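To see why the frame buffer itself is a rounding error next to texture data, here's a back-of-envelope sketch in C. The resolution, sample count, texture count, and texture sizes are illustrative assumptions for the sake of the arithmetic, not measured values from any particular card or game:

    #include <stdio.h>

    int main(void) {
        /* Assumed display: 1920x1080 at 32-bit (4-byte) color. */
        long long width = 1920, height = 1080, bytes_per_pixel = 4;
        long long frame_buffer = width * height * bytes_per_pixel; /* ~8 MB  */
        long long fsaa_4x = frame_buffer * 4;                      /* ~32 MB */

        /* Assumed texture set: 500 textures at 1024x1024, 4 bytes/texel. */
        long long textures = 500LL * 1024 * 1024 * 4;              /* ~2 GB  */

        printf("frame buffer:   %lld MB\n", frame_buffer >> 20);
        printf("4x FSAA buffer: %lld MB\n", fsaa_4x >> 20);
        printf("texture data:   %lld MB\n", textures >> 20);
        return 0;
    }

Even quadrupled by 4x FSAA, the frame buffer is around 32 MB, while a modest texture set runs into the gigabytes before any per-texture AA storage is added on top.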
Now, as to why you see so little memory usage: it's the limit imposed by the 32-bit NT kernel and its 2 GB application address space. Games are very careful with texture data because they don't want to exceed that limit, and this results in the game under-utilizing the GPU's memory. This will change once games and their engines are designed and compiled exclusively for 64-bit, with zero concern given to 32-bit compatibility or to conserving virtual address space.
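You can watch that address-space ceiling directly. Here's a minimal sketch that keeps reserving 64 MB blocks until the allocator refuses; on a 32-bit Windows process with the default 2 GB user address space it typically gives up somewhere under 2 GB (DLLs, fragmentation, and the process's own data share the same space), while a 64-bit build of the same code sails past it. The block size and the 8 GB cap are arbitrary choices for the demo:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const size_t block = 64u * 1024 * 1024;  /* 64 MB per allocation      */
        unsigned long long total = 0;            /* tally in 64 bits either way */
        void *p;

        /* Grab blocks until malloc fails; cap at 8 GB so a 64-bit
           build terminates. Leaked on purpose: this is a demo. */
        while (total < 8ULL * 1024 * 1024 * 1024 &&
               (p = malloc(block)) != NULL)
            total += block;

        printf("allocated ~%llu MB before stopping\n", total >> 20);
        return 0;
    }

Exact numbers will vary by OS and allocator (and on Linux, overcommit can make untouched allocations look larger than they are), but the 32-bit vs 64-bit gap is the point: engines written against the small address space budget their textures accordingly, no matter how much VRAM the card has.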