That's an interesting idea about using viewports: keeping the centre area of the screen as accurate as possible while reducing accuracy in the outer areas to cut gfx load.
The central viewport will no doubt depend on the user's head position. I wonder how far off we are from getting the VR headset to track the actual focal point of the eye? If this were possible it should provide big benefits all round:
1. The 'real' area of eye focus would be considerably smaller than NVidia's viewport, so gfx processing could be greatly reduced
2. The user would be able to keep his/her head fixed in one position while focussing the eyes on a different display area. Tracking would detect this and alter gfx fidelity/processing to suit the new focal point. This wouldn't be possible with NVidia's viewport method - the whole head would need to rotate in order for the new focal point (viewport) to be rendered in high res.
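To make point 2 concrete, here's a minimal sketch of the idea of gaze-based (rather than head-based) foveation: each screen region is shaded at a resolution scale that falls off with distance from the tracked gaze point, so when the eyes move while the head stays still, the full-resolution region follows the gaze. The function name, tile coordinates and radii below are all illustrative assumptions, not any real headset API.

```python
import math

def foveation_scale(gaze, tile_centre, inner_radius=0.1, outer_radius=0.4):
    """Return a render-resolution scale in [0.25, 1.0] for a screen tile.

    gaze and tile_centre are (x, y) in normalised screen coords [0, 1].
    Within inner_radius of the gaze point: full resolution (1.0).
    Beyond outer_radius: quarter resolution (0.25).
    In between: linear falloff. (All values are illustrative.)
    """
    d = math.dist(gaze, tile_centre)
    if d <= inner_radius:
        return 1.0
    if d >= outer_radius:
        return 0.25
    t = (d - inner_radius) / (outer_radius - inner_radius)
    return 1.0 - t * 0.75

# Eyes glance left while the head stays still: only tiles near the
# new gaze point get full resolution, the rest are shaded cheaply.
print(foveation_scale((0.2, 0.5), (0.2, 0.5)))  # 1.0 at the focal point
print(foveation_scale((0.2, 0.5), (0.8, 0.5)))  # 0.25 far in the periphery
```

With a fixed head-centred viewport, the high-resolution region would stay glued to the screen centre instead of following the eyes, which is exactly the limitation described above.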