One nice thing about Vega is that it doesn't artificially cut down its non-FP32 throughput: you get double-rate FP16 (Rapid Packed Math) alongside full FP32, plus INT8 support, and even its FP64 rate is better than what NVIDIA allows on consumer Pascal. That makes it awesome for compute-based tasks, and it's why it beats the 1080 and 1080 Ti in 3D rendering and other non-gaming workloads (and why these cards have been so popular for cryptocurrency mining).
NVIDIA deliberately hobbles FP16 and FP64 on its consumer hardware to push compute users toward the pricier professional cards, which I don't think is very nice.
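To put rough numbers on that, here's a sketch using the commonly reported per-architecture ratios: Vega 10 runs FP16 at 2x its FP32 rate and FP64 at 1/16, while consumer Pascal (GP102) runs FP16 at 1/64 and FP64 at 1/32. The FP32 base figures are assumptions derived from reference boost clocks, so treat these as ballpark values.

```python
# Rough FP16/FP64 throughput derived from FP32 rate and architecture ratios.
# The ratios and ~12.7 / ~11.3 TFLOPS FP32 baselines are assumptions
# (reference boost clocks), not measured numbers.

def rates(fp32_tflops, fp16_ratio, fp64_ratio):
    """Return theoretical throughput (TFLOPS) at each precision."""
    return {"fp32": fp32_tflops,
            "fp16": fp32_tflops * fp16_ratio,
            "fp64": fp32_tflops * fp64_ratio}

vega64    = rates(12.7, 2.0, 1 / 16)   # FP16 roughly doubles, FP64 ~0.8 TFLOPS
gtx1080ti = rates(11.3, 1 / 64, 1 / 32)  # FP16 collapses to a fraction of FP32

print(vega64)
print(gtx1080ti)
```

The gap is dramatic at FP16: Vega ends up with over a hundred times the half-precision throughput of the 1080 Ti on paper, which is exactly the kind of artificial segmentation I'm talking about.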
The only reason you don't see the Vega 64 beating the 1080 Ti in a lot more games comes down to software optimization.
Compared to the 1080 Ti, the Vega 64 has slightly more transistors (12.5 billion vs 12 billion) on a smaller process node (14 nm vs NVIDIA's 16 nm). It has higher compute throughput and more texture mapping units, but it falls behind in pixel fill rate.
The architectures of the two are:
Vega 64 : 4096 shaders : 256 texture mapping units : 64 render output units
1080 Ti : 3584 shaders : 224 texture mapping units : 88 render output units
(both have essentially the same memory bandwidth, ~484 GB/s)
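You can turn those unit counts into theoretical throughput with some back-of-the-envelope arithmetic: FP32 TFLOPS is shaders x 2 (one fused multiply-add per clock) x clock, and fill rates are just units x clock. The boost clocks below (~1546 MHz for Vega 64, ~1582 MHz for the 1080 Ti) are assumptions based on reference cards.

```python
# Back-of-the-envelope theoretical throughput from the unit counts above.
# Clocks are assumed reference boost clocks in MHz, not guaranteed figures.

def fp32_tflops(shaders, clock_mhz):
    # 2 FLOPs per shader per clock (fused multiply-add), result in TFLOPS
    return shaders * 2 * clock_mhz / 1e6

def fill_rate(units, clock_mhz):
    # one texel/pixel per unit per clock; result in Gtexels/s or Gpixels/s
    return units * clock_mhz / 1e3

vega = {"shaders": 4096, "tmus": 256, "rops": 64, "clock": 1546}
ti   = {"shaders": 3584, "tmus": 224, "rops": 88, "clock": 1582}

for name, gpu in (("Vega 64", vega), ("1080 Ti", ti)):
    print(name,
          f"{fp32_tflops(gpu['shaders'], gpu['clock']):.1f} TFLOPS FP32,",
          f"{fill_rate(gpu['tmus'], gpu['clock']):.0f} GTex/s,",
          f"{fill_rate(gpu['rops'], gpu['clock']):.0f} GPix/s")
```

Running those numbers shows exactly the split described above: Vega 64 wins on raw FP32 compute and texture fill, while the 1080 Ti's 88 ROPs give it a clear lead in pixel fill rate.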
So each card has strengths and weaknesses, and a game engine optimized for one may leave the other at a disadvantage. Overall, though, the Vega 64 looks like the stronger card in more areas, which isn't bad given its much lower MSRP.
It's simply a fact of life that NVIDIA has been very good at getting developers to optimize for its hardware. And I think it's fair to say NVIDIA tuned its hardware for older APIs, whereas AMD designed Vega looking ahead to Vulkan and DX12.
AMD is going to have to weather a rocky period while it waits for more game engines to be updated and optimized for the new APIs. But you can't optimize for hardware that doesn't exist yet, so this was probably expected.
The good news is that this process has already started.