DX12, NVidia and ASync Compute...
Most people keep concluding that NVidia is way behind AMD in this area; what you need to understand is that NVidia's new Pascal architecture simply handles async compute differently than AMD's hardware does.
AotS is not optimized for NVidia's new architecture. The developers would have to rewrite some of the code, so NVidia can't magically fix this on their end. They may work with the game team, but it's uncertain how much can be done at this point.
We do NOT have a clear understanding of AMD vs NVidia for DX12 yet, nor are we likely to for quite some time. The game engine, the game developer, and NVidia's driver team are all instrumental in making this work.
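For context, here's roughly what "async compute" means at the DX12 API level. This is a minimal sketch of my own, not code from AotS or any driver: the application creates a dedicated compute queue alongside the usual graphics queue, and how much work on the two queues actually overlaps is up to the GPU and its driver, which is exactly where AMD and NVidia differ. Adapter selection and error handling are stripped down, and it needs to link against d3d12.lib.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a default D3D12 device (adapter selection omitted for brevity).
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::fprintf(stderr, "No D3D12-capable device available\n");
        return 1;
    }

    // Graphics ("direct") queue: accepts draw, copy and compute commands.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> graphicsQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // Separate compute queue: work submitted here *may* run concurrently
    // with the graphics queue; how much truly overlaps depends on the GPU
    // and driver, which is where the AMD-vs-NVidia differences show up.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));

    std::printf("Created one graphics queue and one compute queue\n");
    return 0;
}
```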
I think most people expected DX12 to arrive and all of a sudden new games would run far better. That's NOT the case, as we've seen with games like Tomb Raider that just tack some DX12 elements onto a DX11 game.
There's a learning curve, it takes years to make a game properly, and much of DX12 isn't even being coded for properly yet.
I don't expect to get a good indication of how this is panning out until sometime in 2017.
There is also a lot more to DX12 than just async compute. What concerns me most for AMD (because I want them to succeed) is whether they put in the same hardware optimization for VR that NVidia did. NVidia claims up to a 60% VR performance gain when a game uses their plugin (new GTX 1080/1070 only).
This new hardware tweak has benefits for multiple monitors, but from what I understand it works so well for VR because both images (left eye and right eye) are very similar; essentially they are the same image with some elements offset. The hardware takes advantage of that overlap (more than 60% may even be possible, but I'm guessing).
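To make the "same image with some elements offset" point concrete, here's a toy sketch of my own (not NVidia's plugin or API): the two eye views come from a single head pose and differ only by roughly half the interpupillary distance, which is why so much of the rendering work can be shared between them. The IPD and positions below are made-up illustration numbers.

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

int main()
{
    // One head pose, two eye positions: everything else about the two views
    // (orientation, field of view, the scene itself) is identical.
    const float ipd = 0.064f;          // ~64 mm, a typical IPD (assumption)
    const Vec3 head = {0.0f, 1.7f, 0.0f}; // head position in metres

    Vec3 leftEye  = {head.x - ipd * 0.5f, head.y, head.z};
    Vec3 rightEye = {head.x + ipd * 0.5f, head.y, head.z};

    std::printf("left eye : (%.3f, %.3f, %.3f)\n", leftEye.x,  leftEye.y,  leftEye.z);
    std::printf("right eye: (%.3f, %.3f, %.3f)\n", rightEye.x, rightEye.y, rightEye.z);

    // A single-pass stereo path submits the scene once and projects it to
    // both eye viewports, instead of running two nearly identical full passes.
    return 0;
}
```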
AMD's GPU designs so far seem unable to reach the same frequencies as NVidia's (NVidia claims they spent a lot of engineering effort optimizing path lengths, which is critical for stability at high frequencies). AMD can still make products with good VALUE; however, if that's true it will be very difficult to compete with NVidia on the high end, because yields get worse as die size increases and there is a maximum feasible die size anyway.
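On the yield point, the standard textbook Poisson yield model (yield ≈ e^(−defect density × die area)) shows why big dies are so much harder to sell cheaply. The defect density below is an arbitrary assumption for illustration, not a real foundry number.

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    const double defectsPerCm2 = 0.10;            // assumed defect density
    const double dieAreasCm2[] = {1.0, 3.0, 6.0}; // small, mid, large dies

    for (double area : dieAreasCm2) {
        // Poisson yield model: fraction of dies with zero killer defects.
        double yield = std::exp(-defectsPerCm2 * area);
        std::printf("die area %.1f cm^2 -> estimated yield %.1f%%\n",
                    area, yield * 100.0);
    }
    return 0;
}
```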
In the short term it looks like AMD may have the best $150 to $300 offerings, and I look forward to seeing if that is true. I'm especially interested to see what their best laptop APU ends up being capable of with a Zen/Polaris, 14nm FinFET design, because Intel is a bit expensive and AMD's current chips run too hot to optimize for mobile.
I'm guessing something up to the performance of a modern quad-core + GTX 960 desktop might be possible in a mobile APU in 2017.