Virtual_Singularity:
It's disappointing to see Nvidia and the paid press insinuating that the 1080 and 1070 do true asynchronous computing. All one needs to do is look at the architecture of the cards to see there are no ACEs (Asynchronous Compute Engines) incorporated. Pascal, like Maxwell and other previous-generation Nvidia cards, uses preemption. For Nvidia to incorporate ACEs they'd have had to radically change the architecture their engineers have come to rely on.
Faster clock rates and other benefits brought about by the 16 nm FinFET process no doubt do much to narrow the gap between the preemption Nvidia uses and the ACEs AMD cards use. However, it's less than honorable (though not at all surprising) of Nvidia to practically lie about this issue just before their new cards are released...
You are CONFUSING terms, and clearly have a limited understanding of GPU architecture. A lack of ACEs does not mean a lack of asynchronous compute hardware.
Both cards support asynchronous compute at the hardware level; however, the implementations differ slightly. Here's a link:
http://www.eteknix.com/pascal-gtx-1080-async-compute-explored/
Testing has already shown that enabling asynchronous compute on Pascal improves FPS (it helps reduce STUTTER as well). Yes, it's likely AMD's method is somewhat better, but that is only one part of the story. There are other software and hardware changes that can tip the balance.
Do you even know what "asynchronous compute" means?
It simply means that different kinds of work can be processed in parallel. In this context we're referring to Graphics AND Compute workloads (for example, rendering and PhysX). When the work is synchronous, it runs Graphics first, then Compute second.
The problem with synchronous execution is that if a batch of work is too large, it takes too long to process and can cause issues like STUTTER. If the batches are too small, we end up wasting precious processing cycles.
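To make the synchronous-versus-asynchronous distinction concrete, here's a minimal CUDA sketch (my own illustration, using CUDA streams as a stand-in for a game engine's graphics and compute queues; the kernel names and workloads are made up). Two independent kernels submitted on separate streams are free to overlap; the synchronous alternative would be to launch one, wait for it, then launch the other.

// Minimal sketch: two independent workloads on separate CUDA streams,
// so the scheduler MAY overlap them. "fakeGraphics" and "fakeCompute"
// are hypothetical stand-ins, not real driver work.
#include <cuda_runtime.h>

__global__ void fakeGraphics(float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = out[i] * 0.5f + 1.0f;   // pretend shading work
}

__global__ void fakeCompute(float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = sinf(out[i]);           // pretend physics work
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    cudaStream_t gfx, cmp;
    cudaStreamCreate(&gfx);
    cudaStreamCreate(&cmp);

    // Asynchronous: each workload goes on its own stream, so the GPU
    // is free to run them concurrently if resources allow.
    fakeGraphics<<<n / 256, 256, 0, gfx>>>(a, n);
    fakeCompute <<<n / 256, 256, 0, cmp>>>(b, n);

    // Synchronous would be: launch fakeGraphics, cudaDeviceSynchronize(),
    // THEN launch fakeCompute -- Graphics first, Compute second.
    cudaDeviceSynchronize();

    cudaStreamDestroy(gfx);
    cudaStreamDestroy(cmp);
    cudaFree(a);
    cudaFree(b);
    return 0;
}

Breaking one big batch into several smaller launches on a stream follows the same pattern, and is one way to avoid the too-large-batch stall described above.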
Preemption is used to halt and later resume less essential work so that high-priority tasks can run first. A classic example is asynchronous timewarp in VR, which must complete on time to prevent lag when you move your head (lag that can make you feel sick).
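For a rough flavor of how prioritization looks from the software side, here's a sketch using CUDA stream priorities (an assumption-laden illustration: this is NOT the driver-level preemption that asynchronous timewarp actually relies on, and both kernels are hypothetical). Pending thread blocks from the high-priority stream get scheduled ahead of blocks from the low-priority one.

// Rough analogue of priority scheduling via CUDA stream priorities.
// "urgentTimewarp" is a made-up stand-in for latency-critical work.
#include <cuda_runtime.h>

__global__ void longBackgroundWork(float* buf, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        for (int k = 0; k < 1000; ++k) buf[i] += 0.001f;  // busywork
}

__global__ void urgentTimewarp(float* frame, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) frame[i] *= 1.01f;  // stand-in for reprojection math
}

int main() {
    // In CUDA, a numerically LOWER value means HIGHER priority.
    int lowest, highest;
    cudaDeviceGetStreamPriorityRange(&lowest, &highest);

    cudaStream_t bg, urgent;
    cudaStreamCreateWithPriority(&bg,     cudaStreamNonBlocking, lowest);
    cudaStreamCreateWithPriority(&urgent, cudaStreamNonBlocking, highest);

    const int n = 1 << 20;
    float *work, *frame;
    cudaMalloc(&work,  n * sizeof(float));
    cudaMalloc(&frame, n * sizeof(float));

    // Background work is queued first, but pending blocks from the
    // high-priority stream are scheduled ahead of low-priority ones
    // as execution resources free up.
    longBackgroundWork<<<n / 256, 256, 0, bg>>>(work, n);
    urgentTimewarp    <<<n / 256, 256, 0, urgent>>>(frame, n);

    cudaDeviceSynchronize();
    cudaFree(work);
    cudaFree(frame);
    cudaStreamDestroy(bg);
    cudaStreamDestroy(urgent);
    return 0;
}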
Summary:
GPUs are complicated beasts. You can glean some cursory information from reading a few reviews, but most people fall into the trap of thinking they have a good grasp of the situation when they do not.
Let's also not forget performance on existing and older games, and other things like Nvidia's VR plugins. When I buy a video card I look at the BIG PICTURE.
I'm rooting for AMD. I've even recommended the RX-400 series (when prices are reasonable), but I do so with a good understanding of the hardware and software. I'm buying a GTX 1080; if AMD had something of similar performance I'd give it serious thought, but their financial situation and relatively poor software support in recent times sway me to Nvidia. If they can improve and maintain that, I'll reconsider when I make my next card purchase (in maybe five years).
(On average the RX-480 matches the R9-390, but it swings from about 80% to 120% of the R9-390's performance depending on the game. Basically, AMD made architectural changes that are meant to HELP future games but HURT older ones. AMD seems to have a history of making changes that benefit the future at the expense of the present.)
Anyway, you may wish to do more reading if you plan to comment in the future.