JohnBonhamsGhost:
You mention one game actually developed using a strictly DX12-based engine; that is still considered "very little opportunity" for testing.
Even considering the one other title, Ashes of the Singularity, which seems to show a similar performance increase on AMD 200/300 series GPUs, it is still too early to state anything definite across the board, but those using AMD can hope the trend continues.
And what precisely does using a DX12-based engine mean for this argument? We ARE talking about DX12 performance here, not DX11 (and even in DX11 the two are pretty much matched in performance). Furthermore, aside from base DX12 boosting Radeon cards more than GeForce, DX12 Async Compute has so far massively boosted the performance of Radeon cards.
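For anyone unfamiliar with the term: "async compute" isn't marketing speak. In D3D12 it just means submitting compute work on a dedicated COMPUTE-type queue so it can overlap with work on the graphics (DIRECT) queue. A minimal sketch of what that looks like (my own code, assuming an already-created ID3D12Device and omitting error handling):

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create a dedicated compute queue alongside the usual DIRECT (graphics)
// queue. Command lists executed on this queue can run concurrently with
// graphics work on hardware that schedules the queues independently.
ComPtr<ID3D12CommandQueue> CreateAsyncComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE; // not DIRECT
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;
    desc.Flags    = D3D12_COMMAND_QUEUE_FLAG_NONE;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue)); // sketch: check the HRESULT in real code
    return queue;
}
```

GCN has dedicated hardware schedulers (the ACEs) precisely for overlapping queues like this, which is the common explanation for why Radeon cards gain so much from it while Maxwell-era GeForce cards reportedly gain little or nothing.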
This is also not the beginning of this long debate. Months before the Hitman and FC: Primal benchmarks showed up, it was already being argued that GeForce cards wouldn't see significant performance boosts because nvidia engineered its current architecture purely for DX11, whereas AMD's GCN architecture has been oriented towards parallel processing practically since its inception. These performance boosts were predicted some time ago.
Are 3 titles too few to get a definitive result? Yes, but going 3/3 for AMD, with better performance even in a game built with GAMEWORKS, is overwhelming evidence.
Additionally, the argument about increasing VRAM requirements still stands. The 970 will not have enough VRAM for high/ultra settings in triple-A titles in ~2 years, considering today's games already nearly max it out at 1080p.
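If you want to see what your own card reports, here's a throwaway snippet (my own code, standard DXGI calls) that prints each adapter's advertised dedicated VRAM:

```cpp
#include <dxgi.h>
#include <cwchar>
#pragma comment(lib, "dxgi.lib")

int main()
{
    // Enumerate every adapter and print the dedicated VRAM it reports.
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return 1;

    IDXGIAdapter* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC desc;
        adapter->GetDesc(&desc);
        wprintf(L"%ls: %llu MB dedicated VRAM\n", desc.Description,
                (unsigned long long)(desc.DedicatedVideoMemory / (1024 * 1024)));
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```

Keep in mind this only reports capacity; it says nothing about how that memory is partitioned or how fast each segment actually is.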
Finally, while this is not an argument against the GTX 970 itself, nvidia did deliberately screw up the performance of 600 and 700 series cards (including the original Titans) in The Witcher 3. Do you want to trust a company that gets people to spend $1,000 on a graphics card and then deliberately makes it perform about the same as a $350 one?