renz496 :
The situation is a bit similar if you look closely; the difference between the two is hardware compatibility. With tessellation you needed hardware compatible with shader model 5 before you could use the feature, but that's not the case with DX12. So in the tessellation case AMD did not get the benefit of "being ahead" with the feature in their earlier hardware. By the time tessellation became part of the API spec, Nvidia had somehow managed to create a solution that was "stronger" than AMD's. There are rumors about why Fermi was late to market: true, TSMC's 40nm yields were terrible for their big chip, but they say Nvidia was also late with Fermi because they had to add tessellation to their design, since Fermi's initial design did not include a tessellation engine. Back then many speculated that Nvidia would handle tessellation in software only instead of using dedicated hardware like AMD. Now look at async compute. AMD has had the hardware since the very first GCN. What if DX12 had specifically required new hardware, like DX11 did? Probably not even AMD's GCN 1.1 could use its ACE engines if that were the case. So this time, having had the hardware since the first generation of GCN is definitely giving them the advantage.
Good thing you mentioned Fermi, because it is a great example of what I meant by "DX12 has been there for nVidia and AMD all this time". To be honest, I can't remember how much time passed between DX12 and the 980 launch, but if nVidia delayed Fermi (taking what you said as truth) because of a single feature, then why not do the same with Maxwell's generation?
I do agree it doesn't really matter who supported it first, but rather who supports it better. Tessellation was a mini-war that nVidia won, and now the async battle is being won by AMD so far. Two very specific features that result in some form of rage-inducing fanboyism among the masses, haha.
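Just to make that "hardware compatibility" point concrete, here's a tiny sketch (C++ against the Direct3D 11 headers in the Windows SDK; nothing vendor-specific, purely illustrative) of the SM5 gate that tessellation sat behind, while DX12 itself runs on existing feature level 11_0 hardware and leaves things like async compute scheduling to the driver:

```cpp
// Minimal sketch: ask for feature level 11_0 first, and if the GPU only comes back
// at 10_x it is pre-SM5 hardware with no tessellation stage at all.
#include <d3d11.h>
#include <iostream>
#pragma comment(lib, "d3d11.lib")

int main()
{
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0,  // SM 5.0: hull/domain shaders, i.e. hardware tessellation
        D3D_FEATURE_LEVEL_10_1,  // SM 4.x: no tessellation stage
        D3D_FEATURE_LEVEL_10_0,
    };

    ID3D11Device*        device   = nullptr;
    ID3D11DeviceContext* context  = nullptr;
    D3D_FEATURE_LEVEL    achieved = {};

    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        requested, _countof(requested), D3D11_SDK_VERSION,
        &device, &achieved, &context);

    if (SUCCEEDED(hr))
    {
        std::cout << (achieved >= D3D_FEATURE_LEVEL_11_0
                          ? "SM5-class GPU: tessellation available\n"
                          : "Pre-SM5 GPU: no hardware tessellation\n");
        context->Release();
        device->Release();
    }
    return 0;
}
```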
renz496 :
With Project Cars the developer clearly mentioned the lack of interaction from AMD, and AMD didn't even try to deny it. That's not the case with Hitman. If something had gone wrong with devrel we would have heard something from the developer or from Nvidia themselves, like what happened with AoS (and in AoS the issue was mostly with Nvidia's PR team; the AoS dev already made a clarification on this). Other examples are Dragon Age 2 and the Tomb Raider reboot. In the case of DA2, Nvidia said on launch day that they did not have access to the game build until the game officially launched, so they couldn't make any performance optimizations before that. With TR they specifically mentioned that they did not get the final build with the final version of TressFX used in the game. Then there is DiRT Showdown. We know how bad Nvidia's performance was in that game, yet we never heard Nvidia make any statement or complaint about it. The game simply used tech that favors AMD's architecture, and all Nvidia did was keep working with the dev to improve their performance; in several driver releases back then I saw DiRT Showdown highlighted as getting more performance optimizations. I think something similar is happening with the new Hitman: the game simply favors AMD hardware more. The funny thing is that people accused PhysX of being the reason AMD's performance was poor in Project Cars, and yet the new Hitman also uses PhysX (same as Absolution).
I have no idea how those situations you mention came to be, but I do have anecdotal evidence with DiRT Showdown. It happens that I own all the DiRT games (1, 2, 3, Showdown and Rally) and I have seen the changes in the EGO engine first hand. Showdown was more of a tech showcase with little to do with content. They added (if memory serves right) new illumination techniques ("global illumination" or something like that) and improved the physics engine a lot; they even added some things specific to Intel graphics! Those features were also included in GRID 2. At the time I had the 670 and then moved to the 7970 GHz. To my own eyes, the game ran pretty much the same without the new features enabled. The 7970 GHz took a hit with those features enabled, but not as big a hit as the 670. All in all, the visual difference was negligible at best XD
In the end, you can say "X dev supports Y company because it performs better". Well, yes, that has always been the case and it will keep happening. It's the same discussion as MSAA vs TXAA and all the other flavors that one GPU handles better than the other. And then you have the more explicit efforts like GameWorks and whatever AMD has (which I think almost no one uses? The sound thingy, LiquidVR and some other stuff).
The only way to really compare "apples to apples" is to either:
1.- Disable all custom stuff and see how they stack up (very few sites do this).
2.- Enable each proprietary equivalent per game and see how they stack up (most sites do this).
And even then, some things can't be disabled since they could be "built in" to the engine. I would love to have a mix of the above and not only "ultra, high" settings. Taking a bit more time to analyze *why* a single game performs the way it does also adds a LOT of value to a benchmark, since it tells more than just "A behaves better than B".
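To give an idea of what I mean by analyzing *why*, something as simple as comparing frame-time logs from two runs already tells you a lot more than an average bar. A rough sketch (the file names and the one-value-per-line format are just assumptions for illustration, not any real capture tool's output):

```cpp
// Compare two frame-time logs (one run with vendor-specific effects off, one with them on)
// by average FPS and 99th-percentile frame time instead of a single average.
#include <algorithm>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Load one frame time in milliseconds per line.
static std::vector<double> loadFrameTimes(const std::string& path)
{
    std::vector<double> ms;
    std::ifstream in(path);
    for (std::string line; std::getline(in, line); )
        if (!line.empty())
            ms.push_back(std::stod(line));
    return ms;
}

// Print average FPS and 99th-percentile frame time for one run.
static void report(const std::string& label, std::vector<double> ms)
{
    if (ms.empty()) { std::cout << label << ": no data\n"; return; }
    double total = 0.0;
    for (double t : ms) total += t;
    std::sort(ms.begin(), ms.end());
    size_t idx = static_cast<size_t>(ms.size() * 0.99);
    if (idx >= ms.size()) idx = ms.size() - 1;
    std::cout << label << ": " << 1000.0 / (total / ms.size()) << " avg FPS, "
              << ms[idx] << " ms 99th-percentile frame time\n";
}

int main()
{
    // Hypothetical logs from the two kinds of runs described above.
    report("proprietary effects off", loadFrameTimes("frametimes_off.txt"));
    report("proprietary effects on ", loadFrameTimes("frametimes_on.txt"));
    return 0;
}
```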
I would believe the only "solution" for people who don't know is to teach them? Leaving fanboys and trolls aside, I do believe most young enthusiasts are ignorant of such things and only take reviews at face value: "A is better than B, because Z game says so!". There's always more to it than that, and we all know it.
renz496 :
To a certain extent, yes, they were biased towards AMD because of Mantle, so Nvidia has a hard time with async. But do Intel, Qualcomm, Imagination and ARM have async implementations that are on par with or exceed AMD's? Nvidia supports all DX12 features except async compute because MS did not dictate how async compute should really be done. That is the problem with DX12: MS wanted fast and wide adoption ASAP, so they did not strictly dictate how the implementation should be done. A Crytek dev mentioned that the problem with async compute is that there is no standard way of implementing it across GPU vendors, and since AMD hardware is in the Xbone and PS4, developers prefer how it is done on AMD hardware. And speaking of DX12 support, Polaris still doesn't incorporate all DX12 features either.
I don't disagree with that. AMD did have a head start with Mantle, and we all knew the consoles would have that effect at some point. Plus, no API specifies down to that level how things should be implemented; OGL has had that issue for *ages*. And yeah, the whole point of DX12 and Vulkan is for the dev to decide *how* to implement stuff. But like I have always said, that is why you have frameworks like UE, CryEngine, Unity and Source, among many others. They take away part of the burden and give you great tools to develop with.
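For anyone wondering what "no standard way of implementing async compute" actually means at the API level, here's a minimal D3D12 sketch (C++, Windows SDK; illustrative only, not any engine's real code). All the API gives you is a second queue of type COMPUTE and fences to order work; whether submissions on that queue really overlap with graphics is entirely up to each GPU's scheduler:

```cpp
// "Async compute" at the API level: a compute command queue next to the normal
// DIRECT (graphics) queue, with a fence to order work between them.
#include <d3d12.h>
#include <iostream>
#pragma comment(lib, "d3d12.lib")

int main()
{
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
    {
        std::cout << "No D3D12-capable device\n";
        return 1;
    }

    // Regular graphics queue: rendering command lists go here.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ID3D12CommandQueue* gfxQueue = nullptr;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Second queue of type COMPUTE: dispatches submitted here *may* overlap with
    // graphics work, depending on the hardware/driver scheduler.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ID3D12CommandQueue* computeQueue = nullptr;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Fence used to synchronize the two queues when graphics depends on compute results:
    //   computeQueue->Signal(fence, 1);   // after executing the compute command lists
    //   gfxQueue->Wait(fence, 1);         // graphics work queued after this waits for it
    ID3D12Fence* fence = nullptr;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    fence->Release();
    computeQueue->Release();
    gfxQueue->Release();
    device->Release();
    return 0;
}
```

Which is also why a game can "use async compute" on paper and still see very different gains (or none at all) on different hardware.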
In any case, thanks for the answer, renz.
Cheers!