AMD Radeon R9 300 Series MegaThread: FAQ and Resources



Nah, I have a 980 Ti G1 and I only play at 1080p; even then, with max settings, I don't get 60 frames.
 

Sorry for my English, jiimysmitty, but I didn't understand. So would undervolting (lower consumption) do the trick for two months? I think that's how long it will take me to get the money for a new PSU.

 

CB.de are the only ones I've seen testing a 390X, and it MATCHES THE 980 Ti!! Yes, in DX12 with async on, even an OC'd 290X hangs with the great 980 Ti, lol.

 


AMD finally gets to use the hardware that usually sits idle in DX11. Nvidia most likely doesn't have similar hardware in their GPUs, hence they see no benefit at all from async compute being enabled. The good thing is that DX12 does not strictly need new hardware like DX11 did; otherwise we would see a repeat of the AMD HD 3000 and 4000 series, where the tessellation hardware inside those GPUs was left unused because DirectX strictly required new hardware before you could access the tessellation function. This is also part of the reason GCN is more power hungry: all that extra hardware draws power while doing nothing to improve performance for AMD GPUs under DX11. Still, this improvement will not affect all DX12-based games, because a game needs to specifically use async compute. Like some other DX12 features, async compute is not mandatory for DX12 to work; if it were, Nvidia cards would not be able to run the DX12 code path at all.
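To make the async compute part concrete, here is a minimal, hypothetical D3D12 sketch (illustrative only, not from any linked article): the application creates a dedicated compute queue next to the usual direct queue, and whether the two actually run concurrently is up to the hardware and driver.

```cpp
// Minimal sketch: a separate compute queue in D3D12 so compute work can
// overlap graphics work on hardware that supports it (e.g. GCN's ACEs).
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // The usual "direct" queue handles graphics (and can also run compute).
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // A dedicated compute queue: work submitted here MAY run asynchronously
    // alongside the direct queue, but DX12 does not guarantee concurrency --
    // on hardware without async engines the driver can simply serialize it.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```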
 
The AMD driver defect discussed in the Guru3D article is a concern. Apparently it allows AMD cards to show a higher FPS than the game engine allows (over 60 FPS with VSync enabled). It could be an honest mistake, but it wouldn't be the first time AMD has been caught artificially boosting its FPS numbers through driver tricks.
 


nVidia has stated that async compute is still not enabled on their GPUs, but that the 9x0 series is capable of it.



True for both sides though. nVidia has done it too.
 
I'm excited to see what will happen with this new DirectX 12 API moving forward. AMD seems positioned to compete on performance with Nvidia, given its apparent head start here. It'll be an interesting race!
 
The key thing about AoS is that it's an AMD sponsored title, with a long history of AMD development dating back to the Star Swarm demo. This is a best case scenario for AMD. It would be naive to assume that most games will be so favorable, particularly given the sheer number of Nvidia sponsored titles flooding the market. Async Compute is an advantage in the same way that Mantle was an advantage, remember that?
 


I don't disagree, but what you wrote brings up a few things. First, the first of the articles cited above did a decent job explaining why Ashes of the Singularity is likely not biased in favor of AMD. That obviously doesn't rule out the possibility, but a claim of bias should address the article's points. Second, Mantle was different: that was a software-based benefit delivered in drivers, while asynchronous compute is a hardware-level difference. Neither of these things is a huge deal, because the GPU market is constantly in flux and Ashes of the Singularity is just a single data point, but they are interesting things to consider until we have more information in the coming year or so. The truth is that no one really knows, which is why I think it is such an interesting time.
 


I haven't bothered reading the articles and neither has AMD it seems! :lol:

AMD has partnered with Stardock in association with Oxide to bring gamers Ashes of the Singularity – Benchmark 2.0, the first benchmark to release with DirectX® 12 benchmarking capabilities such as Asynchronous Compute, multi-GPU and multi-threaded command buffer re-ordering. Radeon Software Crimson Edition 16.2 is optimized to support this exciting new release.

http://www.guru3d.com/files-details/amd-radeon-software-crimson-16-2-driver-download.html
 


Ha, yeah. I think the article said that the company partnered with both AMD and Nvidia.
 
They work with both companies, but the game is sponsored by AMD. This async compute stuff is a bit complicated to discuss; I think even game developers get confused by it. Remember that the AoS developers themselves once mentioned they may have understood it the wrong way? Some people are actually debating whether AMD's async compute, which uses their very own hardware (the ACE engines), is really async compute as defined by DirectX 12. Because if such hardware were required by DX12, why does it only exist in Radeon hardware? Why does Nvidia claim support for async compute when there is no ACE-like hardware in their GPUs? This is unlike FL12_1, which is part of DX12 but which AMD did not want to support from the very beginning; they even told the public FL12_1 was not important because no such hardware exists in consoles. It will be interesting to see what happens in the future. What I can see is that, depending on sponsorship, different games will feature different DX12 features. Companies like Codemasters and Avalanche (Just Cause 3) are already talking about supporting FL12_1 features. I heard Codemasters will patch F1 2015 with DX12 in the future, and that will most likely use FL12_1.
http://www.dsogaming.com/news/codemasters-ego-engine-4-0-supports-dx12-raster-ordered-views-conservative-rasterization/
http://www.dsogaming.com/news/just-cause-3-engine-already-capable-of-supporting-dx12-pc-exclusive-dx12-features-revealed/
http://www.dsogaming.com/news/f1-2015-will-receive-a-directx-12-patch-in-the-future/
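For what "supporting FL12_1" boils down to in code, here is a minimal, hypothetical D3D12 sketch (illustrative only) that queries the two optional features behind FL12_1, conservative rasterization and raster-ordered views, via CheckFeatureSupport:

```cpp
// Minimal sketch: query the optional DX12 features behind FL12_1
// (conservative rasterization and raster-ordered views) on the current device.
#include <d3d12.h>
#include <cstdio>

void CheckFL12_1Features(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options))))
    {
        // TIER_NOT_SUPPORTED (0) on GPUs below feature level 12_1.
        std::printf("Conservative rasterization tier: %d\n",
                    static_cast<int>(options.ConservativeRasterizationTier));
        std::printf("Raster-ordered views supported: %s\n",
                    options.ROVsSupported ? "yes" : "no");
    }
}
```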
 


Thanks for the link. It turns out AMD is using the Microsoft-recommended rendering model and method, DWM composition. FCAT, on the other hand, assumes the GPU is using DirectFlip, which used to be the norm earlier (DX11). So it's actually Guru3D and FCAT (and Nvidia) that aren't up to date. On top of that, Guru3D posts this -

"hours before the release of this article we got word back from AMD. They have confirmed our findings. Radeon Software 16.1 / 16.2 does not support a DX12 feature called DirectFlip, which is mandatory and the solve to this specific situation."

Well, they are right that it's the solution to this specific situation, but as for the rest :pfff:
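For context, the DWM-versus-DirectFlip distinction starts with how the swap chain is created. Here is a minimal, hypothetical DXGI sketch (illustrative only, not AMD's driver code) of the flip-model swap chain that makes DirectFlip possible; when its conditions are met, DWM can flip the app's buffer straight to the screen instead of compositing an extra copy.

```cpp
// Minimal sketch: a flip-model swap chain. With the flip model, DWM can hand
// the app's buffer directly to the display (DirectFlip) when conditions allow.
#include <dxgi1_4.h>
#include <d3d12.h>

IDXGISwapChain1* CreateFlipSwapChain(IDXGIFactory2* factory,
                                     ID3D12CommandQueue* queue, // D3D12 presents via the queue
                                     HWND hwnd)
{
    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.BufferCount = 2;                              // double buffering
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;   // flip model, required for D3D12
    desc.SampleDesc.Count = 1;

    IDXGISwapChain1* swapChain = nullptr;
    factory->CreateSwapChainForHwnd(queue, hwnd, &desc, nullptr, nullptr, &swapChain);
    return swapChain;
}
```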
 
And AMD comes out swinging again

http://www.tweaktown.com/news/50737/amd-beats-nvidia-early-doom-benchmarks-dominating-4k/index.html

"NVIDIA's GeForce GTX 980 Ti fall behind the R9 290 - which is a strange thing to see indeed."

Lol. On the bright side, the 980TI gets 1 more FPS than an R9 280X.

"Now remember, these are only early benchmarks - and things will change considerably for the launch version of Doom. But will the tables turn this much for NVIDIA? Because AMD is giving out quite the ass whooping right now." :lol:
 
The devs already mentioned that the current game build only has console optimization, even in the PC version, so seeing GCN-based cards doing well on PC is really not that strange. Nvidia is very aggressive with their OpenGL optimization.
 
id Software (now part of Bethesda) has a history of being straight-up buddy-buddy with Nvidia, and even almost anti-AMD. It's not likely the final game will favor AMD; that would be a first for any game from either of those companies.

John Carmack: Let me caution this by saying that this is not necessarily a benchmarked result. We've had closer relationships with Nvidia over the years, and my systems have had Nvidia cards in them for generations. We have more personal ties with Nvidia. As I understand it, ATI/AMD cards are winning a lot of the benchmarks right now for when you straight-out make synthetic benchmarks for things like that, but our games do get more hands-on polish time on the Nvidia side of things.

Nvidia does have a stronger dev-relations team. I can always drop an email for an obscure question. So it's more of a socio-cultural decision there rather than a raw "Which hardware is better." Although that does feed back into it, when you've got the dev-relations team that is deeply intertwined with the development studio. That tends to make your hardware, in some cases, come out better than what it truly is, because it's got more of the software side behind it.
http://www.pcgamer.com/id-softwares-john-carmack-picks-a-side-in-the-nvidiaamd-gpu-war/#!
 


Not only are they "only early benchmarks", they are alpha benchmarks with drivers not yet optimized for the game. Something is very wrong with the optimization if an R9 280X, a five-year-old GPU based on the HD 7970, is getting almost the same performance as a GTX 980 Ti.



They may have, but just before AMD bought ATI, ATI was pretty big in dev relations. That all died. Now they are doing better, but nVidia is still well ahead of them there and probably will be, since GPUs are all nVidia does.
 


That article has interesting wording. They talk about the OC edition of the 980 Ti being the only one that beats the Fury X at 1440p, but the OC 980 Ti is the one people can and will actually buy now. The one used in that benchmark is not even the fastest one; my Asus Strix is faster than the G1 Gaming.

Either way, 4K is a back-and-forth battle and is interesting to watch, but neither GPU is currently good for 4K, and the majority of PC gamers are still on 1080p, with more shifting to 1440p.

Maybe Polaris and Pascal will give us our first 60 FPS 4K-capable GPUs.

And I am pretty sure nVidia is OK. They do have almost 80% market share. The funny thing is that back in 2011, AMD had more than 50% of the AIB market share.