AMD Radeon R9 300 Series MegaThread: FAQ and Resources

Nano has power limiters, which might explain the seemingly more efficient architecture. Guru3D did increase the power limit, overclocked the card, and then saw only a 30-watt increase in power usage. So that's not too bad, but it's still not in GTX 980 territory in terms of power efficiency.
http://www.guru3d.com/articles_pages/amd_radeon_r9_nano_review,36.html

In terms of the "drama", I'm really looking forward to equal treatment for all those who were getting down on "evil" Nvidia for suggesting that Oxide deactivate asynchronous compute in their demo. I'd say AMD's attempt to censor the press is quite a bit more dastardly a tactic. I'm sure we'll be seeing plenty of outrage on the forums over that. Not.
 
I hardly see how choosing whom to give free samples to equates to censorship. Any site is free to buy and review the card; why should they get the kit for free?

Also @17seconds, that power graph on Guru3D is a single fixed measurement of maximum power draw at 100% usage. All that proves is that the Nano can draw more power at its maximum (which makes sense given it's a larger GPU). Look at the comparison of the Nano to the 970 Mini in the Tom's review, which compares on a game-by-game basis, taking frame rate into account. The Nano matches the 970 in perf per watt. That is a massive improvement.
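For anyone who wants to check this themselves, here's a minimal sketch of a game-by-game perf-per-watt comparison. The FPS and wattage numbers below are placeholders, not figures from the Tom's review:

```python
# Hypothetical game-by-game perf-per-watt comparison.
# All numbers below are illustrative placeholders, not real measurements.
def perf_per_watt(avg_fps, avg_power_w):
    """Frames per second delivered per watt drawn."""
    return avg_fps / avg_power_w

games = {
    "Game A": {"nano": (60.0, 175.0), "970_mini": (58.0, 170.0)},
    "Game B": {"nano": (45.0, 180.0), "970_mini": (44.0, 168.0)},
}

for game, cards in games.items():
    nano = perf_per_watt(*cards["nano"])
    g970 = perf_per_watt(*cards["970_mini"])
    print(f"{game}: Nano {nano:.3f} fps/W vs 970 Mini {g970:.3f} fps/W")
```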

@juan, the performance of the Nano is within 10% of the Fury X and 5% of the Fury at higher resolutions, as shown in the review. It isn't 30% behind as you say; the very worst case is probably 20%. Read the review.
 
Has anyone figured out why the Fury X does so terribly at 1080p in comparison to a 980 Ti? Is it the HBM memory speeds? Poor optimization, or something else entirely? Was it fixed in a driver update?
 
CPU overhead? There's a simple test to see whether the CPU is the bottleneck: when you go down in resolution, your frame rate should go up, since there are fewer pixels for the GPU to crunch. If going lower does not increase the frame rate, then something else is bottlenecking. In general, going up in resolution makes a game more GPU-bound and less CPU-bound.
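A minimal sketch of that test, assuming you can record average FPS at two resolutions (the function name and the 5% threshold are made up for illustration):

```python
# Rough CPU-bottleneck heuristic: drop the resolution and check whether
# the frame rate scales up. The 5% threshold is an illustrative guess.
def looks_cpu_bound(fps_high_res, fps_low_res, threshold=0.05):
    """True if lowering the resolution barely helped, hinting at a CPU or driver cap."""
    gain = (fps_low_res - fps_high_res) / fps_high_res
    return gain < threshold

print(looks_cpu_bound(fps_high_res=70.0, fps_low_res=72.0))  # True: likely CPU-bound
print(looks_cpu_bound(fps_high_res=45.0, fps_low_res=80.0))  # False: GPU-bound
```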
 
Yes, the crappy performance of Fiji below 1440p has been attributed to CPU driver overhead. It seems like AMD is putting all its eggs in the DX12 basket, given how drastically the performance changes.

Also, I do remember from the Fury X reviews that Asus was able to draw 40W *less* than the other Fury X cards, so Fiji, as a GPU, can still be optimized further in terms of build. I don't have any clue, or even an intuition, about what Asus could have done, but I'd say some of what they did was put to good use in the Nano.

I say that because when Asus or Sapphire bring out their Nano cards, they *might* improve the efficiency a tad more over the reference design.

Cheers!
 
Actually, they could improve their situation IF they actually used all the DX11 features. But AMD dismissed things like DCLs (deferred command lists) as "broken" and instead went their own way to create Mantle. And I've heard interesting things regarding async compute. Some programmers who actually work with DirectX and OpenGL say that async compute is not really part of DX12, at least not in the way AMD's slides present it. They say what Microsoft means by async compute is not the same as what AMD claims in its slides. Hence they speculate that the way async compute is used in AoS can only bring a performance increase on AMD GCN. So the async compute used in AoS is pretty much exclusive to AMD hardware right now and not really part of DX12.
 
An AMD card with lower wattage and the same or better performance? Could make for some interesting overclocks in the future. So I guess it all comes down to DX12 drivers, whenever those come out. Well, I mean, AMD did scrap Mantle in favor of Microsoft's DX12 bid. A calculated move, maybe, especially considering AMD provided the tech for the Xbox One, which is going to get DX12. So if trends show anything, AMD will probably have a significant edge over Nvidia in DX12? Though I doubt it'll last. Man, now I'm starting to regret buying a 980 Ti.
 


That's pretty interesting, do you have any sources for that? Not trying to be a jerk, I'm genuinely interested.
 


It is 30% behind according to AMD's Hot Chips slides, but there they used a more aggressive frequency reduction to cut power consumption by ~60%. Tom's and AnandTech measure the Nano about 10% behind the Fury X, but with only 30% lower power consumption. Again, this efficiency gain is not a technical merit of an improved microarchitecture; it is just the result of ordinary frequency-scaling silicon laws.

Reducing frequency by 10% reduces power by between 25% and 35%, depending on the parameters of the process node used. GCN continues to be an inefficient architecture. AMD is relying on HBM to remain competitive, but that is a one-time trick.
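To make that scaling concrete: dynamic power goes roughly as P ∝ f·V², and since voltage is normally lowered along with frequency, power falls roughly with the cube of frequency. A quick sketch (the cubic rule of thumb and the exponent range are my assumption, not figures from the slides):

```python
# Rule-of-thumb dynamic power scaling: P ~ f * V^2, and V tends to track f,
# so P scales roughly as f**3. The exponent range below is illustrative.
def scaled_power(freq_ratio, exponent=3.0):
    """Relative power remaining after scaling frequency by freq_ratio."""
    return freq_ratio ** exponent

for exp in (2.5, 3.0, 3.5):  # plausible range depending on the process node
    remaining = scaled_power(0.90, exp)  # a 10% frequency reduction
    print(f"exponent {exp}: power drops ~{(1 - remaining) * 100:.0f}%")
# Prints roughly 23%, 27%, and 31% -- in line with the 25-35% figure above.
```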
 


The same source (Fudzilla) claims in the same article that Intel is interested in acquiring AMD. Before that, they claimed that Samsung was going to acquire AMD; before that it was Apple, before that Qualcomm, and before that a Chinese company...

And tomorrow, the "rumor" will be associated with another company.
 
GCN does both synchronous and asynchronous execution, but Maxwell does synchronous only, so it's obvious GCN will have lower efficiency compared to Maxwell. It'll also perform better than Maxwell in DX12, though.
 


That really depends on how game developers actually write their games. Right now all we have is a single benchmark for one async-heavy game; that doesn't mean all developers of demanding games will go that route.
 

The Ashes dev has already said they don't use that much async, and the game is not the best example of it.
 


Not having DX12 on the most popular OS in use (Windows 7) is going to put a damper on that. Even those who do use DX12 will very likely provide a DX11 fallback.
 


Either way, a single benchmark is nothing to go by at this point in terms of calling how developers are going to optimize their games under DX12. More than likely, the cards that will truly take full advantage of DX12 will come from neither AMD's nor Nvidia's current series anyway, since both are missing features.
 


Thanks for the Phoronix link. Hope Vulkan is a hit.
 


I think SteamOS might help make it more important than OpenGL 4 was. If Steam Machines take off, there's a lot more incentive for devs to work in Vulkan over DX12, as there will be a sizable market to address. Currently, Linux gaming is a bit of an afterthought.
 


Agreed. Hope it is the death of the DX monopoly. And hopefully it yields visual and performance improvements, too.
 
Linux gaming can succeed if AMD puts more effort into it. Like it or not, if you really want to play games on Linux, your best choice right now is an Nvidia card. Look at this benchmark:

http://www.phoronix.com/scan.php?page=article&item=1080p-b-value&num=1

Outside of synthetic benchmarks, we have the GTX 950 ending up faster than the Fury!!!

And from the talk I've seen in the Linux community, Vulkan might only be supported on GCN 1.2 in Linux. If that's true, then most GCN-based cards might not get a Vulkan driver on Linux.