AMD RX 400 series (Polaris) MegaThread! FAQ & Resources



The thing is, as some other people have pointed out on other forums, Doom doesn't have a built-in benchmark, so it's hard to compare existing reviewer numbers to AMD's numbers since they most likely ran different parts of the game. Take one of Hexus's most recent benches, for example: in their test even a Fury X averages around 60FPS at 4K ultra settings:

http://hexus.net/tech/reviews/graphics/99664-asus-geforce-gtx-1080-rog-strix-gaming-a8g/?page=7

You know a well-coded and optimised title when you see one. We've jacked the settings all the way up to ultra and run the game via the Vulkan API. The end result is super-smooth performance at 4K. You really don't need a card as powerful as this for Doom.
 
Love the AMD Chill technology. I have a 60Hz monitor, so I can cap Chill at 60FPS and reduce the GPU load by not having it produce unneeded frames. Temps are as much as 10-15 degrees Celsius down at peak load. The lag when switching is not bad; it's very short and quick. The only downside is that The Witcher is the only game I have that works with it.
 


I thought we could already do this for years with V-Sync enabled? Or, if you don't like V-Sync, just enable a frame rate cap, which has also been available for years in third-party tools like RivaTuner, if you simply don't want your frame rate to ever exceed 60FPS. Chill should have something extra to it. For example, if the game runs at 90FPS while you are idle or sitting still, Chill will lower your frame rate, say down to 60FPS; when you start moving again, the FPS ramps back up to 90FPS. In other words, the driver tries to control your FPS up and down intelligently based on your in-game movement. Honestly, I don't know how this will affect in-game smoothness, especially in competitive shooters like Counter-Strike.
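Something like this, I imagine (a minimal C++ sketch of the idea as I understand it, not AMD's actual algorithm; poll_input() and render_frame() are made-up stubs standing in for the real engine/driver hooks):

```cpp
#include <chrono>
#include <thread>

using clk = std::chrono::steady_clock;
static bool poll_input()   { return false; }   // stub: pretend there's no input activity
static void render_frame() {}                  // stub: the actual GPU work would go here

int main() {
    const double fps_idle = 60.0, fps_active = 90.0;
    auto last_input = clk::now();

    for (int frame = 0; frame < 600; ++frame) {
        if (poll_input()) last_input = clk::now();

        // No input for half a second? Drop the target; ramp back up on movement.
        bool idle = (clk::now() - last_input) > std::chrono::milliseconds(500);
        double target = idle ? fps_idle : fps_active;

        auto start = clk::now();
        render_frame();
        // Pad the frame out to the budget so the GPU only does work anyone will see.
        std::this_thread::sleep_until(start + std::chrono::duration<double>(1.0 / target));
    }
}
```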
 


Yep. We can already do this with V-Sync.
 


V-Sync doesn't stop the GPU from running at 100% of its potential: you don't get any benefit beyond 60FPS, yet your GPU is still running at its maximum TDP. Chill throttles that wasted work back so the card only draws the power needed to produce 60FPS, or whatever you set it at.

 
In AMD's case, you could do this with RadeonPro using Dynamic V-Sync (God, I miss that). It's basically the same thing nVidia had, but for AMD cards through RadeonPro.

Yes, yes, I will keep on whining about RadeonPro not existing anymore until something replaces it in my heart.

Cheers! 😛
 


No. V-Sync results in higher input lag and GPU load. FRTC was already a lot better than V-Sync, and now Chill is a further improvement on that.
 


AMD had FRTC in the driver before Chill.
 


If memory serves right, Dynamic V-Sync was FRTC under another name, way before AMD introduced it in their drivers. It did have an FPS cap built in as part of the "dynamic" trickery.

Edit: Case in point!

http://www.radeonpro.info/features/

God I miss RadeonPro T_T

Cheers!

EDIT: Typo and link.
 


You were right. The AMD presentation was a Zen (Ryzen) overview. We only saw a Vega demo at the end.

 


Having used the RivaTuner OSD for years, I can see GPU utilization definitely going down with V-Sync turned on or a frame rate cap in use. With more modern GPUs the power saving goes one step further: they throttle GPU and memory clock speeds as well.
 


If you don't want to use V-Sync, you can use a frame cap; RivaTuner has had that feature for years. Personally I'm no competitive gamer, and I hate screen tearing, which will happen as long as V-Sync is off (including when the FPS is much lower than the screen refresh rate), so I enable V-Sync in pretty much every game I play.
 


That still isn't optimized for input lag.
 


With a simple frame cap? There should be no optimization needed for input lag if you simply enable a frame cap. For V-Sync it's a bit more complicated, but solutions for that have also been available for years; now nVidia is trying to reintroduce them under a new name, Fast Sync.
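For reference, a plain fixed cap really is about this simple (my own minimal C++ sketch, not RivaTuner's actual implementation):

```cpp
#include <chrono>
#include <thread>

static void render_frame() {}   // hypothetical stand-in for the real frame work

int main() {
    using clk = std::chrono::steady_clock;
    const auto frame_budget = std::chrono::duration<double>(1.0 / 60.0);

    for (int i = 0; i < 300; ++i) {            // ~5 seconds at 60FPS
        auto start = clk::now();
        render_frame();
        // The wait happens *between* frames, before the next input sample,
        // which is why a plain cap doesn't pile up input lag the way
        // V-Sync back-pressure can.
        std::this_thread::sleep_until(start + frame_budget);
    }
}
```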
 
My clock speed is extremely inconsistent. It doesn't drop way down or anything, it just bounces around within 100MHz or so. I was worried, but then I saw this article. They talk about the variable clock speed built into the 480, which is supposed to adjust based on what is needed to keep it running cooler. That would also explain why the GPU usage is always at 100% while the clock speed is lower: it always runs at max utilization but lowers the clock speed, which I would think is the same as keeping the clock speed and lowering the usage %. I don't know; I'm guessing there is a reason that makes it better to do it this way, or else they wouldn't have.

http://www.hardocp.com/article/2016/06/29/amd_radeon_rx_480_video_card_review/3#.WFIXD4-cFhE

What do you think of this? Anyone else already familiar with it?
 


I think AMD is trying to make their boost tech more like nVidia's GPU Boost.
 
How do AMD's Frame Rate Target Control, Chill, and the ordinary V-Sync I can enable in games interact with one another? I'm asking because I see that FRTC was introduced for the Fury cards (https://www.amd.com/en-us/innovations/software-technologies/technologies-gaming/frtc) and I don't know if it works with the RX 400 cards.

I'm strongly considering buying an RX 470/480 over Christmas, and since I'm running a 1080p screen I would like to know whether, if I limit games to 60FPS, the three technologies would still work together to limit the power draw on whatever card I get (because I would prefer minimal noise).
 


FRTC should work on all their GCN-based cards. Certainly the GCN 2 cards and up.

Chill is just a more advanced/dynamic implementation, but it needs to be optimized for each game and thus doesn't support everything out there.
 


I have a first gen GCN card (an R9 280, which is the same GPU as the HD 7950). I can confirm FRTC works on that.

A couple of caveats, though: it depends on the game. It works fine for newer DX11 titles (I assume it works in DX12 as well). It doesn't support older games, though, and doesn't appear to do anything in OpenGL (unsure about Vulkan; I have Doom, but it's demanding and I don't want to slow it down, so I haven't tried it!).

You don't need to combine them; you would use either FRTC or V-Sync or Chill. They all achieve a similar thing.

I think Chill is just a new name for FRTC. What both appear to do is slow the GPU down once it hits a given frame rate (i.e. by lowering clocks), which has the benefit of lower power use and also doesn't add latency. V-Sync, on the other hand, just locks the output but doesn't throttle down the GPU, so it won't give as much of a power saving, and it has the issue of adding input latency that FRTC doesn't, due to how the mechanisms work.
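If it helps picture the latency side: under V-Sync the driver typically lets the CPU queue a few frames ahead before present() blocks, so the frame on screen was built from older input. A toy C++ model of that (the queue depth of 3 is my assumption, not a measured driver value):

```cpp
#include <cstdio>
#include <deque>

int main() {
    const int queue_depth = 3;      // assumed render-ahead allowed under V-Sync
    std::deque<int> in_flight;      // frames rendered but not yet displayed

    for (int frame = 0; frame < 10; ++frame) {
        in_flight.push_back(frame); // CPU submits a frame built from *current* input
        if ((int)in_flight.size() > queue_depth) {
            int shown = in_flight.front();
            in_flight.pop_front();
            std::printf("showing frame %d while input is at frame %d (lag: %d frames)\n",
                        shown, frame, frame - shown);
        }
    }
}
```

With an FRTC-style cap the queue never fills, so what's on screen always reflects the most recent input sampled; that's the latency difference in a nutshell.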
 


I believe that is the idea of the new feature: adjust on the fly to maximize performance while lowering power use. I'm not sure why 100% GPU usage at lower clock speeds is more efficient than higher clock speeds at lower utilization; I'd need someone more informed than me to help sort that out. But this is how it is designed, as I understand it.

Much like nVidia and Boost 3.0: it speeds up and slows down as needed, boosting well past out-of-the-box speeds when it can, and slowing down as much as needed when it can as well.
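If anyone wants a mental model, here's roughly how a boost-style clock governor could behave (purely my own C++ illustration, not AMD's or nVidia's actual algorithm; the 82C throttle point, 25MHz steps, and fake telemetry are assumptions, while 1120/1266MHz are the RX 480 reference base/boost clocks):

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    int clock_mhz = 1266;                      // RX 480 reference boost clock
    const int clock_min = 1120;                // RX 480 reference base clock
    const int clock_max = 1266, step = 25;     // assumed step size

    // Fake temperature readings for the walk-through; real ones come from sensors.
    double temps_c[] = {70, 78, 84, 86, 83, 79, 74};

    for (double temp : temps_c) {
        if (temp > 82.0)                       // assumed thermal throttle point
            clock_mhz = std::max(clock_min, clock_mhz - step);
        else if (temp < 80.0)                  // headroom again: step back up
            clock_mhz = std::min(clock_max, clock_mhz + step);
        std::printf("temp %.0fC -> clock %d MHz\n", temp, clock_mhz);
    }
}
```

The point being: utilization stays pinned at 100% while the clock bounces around within a small window, which matches the behaviour described above.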
 


Chill and FRTC are separate features. The specifics of how Chill improves on FRTC are still kind of unclear at this point, but Chill only works in some games, whereas FRTC works globally.
 
GDDR5X? Then that explains the Vega picture in the recent presentation for Pro cards!

So, can we say "little" Vega will be using GDDR5X instead of HBM2, and that it's going to be released first? Would that be fair to say? Or maybe "big" Vega will be released first with GDDR5X and then a second version with HBM2 will follow? That last option would make little sense, so I guess it can be dropped.

Exciting times!

Cheers!
 