Lower FPS in Planetside 2 with RX 480 than HD 7870

DavejaVous

Reputable
Jul 16, 2015
10
0
4,510
I replaced the HD 7870 in my system with an RX 480 (8GB) in the hope that I'd get a better experience in Planetside 2 and other games such as TF2 and CS:GO.

On the old card, I had about 40-50 fps in light battles in PS2 and about 30 fps in heavy battles. On the RX 480 I get under 20 fps in the menu and about 10-15 in-game. :fou: :fou: :fou:

I have used the DDU program to uninstall the previous graphics drivers and downloaded the latest ones from the AMD site for the RX 480, as I have seen other people recommend.

(Before I post my specs, I will point out that I'm well aware my CPU is a bottleneck, but adding a better graphics card shouldn't decrease performance. At worst, nothing should change, right?)

Specs:

CPU - AMD A8-5600K OC to 4.2GHz
Mobo - ASRock FM2A75M-DGS
RAM - 2 x 8GB at 1333MHz
PSU - Corsair RM750
SSD - 256 GB Samsung 850 Pro

Why is a better graphics card giving me worse performance, and how do I fix it?
 

amtseung

Distinguished
Man I see this all the time in yell chat.

The reason for the worse performance is a single, permanent function of the AMD drivers known as AMD PowerTune. It's been a problem ever since the R9 family of graphics cards came out. The gist of it is that the harder you load the graphics card, the slower it'll run in an attempt to save power. There is no known way to disable it without some pretty hardcore workarounds: ClockBlocker, the RadeonPro method, and the sometimes-wonky Afterburner 3D-lock method are all options. I'm not sure about the RX series, since I'm an R9 owner, but I hear you can fudge with the graphics card's power states in AMD WattMan, and that combined with maxing out the power limit slider can help overcome PowerTune dropping clocks harder than Skrillex.

The drivers are at fault; your card is functioning as intended. You can't fix it, and you can't remove the thing causing it.

The other thing at fault here is PS2's engine and you being CPU-bound. When your rig is CPU-bound, framerates are spastic and all over the place: the more CPU-bound you are, the worse the drops get, and the converse is also true. Being GPU-bound actually yields a surprisingly stable framerate across the board. You should try playing with a potato ini (see the sketch below). At the very least, turning off foliage and shadows will yield a big framerate improvement.
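
For what a potato ini actually looks like: PS2 keeps its settings in UserOptions.ini in the game folder. Something like this in the [Rendering] section is the usual starting point. I'm writing the key names from memory, so treat this as illustrative and check it against your own file; back the file up first and edit with the game closed so it doesn't get overwritten:

```ini
; Illustrative "potato" overrides for Planetside 2's UserOptions.ini.
; Key names from memory of the live client -- verify against your own file.
[Rendering]
ShadowQuality=0          ; shadows off: one of the biggest framerate wins
FloraQuality=0           ; foliage off: the other big win
EffectsQuality=1
LightingQuality=1
TextureQuality=1         ; mostly a GPU/VRAM cost, safe to raise if GPU-bound
RenderDistance=1500.000000
```

Shadows and flora are the two that matter most on a CPU-bound rig like yours.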
 

TJ Hooker

Titan
Ambassador
@amtseung PowerTune has been around since before the HD 7000 series came out. And I can't see why it would hurt performance, or how it's relevant here. Even if it is related, you should be able to get around it by raising the power limit.

Unfortunately I can't answer the OP's question though.
 

amtseung

Distinguished


The problem is that you used to be able to turn it off. Rather, it turned itself off the moment you fudged around with the sliders in an overclocking utility. It's permanent now, and it's linked to a component directly on the PCB that has to be shorted if you want to disable PowerTune, let alone that ridiculous power limit scaling, which forces anyone who wants any semblance of performance to perform a risky power-limit-doubling mod.

I know what you mean. My HD 4870 and HD 7950 were both fine. Going to an R9 380X was a headache and a half, and then some. At this point, I'm considering going team green for the first time... ever, I think.
 

amtseung

Distinguished
I once tried to capture video of the card downclocking itself, but the recording crapped out from the strain. xD

Here comes story time. I love story time. Just kidding.

First example would have to be osu!, a very undemanding OpenGL-based game, which I play in windowed mode at 1280x1024 on my 30" Apple Cinema Display.

First up, the HD 7950. With the BIOS switch set to the master position, which I left completely bog standard, clocks would fluctuate with the load but always stay within 50MHz of maximum. Normal behavior. With the switch moved to slave, which I changed to a maximum power limit scaling of 200%, +150mV, 1.7GHz on the core, and close to 8GHz memory, its speeds were pinned straight at 100% the entire time the game was running, regardless of load. My fps was always well above 700, sometimes above 1000, in both scenarios.

With the R9 380X installed at bog-standard settings, it sits at about 500MHz core but full-speed memory at 6000MHz. Pushed to 150% power limit, +88mV, 1.2GHz core and 6100MHz memory on the slave BIOS, it still sits at 500MHz core but 6100MHz memory in the menu. Actually playing the game, it slowly drops over a period of about 15 minutes from 500MHz core all the way down to 300MHz core (2D/idle speeds), at which point the computer crashes and I get a BSOD followed by persistent graphical artifacting. And this is with the fps unlocked. My fps never really exceeded 55, and it would frequently stutter to 0 and come back. The hottest the card got was 46C. However, with RadeonPro locking the card to its maximum overclocked speeds and voltages, it easily achieves a stable 1200fps average in game, 1100 with triple buffering enabled, and reaches a maximum temp of 73C.

Next example: Planetside 2, on the same monitor, unlocked fps, in fullscreen (2560x1600).

The HD 4870 pins itself at full speeds all day, regardless of settings. I guess I was pushing it hard enough to be running at 3D speeds even just watching YouTube at 1080p. The HD 7950 would also pin itself at 100% speeds with absolutely no fluctuation when overclocked in game, and wouldn't fluctuate if left at default clocks either. The card would downclock to idle speeds (300 core, 150 memory) when the game was exited. Normal behavior, and identical to its behavior in osu!.

The 380X, though, if left at default settings, would run at about 850MHz core when the game first launched, constantly flopping between 300MHz and 900MHz as charted in Afterburner and TriXX on separate occasions. At some point it would flatline at 500MHz and BSOD. At the time, the hottest the GPU ever got was 52C. When overclocked, the game would launch at about 950MHz, continue to flop between 900MHz and 300MHz, and eventually BSOD. Framerates frequently went from 55 straight to 0 and back up to 55 again. Huge stutters. However, with RadeonPro locking the card to its maximum overclocked speeds and voltages, it easily achieves a 160fps average in game at a toasty but stable 74C.
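
If you want to catch this kind of downclocking without screen-recording, logging is easier: most monitoring tools, Afterburner included, can log clock history to a file. Here's a minimal sketch that scans such a log for sustained dips. It assumes a plain two-column CSV of timestamp and core clock in MHz, which is a hypothetical export format, so adapt the parsing to whatever your tool actually writes:

```python
import csv

# Scan a GPU clock log for sustained downclocks.
# Assumes a two-column CSV (timestamp, core clock in MHz) -- a hypothetical
# export format; adapt the parsing to your monitoring tool's log layout.

THRESHOLD_MHZ = 800  # under full game load, anything below this is suspect
MIN_SAMPLES = 10     # require a sustained dip, not a one-sample blip

def find_downclocks(path):
    """Return (start, end, lowest MHz) for each sustained dip in the log."""
    dips, run = [], []
    with open(path, newline="") as f:
        rows = csv.reader(f)
        next(rows, None)  # skip the header row, if any
        for row in rows:
            if len(row) < 2:
                continue  # skip blank/malformed lines
            ts, mhz = row[0], float(row[1])
            if mhz < THRESHOLD_MHZ:
                run.append((ts, mhz))
            else:
                if len(run) >= MIN_SAMPLES:
                    dips.append((run[0][0], run[-1][0], min(m for _, m in run)))
                run = []
    if len(run) >= MIN_SAMPLES:
        dips.append((run[0][0], run[-1][0], min(m for _, m in run)))
    return dips

if __name__ == "__main__":
    for start, end, low in find_downclocks("gpu_clock_log.csv"):
        print(f"downclock from {start} to {end}, bottomed out at {low:.0f} MHz")
```

A flat line near idle clocks while the game is running is exactly the behavior I'm describing.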

For reference, this is the Sapphire Nitro, which ships with clock speeds of 1040MHz core and 1500MHz (6000MHz effective) memory. It doesn't thermal throttle until about 80C (I tried to get there; the cooler is so good it's actually pretty hard to do without forcing the fans to turn off). MOSFET temps, going by my IR thermometer, never exceeded ~95C under an overvolted full load continuously looping Unigine Valley for an entire afternoon (well within the safe zone of the MOSFETs on this thing), so it's not VRM thermal throttling either. I didn't touch the fan curve of this card at all, left it on auto the whole time, overclocked or not. I've never seen it get over 76C, even when the room temperature was over 100F during a summer heatwave. [Edit: I did miss something. I've never seen my CPU over the low 50s Celsius, so CPU temp problems are instantly ruled out. Motherboard VRMs never exceed 40C, even during that ridiculous heatwave, so again, not an issue.]

My PS2 outfit leader's RX 480 had exactly this same issue. He had gone from a 780 Ti to an RX 480, experimenting with a budget-friendly upgrade, and got a downgrade instead. After removing one friggin' resistor from the PCB by lighting it on fire with a soldering iron, all of the problems were fixed. He went from a stuttering 25fps mess with audio glitching and frequent BSODs at 1440p ultrawide to a stable 170fps+ average with the same graphics settings (and the same hardware: a 4.3GHz 4770K, 32GB of absurdly fast DDR3, two Samsung 750 EVOs in RAID 0, and an absurd AX1200i). He gave up anyway, returned the RX, and bought two MSI GTX 1070s and an HB SLI bridge. Don't ask me why.

End of story time.

PowerTune will continuously drop clock states in an attempt to meet an artificially low power limit, regardless of temperature. It's not so bad on the RX series cards, since the actual GPU die is far more power efficient than Hawaii/Tonga, but it's a problem that has persisted through the generations. The more recent versions of Crimson (16.9.2, I think) claimed to "fix" this problem, but it isn't fixed, just reduced, which is why you don't notice it as much. The power limit scaling of 1/2 or 1/3 still stands in the way of unlocking the card's true performance (that's the purpose of that resistor). It's also the primary reason I can't get a good overclock out of this GPU. Removing said resistor gives you back actual power limit control, which overrides PowerTune's aggressive behavior on the R9 series cards.

Fuggit I'm saving this in a text file for future reference. I'm tired of typing this out multiple times. In fact, I'm just tired. I hope I didn't miss any details in my story time.

Holy wall of text.
 

DavejaVous

Reputable
Jul 16, 2015
10
0
4,510
The guys on the LinusTechTips forum did less arguing and more solving. The problem was fixed by using the beta driver available in Radeon Settings, plus a few tweaks to the in-game settings.

CS:GO runs at about 150 fps, which is a great improvement.
Planetside 2 runs at about 50-60 fps, but it's CPU bottlenecked.
 

Zerk2012

Titan
Ambassador


The CPU is the problem.
 

amtseung

Distinguished


Artificial loads like Unigine Heaven/Valley or FurMark will pin the GPU at 100%. My particular problems only showed up in games (not even when the card was pushed hard doing stuff like Folding@home). Drivers newer than 16.9.2 technically "fixed" it. It still happens, often enough to be noticeable to me, but a lot less than it used to. Well, to be more exact, rolling back all the way to 15.7.something fixes it entirely.

Arguing? We were arguing? I know I went on a huge tangent and went all story mode on y'all, but I didn't realise we were arguing. Sorry.

Short answer is: keep your drivers updated.
 
Solution