I once tried to capture video of the card downclocking itself, but the video crapped itself out from the strain of recording. xD
Here comes story time. I love story time. Just kidding.
First example would have to be osu!, a very undemanding OpenGL-based game, which I play in windowed mode at 1280x1024 on my 30" Apple Cinema Display.

First up is the HD 7950. With the BIOS switch in the master position, which I left completely bog standard, clocks would fluctuate with load but always stayed within 50MHz of maximum. Normal behavior. With the switch moved to slave, which I had set to a 200% maximum power limit, +150mV, 1.7GHz on the core, and close to 8GHz effective on the memory, the card stayed pinned at 100% the entire time the game was running, regardless of load. My FPS was always well above 700, sometimes above 1000, in both scenarios.

With the R9 380X installed at bog-standard settings, it sits at about 500MHz core but full-speed memory at 6000MHz. Pushed to a 150% power limit, +88mV, 1.2GHz core, and 6100MHz memory on the slave BIOS, it still sits at 500MHz core and 6100MHz memory in the menu. Actually playing the game, it slowly drops over a period of about 15 minutes from 500MHz core all the way down to 300MHz (2D/idle speeds), at which point the computer crashes with a BSOD followed by persistent graphical artifacting. And this is with the FPS unlocked: my FPS never really exceeded 55, and frequently stuttered down to 0 and back. The hottest the card ever got was 46C. However, with RadeonPro locking the card to its maximum overclocked speeds and voltages, it easily holds a stable 1200fps average in game (1100 with triple buffering enabled) and tops out at 73C.
Next example: PlanetSide 2, same monitor, unlocked FPS, fullscreen at 2560x1600.

The HD 4870 pins itself at full speed all day, regardless of settings; I guess I was pushing it hard enough to be running at 3D speeds even just watching YouTube at 1080p. The HD 7950 would also pin itself at 100% with absolutely no fluctuation in game when overclocked, and wouldn't fluctuate if left at default clocks either. It downclocked to idle speeds (300 core, 150 memory) when the game was exited. Normal behavior, and identical to its behavior in osu!.

The 380X, though, left at default settings, would run at about 850MHz core when the game first launched, constantly flopping about between 300MHz and 900MHz as charted in Afterburner and TriXX on separate occasions. At some point it would flatline at 500MHz and BSOD. The hottest the GPU ever got at the time was 52C. When overclocked, the game would launch at about 950MHz, continue to flop between 900MHz and 300MHz, and eventually BSOD. Framerates frequently dropped from 55 straight to 0 and back up to 55 again. Huge stutters. However, with RadeonPro locking the card to its maximum overclocked speeds and voltages, it easily holds a 160fps average in game at a toasty but stable 74C.
For reference, this is the Sapphire Nitro, which ships with clock speeds of 1040MHz core and 1200MHz base (6000MHz effective) memory. It doesn't thermal throttle until about 80C (I tried to get there; the cooler is so good it's actually pretty hard to do without forcing the fans to turn off). MOSFET temps, per my IR thermometer, never exceeded ~95C under an overvolted full load continuously looping Unigine Valley for an entire afternoon (well within the safe zone for the MOSFETs on this card), so it's not VRM thermal throttling either. I didn't touch the fan curve of this card at all; I left it on auto the whole time, whether overclocked or not. I've never seen it get over 76C, even when the room temperature was over 100F during a summer heatwave. [Edit: I did miss something: I've never seen my CPU over the low 50s Celsius, so CPU temperature problems are instantly ruled out. Motherboard VRMs never exceed 40C, so again, not an issue. Even during that ridiculous heatwave.]
My PS2 outfit leader's RX 480 had exactly this same issue. He had gone from a 780 Ti to an RX 480, experimenting with a budget-friendly upgrade, and got a downgrade instead. After removing one friggin' resistor from the PCB (by lighting it on fire with a soldering iron), all of the problems were fixed. He went from a stuttering 25fps mess with audio glitching and frequent BSODs at 1440p ultrawide to a stable 170fps+ average with the same graphics settings (and the same hardware: a 4.3GHz 4770K, 32GB of some absurdly fast DDR3, two Samsung 750 EVOs in RAID 0, and an absurd AX1200i). He gave up anyway, returned the RX, and bought two MSI GTX 1070s and an HB SLI bridge. Don't ask me why.
End of story time.
PowerTune will continuously drop the card's power states (DPM states, not CPU-style C-states) in an attempt to meet an artificially low power limit, regardless of temperature. It's not as bad on the RX-series cards, since the actual GPU die is far more power efficient than Hawaii/Tonga, but the problem has persisted across generations. The more recent versions of Crimson (16.9.2, I think) claimed to "fix" this problem, but it isn't fixed, just reduced, which is why you don't notice it as much. The power limit still being scaled down to 1/2 or 1/3 stands in the way of unlocking the true performance of this card (that's what the resistor is for). It's also the primary reason I can't get a good overclock out of this GPU. Removing said resistor gives you back actual power limit control, which overrides PowerTune's aggressive behavior on the R9-series cards.
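To make that behavior concrete, here's a toy model of what I'm describing. Every name and number below is made up by me for illustration; this is nothing like the real PowerTune firmware, just a sketch of a governor that steps the clock state down whenever its modeled power exceeds a scaled-down limit:

```python
# Toy model: a governor that drops a clock state whenever modeled board
# power exceeds its (possibly scaled-down) limit, temperature be damned.
# All states and the power model are hypothetical, for illustration only.

DPM_STATES_MHZ = [300, 500, 700, 850, 970]   # made-up clock states
POWER_PER_MHZ_W = 0.12                        # fake linear power model

def governor_step(state_idx, board_limit_w, limit_scale):
    """Move one state down if over the scaled limit, else one state up."""
    effective_limit = board_limit_w * limit_scale   # e.g. 1/2 or 1/3
    power = DPM_STATES_MHZ[state_idx] * POWER_PER_MHZ_W
    if power > effective_limit and state_idx > 0:
        return state_idx - 1       # throttle down a state
    if power <= effective_limit and state_idx < len(DPM_STATES_MHZ) - 1:
        return state_idx + 1       # headroom, clock back up
    return state_idx

def settle(board_limit_w, limit_scale, steps=20):
    """Run the governor for a while and report where the clock ends up."""
    idx = len(DPM_STATES_MHZ) - 1
    for _ in range(steps):
        idx = governor_step(idx, board_limit_w, limit_scale)
    return DPM_STATES_MHZ[idx]

# With the full power limit, the card can hold its top state:
print(settle(board_limit_w=120, limit_scale=1.0))   # -> 970
# Scale the limit down to 1/2 and the governor bounces between lower
# states, never holding the top one -- the "flopping about" symptom.
print(settle(board_limit_w=120, limit_scale=0.5))
```

The point of the sketch: nothing thermal is involved at all, which matches the 46-52C crashes above; the limit alone forces the downclocking.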
Fuggit I'm saving this in a text file for future reference. I'm tired of typing this out multiple times. In fact, I'm just tired. I hope I didn't miss any details in my story time.
Holy wall of text.