RX 480 power draw is 125 watts max. What's wrong?

deadgamer2

Honorable
Feb 11, 2013
AMD's spec puts the TDP at 150 watts, and on benchmark sites it hits around 160 watts. Mine only draws 125 watts max when gaming. What's the problem?
The tool I use to check my power draw is MSI Afterburner.

At first, I thought it was a problem with my power supply, so I bought a new one (upgraded from Thermaltake TR2 500W to EVGA Supernova 750 B2). But that didn't seem to solve the problem.

Increasing the power limit using MSI Afterburner didn't solve it either. And when I set the power limit to -20, I get exactly the same (or negligibly worse) performance, but less power draw.
Overclocking the memory clock didn't seem to add any power draw whatsoever.
I've also tried re-plugging the 6+2 pin power cable. Nothing changed.

My GPU is a Gigabyte RX 480 8GB G1 Gaming. Is it a newer design (with a lower TDP) or something? New drivers?
Or could it be a bottleneck? I don't believe I'm being bottlenecked; my CPU is an i5-6600K.
 
Lol, the TDP of the whole card covers not only the GPU chip but also the RAM chips and the other components on the board. It was known that at release the RX 480 had some issues with chip binning, thanks to GloFo. By now the process has matured, so you may well be getting a far better chip, and the board itself might be designed to draw a bit less power. Then there are drivers and even firmware, which may change a few things here and there to tweak the card's final power draw.

The 150 W TDP was the value for the reference card produced by AMD at release. If the performance is there (assuming you did some 3DMark testing and it's within the expected range), there's nothing to worry about. In fact, it might mean more OC headroom on the model you currently possess.
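To put some rough numbers on that (purely illustrative, not measurements of any particular card): a software sensor that only reports the GPU core can easily show ~125 W while the whole board sits near its 150 W TDP.

```python
# Illustrative only: hypothetical component draws for a ~150 W board-TDP card.
# The software sensor typically reports just the GPU core ("chip") power.
gpu_core_w = 125   # what a tool like MSI Afterburner might report
vram_w     = 15    # memory chips (hypothetical value)
vrm_loss_w = 7     # voltage-regulator conversion losses (hypothetical)
fan_misc_w = 3     # fan, LEDs, other board components (hypothetical)

board_w = gpu_core_w + vram_w + vrm_loss_w + fan_misc_w
print(f"Approximate whole-board draw: {board_w} W")  # ~150 W
```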
 


Yep, they created a nice FUD and people will remember crap like this endlessly; you are one big proof of that.
The issue was with the reference card and with the power draw exceeding the PCIe slot's limit. I don't even want to get dragged into that topic, because I have my own thoughts on what and why that appeared as a "serious issue", but nonetheless AMD did fix the power draw with a driver release.
It doesn't mean it's any cause for concern regarding the OP's question. That is a totally different story, which I explained above.

Edit: The OP said it draws 125 W max while gaming, which means the game isn't pushing the GPU to its limit. It might be V-Sync, the game itself, or the points I've described above. Let him run FurMark; he'll find out what a real GPU stress test looks like.
 
Regardless of what your personal opinion is, AMD released a driver that can lower the card's power consumption. I assume that driver was for all versions of the card, not just the reference version.

Tom's Hardware also reviewed that.

And in the new driver there's also a compatibility mode option, I believe; that may also affect power draw.
 
Did you notice the "Mine only draws 125 watts max when gaming" statement? Just as I said in the edit, it could be V-Sync, the game itself, or tons of other reasons. Let the OP run an actual synthetic GPU stress test and he will find out everything.

If it draws too much, that's bad; if it draws too little, that's bad too (it must be suspicious). This is apparently so important, and yet after all those reviews people still forget what TDP means.
 
As a chip fab matures, dust and defect levels go down and they learn how to make better wafers with fewer flawed chips that need parts disabled. The RX 460 and 470 are the same chip, but with flaws, so they disable the bad parts. They then save the worst of them for a future OEM "RX 450" or a slower-clocked RX 460 SE. In some cases they just discard them.

So the chips are being made better, with fewer flaws, and can run on fewer watts. It also means that in the future they will be able to come out with faster-clocked RX 480s.

Sometimes you just get a lucky chip, too. That happened with my A6-3650 Llano APU: 3.56 GHz, all from an FSB overclock. Others, like the A8-3870K in other builds, only clocked to 3.2-3.3 GHz, needed more voltage, and ran hotter to do it.

-If you want it to draw more power, with MSI Afterburner:
1. Set the fan profile to be a bit more aggressive.
2. Increase the power limit to +10% or more.
3. Overclock, then test with FurMark + Prime95 as a quick check, and once the overclock is close, with some games like Metro (a small log-summarizing sketch follows below). Metro and other newer games will work the GPU harder than FurMark. If it crashes or artifacts, lower your clocks a bit and retest.
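If you want to sanity-check the result, most monitoring tools can log to a CSV file; here is a minimal sketch that summarizes the power column from such a log. The file name and the column name ("GPU Power [W]") are just placeholders; adjust them to whatever your tool (HWiNFO, GPU-Z, etc.) actually writes.

```python
# Minimal sketch: summarize GPU power from a monitoring log exported as CSV.
# "gpu_log.csv" and the "GPU Power [W]" column are hypothetical placeholders.
import csv

readings = []
with open("gpu_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        try:
            readings.append(float(row["GPU Power [W]"]))
        except (KeyError, ValueError):
            continue  # skip malformed or empty rows

if readings:
    print(f"samples:   {len(readings)}")
    print(f"max draw:  {max(readings):.1f} W")
    print(f"mean draw: {sum(readings) / len(readings):.1f} W")
```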

I am using a fan profile like this for my GTX 680 + 670 PhysX card (a small code sketch of the curve follows this list):
50% fan until 60C (set the fan to run as fast as it can while staying quiet).
65% fan until 70C (this is the compromise speed between quiet and noisy).
80% fan at 70C, and then 100% fan at 80C.
-This will vary from card to card; you will just have to try it and go by how noisy the fan is.
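For what it's worth, here is a tiny sketch of that stepped curve in code, using the breakpoints from the profile above. The exact numbers are just the ones I use and would need tuning per card.

```python
# Stepped fan curve roughly matching the profile described above.
# Returns a fan duty cycle (%) for a given GPU temperature (degrees C).
def fan_speed(temp_c: float) -> int:
    if temp_c < 60:
        return 50   # quiet baseline below 60 C
    elif temp_c < 70:
        return 65   # compromise between quiet and cooling
    elif temp_c < 80:
        return 80   # getting warm, spin up
    else:
        return 100  # 80 C and above: full speed

for t in (45, 62, 73, 85):
    print(t, "C ->", fan_speed(t), "%")
```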

-Things like adaptive V-Sync, frame pacing, etc. will limit the power draw when it's not needed. There is no point in rendering the game at 85 fps when your 1080p monitor is 60 Hz.
In most games my system runs at 60.7 fps max, and the minimum is often 59.9, with some games at about 45-55 fps minimum.
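Here is a purely illustrative sketch of why a frame cap cuts power: if the frame finishes early, the loop just waits instead of rendering more frames, so the GPU spends part of every frame idle. render_frame() is a hypothetical stand-in, not a real renderer.

```python
# Illustrative frame-cap loop: capping at 60 fps leaves the GPU idle for
# whatever time is left over each frame, which is why V-Sync / frame limits
# lower average power draw.
import time

TARGET_FRAME_TIME = 1.0 / 60   # 60 fps cap

def render_frame():
    time.sleep(0.005)          # pretend the frame finishes in 5 ms

for _ in range(10):
    start = time.monotonic()
    render_frame()
    elapsed = time.monotonic() - start
    if elapsed < TARGET_FRAME_TIME:
        # GPU idles here instead of rendering extra, unneeded frames
        time.sleep(TARGET_FRAME_TIME - elapsed)
```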

Not all GPUs are equal. I used to run CrossFire HD 7850s: one was an Asus with a single fan, the other was a Gigabyte with a factory 900 MHz OC and two fans. The Gigabyte needed to be underclocked to 880 MHz to run, while the Asus would run up to its 1050 MHz limit in games that didn't support CrossFire. Of course, when running CrossFire the clocks must be matched or some games will crash.

 
Solution


What you see is most likely the power consumption of the GPU core only, not of the entire card. That's why you didn't see a power increase when overclocking your memory: there is no sensor to report the power drawn by the memory (and everything else on the board). If you've noticed, pretty much no reputable review site uses MSI AB to measure how much power the card draws in their power-consumption tests. Instead they measure power at the wall for the entire system, or use more expensive hardware so they can isolate the power consumption of the graphics card alone (like TH and TPU do).
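Roughly, the "at the wall" approach works like this (all numbers below are hypothetical examples, not measurements of the OP's system): subtract the idle system draw from the draw under a GPU-only load and scale by the PSU's efficiency.

```python
# Rough estimate of GPU load power from wall-socket measurements.
# All numbers are hypothetical examples, not measurements of the OP's system.
wall_idle_w    = 65.0    # whole system idle, measured at the wall
wall_load_w    = 260.0   # whole system under a GPU-only load (e.g. FurMark)
psu_efficiency = 0.90    # rough PSU efficiency at this load

# The wall delta, scaled by PSU efficiency, approximates the extra DC power
# the card (plus a bit of CPU overhead) pulls under load.
gpu_delta_w = (wall_load_w - wall_idle_w) * psu_efficiency
print(f"Estimated GPU load power: {gpu_delta_w:.0f} W")
```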

 
