AMD To Issue Software Fix To Address RX 480 Power Consumption Problems

You guys do know that AMD can just draw more watts from the PCIe power connector, right? It's not like AMD didn't have headroom there. The whole "AMD is trying to sneak extra performance" theory is the least thought-out one yet.
 
Hopefully AMD can issue a fix to solve the money consumption problem too. These (MOD EDIT) liars said "$200 - $240" - guess what: the cards are selling for very different prices.

AMD's reaction: "It's not our problem!"
Oh, so it isn't your problem, huh? Then I suppose selling this card to the masses after building it for the masses is not your problem either.

Buying your excrement is not our problem either, so keep it to yourselves.

Seriously, $517 for an RX 480 when even a GTX 970 beats the excrement out of it?
 

That depends on how the power balance between the PCIe slot and the auxiliary 12V source is managed, and whether the components on the board can actually handle it. If 20% of the motherboard load gets shifted to the VRM phases powered by the auxiliary supply, the auxiliary phases will dissipate ~35% more power. Since they are already running at ~95°C though, that wouldn't be a very good idea.
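Rough arithmetic behind that "~35% more dissipation" figure, assuming conduction loss in a VRM phase scales with the square of its current; the 100 W / 80 W split between auxiliary-fed and slot-fed phases is an assumption for illustration, not a measured value from the card:

```python
# Conduction loss in a VRM phase scales roughly with I^2, so shifting part of
# the slot-side load onto the aux-fed phases raises their losses quadratically.
# The wattage split below is illustrative, not measured from the card.
aux_load_w  = 100.0   # power delivered by the aux-fed phases (assumed)
slot_load_w = 80.0    # power delivered by the slot-fed phases (assumed)

shifted_w = 0.20 * slot_load_w                   # move 20% of the slot load across
ratio = (aux_load_w + shifted_w) / aux_load_w    # current ratio at fixed voltage
print(f"aux-phase dissipation rises by ~{ratio**2 - 1:.0%}")  # ~35%
```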
 
@disturbed soul, 300 W is the limit for the card's entire power configuration (slot plus auxiliary connectors), not for the slot itself. Jesus.

All sizes of ×4 and ×8 PCI Express cards are allowed a maximum power consumption of 25 W. All ×1 cards are initially 10 W; full-height cards may configure themselves as 'high-power' to reach 25 W, while half-height ×1 cards are fixed at 10 W. All sizes of ×16 cards are initially 25 W; like ×1 cards, half-height cards are limited to this number while full-height cards may increase their power after configuration. They can use up to 75 W (3.3 V × 3 A + 12 V × 5.5 A), though the specification demands that the higher-power configuration be used for graphics cards only, while cards of other purposes are to remain at 25 W.[12][13]

Optional connectors add 75 W (6-pin) or 150 W (8-pin) power for up to 300 W total (2×75 W + 1×150 W). Some cards use two 8-pin connectors, but this has not been standardized yet, therefore such cards must not carry the official PCI Express logo. This configuration would allow 375 W total (1×75 W + 2×150 W) and will likely be standardized by PCI-SIG with the PCI Express 4.0 standard. The 8-pin PCI Express connector could be mistaken for the EPS12V connector, which is mainly used for powering SMP and multi-core systems.
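Just to put the quoted budgets into arithmetic (the connector values come straight from the spec text above, nothing new assumed):

```python
# Add-in card power budgets, in watts, per the spec text quoted above.
SLOT_HIGH_POWER = 75   # x16 graphics card after high-power configuration
AUX_6PIN = 75
AUX_8PIN = 150

standard_max = SLOT_HIGH_POWER + AUX_6PIN + AUX_8PIN   # 300 W (2x75 + 1x150)
dual_8pin    = SLOT_HIGH_POWER + 2 * AUX_8PIN          # 375 W, not yet standardized
print(standard_max, dual_8pin)
```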
 
They are gonna underclock the card and lose performance when they should have recalled all of them and replaced them with ones that have an 8-pin connector. Nice one, AMD: first you lie about the power draw, then you drop the power down after you gimped your own card to show how powerful and fast it was, and now your software is going to reduce its so-called advertised performance. What a deceitful move from AMD.
 


True, AMD did release the GPU with a serious flaw, but where on earth are you seeing it selling for $517? I've checked multiple retailers, and all of them have stayed around $240.
 
Err, several sites, German and French I believe, mentioned the PCIe power issue as well, before or at the same time as Tom's. Kinda obnoxious how they focus on giving themselves all the credit...
 
The card would not have gotten the PCIe badge if it hadn't gone through the official PCI-SIG verification testing, so it is not just AMD who missed the problem. I'm guessing there might be some kinks in the manufacturing process. It definitely needs a re-test though. This also nicely highlights why "we don't pre-order" is still a good slogan.
 

Not all reviewers check the board and PCIe power connector load.

I am guessing many cards are affected (some users have reported issues with boards, but it is hard to say what is fact and what is trolling at times), but most boards should tolerate this. It is a huge oversight to let this issue slip into the production cards, and it may hurt AMD for a while.

An 8-pin connector would have been a better solution, since it would also help overclockers.

 
AMD likely has a way of adjusting the max current limit for each rail. Nvidia cards also have this function; from my own testing, many of their cards will pull more than 90 watts from a 6-pin power connector if you up the limit in the BIOS.

You can adjust how much it pulls from the PCIe slot, as well as the 6-pin and 8-pin power connectors, in addition to specifying a global max power.

For my GTX 970, I set the limits to 75 watts for the PCI Express slot, 90 watts for the 6-pin, and 175 watts for the 8-pin, with a global max of 340 watts, so I do not run into any power-based throttling. The card will never use that much power, but even at the max it can pull, the VRMs are more than capable of delivering the needed power based on their datasheets.

Overall, AMD may be doing something similar. They likely have a driver function which can tell the card to use a specific power table rather than the default one in the BIOS, similar to how you can adjust the voltage and fan speed.
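Not an actual vendor BIOS or driver structure, just a minimal sketch of what a per-rail power table like the one described above might look like; the rail names and wattages mirror the GTX 970 example and are assumptions for illustration:

```python
# Hypothetical per-rail power limit table, in watts. The names and values are
# illustrative only, not a real BIOS/driver data structure.
power_table = {
    "pcie_slot": 75,   # keep the slot at its 75 W budget
    "aux_6pin":  90,   # raised 6-pin limit (spec is 75 W)
    "aux_8pin":  175,  # raised 8-pin limit (spec is 150 W)
}
global_limit_w = 340   # overall board power cap

# Sanity check: the per-rail caps should not exceed the global cap.
assert sum(power_table.values()) <= global_limit_w, "rail limits exceed global cap"

for rail, watts in power_table.items():
    print(f"{rail}: {watts} W (global cap {global_limit_w} W)")
```

A driver-side "fix" in this picture would just mean loading a table that keeps the slot entry at or under 75 W and pushes the difference onto the 6-pin entry.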
 


Till Thursday.

http://videocardz.com/61753/nvidia-geforce-gtx-1060-specifications-leaked-faster-than-rx-480

More importantly, the GTX 1060 will launch in July, and the fact that we already have these slides could further suggest that the rumored launch date (7 July) is true.




The Asus 960 had an issue related solely to the Asus design... however, it did not overcurrent the PCIe slot as the 480 does, and therefore no one is frying MoBos as a result.




75 watts from the slot and 225 from an 8-pin (150 W) + 6-pin (75 W) is just fine... 100+ watts as measured through the PCIe slot is not fine... 200 watts through a 6-pin (75 W) and the PCIe slot (75 W) is not fine.


Before everyone gets all up in arms about AMD, remember that all the FE cards are experiencing thermal throttling:

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080/31.html

During gaming, the [1070 FE] card goes above 82°C, which results in lower clocks due to Boost 3.0; see our Boost 3.0 Analysis for more details.

https://www.techpowerup.com/reviews/MSI/GTX_1070_Gaming_X/28.html

During gaming, the [MSI 1070 Gaming] card also runs much cooler than the reference design, which avoids clock throttling above 82°C.

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080/31.html

During gaming, the [1080 FE] card goes above 82°C, which results in lower clocks due to Boost 3.0; see our Boost 3.0 Analysis for more details.

https://www.techpowerup.com/reviews/MSI/GTX_1080_Gaming_X/28.html

During gaming, the [MSI 1080 Gaming] card also runs much cooler than the reference design, which avoids clock throttling above 82°C.

Do ya think we will ever stop seeing people advise forum posters to "just get the cheapest card, they are all the same and you can overclock it to the same speed as the non-reference cards"? Nope.

Aftermarket cards should address both issues... and this should, again, remind consumers never to buy a reference card.
 

Even overloading a 6-pin connector would have been a much better option, since even the "low current" Mini-Fit Jr crimps on #18 wires would still be safe up to about 300 W (8.3 A per pin vs the 8.5-9 A spec) on their own; just don't plug two GPUs into the same cable if the cable has two PCIe AUX connectors on it.
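A quick back-of-the-envelope check of that 8.3 A figure, assuming the three 12 V pins of a 6-pin PCIe connector share the current evenly:

```python
# Per-pin current for 300 W over a 6-pin PCIe connector, assuming the three
# 12 V pins share the load evenly.
power_w = 300.0
rail_v = 12.0
live_pins = 3                      # a 6-pin PCIe connector has three 12 V pins

total_current = power_w / rail_v   # 25 A
per_pin = total_current / live_pins
print(f"{per_pin:.1f} A per pin vs the 8.5-9 A crimp rating")  # ~8.3 A
```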
 
Without raising the power limit (which makes it your responsibility!), the only test that went over the PCIe margin was the stress test, which produced a highly unlikely and unrealistic load.

So that will be the only benchmark to show any significant performance difference after the fix. They might also fix the issue of Witcher 3 going slightly over spec (but well within margins), which could have an impact, but a very marginal one.

All this is, is a bug in the power limiter not recognizing that the card was drawing too much power in a situation no user is going to put it under for any length of time. AMD did stress test the card (FurMark shows it well below 150 W, for example), but a certain combination of factors that AMD didn't test resulted in the power limiter failing to respond correctly in Tom's test.

Nothing sinister, nothing underhand, just a software bug that a user won't ever encounter.
 
I'm wondering, from that little AMD article which mentions the memory bandwidth, whether these issues are only on the 8GB cards with their faster memory throughput.

I haven't seen any testing done on a 4GB model with regard to the power draw issue.

Just a little theory of mine: testing was done at production with the lower memory bandwidth (as on the 4GB cards), the bandwidth was upped on the 8GB cards, and this inflated the power envelope much more than expected and wasn't tested properly post-production.

I'm likely wrong, but it seems suspect that the memory throughput on the 8GB is 15% higher. I personally think this was an afterthought decision by AMD to make the 8GB cards a more attractive proposition, because without that little performance increase there would be no reason to buy one just for the extra VRAM, IMO.
 
I don't really see the problem with it. Third-party board partners will modify the design with 8-pin power connectors anyway.
It's pretty much the same as the Nvidia 1080 reference cards' overheating and throttling issue, which is solved by third-party cards with better cooling.
Sometimes AMD and Nvidia just get it wrong; luckily we can rely on the third-party companies to fix up those problems.
 


So, it DOES support 150 W to 300 W, but the 75 W figure is a specification that assumes certain cooling in the PC, so... what about the Reddit claims, where some people have pictures of really messed-up cases (look at the full pictures in some of the posts I've read)? So if they don't have a case where air can actually be exchanged, it's AMD's fault... nice...
 
From what I've seen over on the AMD subreddit, all of the people who have tried so far have been able to pretty significantly undervolt the card. Undervolting the card actually allows it to attain higher clock speeds, which seems counter-intuitive but is true. All that I have seen are able to easily hit the same clock speeds as stock, and most have been hitting the 1340 range.

With the lower voltage, the power draw is significantly lower and, oddly enough, seems to line up with the roughly 154 W power envelope that AMD pushed in pre-release information slides.

So there is the possibility that there will not be a negative performance impact. It seems possible at this point that AMD just messed up the settings and over-volted the card by default. Only time will tell for sure!
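A rough illustration of why a modest undervolt cuts power so much: dynamic power scales roughly with frequency times voltage squared. The clock and voltage numbers below are assumptions for illustration, not measured RX 480 values:

```python
# Dynamic power scales roughly as P ~ f * V^2, so a small undervolt at the same
# clock gives a disproportionate power saving. Illustrative numbers only.
def relative_power(freq_mhz, volts, ref_freq_mhz, ref_volts):
    return (freq_mhz / ref_freq_mhz) * (volts / ref_volts) ** 2

stock = (1266, 1.15)        # assumed stock boost clock (MHz) and core voltage (V)
undervolted = (1266, 1.05)  # same clock, ~0.1 V undervolt (assumed)

scale = relative_power(*undervolted, *stock)
print(f"~{scale:.0%} of stock dynamic power at the same clock")  # ~83%
```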
 
Then they probably broke the embargo. Tom's said they told AMD about the issue days before the embargo ended.

In any case, I think everyone who discovered this issue independently (not just people spreading rumors) deserves equal credit.
 
I wasn't aware that PCI-SIG had any kind of certification program. Interesting.

Does the tester have to do the same thing as graphics card reviewers and try to find a load that would induce max power draw? That sounds odd. I wonder if they just run whatever software AMD gives them, or if the card has some built-in test feature to simulate a max load.
 
This post interests me.

Welcome to the forums!
 



It has nothing to do with the cooling in the PC. It has to do with what the PCIe circuit traces and VRMs can handle. High-end boards probably won't have a problem even though the draw might be above spec; budget and middling boards MIGHT. Either way, it's current beyond what the slot's traces (the physical "circuits"/connections) were designed for that is likely to cause physical damage, and no amount of cooling is going to mitigate that. It's really no different than a direct short through the large-gauge wire of the positive battery cable on your vehicle: if you short it to ground, or draw a higher amperage through it than it was designed to withstand, it's definitely going to let some smoke out and fry the cable. Same principle.

A typical 8-cylinder starter draws around 250 A. If you have a faulty starter that draws slightly more, it probably won't fry the cable. If, however, it pulls significantly more, say 400 A, you're probably going to melt the insulation and fry the cable. Doesn't seem like this ought to be that difficult to understand.
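To put rough numbers on that: resistive heating in a trace or wire scales with the square of the current, so an overdraw heats it disproportionately. The trace resistance below is an arbitrary illustrative value, not a measured PCIe slot figure:

```python
# Resistive heating scales as P = I^2 * R, so pushing extra current through a
# fixed trace raises the dissipated heat quadratically.
TRACE_RESISTANCE_OHMS = 0.01   # arbitrary illustrative value

def trace_heat_w(current_amps):
    return current_amps ** 2 * TRACE_RESISTANCE_OHMS

in_spec = trace_heat_w(75 / 12)    # ~6.25 A for 75 W at 12 V through the slot
over    = trace_heat_w(100 / 12)   # ~8.3 A for 100 W, as measured in some tests
print(f"{over / in_spec:.2f}x the heat for ~33% more current")  # ~1.78x
```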
 


Thumbs up for the automotive metaphor 😉
 


Yet, I don't see why the port would fry and not the whole circuit leading to it... I don't see how such tiny copper paths/connections won't fry while a more solid copper one would... I mean, about 20 years ago I burnt a RAM slot because I didn't seat the stick right and made a short, and the burn looked just like the ones I've seen in motherboard pictures from people who allege it was due to this card specifically... some didn't even prove they had a 480 video card. I am skeptical of all this kind of shit because I almost literally don't trust anyone on the internet...

No proof, just pictures of old PCs; I don't buy it... why would you buy a $200 card and put it in a junk case?

PS: sorry for my English, I never studied it xD; I learned it from music, games, and hardware sites xD

PS2: I liked the automotive reference too xD
 