My graphics card will not connect at PCIe x16

Status
Not open for further replies.

HamBown81

Commendable
Aug 3, 2017
I have an ASUS Sabertooth 990FX R2.0 with an AMD FX-8370 CPU. I recently got my XFX RX 580 GTR-S 8GB replaced on RMA and I can't get it to connect at PCIe x16.

I have tried a ton of stuff and have twice posted a long thread showing all of my testing that got zero responses. I am hoping it is because of TLDR and not a lack of any further suggestions.

Any help would be appreciated, I have done a ton of troubleshooting already.
 
Solution
The only thing left you could have done is test in a system with 3.0 slots, but you don't have access to one, AFAIK. An RMA seems like the right move. If they test it in a system like that and find the GPU good, then it's the slots that make the difference.

EcF5Ojf.png

 
Link your other thread please because I am probably going to ask a lot of questions you have already answered.

What are you using to tell how many lanes your GPU has? I would use GPU-Z. Click the question mark next to "bus interface" in GPU-Z. If you are using the top PCIe slot on your board, after the test it should say PCIE 2.0 X16 @ PCIE 2.0 X16 or PCIE 3.0 X16 @ PCIE 2.0 X16.
 


http://www.tomshardware.com/answers/id-3506875/xfx-580-gtr-running-pcie-asus-sabertooth-990fx.html

Yes, you can see the render test running in the screenshot.
 
MERGED QUESTION
Question from HamBown81 : "XFX RX 580 GTR-S running at PCIe x8 on ASUS Sabertooth 990FX R2.0"



 
Well, I am thoroughly stumped on this one; it doesn't make any sense. The fact that it is doing this in two different systems really has me puzzled. The only constants are the GPU itself and the fact that they are both AM3+ systems. However, I doubt the AM3+ platform is the issue, because I feel like there would be a lot more people with this problem if it were, and I can't seem to find anyone with this same issue.

Thanks for all the info; most posters don't give nearly the amount of info you have. I am surprised nobody commented on your first thread.

I will post your threads in the Heralds forum and see if I can get some more eyes on it for you.
 


Thanks dude, I really appreciate it.
 

That is the conclusion that I keep coming to as well. I am a bit torn about moving forward with an RMA, though, which may be why I am grasping at straws. The card is running at a 1515MHz core clock with the VRAM at 2250MHz and is super happy about it below the high 70s. I feel like I would almost certainly get something worse in return...

In addition to that, I have provided all of this information and more to XFX and they are reluctant to believe that it is the card. They keep suggesting that it is the board or a firmware problem, despite the fact that my previous RX 580 ran at x16.

EDIT: Do you have any idea why it behaved the way it did when I taped the x16 pins? I have seen a few articles that do this to force x8, and those cards definitely behaved differently.
 
You asked for wild theories, so here it comes :)

From what you've already done, everything points toward some kind of problem with the 580. Now, the question is: how does the motherboard find out what kind of card is connected, x16, x8 or x4? Looking at the PCIe standard, the answer would be: it depends on certain connections on the card. If pin A1 is connected to pin B81, the card will register as x16; if connected to B48, the card will register as x8, and so on.

So if I'm right, the A1-B81 connection on your card might be broken. An easy (well, not so easy, but possible) way to test that would be to cover only the B81 pin and run GPU-Z in that configuration. If you get exactly the same readings as your starting ones, that could be proof.

However, like I said, it's just a wild theory. I'm not even sure it would be safe to cover only one pin...
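The strapping theory above can be written out as a tiny lookup sketch. The pin positions follow my reading of the PCIe CEM connector pinout, so treat them as an assumption; note also that the final link width is negotiated during link training, so presence-detect strapping is only part of the story.

```python
# Sketch of the presence-detect strapping theory (assumed pin positions
# from the PCIe CEM connector pinout, not verified here).
# PRSNT1# (pin A1) is tied on the card to one PRSNT2# pin; which pin
# the strap reaches advertises the card's maximum link width.
PRSNT2_PIN_TO_WIDTH = {
    "B17": 1,   # strap reaches B17 -> x1 card
    "B31": 4,   # strap reaches B31 -> x4 card
    "B48": 8,   # strap reaches B48 -> x8 card
    "B81": 16,  # strap reaches B81 -> x16 card
}

def advertised_width(prsnt2_pin: str) -> int:
    """Width a card advertises via its PRSNT1#-to-PRSNT2# connection."""
    return PRSNT2_PIN_TO_WIDTH[prsnt2_pin]

print(advertised_width("B81"))  # intact x16 strap -> 16
print(advertised_width("B48"))  # a broken A1-B81 trace would leave x8 as the longest intact strap
```

On this theory, a card with a damaged A1-B81 trace would still link up fine, just never above x8, which matches the symptoms in the thread.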
 

In light of the forming consensus that there is likely some type of issue with the card, I will pose the same question to you.

Considering the stable OC (1515/2250) and the performance (15703 Fire Strike graphics score) I have been able to achieve on air, would you move forward with an RMA? I feel like I got a bit lucky in the lottery, despite my lane-width issue... which we know has a minimal impact on performance.

EDIT: Would this specific failure also jibe with the weird results that I got when I taped the x16 pins?
 
Interestingly, there was another thread with a somewhat similar issue - albeit with a 480.
http://www.tomshardware.com/answers/id-3414532/480-running.html

Turned out a few blasts with compressed air did the trick.

Now, in this case, that's unlikely to do anything - given you've tried other slots, other cards and other systems entirely (and you've cleaned the slots).

At this point, I can only think it relates to:

"Thoroughly cleaned and inspected GPU contacts (minor cosmetic scratch)"

Now, while it may appear minor and cosmetic only, the connector can be sensitive - and given all the troubleshooting, shy of it just being a 'bad' card, I would suspect that has to be the cause.




1. Personally, yes, I would. Because:

2. While x8 vs x16 has been debated forever and doesn't appear to impact anything outside of a very select set of tasks, when you're running a card at x8 on PCIe Gen2, you're essentially running with roughly the bandwidth of Gen3 PCIe x4, which is where you start to see some impacts. I can't find much recent info on it from reputable sources... but IIRC, x4 is where you start to see impacts.
I did find this video, can't vouch for the source by any means: https://www.youtube.com/watch?v=eglDVG84zIQ
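To put rough numbers on that comparison, here's a back-of-the-envelope sketch using the standard per-generation transfer rates and encoding overheads (the function name is mine; figures are theoretical maxima, not benchmarks):

```python
def pcie_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Approximate usable PCIe bandwidth in GB/s for a given link."""
    # (transfer rate in GT/s, encoding efficiency)
    specs = {
        1: (2.5, 8 / 10),     # Gen1 uses 8b/10b encoding
        2: (5.0, 8 / 10),     # Gen2 uses 8b/10b encoding
        3: (8.0, 128 / 130),  # Gen3 uses 128b/130b encoding
    }
    rate, efficiency = specs[gen]
    # GT/s x efficiency = usable Gbit/s per lane; divide by 8 for GB/s
    return rate * efficiency / 8 * lanes

print(round(pcie_bandwidth_gbs(2, 8), 2))   # Gen2 x8  -> 4.0 GB/s
print(round(pcie_bandwidth_gbs(3, 4), 2))   # Gen3 x4  -> 3.94 GB/s
print(round(pcie_bandwidth_gbs(2, 16), 2))  # Gen2 x16 -> 8.0 GB/s
```

So a Gen2 x8 link sits right around Gen3 x4 territory, which is where measurable impacts tend to start showing up.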

Now, whether the gap(s) are large enough for you to care, or whether you'll ever use the variables (resolution, game engine etc) for it to matter too much is entirely your call.

Personally, if a GPU is not operating at x16, despite being an x16 card (and you're not running CFx), I'd RMA it. It's a reasonable expectation to get what you pay for.
 

The scratch is not very deep and I checked each of the pins with a multimeter. They all have good continuity to ground and equivalent resistance through the core for the transmitter lanes, from both sides of the scratch.

LiYMceT.png

 
Saw these two pics,

Radeon RX 580 GTR-S Slot #1
mIB4vY2.png


XFX Radeon RX 580 GTR-S w/ taped pins
X8RStBb.png

which suggest that with the pins taped off, performance is about half, so maybe the card just wrongly reports being at x8 when it is really operating at x16?

Looked at the specs here,
http://xfxforce.com/en-us/products/amd-radeon-rx-500-series/rx-580-gtr-8gb-dd-led-rx-580a8dbr6
and here,
https://www.techpowerup.com/gpudb/b4513/xfx-gtr-s-rx-580-black-limited-edition-oc
and these show that the first picture has the right specs, so maybe the lane bandwidth is just being reported wrong.

I didn't read the whole thread, but do you have the latest BIOS?
 

How does it make sense that the core clock and memory clock, under load, would be half of the BIOS settings though?

Taping these pins should have resulted in no change in my GPU-Z readout, IMO:
http://www.tomshardware.com/reviews/pci-express-2.0,1915-5.html
 


100% agree.

One more thing, though - that card was already RMA'd once? Why?
 

The second picture that you linked, with the values circled in red, is the result of my taping the x16 pins as they did in the article I linked. The results were unexpected and so far unexplained.
 

My previous card wouldn't run at the factory OC. The GTR-S ships with a 1430/2000MHz BIOS and is "guaranteed" to run at 1450/2025, but my previous card couldn't do it.

That is why I am scared to send this one back when it is running at 1515/2250
 

Maybe, but these are your results, which are more important than an article from 2008.

Another thing is that your board has PCIe 2.0 lanes, and 2.0 x16 bandwidth is roughly equal to 3.0 running at x8; I think that's why the card reports the way it does. It can run at PCIe 3.0, but the board doesn't allow it.
Your brother's motherboard also has lanes running at 2.0 max, which is why it reports the same way in his system.
 

I don't put any weight on the results of the pins being covered, I just thought it was weird. My assumption was just that the way the lanes are utilized by the card is different.

Your idea about 3.0 vs 2.0 makes sense in a vacuum; however, my previous RX 580 (which this is a replacement for) was connecting at PCIe 2.0 x16 in the same slot so I don't think that is it.
 
I just think that GPU-Z reads it wrong with this GPU. You can try an older version of GPU-Z and see what results that gives, or test the GPU in a system with a motherboard that has true PCIe 3.0 slots.

Can you check with HWiNFO32?

Download HWiNFO32,
install and open it, then click Run,
look at the top window, which is the System Summary;
the top right half should show what speed the PCIe port runs at, along with the other specs of the GPU.
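If a Linux live USB is handy, `lspci -vv` is another independent cross-check: its LnkSta line shows the negotiated speed and width. A small parser for that line might look like this (the sample line below is illustrative, not captured from the poster's system):

```python
import re

# Illustrative LnkSta line in the format `lspci -vv` prints for a GPU.
sample = "LnkSta: Speed 5GT/s, Width x16, TrErr- Train- SlotClk+ DLActive-"

def parse_link_status(line: str):
    """Extract (speed in GT/s, lane width) from an lspci LnkSta line."""
    m = re.search(r"Speed\s+([\d.]+)GT/s,\s+Width\s+x(\d+)", line)
    if m is None:
        raise ValueError("no LnkSta fields found")
    return float(m.group(1)), int(m.group(2))

speed, width = parse_link_status(sample)
print(speed, width)  # 5.0 16 -> a PCIe 2.0 link negotiated at x16
```

A Gen2 board slot with this card should report Speed 5GT/s; the Width field is the number in question here.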
 