PCIe not running at x16

Koemas

I recently downloaded GPU-Z and discovered that my card is only running at x4 instead of x16. GPU-Z also states that my graphics card supports x16.

GPU-Z reading: PCI-E 2.0 x16 @ x4 2.0. (This doesn't change when I launch the render test.)
GPU: GTX 970 (single card and not overclocked)
CPU: i5-2500K @ 3.3GHz
Motherboard: MSI P67A-G43 (the box states it features two x16 PCI Express slots)

The graphics card is in the slot closest to the CPU, and the manual does not specify a particular slot for it to run at x16. I also recently updated the BIOS.

So is something limiting the card and preventing it from running at x16?
 
I'd go into the BIOS and check what speed the slot is running at; I would wager it is set to run at PCIe 2.0 x8. This may be why it is showing as x4 PCIe 3.0.

Note, however, that you have a PCIe 3.0 card but the slot is PCIe 2.0. PCIe 2.0 offers roughly half the per-lane bandwidth of PCIe 3.0 (about 500 MB/s versus ~985 MB/s per lane).

So after changing the slot to x16, GPU-Z may still only show x8.
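For a sense of scale, here is a small worked example (a sketch in Python, not from this thread) using the standard per-lane rates: PCIe 2.0 runs at 5 GT/s with 8b/10b encoding (about 500 MB/s per lane), and PCIe 3.0 at 8 GT/s with 128b/130b encoding (roughly 985 MB/s per lane).

# Approximate usable bandwidth per link for PCIe 2.0 and 3.0 at common widths.
per_lane_gb_s = {"PCIe 2.0": 0.500, "PCIe 3.0": 0.985}  # after encoding overhead

for gen, rate in per_lane_gb_s.items():
    for lanes in (4, 8, 16):
        print(f"{gen} x{lanes}: ~{rate * lanes:.1f} GB/s")

# PCIe 2.0 x4 is ~2 GB/s, x8 is ~4 GB/s, x16 is ~8 GB/s, so a drop from x16
# to x4 would be a 4x cut in available bus bandwidth if the reading is real.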

Open the BIOS and change it to x16.

This video should help you find the setting you need to change... mind the music, though.

https://www.youtube.com/watch?v=5b5MaPE-4ms

 
You have two "x16" slots, but only the first one runs at full speed: the top one runs at x16 PCIe 2.0 and the second one at x8 PCIe 2.0.

I'm going to look at the manual, though there may not actually be a problem. Perhaps it's just the way it's reported.

I'll run GPU-Z on my setup too (completely different) and see what is reported.
 
Don't use the second PCIe "x16" slot for anything or your top x16 slot will operate at "x8 PCIe 2.0" bandwidth.

Summary:
If the second x16 slot isn't used, then I suspect it's just a reporting issue due to an older motherboard and a newer card. I would look at BENCHMARKS as well to see how you compare (bottleneck?), though your best bet is ones like Tomb Raider with minimal CPU bottleneck.

*The FURMARK BENCHMARK should be pretty accurate. I believe a GTX 970 should get roughly 4500 on the default 1080p benchmark. Unfortunately no overclock value is given, but at least you'd know whether you're being massively throttled if you're in the ballpark (maybe you can find specific benchmarks that list overclock values).
http://www.ozone3d.net/gpudb/score.php?which=183344

Update:
My GTX 680 got 4180, a 69 FPS average, on the default 1080p test, so... I'm a bit confused about how to compare that since it's way above the other GTX 680 (though that one had an FX-6300 CPU). Sigh.

I was going to DELETE all this but it's probably worth leaving in to point out you have to be careful with benchmarks.
 


OK, I'll try and run Tomb Raider. What exactly would I be looking for when I run Tomb Raider? To see if it still doesn't change to x16?
 


I didn't see anything in the BIOS settings (or in the video) that gives me the option to change it to x16.
 


No, just the frame rate value using the exact same settings.
http://www.technologyx.com/featured/asus-strix-gtx-970-oc-review-silent-deadly/4/

The above is: Ultimate @1080p + TressFX + FXAA

Your CPU shouldn't affect the results much, though how OVERCLOCKED your GTX 970 is will (better or worse), and possibly newer drivers may help. I'm not sure if any TressFX optimizations were done after this benchmark came out, but it's a guideline at least.

If you're getting roughly the same results, then it doesn't look like your GTX 970 is being bottlenecked by low PCIe bandwidth.
 


Yeah, I got similar FPS, with a max of 96 and an average of 74.2. So does that mean it's likely that GPU-Z is just misreading it?

 
74.2FPS average?

That's about right. The Asus Strix got 79.4FPS.

I don't think the CPU affects things too much, though it could account for a few percent. I'm not sure what the Asus Strix's default frequency is compared to yours, but as it stands the difference in results is only about 7%.

I think the boost clock (before FURTHER overclocking) for the Strix is 1253MHz.

CPU difference (single-thread scores):
Test system for the linked review (i7-3820 @ 4.4GHz): 2245
Yours (i5-2500K, if not overclocked): 1897
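Putting rough numbers on that (a quick sketch using only the figures quoted above):

# FPS gap between the reviewed Asus Strix GTX 970 and the reported run.
strix_fps, reported_fps = 79.4, 74.2
fps_gap = (strix_fps - reported_fps) / reported_fps
print(f"FPS gap: {fps_gap:.1%}")                 # ~7.0%

# Single-thread score gap between the review CPU and a stock i5-2500K.
review_cpu, stock_2500k = 2245, 1897
cpu_gap = (review_cpu - stock_2500k) / stock_2500k
print(f"CPU single-thread gap: {cpu_gap:.1%}")   # ~18.3%

An ~18% single-thread deficit plus whatever factory overclock the Strix carries could plausibly account for a 7% FPS gap on its own, which fits the "no PCIe bottleneck" reading.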

(Probably a minimal bottleneck, as I said before, particularly with Tomb Raider. On a side note, there's not much need to upgrade your system for a while, especially with DX12 coming to reduce CPU requirements. Future games shouldn't need a better CPU, and your present one won't be much of a bottleneck even in CPU-heavy titles.)

 
Yeah, Tomb Raider isn't a very CPU-dependent game, so there's no bottleneck there; it works fine with a high-end GPU and a Core 2 Quad, so the FX-6300 isn't a problem in this game.

As for GPU-Z, is it a recent build? I read the BIOS section of the manual; there appear to be only limited PCIe options outside of PCIe scheduling, which allows you to vary PCIe bandwidth between active devices. As you only have the 970, that's of no use.

Go into the BIOS and simply reset all values to default. I did read that the first slot runs at x16 PCIe 2.0 by default, so it's worth a shot. Try some other software as well (one way to cross-check is sketched below).
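For example, a minimal sketch (not from this thread) that asks the NVIDIA driver directly via nvidia-smi from Python. This assumes nvidia-smi is on your PATH (it ships with the GeForce driver) and that your driver build exposes the pcie.link.* query fields.

# Cross-check the PCIe link that the NVIDIA driver itself reports.
import subprocess

fields = "pcie.link.gen.current,pcie.link.width.current,pcie.link.gen.max,pcie.link.width.max"
output = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=" + fields, "--format=csv,noheader"],
    text=True,
)
cur_gen, cur_width, max_gen, max_width = [v.strip() for v in output.split(",")]
print(f"Current link: gen {cur_gen} x{cur_width} (maximum: gen {max_gen} x{max_width})")

Note that the "current" values can drop at idle because of PCIe power management, so check them while the GPU is under load (for example, with GPU-Z's render test running).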

What do you see in the NVIDIA Control Panel under Help -> System Information?
 


This is what comes up under system information:

Operating System: Windows 10 Home, 64-bit
DirectX version: 12.0
GPU processor: GeForce GTX 970
Driver version: 358.50
Direct3D API version: 12
Direct3D feature level: 12_1
CUDA Cores: 1664
Core clock: 1215 MHz
Memory data rate: 7010 MHz
Memory interface: 256-bit
Memory bandwidth: 224.32 GB/s
Total available graphics memory: 8177 MB
Dedicated video memory: 4096 MB GDDR5
System video memory: 0 MB
Shared system memory: 4081 MB
Video BIOS version: 84.04.28.00.71
IRQ: Not used
Bus: PCI Express x16 Gen2
 
1) I don't think there's a problem.

2) I see no way to actually LIMIT to a bandwidth of "x4 @ PCIe v2" on the first slot.

3) The above specs you listed look correct, though it's not certain how it's getting the information. For example, if you used the second "x16" slot would those specs stay the same or change to "x8" to report the usable bandwidth?

4) Anyway, not sure what else to suggest. Finding repeatable BENCHMARKS is really the best way to go.

5) *Final note: I use a specific benchmark (3DMark2001, but you can use whatever) to record a score so I can periodically check to see if it's changed. Good for peace of mind, but also for troubleshooting if the score comes back different. A minimal way to automate that is sketched below.
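Here is a small sketch of that habit (the file name benchmark_baseline.json and the 5% tolerance are arbitrary choices for illustration): it stores the first score you feed it as a baseline and flags later runs that drift too far from it.

# Minimal baseline tracker: save the first benchmark score, then flag later
# runs that deviate from it by more than a chosen tolerance.
import json, os

BASELINE_FILE = "benchmark_baseline.json"   # arbitrary file name for this sketch
TOLERANCE = 0.05                            # flag changes bigger than 5%

def check_score(score: float) -> None:
    """Record the first score as a baseline; compare later scores against it."""
    if not os.path.exists(BASELINE_FILE):
        with open(BASELINE_FILE, "w") as f:
            json.dump({"score": score}, f)
        print(f"Baseline recorded: {score}")
        return
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)["score"]
    change = (score - baseline) / baseline
    status = "OK" if abs(change) <= TOLERANCE else "worth investigating"
    print(f"Score {score} vs baseline {baseline}: {change:+.1%} ({status})")

check_score(4500.0)  # e.g. a FurMark 1080p score entered by hand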
 
Solution


I was just concerned about the reading and was wondering if my graphics card was being underutilized. Since it matched the benchmark FPS for Tomb Raider, I'm convinced that it's just a misreading.