Question: Slow PCIe on Asus Sabertooth X58?

Nov 11, 2023
Hello,

I have an Asus Sabertooth X58 with an i7-950.

I'm attempting to use this old but good motherboard to run a TrueNAS server. I am using an LSI 8i (PCIe 2.0) HBA for the array. I've used another motherboard with a single PCIe 3.0 slot and saw no issues with transfers over the network. (It just didn't have the juice to run jail apps.)

With the X58 I am seeing only 5 MB/s transfers over the network to my 100+ MB/s 14 TB drives. I ran iperf to test the raw network speed between my new PC and this X58 and I'm seeing 980+ Mbit/s. I tried an internal transfer between drives on the 8i and I'm seeing 100+ MB/s. Again, the problem is just between the PCIe 2.0 LSI card and the network.

I'd appreciate any insight into this issue, thanks!!
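
For reference, the network test was a plain iperf run between the two machines, roughly like this (the IP below is just a placeholder, and I may not have the exact flags right from memory):

```
# Raw network throughput between the desktop and the X58 box
iperf3 -s                      # server side, on the TrueNAS/X58 machine
iperf3 -c 192.168.1.50 -t 30   # client side, on the desktop
```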
 
Welcome to the forums, newcomer!

I am using an LSI 8i (PCIe 2.0) HBA for the array.
Which slot is the card populating on the motherboard? What other devices are populating the rest of the PCIe slots on the motherboard?

I have an Asus Sabertooth X58 with an i7-950.
BIOS version for your motherboard?
 
Hello,

Thank you for the kind welcome.

The LSI card is in the uppermost/top PCIe 2.0 slot. I have no other cards in any slots.

BIOS is v1402, dated 08/09/2012.


Thanks!
 
Do you have any drives connected via the motherboard SATA you could test with (or could connect to it to test)?

The X58 integrated NICs from Asus weren't exactly known for a good implementation. I used an add-on NIC with mine, but I can't remember whether it was PCI or PCIe. Testing one of those would be another option to try to diagnose where the issue lies.
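
If you want to take the network out of the picture entirely first, a quick local write test to the pool behind the LSI card is something like this (assuming a Linux shell on SCALE; the path is just an example):

```
# Write ~4 GB to a file on the LSI-attached pool, bypassing the network.
# If the dataset has compression enabled, zeros will compress away to
# nothing, so /dev/urandom is a more honest source (at some CPU cost).
dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=4096 conv=fdatasync status=progress

# Clean up afterwards
rm /mnt/tank/testfile
```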
 
Do you have any drives connected via the motherboard SATA you could test with (or could connect to it to test)?

None connected at the moment. I will try adding one and transferring over the network using the onboard Ethernet port.

The X58 integrated NICs from Asus weren't exactly known for a good implementation. I used an add-on NIC with mine, but I can't remember whether it was PCI or PCIe. Testing one of those would be another option to try to diagnose where the issue lies.

I will also try adding one of my spare NIC cards to the PCI slot and testing the transfer from the LSI drives, and then from the motherboard data drive as well.

Thank you for the suggestions!
 
Hello,

Sorry for the late reply. I've had success with two scenarios. I plugged a hard drive into the motherboard's SATA II port and was able to transfer 1 TB of movies at 110 MB/s over FTP. Then I also tried plugging a cheap PCIe SATA card into a PCIe x1 slot and had the same outcome: 1 TB @ 110 MB/s.

The issue still seems to stem from the LSI card and the PCIe 2.0 x16 slot. Transfers start off after a reboot at 70 MB/s over the network and end up at 1 MB/s about 100 GB later. These are modern 14 TB drives I'm writing to with the LSI card. (Internal transfer rates between drives on the LSI card still max out at around 170 MB/s.)
 
I can't think of why this would happen, but it almost seems like the PCIe bus is the problem. Have you tried transferring between a drive connected to SATA and the drives on the card?

Also, out of curiosity, what is the storage configuration of the drives? Hardware RAID? ZFS from TrueNAS? JBOD? etc.

It certainly doesn't seem like a network issue since you were able to transfer fine with a drive connected to the motherboard SATA.
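
If it is ZFS, the output of something like this from the TrueNAS shell would help narrow it down (the pool name below is a placeholder):

```
# Pool layout, health, and any scrub/resilver that might be running
zpool status -v tank

# Per-vdev throughput sampled every 5 seconds while a slow transfer is running
zpool iostat -v tank 5
```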
 
I tried copying between the TrueNAS mirrored ZFS vdev (14 TB) and the 1 TB drive plugged into the motherboard's SATA II port, and it's maxing out SATA II just fine.

Correction: I let it run for a couple of hours and it's only hitting 100 MB/s. It was maxing it out for the first 20 minutes. So yes, it seems the ZFS vdev to motherboard SATA II copy is slow. All of these stats I have been giving you are from rsync run from the TrueNAS shell (besides the FTP tests).
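
For clarity, the rsync runs were basically this kind of invocation from the TrueNAS shell (paths are placeholders, and the flags are from memory):

```
# Copy from the mirrored vdev to the drive on motherboard SATA,
# with a running MB/s readout for the whole transfer
rsync -a --info=progress2 /mnt/tank/movies/ /mnt/sata1tb/movies/
```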


I also tried the opposite direction, from the motherboard SATA II drive to the ZFS vdev, and it's the same speed.


I mean, at the end of the day I should just drop $400 and buy a new Intel setup, but this motherboard, processor, and RAM cost me $650 in 2012, and for the specs, I don't see why I can't squeeze 5 more years out of it.


Thank you for your help!!
 
Some of it can be overhead from TrueNAS and ZFS, as it's quite frankly not great for throughput. I've got a RAID-Z2 setup on a 12700K with 18 TB WD Red Pro drives and it's not universally faster than my old Sandy Bridge Xeon system, which uses hardware RAID 6 with 4 TB Seagate IronWolf drives.

I agree you should be able to run that system okay, but it seems to be running worse than it should. Is the LSI card running properly, PCIe-wise?
 
Is the LSI card running properly, PCIe-wise?
How would I check that?

Some of it can be overhead from TrueNAS and ZFS
This did not occur when I had the motherboard with a single PCIe 3.0 x16 slot and a two-port SATA 6Gb/s card in this same setup last year. I've recently moved to the Sabertooth system we are discussing here. On the old board I maxed out the gigabit LAN no matter how long the transfer ran.

So something is up with either the PCIe 2.0 x16 slot or the LSI card. I might have to pull the card, plug it into the old board, and test there to see if the card is bad. I want to avoid that hassle, but if you think that's the last option, I will try it.
 
Hello, sorry for the wildly late reply; crazy personal life.

lspci does not display hardware names for some reason. I tried it as `sudo lspci`, same deal.

I have no idea what happened, or if I just never tested FTP directly to a folder on my LSI-connected drives (where I'm getting 1 MB/s writes), but I'm now saturating the FTP connection through the main NIC and the LSI card just fine, as if none of this had happened. A friend suggested I may be getting thermal throttling on the LSI card, so I pointed a 5000 RPM fan at it and there was no change.

All of this leads me to believe it's just the SMB connection through my desktop that's causing the slow transfers. BUT again, this slowness over SMB is a new occurrence.

Thoughts?
 
That's certainly an odd problem to be having, and after doing a bunch of searching I couldn't come up with anything particularly useful. You could check to see whether any hardware usage jumps when using SMB. The only thing I saw that seemed like it could be related is if the traffic is encrypted, but I would think the CPU should be fast enough that it wouldn't matter.
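
On your earlier question about checking the card's PCIe link: if you're on SCALE (Linux shell), something along these lines should show what was actually negotiated. The device address and grep patterns are just examples, so adjust to whatever lspci reports on your box:

```
# Refresh the PCI ID database so lspci can print device names instead of
# bare numeric IDs (update-pciids is part of pciutils and needs internet access)
sudo update-pciids

# Find the LSI/Broadcom HBA; -nn keeps the numeric IDs next to the names
lspci -nn | grep -i -e lsi -e sas

# Compare what the slot/card can do vs. what they actually linked at
# (replace 01:00.0 with the address from the previous command)
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
```

If LnkSta comes back with 2.5 GT/s or a width of x1 instead of x8, that would point at the slot or the board rather than the drives.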
 
What's even more odd is that I can run an rsync copy from my desktop using the SMB share paths and I get a constant, solid 50 MB/s for hours. Also, I don't have whatever that "rsync service" on the Services page is enabled. But again, the main issue here is that with the other motherboard I was seeing 70-100 MB/s transfers over SMB for hours, and on this one it goes from 70 MB/s to 70 KB/s within 10 minutes. I'm also using the same Linux Mint desktop, same version, as I have for years.
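
For the next round of testing, something like this would take my desktop's file manager out of the equation and let smbclient report the raw SMB transfer rate itself (share, user, and file names below are just placeholders):

```
# Push a large file straight to the share; smbclient prints the average
# rate when the transfer finishes
smbclient //truenas/media -U myuser -c 'put bigfile.mkv'
```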