New PCIe adapters turn your x16 slot into a clown car of GPU and SSD connectivity

It should be noted that the 1528D is a PCIe switch, not a bifurcation card, so it does not require your motherboard to support bifurcation to function, which is quite handy. I did not check the specs on the other card.

The 1528D breaks out into four 8-lane ports. I found it at $699 on Amazon, so you'd better really need it at that price. Just be aware that the upstream bandwidth obviously has to be shared across all of the ports, so a four-card configuration gets roughly 1/4 of the PCIe throughput per card when everything is busy at once.
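Back-of-the-envelope math on that sharing, if anyone wants it. Note that the Gen5 x16 upstream link is my assumption (I haven't confirmed it from a spec sheet); the per-lane figures are the standard post-encoding rates:

# Rough per-card bandwidth when four x8 ports share one x16 upstream link.
# The Gen5 upstream link is an assumption, not a confirmed spec.
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}  # GB/s per lane, per direction

def upstream_bw(gen: int, lanes: int = 16) -> float:
    return GBPS_PER_LANE[gen] * lanes

CARDS = 4
for gen in (3, 4, 5):
    total = upstream_bw(gen)
    print(f"Gen{gen} x16 upstream: {total:.1f} GB/s total, "
          f"~{total / CARDS:.1f} GB/s per card with all {CARDS} busy")

The nice thing about a switch is that a single active card still gets its full x8 link; the 1/4 figure only bites when all four ports are hammered at once.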

I might look for a PCIe 3.0 version for a TrueNAS system. One shows up in their product list, but I could not find any product info on their website. Or just bite the bullet and get a 24-port IT-mode HBA.
 
I wonder if there are switches capable of turning one PCIe 5.0 x16 slot into two PCIe 4.0 x16 slots or similar.
 
This is news? I've had this product since last year: a PCIe 5.0 x8 slot with the HighPoint adapter getting 32 PCIe lanes to my PCIe 3.0 and PCIe 4.0 U.2 SSDs.
 
I've thought about a switch like that (PCIe 5.0 x16 in, multiple PCIe 4.0 x16 out) for a long time. It could almost turn an AM5 platform into HEDT. I actually imagine it as PCIe 5.0 x16 -> 3x PCIe 4.0 x16 directly on the motherboard. Then you could have three double-spaced x16 slots. You are unlikely to hit max IO on three cards simultaneously, and not all of the cards need to be GPUs; some might be storage or NIC cards.

Anyway, just daydreaming.
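For what it's worth, the raw numbers on that daydream come out to about 1.5:1 oversubscription. A quick sketch (the board layout is hypothetical, the per-lane rates are standard):

# Oversubscription if one Gen5 x16 uplink fed three Gen4 x16 slots.
# Hypothetical board layout; per-lane rates are per direction.
GBPS_PER_LANE = {4: 1.969, 5: 3.938}  # GB/s per lane

uplink = GBPS_PER_LANE[5] * 16        # ~63.0 GB/s
downlink = GBPS_PER_LANE[4] * 16 * 3  # ~94.5 GB/s across three slots

print(f"uplink {uplink:.1f} GB/s vs downlink {downlink:.1f} GB/s "
      f"-> {downlink / uplink:.2f}:1 oversubscription")

As long as all three cards don't peak at the same moment, nothing notices.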
 
How exactly are you supposed to run multiple GPUs with this thing? I even went to their website to look for a configuration that made sense for that application.

Not that I can remotely afford to buy something like this; I'm more curious than anything else.
 
I have a QNAP QM2-2P-384A, which uses a PCIe switch to go from PCIe 3.0 x8 to two PCIe 3.0 x4 NVMe M.2 slots. The neat thing is that I have it plugged into an old AM3+ board which only has PCIe 2.0, but internally the switch still runs at PCIe 3.0 speeds for the NVMe drives, with each drive seeing PCIe 3.0 x4 (verified with CrystalDiskInfo).

Only the connection between the PCIe switch and the motherboard is limited to PCIe 2.0 speeds, and that bandwidth is distributed based on need at any given moment instead of an even 50/50 split or reduced lanes. So if I'm only accessing one of the drives at a time, that drive runs at its full PCIe 3.0 x4 bandwidth and the switch actively converts it to PCIe 2.0 x8 for the motherboard with no loss in speed.
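The arithmetic backs that up, assuming the standard per-lane rates: PCIe 2.0 runs 5 GT/s with 8b/10b encoding (0.5 GB/s per lane) and PCIe 3.0 runs 8 GT/s with 128b/130b (~0.985 GB/s per lane), so the 2.0 x8 uplink actually has a hair more bandwidth than one 3.0 x4 drive:

# One PCIe 3.0 x4 drive fits inside a PCIe 2.0 x8 uplink with room to spare.
gen2_lane = 0.500  # GB/s per lane: 5 GT/s with 8b/10b encoding
gen3_lane = 0.985  # GB/s per lane: 8 GT/s with 128b/130b encoding

uplink = gen2_lane * 8  # 4.00 GB/s between switch and motherboard
drive = gen3_lane * 4   # ~3.94 GB/s for a single PCIe 3.0 x4 SSD

print(f"uplink {uplink:.2f} GB/s vs one drive {drive:.2f} GB/s")
# Only when both drives push hard simultaneously does the PCIe 2.0
# uplink become the bottleneck.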
 
A switch like that, or just split it for storage. There are lots of possibilities in splitting that PCIe 5.0 x16 across many SSDs or standard disks.
 
Devices like this SSD HBA are why I wish board makers would give us fewer M.2 slots and more PCIe x16 slots for real upgrades such as this.
 
