News Intel Arc B580 GPU with two M.2 slots smiles for the camera — Maxsun GPU offers storage expansion with two PCIe 4.0 SSDs

Putting those M.2 connectors there doesn't necessarily mean you have to mount NVMe storage drives in that very spot.

There are plenty of M.2-to-M.2 and M.2-to-PCIe-slot extension cable variants, allowing you to put peripherals and extensions somewhere more convenient.

Providing those connectors should be the default on any card that won't use all the lanes of the slot it physically occupies: they are simply too precious a resource to waste!

I've just put a Minisforum BD790i into service, a €500 Mini-ITX board built around a Ryzen 9 7945HX APU, which offers an x16 PCIe v5.0 slot and two M.2 connectors.

To my knowledge you can't yet buy anything that takes full advantage of a PCIe v5 x16 slot, though perhaps some prototype CXL adapters exist. At the same time it's very easy to run out of room and ports to add things in that form factor, which is a shame, because its silicon makes it a monster of compute and I/O power.

To alleviate things a little I bought a cheap bifurcation adapter from Amazon which splits the x16 slot into two M.2 mounts underneath a raised slot that's physically x16 but electrically x8, while still fitting any normal low-profile card within the normal total slot height.
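
If you want to verify how the lanes actually came up after setting the bifurcation in the BIOS, here's a quick sketch of the kind of check I'd run (my own snippet, assuming a Linux host with pciutils installed; it isn't from the build above):

```python
#!/usr/bin/env python3
# Print the negotiated PCIe link speed and width for every device,
# to confirm an x8 + x4 + x4 bifurcation actually took effect.
# Assumes Linux with pciutils; run as root so lspci can read the
# link status registers of all devices.
import re
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

device = None
for line in out.splitlines():
    if line and not line[0].isspace():
        device = line.strip()  # e.g. "01:00.0 Non-Volatile memory controller: ..."
    m = re.search(r"LnkSta:\s+Speed\s+([\d.]+\s*GT/s).*Width\s+(x\d+)", line)
    if m and device:
        print(f"{device}\n    negotiated: {m.group(1)}, {m.group(2)}")
```

Each M.2 mount should report as its own x4 device and the raised slot as x8; if something comes up x1, a riser or BIOS setting is off.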

I used the outward-facing M.2 mount to add an ASMedia-based six-port SATA adapter (there's no space for SATA ports on that board otherwise, even though two come with the APU), while the inward-facing one holds a third M.2 NVMe drive.

The x8 PCIe v5.0 slot currently holds an Aquantia AQC107-based NBase-T card (up to 10GBase-T) that runs with either two lanes of PCIe v3 or four lanes of PCIe v2.
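
For anyone wondering why a 10Gbit card gets away with so few lanes, the back-of-the-envelope math looks like this (my own approximate per-lane figures, not a datasheet quote):

```python
# Approximate usable PCIe data rate per lane (Gbit/s, after 8b/10b or
# 128b/130b line coding), versus the ~10 Gbit/s a saturated 10GBase-T
# port can demand.
PER_LANE_GBIT = {
    "PCIe v2": 4.0,    # 5 GT/s, 8b/10b encoding
    "PCIe v3": 7.88,   # 8 GT/s, 128b/130b encoding
}
NEEDED = 10.0  # Gbit/s for one 10GBase-T link at full tilt

for gen, per_lane in PER_LANE_GBIT.items():
    for width in (1, 2, 4):
        total = per_lane * width
        verdict = "enough" if total >= NEEDED else "too slow"
        print(f"{gen} x{width}: {total:5.1f} Gbit/s -> {verdict}")
```

So v3 x2 (~15.8 Gbit/s) or v2 x4 (~16 Gbit/s) both clear the bar with headroom, while a single lane of either doesn't.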

With the SATA ports feeding a couple of 18TB Seagate helium hard drives and the NVMe slots providing cache, this is a monster µ-server in terms of capabilities. It currently draws a maximum of 130 Watts at the wall under all-out power viruses, yet easily idles at 30 Watts, again at the wall plug, with HDDs spinning and 10Gbit networking up and alive.

If only AMD supported ECC RAM on Dragon Range... or installing their drivers on Windows Server editions! (A penny-pinching aspect of AMD I've hated since the first APUs they ever sold; teams green and blue don't do that!)

It's still a giant waste in terms of what bandwidth is available in total, but a much better use of otherwise inaccessible resources.

Adding this type of bifurcation support via M.2 connectors should really be the default; its extra cost is pennies.

But unfortunately the customer support issues could be giant, because everyone would blame the GPU for any stupidity behind it... even if it's essentially entirely a mainboard + M.2 peripheral issue and the GPU only provides circuit board traces.
 
IMO GPU makers should move from 16 lanes to 8 lanes as a standard.

With Gen5 PCIe bandwidth, and Gen6 soon to come, it is a good idea to spare 8 lanes for other cards and not waste the lanes.
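
The math backs this up. With rough per-lane figures (my assumptions, per direction, ignoring protocol overhead beyond line coding), a Gen5 x8 link already matches a Gen4 x16 link:

```python
# Rough usable throughput per lane (Gbit/s): each PCIe generation
# doubles the previous one, so halving the width costs one generation.
PER_LANE_GBIT = {"Gen3": 7.88, "Gen4": 15.75, "Gen5": 31.5, "Gen6": 63.0}

for gen, per_lane in PER_LANE_GBIT.items():
    print(f"{gen}: x8 = {per_lane * 8:6.1f} Gbit/s, x16 = {per_lane * 16:6.1f} Gbit/s")
```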