Continuing to research the bandwidth situation:
Gigabyte Z390 AORUS Master
Here's some info from the AORUS Master manual (as referred to previously), plus the MB layout showing the location of the drives:
Whilst this looks pretty good, I've read reports of the 'P' slot running in x1 mode even if the PCIEX4 slot is empty, which is rather odd (posts by 7Stark):
https://www.reddit.com/r/gigabyte/comments/a1u57t/z390_auros_ultra_m2_question/
M2A looks perfect as it doesn't affect any other connections (why is it that only Gigabyte can do this?), but it's situated right below the GPU...
The 'M' slot is standard fare in that it disables 2x SATA ports.
If the M2P x1 issue were just a fault, then things look pretty decent: a couple of NVMe drives + a full set of 6 SATA ports, with the only 'cost' being PCIEX4 running in x2 mode, which is no big deal for me as it'd be empty anyway. Even if one wished to use M2M to avoid the unfortunately situated 'A' slot (GPU proximity), that's 2 NVMe drives at the cost of only 2 SATA ports (+ the aforementioned, but in my case unused, PCIEX4).
ASUS Maximus XI Extreme
Here's the board diagram showing 2x M.2 slots, lower right (22), and 2 further slots added via the DIMM.2 card (7), upper right:
Unfortunately, ASUS don't include a table showing what disables what, and info in that regard is not easy to find. A reviewer states the following (translated):
"In addition to the mentioned ROG DIMM.2 card with a pair of M.2 Socket 3, the possibilities of organizing the disk subsystem are represented by two interfaces M.2 Socket 3 and six SATA 6 Gb / s ports. There are several limitations. One of the slots M.2 Socket 3 (“M.2_1”) shares the bandwidth with one of the SATA 6 Gb / s ports (“SATA6G_2”) when installing a SATA M.2 drive. In turn, the two SATA 6 Gb / s ports (“SATA6G_5” and “SATA6G_6”) share the bandwidth with the lower PCI Express 3.0 x16 (x4) slot, which by default runs in x2 mode. And the DIMM.2 slot itself is disabled by default. Its activation in BIOS will put the PCI Express x16 slot (“PCIex16_1”) into x8 mode." (
https://ru.gecid.com/prtart.php?id=54289)
Now, that's all well and good but...
"One of the slots M.2 Socket 3 (“M.2_1”) shares the bandwidth with one of the SATA 6 Gb / s ports (“SATA6G_2”) when installing a SATA M.2 drive." - So that's a single SATA drive disabled if installing a SATA M.2 drive in M.2_1, but what if it's a PCIe drive (as it would be)? I wonder if this is the same, functionality-wise, as Gigabyte's M2A slot, i.e. it'd be fine/no conflicts if a using a PCIe M.2 drive. As for the other M.2 slot (M.2_2), the manual states that whilst M.2_1 supports PCIe 3.0 x4 and SATA mode; M.2_2 is PCIe 3.0 x4 only. I'm not sure what would be disabled here, at a guess 2x SATA ports (equivalent to Gigabyte's 'M' slot)?
It seems pretty clear that making use of the DIMM.2 card downgrades the GPU (PCIEX16_1) slot to x8 mode, but I'm unsure whether that's the only thing it negatively impacts or whether any SATA ports are also sacrificed. I'm also unsure about the cost of running at x8 as opposed to x16; a 2% reduction in GPU performance is OK with me, but results seem to vary and I'm not sure whether that'll hold true in future. I'm guessing it's only the PCIe downgrade, which would make it the equivalent of Gigabyte's 'P' connection, though irritatingly hitting the primary GPU slot. A shame it can't be configured to take out PCIEX16_2 and PCIEX16_3 instead, and/or have the DIMM.2 run at x2 (which would still far exceed SSD speeds).
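As a sanity check on the x2-vs-x4 question, here's the rough bandwidth arithmetic. These are my own back-of-envelope numbers (theoretical link maxima, ignoring protocol overhead), not figures from either manual:

```python
# Rough PCIe 3.0 bandwidth arithmetic (theoretical maxima, ignoring
# protocol overhead) -- my own numbers, not from either manual.
GT_PER_S = 8e9        # PCIe 3.0: 8 GT/s per lane
ENCODING = 128 / 130  # 128b/130b line encoding
BITS_PER_BYTE = 8

lane_bytes_per_s = GT_PER_S * ENCODING / BITS_PER_BYTE  # ~985 MB/s per lane

for lanes in (1, 2, 4, 8, 16):
    print(f"x{lanes:<2} ~= {lanes * lane_bytes_per_s / 1e9:.2f} GB/s")
# x2 works out to ~1.97 GB/s
```

So x2 (~1.97 GB/s) comfortably exceeds any SATA SSD (~0.55 GB/s), though it's worth noting a fast x4 NVMe drive (~3.5 GB/s sequential) would be capped by it; "far exceeds SSD speeds" holds for SATA drives more than for top-end NVMe.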
One final thing: in the ASUS BIOS you can select x2 or x4 for the PCIEX16_3 slot; running it at x4 disables SATA ports 5/6 (not a problem for me as I won't be using PCIEX16_3). The CPU PCIe lanes can be configured as follows: [PCIEX16_1 + PCIEX16_2] [DIMM.2_1 + PCIEX16_2] [DIMM.2_1 + DIMM.2_2], though honestly I'm not quite sure what that means as I'm still getting to grips with all this.
In summary (my best guess!):
M.2_1: No conflict if PCIe; disables 1x SATA port if SATA
M.2_2: Disables 2x SATA ports (EDIT: I actually think there's a chance that the M.2_2 slot is clear as I can't find anything that states otherwise, whereas all the other limitations are specifically listed, but that's probably wishful thinking)
DIMM.2_1 & DIMM.2_2: PCIEX16_1 reduced to x8 mode (irrespective of whether one or both drives are installed)
PCIEX16_3: If run in x4 mode disables 2x SATA ports (no conflict at x2)
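To keep my guesses straight, here's that summary encoded as a small lookup table. To be clear, these mappings are my assumptions pieced together from the manual and the review above, not confirmed behaviour:

```python
# Best-guess conflict table for the Maximus XI Extreme -- assumptions
# pieced together from the manual and the gecid.com review, NOT
# confirmed behaviour.
conflicts = {
    "M.2_1 (PCIe drive)":  "none (guess)",
    "M.2_1 (SATA drive)":  "SATA6G_2 disabled",
    "M.2_2":               "2x SATA ports disabled (guess; possibly none)",
    "DIMM.2_1 / DIMM.2_2": "PCIEX16_1 drops to x8",
    "PCIEX16_3 @ x4":      "SATA6G_5 + SATA6G_6 disabled",
    "PCIEX16_3 @ x2":      "none",
}

for slot, cost in conflicts.items():
    print(f"{slot:<22} -> {cost}")
```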
Conclusion
Well, that's where I am so far - sorry for the wall of text!
I'm leaning towards the Extreme board as I prefer the look and have the option of running 4x NVMe drives should I wish in future. That said, the Master allows for 2x NVMe drives + 6x SATA ports without really impacting anything else (PCIEX4 only), which is nice. I prefer the locations of the Extreme's M.2 slots over those on the AORUS board, though Gigabyte's are probably easier to remove and also allow for 3rd-party heatsinks (such as some WD drives have), plus I do worry that the DIMM.2 card might block the (otherwise excellent) airflow to the RAM sticks.
If my assumptions are correct, as regards the Extreme, I could put the OS in the M.2_1 slot and run 6x SATA drives for now, with the option of replacing 2 of them with another (single) NVMe, and/or utilising the DIMM.2 card for a total of 4x NVMe and 4x SATA, or 3x NVMe and 6x SATA, depending on the cost of running the GPU at x8.
Anyone still here...
Thoughts appreciated as always.