You have more money sunk into the storage, so why bother with desktop boards? Something like a Threadripper board would give you all the PCIe lanes you want; you could also find a server board (even used) that will give you more lanes. Even better, why not go to a dedicated storage array (NAS or small SAN) with a 10GbE NIC, and just remove the higher storage requirements from your local desktop?
I am in a very similar situation. This system was actually built more or less from scratch for the purpose I'll expand on below, but that's rare for me. Most of the time it's a continuous replacement of parts or rebuilding a rig from cast-offs for kids & family.
And then it's rare that I have a single device for a single purpose. Quite often single devices serve multiple purposes (e.g. work & gaming), or multiple devices serve a single purpose (cluster servers). But I prefer using standard parts to remain flexible and reuse things elsewhere: flexibility has been my main reason for choosing PCs over any of the other personal computers since 1986.
Ever since both the home-lab and the family required more permanent and higher-performance facilities than an older notebook could provide, some ten years ago, I had to come up with a solution that wouldn't require too many machines yet provide all the facilities needed with as little power and noise as possible, because we can't just run network cables from the 2nd floor to the cellar in a protected building that is more than 170 years old. Remote-management-capable hardware also comes at a stiff premium in price, power and other constraints.
Kit description
I built on a Haswell Xeon E3-1276 v3 base (essentially an i7-4770 with ECC support) using an ASUS P9D WS board that supports PCIe 3.0 8+4+4 trifurcation. It is an all-in-one 24x7 server and desktop that combines near-silent operation and "low" power consumption with enough OOMPH! to run ordinary desktop workloads directly or via terminal services, THD remote gaming via Steam Remote Play, and space for a couple of VMs if need be.
It runs Windows Server 2019, sports 32GB of ECC DDR3 RAM, an Nvidia GTX 1060 6GB, an LSI MegaRAID SAS 9261-8i and an Aquantia AQC107 10Gbit Ethernet NIC. It boots off a SATA SSD, has a hot-swap drive bay for a 3.5" backup HDD, 4x 1TB SATA SSDs as RAID-0 from onboard ports for a games cache and 6x 6TB HDDs as RAID6 off the MegaRAID controller. The GPU isn't seriously slowed down by only having 8 lanes, and the RAID controller and the NIC are well enough served with 4 lanes each.
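To put rough numbers behind that last claim, here is a back-of-the-envelope sketch (Python, with assumed ballpark throughput figures rather than anything measured on this box) of what each slot in the 8+4+4 split delivers versus what the card in it roughly needs:

```python
# Rough lane-budget check for the 8+4+4 split of the CPU's 16 PCIe 3.0 lanes.
# All throughput figures below are ballpark assumptions, not measurements.

GB_PER_LANE = {"2.0": 0.50, "3.0": 0.985}   # usable GB/s per lane after encoding overhead

slots = {
    # card:              (negotiated gen, lanes, rough need in GB/s)
    "GTX 1060 @ x8":     ("3.0", 8, 6.0),    # mid-range GPU; x8 3.0 is plenty for it
    "MegaRAID 9261-8i":  ("2.0", 4, 1.4),    # ~6 HDDs at ~200 MB/s sequential, plus margin
    "AQC107 10GbE NIC":  ("2.0", 4, 1.25),   # 10 Gbit/s line rate
}

for card, (gen, lanes, need) in slots.items():
    have = GB_PER_LANE[gen] * lanes
    print(f"{card:18} PCIe {gen} x{lanes}: {have:5.2f} GB/s available, "
          f"~{need:.2f} GB/s needed -> {'fine' if have >= need else 'short'}")
```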
It has been running 24x7 for almost a decade in a Sharkoon tower with a huge but slow (430 rpm) side fan for the 5k-rpm HDDs, which are all in Sharkoon drive silencers (rubber-band construction). It is completely unnoticeable even at arm's length underneath the table, consuming perhaps 40-50 Watts from the socket at idle. You won't squeeze a Threadripper into the same power and noise envelope, nor most office SANs. The CPU cooler is a big Noctua (940rpm), and the MSI GPU keeps its fans off even during lighter gaming workloads; even at full power it wouldn't distract from working.
Apart from being the home-lab & family file server, it also acts as a multi-user Windows terminal server for all those special apps I don't want to install and maintain on all the other machines (~20 desktops and notebooks in the house). The 10Gbit network connects to the much bigger workstations and a backup server, which don't run 24x7 but only when needed, as those tend to get far hotter and noisier (no air conditioning around here). It also connects to a couple of oVirt clusters, but that's a different story.
On the desk there are three monitors, the principal one being a 43" 4k screen, while the others are THD. A set of KVM switches allows driving the screens either from the same machine or from different ones, using a light stow-away keyboard for "the other".
Ideally I'd have run Linux underneath with Windows in a VM, using PCIe pass-through for the GPU to enable gaming. That would also have enabled things like ZFS for the storage, but while that setup worked, it turned out too complex to maintain; with a "remote hands by kids" operation ("power off, wait, power on" was their initial skill level) it wouldn't have agreed with my frequent business trips. That's why I settled for Windows as the base OS and a RAID controller to handle HDD failures.
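For anyone still tempted by that route, the first thing such a setup hinges on is clean IOMMU grouping on the Linux host. A minimal sketch (Python, nothing specific to my hardware, assuming the kernel was booted with the IOMMU enabled) that lists the groups would look like this:

```python
# List IOMMU groups: the first sanity check before attempting VFIO/PCIe
# pass-through of a GPU to a Windows guest. A GPU can only be handed over
# cleanly if it (and its HDMI audio function) sit in a group that contains
# nothing else the host still needs.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = [d.name for d in (group / "devices").iterdir()]
    print(f"IOMMU group {group.name}: {', '.join(devices)}")
```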
It's been extremely stable: no hardware failures except for HDDs, which were handled by the RAID. The most disruptive events have been the OS upgrade from Windows 2013, as well as Microsoft force-rebooting the machine while it was running VMs, even after I had tried just about every means of telling it not to do automatic reboots (that worked just fine with Windows 2013). BTW, I am using VMware Workstation for the VMs, simply because I've been using their products since 1999 and never felt motivated to switch to Hyper-V. Not sure that the forced patch-day reboots would respect Hyper-V VMs any better...
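One of the documented knobs on that list is the Windows Update policy key. A quick sketch of setting it (Python with winreg, run elevated; shown for illustration, and as said above Server 2019 doesn't seem to respect it reliably) would be:

```python
# Set the documented Windows Update policies that are supposed to hold back
# automatic reboots. Requires an elevated prompt; applies after gpupdate/reboot.
import winreg

KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_WRITE) as key:
    # 1 = never reboot automatically while a user is logged on
    winreg.SetValueEx(key, "NoAutoRebootWithLoggedOnUsers", 0, winreg.REG_DWORD, 1)
    # 2 = notify before downloading updates, so installs (and reboots) wait for me
    winreg.SetValueEx(key, "AUOptions", 0, winreg.REG_DWORD, 2)
```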
How to upgrade?
There is nothing wrong with the current setup, no serious shortcoming. But current hardware offers twice the scalar speed, twice the cores and twice the RAM at similar wattage. And at that point, there is a bit of an itch constantly looking for an excuse to shuffle hardware.
I have no idea if ECC ever saved my bacon, but it's given me peace of mind. Actually, Intel didn't really charge a big premium for E3 Xeons, and even for the RAM it was OK. I had to pay disproportionately more for ECC DIMMs on the 128GB DDR4 workstations, but once you have billions of bits in a computer you use professionally, it just helps to reduce stress. I don't overclock, either, except for a bit of burn-in testing on new hardware.
With AMD, ECC is easy again, except for the APUs, which I was eyeing for a potential replacement: still am, even with Ryzen 7000 looming. At a €50 premium for an ECC-capable APU, that's still an option on the table... except that Ryzen 7000 also has an iGPU, which can help with idle power and PCIe pass-through experiments... once kernels and hypervisors catch up and know how to manage a mix of iGPU and dGPU from AMD and Nvidia. I do run lots of CUDA stuff, not that much on this machine, but an AMD dGPU is not an option.
With Intel, the near-complete lack of W680 mainboards, especially DDR4 ECC variants, means the money simply couldn't be spent: announced boards just aren't available for purchase.
DDR5 with full ECC is new trouble, but I am close to giving in and sticking with DDR5's internal on-die ECC for this build. Then again, that's also why the 5750G APU isn't quite off the table yet...
10Gbit nightmare revisited
Gigabit networking was pretty cool when it became available in the last millennium. But even a single piece of spinning rust can saturate that bandwidth. The wait for affordable 10Gbit switch hardware was far too long, but with near-noiseless desktop NBase-T switches at >€50/port the biggest hurdle seemed gone. And an Aquantia AQC107-based NIC isn't too pricey at €90 when it saves you from having big Steam games on a separate SSD in every PC instead of on the network. I also keep pushing big VMs around, and backups run much faster over 10Gbit/s Ethernet, too.
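The arithmetic is simple enough (drive and overhead figures assumed, not benchmarked):

```python
# Why 1 GbE stopped being enough: ballpark, assumed figures.
hdd_seq = 220                  # MB/s, sequential rate of a typical modern 3.5" HDD
gbe     = 1_000 / 8 * 0.94     # ~117 MB/s usable on 1 GbE after protocol overhead
ten_gbe = 10_000 / 8 * 0.94    # ~1175 MB/s usable on 10 GbE

print(f"1 GbE : ~{gbe:.0f} MB/s usable -> a single HDD at {hdd_seq} MB/s already saturates it")
print(f"10 GbE: ~{ten_gbe:.0f} MB/s usable -> headroom for the RAID6 set and the SSD cache")
```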
But where in pre-NVMe times there was hardly any other meaningful use for the 4 extra lanes of PCIe that the usual southbridge provided to the single non-GPU slot, today sacrificing 4 lanes of PCIe 4.0 or 5.0 for a 10Gbit NIC is horrible, when a single 4.0 lane should suffice. And there is the AQC113 chip, which will run on one 4.0 lane, two 3.0 lanes or four 2.0 lanes, whether on an add-in card or wired to a motherboard.
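To see why a single 4.0 lane is enough (and why the chipset's 3.0 x1 link discussed further down is not), here is the approximate per-lane throughput after encoding overhead:

```python
# Usable throughput per PCIe lane, roughly, after encoding overhead
# (8b/10b for Gen 2, 128b/130b for Gen 3/4).
LANE_GBPS = {"2.0": 4.0, "3.0": 7.9, "4.0": 15.8}   # Gbit/s per lane, approximate

links = [("4.0", 1), ("3.0", 2), ("2.0", 4),  # link widths the AQC113 can negotiate
         ("3.0", 1)]                          # the X670 onboard-Ethernet case further down
for gen, lanes in links:
    total = LANE_GBPS[gen] * lanes
    verdict = "fine for 10GbE" if total >= 10 else "caps a 10GbE link"
    print(f"PCIe {gen} x{lanes}: ~{total:4.1f} Gbit/s -> {verdict}")
```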
Except that nobody is selling matching add-in cards, and motherboard vendors only add the ($20?) chip to $500 mainboards. And for some reason even the single-lane PCIe slots that would be just perfect for an AQC113 card are gone now. I can't plug my AQC107 PCIe 2.0 x4 cards into a PCIe 4.0/5.0 x16 slot without hurting, especially if there is still a RAID controller that needs similar bandwidth and likes to use 8 lanes. Of course the RAID controller is actually more than a decade old, but it's still perfect at managing HDDs with fault tolerance (and I have spare controllers). Perhaps I'll have to switch to a RAID-10 with 4 helium HDDs for the file-server part of the system, but managing drive faults won't be nearly as comfortable, I'm afraid. And the 6TB drives had just been replaced for free by Seagate last year, after the previous 8x 4TB 2.5" drives were discovered to be shingled media, which don't play nice with RAID6.
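A quick comparison shows what I'd be giving up (the 12TB helium drive size is just an assumption for illustration):

```python
# Usable capacity and fault tolerance: current RAID6 set vs a hypothetical
# 4-drive RAID10 (12TB helium drives assumed purely for illustration).
def raid6(drives, size_tb):
    return (drives - 2) * size_tb, "any two drives failing"

def raid10(drives, size_tb):
    return (drives // 2) * size_tb, "only one drive per mirror pair"

layouts = {
    "RAID6,  6x 6TB":  raid6(6, 6),
    "RAID10, 4x 12TB": raid10(4, 12),
}
for name, (capacity, tolerates) in layouts.items():
    print(f"{name}: {capacity} TB usable, survives {tolerates}")
```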
Insane Expansion
The current breed of mainboards looks quite "unusual": while they are supposed to have more lanes than ever, the options for managing that bandwidth to suit your needs are getting fewer. Most lanes seem tied to M.2 slots on the mainboard, which are about the least flexible medium for anything, including storage expansion.
The block diagram on Angstronomics puts the x1 link intended for onboard Ethernet at PCIe 4.0 on the B650, which would be good enough for 10Gbit Ethernet, but the X670 variant has it pegged as PCIe 3.0 x1, which would be a disastrous limitation for 10Gbit Ethernet. There is around 80Gbit/s of collective USB bandwidth on the ASMedia chip, but you can't network with that.
That leaves bifurcation, which is a bit painful when you put a PCIe 2.0 controller into 8 lanes of PCIe 5.0 goodness. Unfortunately it seems that 8+4+4 or even 4+4+4+4 splits aren't supported, which I've learned to appreciate almost as much as true PCIe switches.
We know that the Zen 4 CCDs will speak PCIe 5.0, Infinity Fabric and CXL, either directly or via the IOD, and that CXL 1 server processors will be built from those CCDs. What's missing is switching on the motherboard end. For Zen 4 AMD evidently decided to go cheap (and early Ryzen 7000 customers might never notice). The ASMedia chips are completely PCIe 4.0 generic, ignorant of IF or CXL, quite unlike the cut-down IOD chips in the X570.
GPU-less variants of that IOD, perhaps with a 1:4 fan-out into PCIe, SATA, USB or even NBase-T Ethernet ports and a PCIe 5.0 uplink, could be fitted to any of the 7 groups of x4 PCIe lanes that I see leaving the Ryzen SoC. And they could even speak CXL!
They could even sit on expansion boards themselves instead of on the mainboard, so you could choose between a 4x M.2 board with easy access for swapping media, other more USB- or SATA-heavy variants, or a CXL uplink.
Currently mainboards are running out of slot space because of M.2 sprawl, while running into cooling issues for high-performance drives. A dual-width-slot M.2 expander with such a switch could run with a quiet fan and still provide much easier storage upgrades than what gets squeezed onto mainboards today.