A chaotic motherboard design would be one that supports bifurcation to the point that you could connect 16 SSDs as PCIe 4.0 x1 devices; I wonder where things would break if you tried that.
As it is, mobos with bifurcation + the four-drive riser cards + PCIe 4.0 SSDs having become the less expensive, no-longer-cutting-edge option already makes some pretty wild and economical-for-what-they-are setups possible.
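Quick back-of-the-envelope on what that buys you (a sketch; the per-lane figure is the raw Gen4 link rate after 128b/130b encoding, before protocol overhead, so real throughput lands a bit lower):

```python
# Rough numbers for the "16 drives at Gen4 x1" idea.
# PCIe 4.0: 16 GT/s per lane, 128b/130b encoding.
GEN4_LANE_GBPS = 16 * 128 / 130 / 8  # ~1.97 GB/s per lane, per direction

drives = 16
per_drive = GEN4_LANE_GBPS * 1    # each drive trains at a single lane
aggregate = per_drive * drives

print(f"per drive:  {per_drive:.2f} GB/s")  # ~1.97 GB/s
print(f"aggregate:  {aggregate:.2f} GB/s")  # ~31.5 GB/s across all 16
```

~2 GB/s per drive is no bottleneck at all for the older NVMe drives this kind of setup would recycle.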
I believe rigs with nearly all lanes exposed as x1 slots were a somewhat popular niche during the GPU crypto-mining era. There was very little "chaos" about them; they were just an option that conforms perfectly to the PCIe standard.
Now, would that break things? There's a good chance of that, because vendors often test only the most common use cases, not everything the standard permits.
So I'd hazard that a system with 16 M.2 SSDs connected at x1 each might actually fail to boot, even though it really should work.
But once you got Linux running off SATA or USB, there's no reason this shouldn't just work. And with an x4 PCIe slot often left free for networking on standard PC boards, a 25Gbit Ethernet NIC could let a PCIe 3.0 PC otherwise destined for the scrap yard recycle a bunch of older NVMe drives; with a more modern base, 40Gbit might make sense, too.
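If the board does bring them all up, Linux makes it easy to check how each link actually trained. Something like this (a sketch reading the standard PCI sysfs attributes) would confirm every drive negotiated x1:

```python
# Sanity-check how each NVMe controller's PCIe link trained.
# Uses the standard sysfs attributes Linux exposes for PCI devices;
# a drive behind an x1 slot should report a link width of 1.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    dev = ctrl / "device"  # the controller's PCI device
    try:
        speed = (dev / "current_link_speed").read_text().strip()
        width = (dev / "current_link_width").read_text().strip()
    except OSError:
        continue  # e.g. a controller with no PCIe link attributes
    print(f"{ctrl.name}: {speed}, x{width}")
```

The network-side budget roughly works out, too: a PCIe 3.0 x4 slot is good for ~3.9 GB/s, comfortably above the ~3.1 GB/s a 25Gbit NIC can push.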
For building leading-edge new stuff this isn't very attractive, I'd say. But for getting some use out of stuff that has fallen behind in speed and capacity on its own, there may be potential, if time, effort, and electricity happen to be cheap around you.