So, PCIe 5.0 ended up being worse than a pointless waste of money - it's downright harmful!
I wouldn't say that PCIe v5 is pointless.
Yes, using 16 lanes of PCIe v5 on an RTX 5080 for gaming may well be pointless, but that's not how I'd use it, if I got one of them.
Instead I'd do what I already do on my Minisforum BD790i, which features a Ryzen 9 7945HX with 24 PCIe v5 lanes: 16 in one x16 slot and two x4 via M.2 slots.
Since I currently use that machine as a µ-server, it's iGPU-only at the moment. And for that use, what it lacks is fast networking and SATA ports for big chunks of spinning rust.
For initial testing I put one of my Aquantia AQC107 10Gbit NICs into the x16 slot, because that's the only one it has.
With that, the x16 PCIe v5 slot is blocked by a card that runs on PCIe v2 x4 or PCIe v3 x2. Its younger brother, the AQC113, could actually run 10Gbit Ethernet on a single PCIe v4 lane, so that's 15 or even 15.5 lanes of PCIe v5 bandwidth wasted. Rather sad, when you'd really want them elsewhere.
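To put rough numbers on that, here's a back-of-the-envelope sketch; the per-lane figures are my own approximations, not anything from a datasheet:

```python
# Back-of-the-envelope sketch (approximate numbers, not from any datasheet):
# usable per-lane throughput in GB/s per PCIe generation, after encoding overhead.
PCIE_LANE_GBPS = {2: 0.5, 3: 0.985, 4: 1.97, 5: 3.94}

TEN_GBE_GBPS = 10 / 8  # 10 Gbit/s line rate is roughly 1.25 GB/s

def lanes_needed(gen: int) -> int:
    """Smallest power-of-two lane count of a given PCIe generation that carries 10GbE."""
    lanes = 1
    while lanes * PCIE_LANE_GBPS[gen] < TEN_GBE_GBPS:
        lanes *= 2
    return lanes

for gen in (2, 3, 4, 5):
    print(f"PCIe v{gen}: x{lanes_needed(gen)} is enough for 10GbE")
# -> PCIe v2: x4, PCIe v3: x2, PCIe v4: x1, PCIe v5: x1
```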
But I bought it with an eye on bifurcation, knowing I'd need SATA ports and might add further storage, faster networking, or eventually even a dGPU, should it move to a different use case: I try not to get boxed in on my boxes!
So I split the lanes: I got myself a bifurcation board that splits the slot into 8+4+4 lanes, with a low-profile x8 slot on top and two extra M.2 slots below it, one on each side, which fits the Mini-ITX form factor of the mainboard just fine.
It gets me six SATA ports off 2 PCIe v3 lanes via an M.2 form-factor ASMedia controller on the outward-facing M.2 (where I need to run the cables), another M.2 slot for NVMe storage facing inward (cooled by the CPU fan), while the x8 slot on top is currently filled by the AQC107 NIC, wasting only 4 PCIe v5 lanes.
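For the curious, here's the current carve-up as a little table-in-code; the lane counts are the ones described above, the labels are just mine:

```python
# Rough summary of how the bifurcated x16 slot is carved up right now
# (widths as described above; "idle" = PCIe v5 lanes the device doesn't use).
slot_split = {
    "x8 low-profile (top)": {"device": "AQC107 10GbE NIC",      "lanes_used": 4, "lanes_wired": 8},
    "M.2 outward":          {"device": "ASMedia SATA (6 ports)", "lanes_used": 2, "lanes_wired": 4},
    "M.2 inward":           {"device": "NVMe SSD",               "lanes_used": 4, "lanes_wired": 4},
}

for name, d in slot_split.items():
    idle = d["lanes_wired"] - d["lanes_used"]
    print(f"{name}: {d['device']}, {d['lanes_used']}/{d['lanes_wired']} lanes used ({idle} idle)")
```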
I could still use 4+4+4+4 bifurcation to make that better, or put something x8 there...
Like an RTX 5080, which at PCIe v5 x8 should in theory perform just like at PCIe v4 x16.
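The raw arithmetic at least checks out; a trivial sanity check, ignoring the small 128b/130b encoding overhead:

```python
# Per-direction link bandwidth in GB/s from raw transfer rate (GT/s) and lane count,
# ignoring the small 128b/130b encoding overhead.
def link_gb_per_s(gt_per_s: float, lanes: int) -> float:
    return gt_per_s / 8 * lanes  # one bit per transfer per lane, 8 bits per byte

print(link_gb_per_s(32, 8))   # PCIe v5 x8  -> 32.0 GB/s
print(link_gb_per_s(16, 16))  # PCIe v4 x16 -> 32.0 GB/s
```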
So PCIe v5 makes tons of sense in that case, because it opens up that flexibility.
But only if it actually worked the way you'd expect.
As PCIe revisions move forward, what makes ever less sense is the fixed allocation of PCIe lanes to slots. Not only are mainboard traces much harder to do at these speeds than cables, fixed allocation is also getting ever more wasteful and inflexible.
(Unfortunately, PCIe v5 capable cables and connectors may not be cheap either.)
Instead, those GPUs should really just get M.2 connectors themselves, so they aren't connected via slots but use a variable number of M.2 cables to the mainboard. After all, that so-called PCIe connector on the GPU already connects to its central board via a ribbon cable.
Of course PCIe switches would be nicer still, but nobody would want to pay for those, not at PCIe v5, because they make EPYCs look cheap.