"So you wanna double bandwidth huh?
No problem, we simply reduce trace lengths & ...
No, no, we cant be bothered discussing that ...
Ah, well thats different."
I don't know at exactly what point, but eventually the laws of physics must come into play, and hence the delay, I suspect.
In AMD's Fabric, what we have seen to date is a miniaturised sub-bus linking key high-bandwidth (HB) resources independently of PCIe. The traces are a fraction of the length the PCIe bus must work with on a mobo. The MCM/interposer has the footprint of a deck of cards, yet can stretch to HBM2 RAM, NVMe, multiple GPUs and multiple CPUs on one ~Epyc-sized MCM. System memory is the only important HB resource missing.
Fabric's most advanced manifestation is the Radeon Pro SSG GPU card (Vega), where Fabric hosts the GPU, the HBM2 GPU cache/RAM and 2 TB of RAIDed NVMe SSDs, all interacting independently of PCIe, and much faster. The imminent Zen/Vega APUs will take the further fundamental step of combining a CPU & GPU on a Fabric bus.
I think a mobo with PCIe 4 will similarly have to adapt to much shorter traces for the faster links.
If the system bus does become balkanised, as some here suggest (NVLink/Fabric etc.), then Intel/Nvidia would be playing a dangerous game with closed-off environments, as only AMD can bring both of the vital processors, CPU and GPU, to its Fabric.
Arguably, AMD is offering the benefits of PCIe 4 now. At many price points, AMD offers 2x+ the PCIe 3 lanes, i.e. 2x+ the link bandwidth.
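To put rough numbers on that (2017 platform figures from memory, so treat them as approximate): Threadripper exposes 64 PCIe 3 lanes from the CPU, against 44 on the top Core i9 and 28 on the similarly priced HEDT Core i7 parts. At the same per-lane rate, aggregate link bandwidth scales directly with lane count, so that works out to roughly 1.5x to 2x+ the total bandwidth depending on which SKUs you compare.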
NB, Fabric is PCIe 3 compatible. PCIe 3 devices can connect (e.g. the SSDs on the above Vega SSG card are stock PCIe 3 Panasonic SSDs), so AMD is fine with it, just not beholden to it.
As others say, Intel just wants to give you the same bandwidth, but using fewer of its meagre lane allocations. NVMe SSDs still top out around 4 GB/s, but on PCIe 4 they would need only two lanes, not four.
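The arithmetic behind that (approximate per-direction figures, after 128b/130b encoding overhead):
PCIe 3.0: 8 GT/s per lane ≈ 0.985 GB/s, so an x4 link ≈ 3.9 GB/s.
PCIe 4.0: 16 GT/s per lane ≈ 1.97 GB/s, so an x2 link ≈ 3.9 GB/s.
Same drive bandwidth, half the lanes, which is exactly the appeal if your platform is lane-starved.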