How about the premiums you'll have to pay for new motherboards that have this proprietary NVLink, when the PCIe 4.0 spec is already out and, of course, backwards compatible? I can't stand companies that block other companies by strong-arming developers. Intel did it, and Nvidia is guilty of it as well. I guess it's a Santa Clara way of thinking, since both companies' HQs are located there.
NVLink is for HPC workloads only. It's extremely doubtful we'll see NVLink on anything less than a workstation- or server-class motherboard.
NVLink exists to support configurations with many GPUs.
A two-socket Xeon E5-2600 v3 or v4 server has 80 PCIe lanes in total, 40 per socket.
Something like the Dell PowerEdge C4130 has four GPUs plus the ability to add two more PCIe cards for high-speed connectivity, on top of a RAID card and two integrated 1GbE ports.
The RAID card takes an x16 slot, the four GPUs are another 64 lanes, and the two additional PCIe cards are x16 each. That's a total of 112 PCIe lanes against only 80 available; the sketch below tallies it up.
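To make the shortfall concrete, here's a minimal back-of-the-envelope tally in Python, assuming the slot widths listed above (the device labels are illustrative, not Dell's exact configuration):

```python
# Rough PCIe lane budget for a C4130-style dual-socket box.
LANES_PER_SOCKET = 40            # Xeon E5-2600 v3/v4
SOCKETS = 2

# Slot widths as described above; labels are illustrative.
devices = {
    "RAID card (x16)": 16,
    "4x GPUs (x16 each)": 4 * 16,
    "2x add-in cards (x16 each)": 2 * 16,
}

available = LANES_PER_SOCKET * SOCKETS       # 80
demanded = sum(devices.values())             # 112

print(f"CPU lanes available: {available}")
print(f"Lanes demanded:      {demanded}")
print(f"Shortfall:           {demanded - available}")  # 32 lanes short -> hence the PCIe switch
```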
Dell had to use a PCIe 3.0 switch to make it work.
NVLink is a separate interconnect with none of the latency issues of a switch, and it's a vastly superior option for HPC.
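For a rough sense of scale on the bandwidth side of that argument, here's a quick sketch comparing published peak per-direction rates. The NVLink figures assume first-generation links as found on the Pascal P100 (20 GB/s per direction per link, four links per GPU); the generation is my assumption, since nothing above names one:

```python
# Peak per-direction bandwidth, back-of-the-envelope.
PCIE3_GTS = 8.0                  # PCIe 3.0: 8 GT/s per lane
ENCODING = 128 / 130             # 128b/130b encoding overhead
pcie3_x16 = PCIE3_GTS * 16 * ENCODING / 8     # GB/s -> ~15.75

NVLINK1_PER_LINK = 20.0          # GB/s per direction per link (assumed: Pascal-era NVLink)
P100_LINKS = 4
nvlink_total = NVLINK1_PER_LINK * P100_LINKS  # 80 GB/s

print(f"PCIe 3.0 x16:           {pcie3_x16:5.2f} GB/s per direction")
print(f"NVLink, 4 links (P100): {nvlink_total:5.0f} GB/s per direction")
```

On those numbers, a single NVLink link already roughly matches a full PCIe 3.0 x16 slot, and it doesn't eat into the CPU's 80-lane budget at all.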