I can't look at the PCIe standards without wishing the M.2 standard had no provisions for SATA and supported as many as 6-8 lanes, especially for high-performance laptops that don't have the luxury of full-size PCIe slots.
But at least PCIe 5.0 will buy us a lot of breathing room.
[quotemsg=21050869,0,60597]Intel is supposed to use 4.0, but if Nvidia and AMD skip 4.0 and move to 5.0 with their products... well, 4.0 is only useful for PCIe-based storage...[/quotemsg]
I don't understand why so many people think 4.0 is going to be skipped over... It's here, it's a significant improvement, and it's forward- and backward-compatible. There's literally no reason not to adopt it.
What I think will happen is that 4.0 will get adopted pretty quickly. Initially, 5.0 will appear mostly in servers, taking a while to trickle down to the mainstream. Sort of like 10 Gigabit Ethernet (though hopefully not taking nearly as long).
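To put some rough numbers behind "significant improvement": each PCIe generation doubles the per-lane transfer rate, and Gen 3/4/5 all use 128b/130b line encoding. A quick back-of-envelope sketch (ignoring packet/protocol overhead, so real-world throughput is a bit lower):

```python
# Approximate usable per-lane bandwidth for PCIe 3.0/4.0/5.0.
# Transfer rates are in GT/s; 128b/130b encoding costs ~1.5%.
GENS = {"3.0": 8, "4.0": 16, "5.0": 32}

def lane_gbps(gts):
    """Usable GB/s per lane: GT/s * (128/130 encoding efficiency) / 8 bits per byte."""
    return gts * (128 / 130) / 8

for gen, gts in GENS.items():
    print(f"PCIe {gen}: ~{lane_gbps(gts):.2f} GB/s per lane, "
          f"~{16 * lane_gbps(gts):.1f} GB/s for x16")
```

That works out to roughly 1 / 2 / 4 GB/s per lane for Gen 3 / 4 / 5, which is why a Gen 4 x16 slot already pushes ~31.5 GB/s each direction.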
Anyway, AMD is said to use 4.0 in their 7 nm EPYC, which makes sense given that the Infinity Fabric is PCIe-based. So, adopting 4.0 should significantly improve inter-processor communication, as well as communication with peripherals.
[quotemsg=21053419,0,2071048][quotemsg=21050633,0,2381310]But 5.0 will be good for us, Intel being all stingy with the PCIe lanes in Coffee Lake-S[/quotemsg]
5.0 won't look so good when Intel cuts the number of lanes down further, saying fewer are needed now that each lane is faster.[/quotemsg]
True. But they won't cut too deeply, since that would hurt their own dGPU business as well.
The other thing is that they can't afford to let AMD offer too much more, so that will also keep them from cutting their mainstream desktop platform down to just x4 lanes. I think we'll probably get x8 + DMI5 (x2 or x4). For storage, their line is going to be that we should use Optane DIMMs or go through the chipset. But you know they'll be pushing Optane.
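The lane-cutting argument is easy to check with arithmetic: total link bandwidth is just lanes times per-lane rate, so a hypothetical x8 Gen 5 slot still doubles today's x16 Gen 3. A rough sketch (the x8 configuration here is my speculation from above, not anything announced):

```python
# Back-of-envelope total link bandwidth: lanes * GT/s * encoding efficiency / 8.
# PCIe 3.0/4.0/5.0 all use 128b/130b encoding; protocol overhead is ignored.
def link_gbps(lanes, gts):
    return lanes * gts * (128 / 130) / 8

print(f"x8  PCIe 5.0: ~{link_gbps(8, 32):.1f} GB/s")   # hypothetical cut-down slot
print(f"x16 PCIe 3.0: ~{link_gbps(16, 8):.1f} GB/s")   # today's full-size GPU slot
```

So halving the lane count while doubling the rate twice still nets 2x the bandwidth, which is exactly the "fewer lanes are fine now" pitch.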
[quotemsg=21052495,0,328798]Infinity Fabric is PCIe-based. So, adopting 4.0 should significantly improve inter-processor communication, as well as communication with peripherals.[/quotemsg]
I'm pretty sure they only use PCIe links for IFIS (inter-socket) communication, and even then I can't remember if they actually use the PCIe protocol. I don't think PCIe matters at all for the purposes of IF... CAKE runs at memclock, and IFIS/IFOP are synced to CAKE.