And modern GPUs on PCIe 5 still don't come close to using the bandwidth it offers.
Until now there have been no consumer chips for the 5.0 bus. Where did you get this from? Can you link to such a dGPU chip in 2024?
If you want to understand why PCIe 5.0, and especially 6.0+, is pointless in x86 consumer chips (but not in server ones with HBM3+ memory and a 1024-bit memory bus), read the discussion, and especially my comments, in this news topic.
In short:
A simple example: Zen4 HX. It is already more than 2 years old (yet the 7945HX is still the fastest mobile x86 processor in the world). It has 28 PCIe 5.0 lanes (24 of them free). The video card is most often connected via 4.0 x8, at best x16, i.e. the equivalent of only 8 lanes of 5.0. M.2 5.0 x4 slots are pointless in laptops: controller power consumption is too high, and the NAND chips overheat in the poor airflow around M.2 slots - a priori flawed architectural planning of laptop cases and motherboards, plus the impossibility of fitting effective large heatsinks (or the noise goes up).
In fact, Zen4 HX uses less than half of its 24 free lanes in 5.0 mode. Moreover, 28 lanes of 5.0 add up to more than 100GB/s if all are loaded simultaneously (the limiting case), while the real memory bandwidth of this series is 60-65GB/s - 2 (two) times less than a full load on 28 lanes of 5.0 requires. Now imagine what happens with 28 lanes of 6.0, or worse, 7.0 (even setting aside the process-node factor and controller heating).
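The arithmetic above can be sketched in a few lines. The per-lane figures below are approximate one-direction usable rates after encoding overhead (illustrative values, not exact spec numbers), and the 62.5 GB/s memory figure is just the midpoint of the 60-65GB/s range quoted above:

```python
# Approximate usable one-direction bandwidth per PCIe lane, GB/s
# (illustrative post-encoding figures, not exact spec values).
PER_LANE_GBPS = {"4.0": 1.97, "5.0": 3.94, "6.0": 7.88, "7.0": 15.75}

def aggregate_bw(gen: str, lanes: int) -> float:
    """Aggregate one-direction bandwidth of `lanes` lanes of a given PCIe gen."""
    return PER_LANE_GBPS[gen] * lanes

zen4_hx_pcie = aggregate_bw("5.0", 28)   # ~110 GB/s across all 28 lanes
zen4_hx_mem = 62.5                       # midpoint of the quoted 60-65 GB/s range

print(f"28 x PCIe 5.0 lanes: {zen4_hx_pcie:.0f} GB/s aggregate")
print(f"vs memory bus:       {zen4_hx_mem:.0f} GB/s "
      f"({zen4_hx_pcie / zen4_hx_mem:.1f}x deficit)")
print(f"28 x PCIe 6.0 lanes: {aggregate_bw('6.0', 28):.0f} GB/s aggregate")
```

With 6.0 the same 28 lanes would demand over 200GB/s from the memory subsystem, which is the core of the argument.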
To service such transfers, you need a memory bus with several times higher real bandwidth.
And what do we see? Zen5 Halo is the first consumer x86 chip to get a 256-bit memory controller - which in reality should have been in Zen4 HX 2 years ago. That would have allowed all 28 lanes of 5.0 to be serviced easily, with a reserve left for the OS/software - they also need to work...
To service 7.0, you need RAM with a bandwidth of about 1TB/s, and only HBM3 controllers with a 1024-bit bus provide that.
Forget about 6.0 devices in PCs/laptops until RAM can easily pump 300GB/s+ in the mass market, at least in the "gaming" segment - that is the level of a 512+ bit memory bus at current memory-module frequencies.
5.0 is only now coming into its own with the 256-bit Zen5 Halo controller. The 128-bit ArrowLake HX is simply not suitable for this - its memory throughput is much lower than the AMD chip's.
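The bus-width claims map directly to peak DRAM bandwidth: width in bits divided by 8, times the transfer rate. A minimal sketch (the 8000 MT/s rate is an assumed illustrative LPDDR5X-class figure, not a claim about any specific shipping SKU):

```python
def mem_bw_gbps(bus_bits: int, mt_per_s: int) -> float:
    """Peak DRAM bandwidth in GB/s: (bus width / 8 bytes) * transfer rate."""
    return bus_bits / 8 * mt_per_s / 1000

RATE = 8000  # assumed MT/s, LPDDR5X-class, for illustration only
print(f"128-bit: {mem_bw_gbps(128, RATE):.0f} GB/s peak")  # ArrowLake-HX-class width
print(f"256-bit: {mem_bw_gbps(256, RATE):.0f} GB/s peak")  # Zen5 Halo width
print(f"512-bit: {mem_bw_gbps(512, RATE):.0f} GB/s peak")  # the 300GB/s+ regime
```

This shows why doubling the bus from 128 to 256 bits matters more than module frequency bumps: at the same transfer rate, peak bandwidth doubles outright.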
--
5.0 SSDs are needed by no one in the consumer segment, even the gaming segment - they run too hot and are still too slow, especially in random 4K I/O (the gap to RAM on an Apple M4 Pro in random 4K transfers is more than 100 times: ~120MB/s for the SSD vs ~12000MB/s for RAM).
Until the combined power consumption of SSD controllers and NAND chips drops below 5-6W - which would allow them to go into laptops without heatsinks, or with very small ones - they do not matter.
In desktops, they are only interesting to those doing non-linear editing of RAW-quality video and similar workloads. In games, 5.0 SSDs are practically useless.
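The 100x figure above is a bandwidth ratio at a 4K block size, which translates directly into I/O operation rates. A quick sanity check (decimal units, rounded, using only the numbers quoted above):

```python
def iops_from_bw(mb_per_s: float, block_kb: float = 4.0) -> float:
    """Approximate random-I/O rate implied by a throughput figure (decimal units)."""
    return mb_per_s * 1000 / block_kb

ssd = iops_from_bw(120)     # ~30,000 ops/s behind the ~120 MB/s SSD figure
ram = iops_from_bw(12000)   # ~3,000,000 ops/s behind the ~12000 MB/s RAM figure
print(f"SSD: {ssd:,.0f} ops/s, RAM: {ram:,.0f} ops/s, ratio: {ram / ssd:.0f}x")
```

Sequential reads hide this gap, which is why headline GB/s numbers on 5.0 SSDs say little about game loading or scrubbing a timeline.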
For video cards, a wider bus matters only if the host RAM is roughly (empirically) about 3 times faster than that channel. That is, if you want to upload data to dGPU VRAM at 128GB/s, i.e. PCIe 6.0 x16 (the 5090 has 1700+ GB/s of VRAM bandwidth), you will need at least a 350GB/s system memory bus for general tasks, so that everything runs smoothly and efficiently in parallel with the dGPU. For 7.0 (256GB/s at x16), you will need x86 memory-controller bandwidth closer to 1TB/s for everything to run smoothly in parallel.
This will not happen in the next 10 years in x86.
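The empirical "3x" rule from the previous paragraph can be written down directly. The x16 link figures are the approximate one-direction numbers used above, and the 3.0 factor is the comment's own rule of thumb, not a spec value:

```python
# Empirical rule from the comment: to feed a dGPU link at B GB/s without
# starving the host, system RAM needs roughly 3*B GB/s of real bandwidth.
PCIE_X16_GBPS = {"5.0": 64, "6.0": 128, "7.0": 256}  # approx one-direction x16

def ram_needed(gen: str, factor: float = 3.0) -> float:
    """Host RAM bandwidth (GB/s) implied by the 3x heuristic for a full x16 link."""
    return PCIE_X16_GBPS[gen] * factor

for gen, link in PCIE_X16_GBPS.items():
    print(f"PCIe {gen} x16 ({link} GB/s) -> need ~{ram_needed(gen):.0f} GB/s host RAM")
```

By this heuristic, 6.0 x16 already demands ~384GB/s of host RAM and 7.0 x16 ~768GB/s, which is the basis for the "closer to 1TB/s" and "not in the next 10 years" conclusions.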