Or a lot in the RX 6500's case.
With 256GB/s of PCIe bandwidth, UMA/tiny-VRAM graphics could climb many rungs up the performance ladder.
Even the 6500 XT only gains around 15% on average from the extra PCIe bandwidth, IIRC. I did see a 28% gain in Borderlands 3, but other games were in the low single digits. And really, when you're in a situation where PCIe bandwidth matters that much, you're usually already running settings the card can't handle anyway.
While a Gen7 PCIe interface will do roughly 256 GB/s on x16, let's also be realistic: by the time we have consumer hardware with Gen7 support, let alone budget cards with it, I suspect we'll be looking at 16GB for midrange and higher GPUs, and the cost of 8GB of memory will be relatively small. We'll probably be on GDDR7 or maybe even GDDR8 by then, which means a 128-bit interface might be running at something obscene by today's standards, like 35-40 Gbps per pin -- that works out to 560-640 GB/s, more than double the PCIe speed yet again. Also, the cost of a motherboard and graphics card that can both support Gen7 speeds won't be trivial.
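For anyone who wants to check the arithmetic, here's a quick back-of-envelope sketch. The figures are the assumptions from above (a raw ~256 GB/s for PCIe Gen7 x16, and a hypothetical 128-bit GDDR interface at 35-40 Gbps per pin), not spec-sheet numbers for any real card:

```python
# Back-of-envelope bandwidth math using the assumed figures above.

def mem_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Memory bandwidth in GB/s: (bus width / 8 bits per byte) * per-pin data rate."""
    return bus_width_bits / 8 * pin_rate_gbps

PCIE_GEN7_X16 = 256  # GB/s, the raw figure used in the post

for rate in (35, 40):
    bw = mem_bandwidth_gbs(128, rate)
    print(f"128-bit @ {rate} Gbps -> {bw:.0f} GB/s "
          f"({bw / PCIE_GEN7_X16:.2f}x PCIe Gen7 x16)")
```

That prints 560 and 640 GB/s, i.e. roughly 2.2x-2.5x the PCIe link, which is where the "more than double" claim comes from.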
So you're suggesting people will buy a high-end motherboard, only to pair it with a crappy graphics card that might actually make use of the added PCIe bandwidth? More likely, the future "RX 9500 XT" with a 64-bit memory interface will also get an x4 Gen6 PCIe interface, just to save costs.