Well, why does Strix Halo need a powerful iGPU with a 256-bit memory bus if in reality it will never be used, since there is a dGPU anyway?
But in the lower-end lines - especially the mass-market Ryzen 5 / Core i5 - the fastest possible iGPU, with a 256-bit or even 512-bit memory bus, is exactly what is needed most (see the rough bandwidth sketch below). After all, what I constantly see in buyers' discussions of such laptop lines is the same question: how fast are they in games? What's the point of gaming on such a laptop if it was never designed for that?! In other words, people want a universal all-in-one solution without a dGPU, which only reduces overall reliability, raises overall consumption and weight, and cuts battery life several times over.

This is roughly what happened with sound. Separate "discrete" sound cards were once popular; now almost nobody remembers them. Whoever needs a high-quality analog output (for headphones, say) buys a high-quality - and therefore expensive - DAC plus headphone amplifier that receives the signal from the laptop or PC digitally. But the sound processing itself has long been done by the CPU - that is the main point - and nobody needs a separate powerful audio chip anymore. Roughly the same will eventually happen with the dGPU - its fate would be sealed if the pace of performance-per-watt growth were preserved (but that is not the case, as everyone has already understood): it would simply be excluded from PCs/laptops, and all the work would be taken over by the processor's iGPU cores, which, for these tasks, would end up with surplus performance and a "reserve" for everything else. Once upon a time (I remember this from the 90s) even decoding MP3 was a fairly demanding task for the CPU, as was audio processing in games. Now the load on the cores is vanishingly small - practically invisible in the task manager...
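Why the bus width matters so much for an iGPU, in rough numbers: theoretical peak memory bandwidth scales directly with bus width and transfer rate, and the iGPU has to share that bandwidth with the CPU cores. Below is a minimal back-of-the-envelope sketch; the memory configurations listed are illustrative assumptions (roughly a 128-bit thin-and-light, a 256-bit Strix-Halo-class chip, and a hypothetical 512-bit design), not vendor specifications.

```python
# Back-of-the-envelope peak memory bandwidth: bytes per transfer * transfers per second.
# The configurations below are illustrative assumptions, not vendor specifications.

def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: int) -> float:
    """Theoretical peak bandwidth in GB/s for a given bus width and transfer rate."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * transfer_rate_mts / 1000  # bytes * MT/s -> MB/s -> GB/s

configs = {
    "128-bit LPDDR5X-7500 (typical thin laptop)": (128, 7500),
    "256-bit LPDDR5X-8000 (Strix Halo class)": (256, 8000),
    "512-bit LPDDR5X-8000 (hypothetical)": (512, 8000),
}

for name, (width, rate) in configs.items():
    print(f"{name}: ~{peak_bandwidth_gbs(width, rate):.0f} GB/s")
```

Run as-is, this prints roughly 120, 256 and 512 GB/s for the three configurations - exactly the kind of gap that wide-bus iGPU designs are trying to close.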
But again we return to the silicon dead end: in recent years, performance growth has come mainly not from improvements in the manufacturing process but from a race in power consumption. I call this energy "cheating", and it was started many years ago by Intel. NVIDIA later joined in - the 5090 looks especially shameful next to the 4090 if you normalize their benchmark results by power draw, i.e. calculate performance per watt of consumption (see the normalization sketch below). And then, after Intel, AMD was forced to join this cheating as well - after all, Intel had started to overtake it in performance by these dubious methods (starting with Alder Lake, where Intel sharply raised performance through exorbitant consumption, and that still remains the "norm" in the x86 camp today, unlike at Apple). The growth in consumption is diligently hushed up, especially for laptops (and desktop CPUs/GPUs), in paid-for reviews, in order to hide from the most ignorant part of buyers (and they are the majority) an unfortunate fact: the performance-per-watt growth curve has slowed sharply, because transitions to finer process nodes (which keep getting more expensive and complex) have become harder or outright impossible, and that is what forced consumption upward.

This has pushed laptop consumption to levels that are insane for their case volume and the weight/size of their heatsinks - 200 W, or even 250+. And all this against the supposed trend in developed countries toward a "green policy".

Moreover, "gaming" laptops are quite a mass-market series; companies often even buy them as a replacement for "mobile workstations", where the manufacturer's markup is much higher while the measures taken to improve quality (in terms of long-term ownership) are insufficient compared to the price increase. Besides, "gaming" series with clearly higher performance can be made much quieter at a given performance level thanks to far more powerful cooling systems; they are also much easier to upgrade in RAM and storage, and have a more convenient location, number and quality of ports compared to the regular "business" series.
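To make the "performance per watt" normalization concrete, here is a minimal sketch: divide a benchmark score by the average power draw measured during the run. The scores and wattages below are placeholder numbers purely for illustration, not real 4090/5090 test results.

```python
# Normalize a benchmark score by average power draw to get performance per watt.
# All numbers here are placeholders for illustration, not real test results.

def perf_per_watt(score: float, avg_power_w: float) -> float:
    """Benchmark points delivered per watt of average power draw."""
    return score / avg_power_w

results = {
    "previous-gen card": {"score": 10_000, "avg_power_w": 400},
    "new-gen card": {"score": 12_500, "avg_power_w": 550},
}

for name, r in results.items():
    print(f"{name}: {perf_per_watt(r['score'], r['avg_power_w']):.1f} points/W")

# Here the new card is ~25% faster in absolute terms but ~9% *less* efficient,
# because its power draw grew faster than its score.
```

The point of the metric is exactly this: a card can post a visibly higher raw score and still come out worse once you divide by the watts it burned to get there.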
But let's return to the problem of growing chip consumption (which effectively cancels out real progress in electronics):
Even what is happening with the power demands of the latest "AI" data centers clearly shows that humanity has hit yet another technological dead end, in this case again tied to the "silicon dead end". Figuratively speaking, we are desperately increasing the number of "locomotives" and the cost of maintaining them, trying to get to a place that can only really be reached by "rocket" (into space, to the Moon) - a rocket that no one has yet managed to build or invent...