A big part of why things are stuck where they are is...
backwards compatibility.
yes, people still want to (and can) run software from the early 2000s.
another reason motherboards have all kinds of built-in features like NVMe slots (often 2 or 3), a network interface, or onboard audio is convenience.
most users don't want to buy a separate network card and/or sound card when the onboard one is good enough for them.
we have bigger manufacturers out there that cut PCIe slots to make the motherboard smaller so it fits a smaller case, which is good in a way, but in doing so they often also stop you from using a discrete GPU and/or upgrading said GPU, since there is no space for it.
they have done the same with PSUs, reasoning "this will never draw more than 250W, so why put in a bigger one?" and optimizing that side too, making the unit smaller and/or oddly shaped to fit the case better.
connectors from the PSU to the motherboard and other peripherals are tricky because of current power needs: a CPU can draw anywhere from 30 to 250W, and the PSU must be able to supply that. As was mentioned, more power at the same voltage means more current, which needs a thicker cable; the quick sketch below shows the numbers.
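To make that concrete, here is a back-of-the-envelope sketch (the wattages are just the ones from above, and the per-pin current rating is a rough assumed ballpark, not from any one connector spec):

```python
import math

# Quick sketch: why higher power draw means thicker cables / more pins.
# I = P / V, so at a fixed 12 V the current scales directly with power.

AMPS_PER_PIN = 8.0  # rough ballpark per connector pin, an assumption

def current_amps(power_w: float, voltage_v: float = 12.0) -> float:
    return power_w / voltage_v

for watts in (30, 75, 150, 250):
    amps = current_amps(watts)
    pins = math.ceil(amps / AMPS_PER_PIN)
    print(f"{watts:>3} W at 12 V -> {amps:5.1f} A, needs ~{pins} 12 V wire(s)")
```

which is basically why a 250W part gets a multi-pin connector while a 30W one can live off a single wire pair.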
you could circumvent that with the mentioned ATX12VO setup, where you supply just 12V to the motherboard and say "you figure it out". That works, but for components with a bigger power draw it becomes a problem of its own.
on a motherboard, or any other PCB, the traces are usually quite thin (to save space vertically, since many PCBs have multiple layers), so to carry more current you have to make said traces wider. That takes up a lot more physical real estate on the PCB, leaving less room for components; the sketch below shows how quickly the required width grows.
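A rough sketch using the commonly cited IPC-2221 estimate for trace current capacity; treat the output as order-of-magnitude only, since real boards use copper pours, multiple layers, and different temperature limits:

```python
# Rough sketch using the commonly cited IPC-2221 estimate:
# I = k * dT^0.44 * A^0.725, with A in mil^2 and k = 0.048 for
# outer layers. Order-of-magnitude only.

def trace_width_mm(current_a: float, temp_rise_c: float = 10.0,
                   copper_oz: float = 1.0, external: bool = True) -> float:
    k = 0.048 if external else 0.024
    area_mil2 = (current_a / (k * temp_rise_c ** 0.44)) ** (1 / 0.725)
    thickness_mil = 1.37 * copper_oz   # 1 oz/ft^2 copper is ~1.37 mil thick
    return (area_mil2 / thickness_mil) * 0.0254   # mil -> mm

for amps in (1, 5, 10, 20):
    print(f"{amps:>2} A -> ~{trace_width_mm(amps):.1f} mm wide trace")
```

at 20A you are looking at centimeters of copper width per rail, which simply doesn't fit between the slots and sockets.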
So why do that at all? why not just add a power plug next to the memory voltage regulators and supply them straight from there? less room taken from the PCB, the PCB also heats up less, and... it's all win-win.
as already mentioned, connectors are bigger than necessary, but at the same time they are usually keyed so you can't plug the wrong connector into the wrong place, to prevent mishaps. (modular PSUs supposedly use similarly keyed connectors across brands even when the pinouts differ, so take care there)
All this could still be done in a smaller form factor, but... it would also hugely impact backwards/sideways compatibility.
PSUs would have to be tied to the motherboard form factor to have the right connectors (or they'd carry all of them and be the mess of cables you didn't want), and so you'd have one for ITX, one for mATX, one for ATX, and so on.
speed has gone up a LOT in 20 years. in 2000, it could take an old laptop an hour (yes, 60 minutes) to fully boot Windows. Yes, you could use it before that, but... who would want to when background loading was taking up 100% of the CPU?
a new machine could boot up and be usable in two or three minutes.
now? Windows is bigger, better, and ever more versatile, taking more and more into account, and as such it needs WAY more processing power and checking, so... while the CPU is faster, it feels "not that much faster".
also, comparing a TV's power button (really just sleep/display-off) to booting a computer is wrong; it's like comparing race cars where one is already at almost max speed at the start line.
also, TVs, as mentioned, are more specialized things: you power them on, and it's broadcasts, some other video-in option, or built-in app usage. that is about it; the TV has nothing else to do.
on PCs you can do more, and the OS has to assume the user might do something other than browse the web with the default browser, look at files in the file explorer, or play Solitaire or Minesweeper.
CPU speed has risen, benchmarks show that; the problem is that software has yet to use more than the average number of threads available (4 by now, I think), if even that.
so if the app you use runs on only one core/thread, then what matters is GHz and IPC; an extra 15 cores won't help much there, except to keep other things from taking a 10 to 15% slice of the sweet CPU usage pie. Backwards compatibility kind of hurts here, since modern CPUs also need to handle old software that doesn't know about more than one core. Amdahl's law puts a hard ceiling on this, as the sketch below shows.
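A minimal sketch of Amdahl's law (the 50% figure is just an example, not a measurement of any real program):

```python
# Minimal sketch of Amdahl's law: if only a fraction p of a program
# runs in parallel, extra cores quickly stop helping.

def speedup(p: float, cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / cores)

for cores in (1, 4, 16):
    print(f"{cores:>2} cores, 50% parallel -> {speedup(0.5, cores):.2f}x")
# even 16 cores stays under 2x if half the work is single-threaded
```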
the average thread count is pretty much what developers aim for, so that as many people as possible can play/use their software. bigger market and all that.
that sets the minimum requirement level (usually similar to the popular gaming consoles, since that market is HUGE for games).
As for why they have not gone much past 4GHz? it's physics: higher frequencies tend to misbehave at small scales, cause interference and so on; the old single-core Pentium 4s ran into this wall. Going faster also needs more voltage to stay stable, so power climbs much faster than clock speed, and that drew the line on how far you could go (rough numbers in the sketch below).
(overclocking with liquid nitrogen is another matter; it's not a feasible long-term plan)
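To put rough numbers on that: dynamic power goes roughly as P = C·V²·f, and higher clocks need higher voltage to stay stable. The voltage/frequency pairs and the capacitance value below are made up for illustration:

```python
# Rough sketch of the frequency wall: dynamic power ~ C * V^2 * f.
# All values here are illustrative, not measurements of any real CPU.

def dynamic_power_w(cap_farads: float, volts: float, freq_ghz: float) -> float:
    return cap_farads * volts ** 2 * (freq_ghz * 1e9)

C = 3e-8  # lumped switched capacitance, arbitrary illustrative value
for freq, volts in ((3.0, 1.0), (4.0, 1.2), (5.0, 1.4)):
    print(f"{freq:.1f} GHz at {volts:.1f} V -> ~{dynamic_power_w(C, volts, freq):.0f} W")
# clock goes up ~67%, power more than triples
```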
And back to the GPU side: in my opinion, things have stagnated for the last two or three years. The main reason is that the huge majority of users are still fine with 1080p@60Hz, and as such they can do everything they want with a five-year-old mid-range GPU (1050 Ti or 1650) and even play the most recent games just fine at slightly reduced detail levels.
now... if you want more than that, yes, RTX spearheaded the big speed jump, but other uses for GPUs (mining) also meant their availability was not as good as people could want. Since the average consumer doesn't have a GPU above a certain level, developers don't aim higher, which means most people won't need a faster GPU, which slows speed gains, since makers start branching out to other new things a GPU could do, like ray tracing and/or crypto mining.
here, the only sub-200 EUR GPU actually in stock is pretty much the ancient GT 730.
mobile phones have gone forward a LOT in 20 years, no doubt. Unlike with PCs, the instruction sets on these were created/thought up around that time, not decades earlier, and as such they are more efficient in that regard.
if you compare the basic function of "I want to call xxxx", they still work pretty much the same as a 90s phone that was not a smartphone.
if you compare the other, smart uses, there is a difference, which also shows in their CPU (and GPU) speed benchmarks. but again, by the time these platforms were created, the whole idea of "hey, more cores at a lower frequency use less power, let's use that to make the battery last longer" was already known, and phone OSes and apps were programmed from the get-go to utilize more than one or two cores as power needs come and go; the sketch below puts rough numbers on that idea.
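Same P = C·V²·f idea as above, plus the rough assumption that voltage scales with frequency, so per-core power goes as f³. All numbers are illustrative:

```python
# Why phones favor more cores at lower clocks: with V ~ f,
# per-core dynamic power goes roughly as f^3. Illustrative only.

def relative_power(cores: int, freq: float) -> float:
    # power relative to one core at freq = 1.0, assuming V ~ f
    return cores * freq ** 3

one_fast = relative_power(1, 2.0)   # one core at double the clock
two_slow = relative_power(2, 1.0)   # two cores, same total throughput
print(f"one fast core: {one_fast:.0f}x power, two slow cores: {two_slow:.0f}x")
```

same ideal throughput for roughly a quarter of the power, hence the big.LITTLE-style designs.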
To fix the problems you describe, one would need to throw away any and all backwards compatibility and design new things from scratch. you could use different cabling, get things done faster, and all that, if you just said "nope, nothing from pre-2023 works on this, and nothing from other manufacturers fits in this either" (Apple vs Android vs Windows Phone).
but... by now, Apple and Android are stuck enough in their own ways that they can't really do drastic innovations on the hardware side; their benefit is that their starting setup already knew multi-core/threading existed and took it into the plans from the start.
in addition to being pretty much limited to smartphones.
yes, there are some laptops running Android, but... yeah, why haven't those gotten as popular?
I think I've rambled on and on too much.