No CPU/Silicon/Transistor should be given away for free.
It costs money to R&D / manufacture everything.
In an ideal market, that would probably be true. But whatever the reason, Intel has given away transistors for free.
And so much of it seems to have been for anti-competitive reasons that they pretty much permanently have to prove they aren't artificially culling chips just to fill every niche where a competitor might have a product. That came up in a couple of comments on AnandTech some time ago.
It was the bigger-iGPU Skylake models that first puzzled me: an Iris Plus iGPU with 48 EUs took up about half the die area of the then dual-core mobile chips. And on top of that, they required a 64 or 128 MB eDRAM chip on the die carrier to provide the extra bandwidth those extra EUs needed.
Yet Intel sold these vastly bigger and more complex SoCs at nearly the same list price as the normal chips... which made them a dramatically better deal in terms of transistors per buck!
The only problem was that you either had to be Apple to buy them, or you had to land a clear-out deal on the leftovers.
The only other way to get them (or similar gen-8 SoCs) was to buy a NUC: Intel would put these special chips into NUCs even before the chips reached end-of-life.
If you compare die area per dollar, Intel has always charged much less for GPU real estate than for CPU real estate. IMHO the main reason was to push out ATI and Nvidia back when they still sold chipsets, often with integrated graphics: Intel wanted all that money for itself, and the only way to get there was to charge for the CPU and give the iGPU and northbridge away for free.
That's why I enjoyed AMD's Ryzen revenge so much: AMD spent the iGPU real estate on doubling the cores instead, kicking Intel right where Intel had hurt them as ATI.
Now Intel can fit about four E-cores into the real estate of a single P-core. Defect statistics dictate that the likelihood of any given E-core being hit by a defect should be much lower than for a P-core. Yet binning culls E-cores in blocks of 8, 4 or none, while P-cores rarely seem affected by more than a speed reduction.
That's pure market segmentation with no technical justification I can believe, and exactly what Intel is supposed to be prohibited from doing: artificially culling chips to flood every market segment. The deactivated E-cores on those i7s (4 instead of 8) and lesser i5s (0 out of 8) have very little chance of actually being defective (or of pushing the chip beyond its TDP limits) -- see the sketch below.
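Just to put numbers on that intuition, here's a back-of-the-envelope sketch using the standard Poisson yield model; the defect density and core areas are made-up illustrative values, not Intel data:

```python
import math

# Standard Poisson defect model: P(block is defect-free) = exp(-A * D),
# with A = block area and D = defect density.
# All values below are made-up for illustration, not Intel data.
D = 0.001                      # defects per mm^2 (assumed)
p_core_area = 7.0              # mm^2 per P-core (assumed)
e_core_area = p_core_area / 4  # the "4 E-cores per P-core" area claim

for name, area, count in [("P-core", p_core_area, 8),
                          ("E-core", e_core_area, 8)]:
    p_dead = 1 - math.exp(-area * D)
    print(f"{name}: P(single core defective) = {p_dead:.3%}, "
          f"expected dead per die of {count} = {count * p_dead:.4f}")
```

With a quarter of the area, a single E-core is roughly four times less likely to catch a defect than a P-core, so dies with four (let alone all eight) dead E-cores sitting next to eight pristine P-cores should be vanishingly rare.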
And if Intel charged per die area, I would trade the deactivated iGPU on the "F" parts for 16 or so E-cores instead.
But they aren't charging according to the principles you quoted, so there's no chance I'd get a chip like that for what it costs them to manufacture.
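For what it's worth, the area math is at least the right order of magnitude. A quick sketch with ballpark figures in the range that published Alder Lake die-shot analyses suggest (treat all three numbers as assumptions, not official Intel data):

```python
# Rough die-area budget: how many E-cores fit in the iGPU footprint?
# All figures are ballpark assumptions, not official Intel numbers.
p_core_mm2 = 7.0     # Golden Cove P-core incl. its L2 (assumed)
e_cluster_mm2 = 8.8  # 4-E-core Gracemont cluster incl. shared L2 (assumed)
igpu_mm2 = 28.0      # 32-EU desktop iGPU block (assumed)

e_core_mm2 = e_cluster_mm2 / 4
print(f"E-cores per P-core footprint: {p_core_mm2 / e_core_mm2:.1f}")
print(f"E-cores per iGPU footprint:   {igpu_mm2 / e_core_mm2:.1f}")
# -> roughly 3 and 12-13 with these numbers, so "about 4 per P-core"
#    and "16 or so for the iGPU" are the right ballpark.
```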
It's how they segment their technology that is the problem.
Hybrid setups make a lot of sense for mobile, where you're battery and thermally constrained.
On desktop, you're not battery constrained, only thermally constrained.
I would prefer a "pure P-core" or "pure E-core" CPU so I could build two different PCs,
each one specializing in different tasks.
Well, that's basically how I might use Alder Lake. But I don't run things on bare-metal boxes any more; I use VMs or containers. For those I could partition my cores between less latency-sensitive and more interactive workloads, effectively creating the two types of machines you mention as VMs or cgroups, with quite a bit more flexibility and reconfigurability in case workloads change.
With numactl you can already very easily restrict your hybrid system to only the P-cores or only the E-cores, or control the mix, as sketched below. Cool for experimentation, but I'm afraid the practical advantages would require cloud scale or severe energy constraints to pay off the management effort.
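A minimal sketch of that partitioning idea, pinning processes from Python via the same affinity mechanism numactl uses underneath. The CPU ID ranges and the two command names are assumptions (check `lscpu --extended` for your box's actual P-core/E-core numbering):

```python
import os
import subprocess

# Assumed Alder Lake topology (verify with `lscpu --extended` first!):
# logical CPUs 0-15 = 8 hyperthreaded P-cores, 16-23 = 8 E-cores.
P_CPUS = set(range(0, 16))
E_CPUS = set(range(16, 24))

def run_on(cpus, argv):
    """Launch a command pinned to the given logical CPUs.

    Roughly what `numactl --physcpubind=...` or `taskset -c ...` do,
    just via the sched_setaffinity syscall directly.
    """
    def pin():
        os.sched_setaffinity(0, cpus)  # 0 = the calling (child) process
    return subprocess.Popen(argv, preexec_fn=pin)

# Hypothetical workloads: the "pure P-core" and "pure E-core" machines.
run_on(P_CPUS, ["./latency_sensitive_service"])
run_on(E_CPUS, ["./background_batch_job"])
```

The same split maps directly onto cgroup cpusets or VM vCPU pinning if the workloads live in containers or VMs instead.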