I'd also point out that most multithreaded workloads don't work by spawning N threads to do the same work, then waiting on a barrier before exchanging results and continuing. That's basically the main pattern where a performance disparity between the cores becomes the kind of liability you're describing, and it's one I've rarely seen in practice. For most other cases, you should see at least some benefit from some of the threads running on faster cores.
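To make the pattern concrete, here's a minimal sketch of the fork/barrier/exchange style being described, using Python's standard library. The worker function and data split are illustrative, not from the post; the point is that `barrier.wait()` stalls every thread until the slowest one arrives, so one slow core gates the whole phase.

```python
import threading

NUM_THREADS = 4
results = [0] * NUM_THREADS
barrier = threading.Barrier(NUM_THREADS)

def worker(i, chunk):
    # Each thread does its share of the work...
    results[i] = sum(chunk)
    # ...then stalls here until ALL threads finish their share.
    # On heterogeneous cores, threads on fast cores idle at this
    # barrier waiting for the thread stuck on the slowest core.
    barrier.wait()
    # ...exchange results and continue to the next phase...

data = list(range(100))
threads = [
    threading.Thread(target=worker, args=(i, data[i * 25:(i + 1) * 25]))
    for i in range(NUM_THREADS)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(results))  # 4950: all partial sums combined after the barrier
```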
If your multithreaded application spawns threads that do their work in their own corner and only infrequently synchronize to exchange data (for example, zipping a large file), then thread speed disparity is not a problem, granted.
But when all those threads synchronize frequently, like every frame in a game, you will run into difficulties. The principal one is that it's hard to determine how much work to give each thread when execution speed may vary from 50% to 100% depending on which core a thread lands on.
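One common mitigation for that sizing problem is to stop splitting the work N ways statically and instead cut it into many small chunks that threads pull from a shared pool, so faster cores naturally take more chunks. This is a hedged sketch, not anything from the thread; `process_chunk` and the chunk size are made up for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for the real per-chunk work (e.g. part of a frame's update).
    return sum(x * x for x in chunk)

def run_dynamic(data, workers=4, chunk_size=64):
    # Many small chunks instead of one big slice per thread: a thread on a
    # fast core that finishes early just grabs the next chunk rather than
    # idling at a barrier waiting for a thread pinned to a slow core.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

print(run_dynamic(list(range(1000))))  # 332833500
```

The trade-off is per-chunk scheduling overhead, which is why this gets harder when the synchronization interval is as short as a game frame.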
In my experience, the OS trying to automagically decide things without all the relevant data always leads to subpar performance and crappy behavior.
The fact is that this E-core business (and C-cores from AMD) is specialized hardware, and as such it can be useful, but in my mind it also has drawbacks that some people just don't want to acknowledge.
Just like Apple putting memory on-package, which is great for performance but not without drawbacks if you suddenly find yourself without enough memory.
Or SMR drives and QLC flash: all specialized hardware with advantages and disadvantages.
For me, general > specialized, even if it means losing some performance in some cases (who compresses files all day long?). I'll take the option to upgrade my RAM, thank you.
And laptop sales being some 60% higher than desktop sales plays a major role in this "optimization" coming to desktop.