I agree with all of that, but the "dark ages" of C code were born from the hardware of the time.
Well, C was a big step forward at the time. Computers of that era didn't have enough memory or fast enough CPUs to compile modern languages like Rust. One reason C remained so popular is that its runtime doesn't do anything without you knowing about it: there's no garbage collector, no hidden function calls, no hidden heap allocations, and so on. That makes it a good fit when you want tight control over what's happening, or when you have very little headroom.
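To make that concrete, here's a minimal sketch (the function name is my own, purely for illustration) of the explicitness C demands: every allocation, copy, and release appears in the source, so nothing happens behind your back.

    #include <stdlib.h>
    #include <string.h>

    /* Illustrative only: every allocation is visible in the source.
       There is no garbage collector and no hidden copy; if the caller
       never calls free(), the memory is simply never released. */
    char *duplicate_name(const char *name)
    {
        size_t len = strlen(name) + 1;   /* explicit size calculation */
        char *copy = malloc(len);        /* explicit heap allocation  */
        if (copy != NULL)
            memcpy(copy, name, len);     /* explicit copy, no magic   */
        return copy;                     /* caller is responsible for free() */
    }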
The factor you don't consider in what you wrote is that, with the extremely powerful hardware we have today, nobody bothers to choose the optimal way to code from a performance perspective,
It depends, of course. These days programmers have more luxury not to care much about performance, but it's still quite possible to stumble into a pitfall or do something boneheaded. And some niches never had that luxury: game programmers spend a lot of time on code optimization, and the AI frameworks are obviously optimized to the gills. There are plenty of fields where people still put considerable time and effort into it.
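A classic example of the kind of pitfall I mean (my own toy illustration, not from any particular codebase): calling strlen() in a loop condition quietly turns a linear pass over a string into a quadratic one.

    #include <string.h>

    /* Pitfall: strlen() walks the whole string on every iteration,
       so this loop is O(n^2) instead of O(n). */
    void to_upper_slow(char *s)
    {
        for (size_t i = 0; i < strlen(s); i++)   /* strlen() re-run each pass */
            if (s[i] >= 'a' && s[i] <= 'z')
                s[i] -= 'a' - 'A';
    }

    /* Same result in linear time: compute the length once. */
    void to_upper_fast(char *s)
    {
        size_t n = strlen(s);
        for (size_t i = 0; i < n; i++)
            if (s[i] >= 'a' && s[i] <= 'z')
                s[i] -= 'a' - 'A';
    }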
but developers always choose the easiest (and often quickest) way to solve a problem. The result may be ten times slower, but who cares? Running in a browser constrained by network latency and bandwidth, or on a machine doing 10,000 MIPS, the difference may be hardly noticeable.
I've used Electron apps, too. And sat there scratching my head at why Adobe Acrobat (a PDF reader) somehow always seems a little sluggish no matter how fast my PC is, not to mention the MS Office apps.
I think a lot of that is due to successive rounds of architects who looked at security, localization, portability, GUI-skinning, and other requirements and decided the easiest way to address the requirement du jour was by adding yet another layer. By the time you reach the poor application programmer, there's not always much they can do about it.
Now, I don't mean to suggest that modern software stacks couldn't be optimized. I'm sure there's a lot of room for improvement, if the incentive were there, without many compromises on the requirements. It's just that a combination of factors (deadline pressure, laziness, ignorance, powerlessness) conspired to create the situation we're in, not merely that your average programmer doesn't know much about hardware.
Speaking of which, have you ever heard the phrase "knowing just enough to be dangerous"? Having a bit of hardware knowledge can lead one down the path of premature optimization, which can make a big mess without yielding much (if any) improvement.
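A trivial illustration of what I mean (my own toy example): hand-replacing a multiplication with a shift. Any modern compiler already does this for you, so the "clever" version costs readability and buys nothing.

    /* Premature-optimization example: a modern compiler already emits
       the fastest available instruction for the plain version, so the
       hand-tuned one gains nothing and is easier to get wrong. */
    unsigned scale_plain(unsigned x)  { return x * 8; }    /* readable                   */
    unsigned scale_clever(unsigned x) { return x << 3; }   /* typically identical at -O2 */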
When it comes to code optimization, always measure first! That way, you can make sure you're solving a real problem, and then quantify the effect of your changes to be certain they were worthwhile (and positive!).
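For instance, even a bare-bones timing harness (a sketch only; a real profiler such as perf is usually the better tool) lets you put numbers on a change before and after:

    #include <stdio.h>
    #include <time.h>

    /* Minimal wall-clock timing sketch using POSIX clock_gettime().
       Measure the code as-is first, then re-measure after your change. */
    static double seconds_now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        double t0 = seconds_now();

        /* ... the code you suspect is slow goes here ... */

        double t1 = seconds_now();
        printf("elapsed: %.6f s\n", t1 - t0);
        return 0;
    }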
Beyond that, there are plenty of guidelines and tips for writing efficient code. When taking the more efficient option is easy and doesn't increase complexity, it should be a developer's default choice.
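A typical example of that kind of free efficiency (again, just an illustrative sketch): when walking a 2-D array in C, looping in row-major order is no harder to write than column-major order, but it touches memory sequentially and is much friendlier to the cache.

    #define ROWS 1024
    #define COLS 1024

    /* C stores arrays row-major, so letting the inner loop walk along a
       row accesses memory sequentially. Both loop orders are equally easy
       to write; this one is the sensible default. */
    long sum_row_major(int a[ROWS][COLS])
    {
        long sum = 0;
        for (int i = 0; i < ROWS; i++)        /* rows in the outer loop    */
            for (int j = 0; j < COLS; j++)    /* columns in the inner loop */
                sum += a[i][j];
        return sum;
    }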