True, but a good scheduler would also be able to shove a 4+3+2+1 in right after the 1,2,3,4,5.
On five 1GHz cores that packs out to 5+5+5+5+5 units of work, finishing in 5 seconds; on one 5GHz core, (1+2+3+4+5)/5 + (4+3+2+1)/5 = 3 + 2 = 5 seconds. Same 5 seconds, same amount of work. The difference being that the 1GHz cores would require less power as they wouldn't need to maintain such high speeds, which means less heat. That's essentially how the PlayStation platform works: using higher-core-count CPUs to spread the load wider, in parallel, instead of concentrating the workload in fewer cores at faster speeds like the PC did.
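Here's a quick toy model of that (made-up numbers, one work unit per second per 1GHz of clock, and a naive greedy packer standing in for a real scheduler):

JavaScript:
// Toy model, not a real scheduler: 1 work unit per second per 1GHz of clock.
const tasks = [1, 2, 3, 4, 5, 4, 3, 2, 1]; // 25 units of work total

// One 5GHz core runs everything back to back at 5 units/sec.
const fastCoreSeconds = tasks.reduce((a, b) => a + b, 0) / 5; // 25 / 5 = 5

// Five 1GHz cores: greedily pack each task onto the least-loaded core.
const coreLoads = [0, 0, 0, 0, 0];
for (const t of [...tasks].sort((a, b) => b - a)) {
    coreLoads[coreLoads.indexOf(Math.min(...coreLoads))] += t;
}
const slowCoresSeconds = Math.max(...coreLoads); // packs out to 5,5,5,5,5 -> 5

console.log(fastCoreSeconds, slowCoresSeconds); // 5 5: same wall time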
This is on the assumption that an ideal situation like this will happen more often than not. But the reality is, the ideal situation is just that: ideal.
Also, power is an instantaneous measurement, which isn't really that useful on its own. Energy consumption (power used over time) is what needs to be looked at. For example, let's say I ran a task for one hour at 75W. If I dial the processor down to 45W, the performance loss is such that the task needed 72 minutes to complete. From an energy consumption standpoint, the 45W scenario wins (75Wh vs 54Wh). But if we include the 12 minutes of downtime in the 75W scenario so both run over the same 72-minute window, the 75W scenario averages out to about 62.5W (assuming near-zero idle draw). Sure, the 45W scenario still wins overall, but if your goal is energy efficiency, it really depends on what you're after. For sustained loads, yes, the lower power spec would be better. But for periodic bursty loads, a higher power spec is better, because the work gets done sooner and the chip goes to sleep for longer.
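The back-of-the-envelope math, if you want to check it (same made-up numbers as above):

JavaScript:
// Same made-up numbers as above; energy = power x time.
const fastWh = 75 * (60 / 60); // 75W for 60 minutes = 75Wh
const slowWh = 45 * (72 / 60); // 45W for 72 minutes = 54Wh

// Normalize both to the same 72-minute window, assuming the 75W chip
// draws roughly nothing during its 12 minutes of sleep.
const fastAvgW = fastWh / (72 / 60); // 62.5W average
const slowAvgW = slowWh / (72 / 60); // 45W average

console.log(fastWh, slowWh, fastAvgW, slowAvgW);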
Games like CS:GO were written back when one- and two-core Pentiums pushing 3.4GHz were popular; written for the PS instead, it would have used all eight of its 1.8GHz AMD Jaguar cores.
It would have had to, out of necessity, to extract all of the performance from what was basically a netbook CPU core. But even if CS:GO had been made with the PS4/XB1 in mind and used all those cores, the faster cores of Intel's Sandy Bridge would have more than made up for the lack of cores.
Which is why I said Intel/AMD are stuck in a rut they can't get out of: it would cost them far too much to flip to the PS way of thinking, and it would make most current apps and software obsolete. Who would buy an Intel CPU that didn't run any games because they weren't yet written to run that way, or that needed an emulator to transpose the older game code into something the CPU could use?
With regards to what most people use computers for (web browsing, watching videos, typing up documents, etc.), most of these tasks are I/O bound (and one of them doesn't even run on the CPU anymore), not compute bound. Adding more cores does nothing for I/O bound scenarios.
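A quick sketch of that point (timers standing in for disk/network waits, and the delays are made up): a single thread can have any number of I/O waits in flight at once, so extra cores don't buy you anything here.

JavaScript:
// Timers stand in for disk/network waits; the delays are made up.
const fakeIO = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function main() {
    const start = Date.now();
    // Ten "requests" in flight at once, all on one thread:
    await Promise.all(Array.from({ length: 10 }, () => fakeIO(200)));
    // ~200ms total, not 2000ms. The CPU sat idle the whole time,
    // so a second core would have changed nothing.
    console.log(`${Date.now() - start}ms`);
}
main();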
It's why the FX bombed: software was written for Intel, highly serial, and didn't make use of multiple cores (four at most). But it's also why, years later, Intel finally got off its duff and started making CPUs with more than four cores/eight threads: software had finally become complex enough to be written to take advantage of more than eight threads.
The FX bombed because of a bad design.
Also, continuously repeating "software is serial" only tells me that you don't work in software. But if you want to try and convince me that you do know something, here's a code snippet:
JavaScript:
// Keyboard handler: CTRL + Left/Right switches the active chat tab.
function changeTab(e) {
    // CTRL + Left Arrow: move one tab to the left.
    if (e.which == 37 && e.ctrlKey) {
        let prevTab = $('ul.chat-tabs>li.active').prev();
        if (prevTab.hasClass('thumb') === true) {
            // Landed on a thumbnail element, so step up and over to the
            // previous tab group instead.
            prevTab = $('ul.chat-tabs>li.active').parent().prev();
            if (prevTab.length > 0) {
                $(prevTab[0].children[1]).click();
            }
        }
        else {
            prevTab.click();
        }
        // Give the DOM a beat to update before refocusing the input box.
        setTimeout(() => {$('textarea.active').focus();}, 100);
        return false;
    }
    // CTRL + Right Arrow: move one tab to the right.
    else if (e.which == 39 && e.ctrlKey) {
        let nextTab = $('ul.chat-tabs>li.active').parent().next();
        if (nextTab.length === 0) {
            // No next tab group; try the next sibling within this group.
            nextTab = $('ul.chat-tabs>li.active').next();
            if (nextTab.length > 0) {
                nextTab.click();
            }
        }
        else {
            $(nextTab[0].children[1]).click();
        }
        setTimeout(() => {$('textarea.active').focus();}, 100);
        return false;
    }
}
If you want the summarized version: this is an event handler that checks whether CTRL + Left or CTRL + Right was pressed and switches tabs on a UI left or right accordingly. But it also has to make sure there's actually another tab to go to, plus some other checks based on how the GUI was designed. So tell me, how can you break this up so that it can run on multiple cores at the same time? Or, in the extreme case, how can every instruction run at the same time?
Oh, and the funny thing about GPUs and graphics rendering, since that's a commonly brought-up example of "embarrassingly parallel" operations: the actual process of determining a pixel's color is highly serialized. The only thing that makes it "embarrassingly parallel" is that you can perform the same operation for each pixel at the same time, so it theoretically scales with n pixels.
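To put that in code (a made-up toy shader, nothing like a real GPU pipeline): each pixel is a chain of steps where every step depends on the previous one, but no pixel depends on any other pixel.

JavaScript:
// Toy "shader": a serial dependency chain per pixel.
function shadePixel(x, y) {
    const base = Math.sin(x * 0.1) * Math.cos(y * 0.1); // step 1
    const lit = Math.max(0, base) * 0.8 + 0.2;          // step 2 needs step 1
    const toneMapped = lit / (1 + lit);                 // step 3 needs step 2
    return Math.round(toneMapped * 255);                // step 4 needs step 3
}

// The parallelism is across pixels, not within one:
const width = 4, height = 4;
const frame = [];
for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
        frame.push(shadePixel(x, y)); // every call is independent of the rest
    }
}
console.log(frame);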