DISCLAIMER: This post uses abstract, imaginary situations and not-exactly-accurate information to illustrate the reasoning on a not-so-technical level.
No, cores work differently. One core at 10 GHz will be faster than 2 cores at 5 GHz or 4 cores at 2.5 GHz. A core executes one thread at a time: that thread runs through its instructions and brings back results. A multi-core system, on the other hand, has many threads that have to run either in parallel or sequentially. If the instructions have to be executed sequentially, the imaginary single fast core will be significantly faster. If they can be executed in parallel, things get more complicated.
The imaginary fast core just executes instruction after instruction, while the multi-core system has cores working on different threads at the same time. The multi-core system has to communicate on the fly which core is executing which thread, lock that thread away from the other cores so they don't mess up each other's work, collect and combine the results, and so on; then come predictions about what work is coming up next, and so on. If, let's say, an i7 running 4 cores at 4 GHz is compared to an imaginary i7 (II7) running 1 core at 16 GHz, all the shared resources like the L3 cache and the branch predictors are dedicated to that single II7 core, giving it an advantage.
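To make the sequential-vs-parallel point concrete, here is a rough sketch in the spirit of Amdahl's law. The workload size, the clock speeds, and the "parallel fraction" values are made-up numbers purely for illustration, not measurements of any real CPU:

# Rough sketch: how long a fixed workload takes on one fast core vs. several
# slower cores. All numbers here are invented for illustration only.

def runtime(work, ghz, cores, parallel_fraction):
    """Estimated runtime: the serial part runs on one core, the parallel
    part is split evenly across all cores. Ignores all real-world overhead."""
    serial_part = work * (1 - parallel_fraction) / ghz
    parallel_part = work * parallel_fraction / (ghz * cores)
    return serial_part + parallel_part

WORK = 1000.0  # arbitrary "units of work"

for frac in (0.0, 0.5, 0.95):
    single = runtime(WORK, ghz=10.0, cores=1, parallel_fraction=frac)
    quad = runtime(WORK, ghz=2.5, cores=4, parallel_fraction=frac)
    print(f"parallel fraction {frac:.0%}: 1 x 10 GHz -> {single:6.1f}, "
          f"4 x 2.5 GHz -> {quad:6.1f}")

# With 0% parallel work the single 10 GHz core finishes 4x sooner; only at
# 100% parallel work (and zero coordination cost) would the two setups tie.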
Computational parallelism in the new Core i-series or FX processors has come a long way and gives us near-100% scaling in some cases, compared to old architectures like the Pentium D, for example. But it will never be exactly 100%; there is always a percentage lost somewhere. Whether it is communicating and sharing results between cores, exchanging threads, fetching new data, or whatever else, there is always a delay somewhere, and even if that delay is insignificant, scaling is never a full 100%.
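Here is a toy model of why scaling never quite reaches 100%: assume every extra core adds a small fixed coordination cost. The overhead figure is invented for illustration, not a measured value from any real processor:

# Toy model: each core beyond the first pays a small synchronisation penalty,
# so the effective speedup always falls a bit short of the core count.

def speedup(cores, overhead_per_core=0.02):
    """Ideal speedup is 'cores'; coordination between cores eats into it."""
    coordination_cost = overhead_per_core * (cores - 1)
    return cores / (1 + coordination_cost)

for n in (1, 2, 4, 8):
    s = speedup(n)
    print(f"{n} cores: speedup {s:.2f}x, scaling efficiency {s / n:.1%}")

# Trend: 1 core = 100%, 2 cores ~98%, 4 cores ~94%, 8 cores ~88% --
# close to linear, but some percentage is always lost to coordination.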
Due to the laws of physics, having a consumer CPU with one or more cores at 10+ GHz is close to impossible. We witnessed this with the failed NetBurst architecture of the Pentium 4. Power consumption does not scale linearly with clock speed multiplied by the number of cores, and you also have to take the CPU architecture into account. If I can illustrate this in a very ugly and not-accurate way:
Linear Scaling (what you might expect):
Power Consumption = MHz * Core Count / Architecture

Actual Scaling (roughly):
Power Consumption = ( MHz * MHz Range ) * Core Count / Architecture
// MHz Range is an extra multiplier: think of it as 1 for 1 GHz, 2 for 2 GHz, and so on. So 1 core running at 10 GHz gives a number of 10 * 10 = 100, whereas 4 cores at 2.5 GHz give 4 * ( 2.5 * 2.5 ) = 25.
In other words, it is far more efficient to run 10 cores at 2 GHz than 1 core at 20 GHz (if that were even possible).
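Plugging a few configurations into the "Actual Scaling" rule of thumb above makes the point. This uses the post's own ugly approximation, not a real power model, and the Architecture divisor is simply set to 1:

# Relative power under the rule of thumb: (GHz * GHz range) * cores.
# Using GHz as the unit, so the "range" multiplier is 1 for 1 GHz, 2 for 2 GHz, etc.

def relative_power(ghz, cores, architecture=1.0):
    return (ghz * ghz) * cores / architecture

configs = [("1 core   @ 10 GHz ", 10.0, 1),
           ("4 cores  @ 2.5 GHz", 2.5, 4),
           ("10 cores @ 2 GHz  ", 2.0, 10),
           ("1 core   @ 20 GHz ", 20.0, 1)]

for name, ghz, cores in configs:
    print(f"{name}: relative power ~ {relative_power(ghz, cores):.0f}")

# 1 x 10 GHz -> 100 vs 4 x 2.5 GHz -> 25, and 10 x 2 GHz -> 40 vs
# 1 x 20 GHz -> 400: the same total "GHz" costs far more power when
# crammed into a single core.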
Also, the materials inside CPUs can't withstand that much electrical current passing through them. Not to mention that the heat density on our imaginary II7 would be impossible to cool, since heat builds up faster than the cooling solution can carry it away.
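A very rough heat-density comparison shows why. Every number below (die areas, wattages, especially the II7 figure) is pure guesswork used only to illustrate the idea that the same work concentrated in one hot spot is much harder to cool:

# Hypothetical numbers only: watts per square millimetre the cooler must remove.

def heat_density(watts, die_area_mm2):
    return watts / die_area_mm2

quad = heat_density(100, 150)  # assumed quad-core: ~100 W over ~150 mm^2
ii7 = heat_density(400, 50)    # assumed single-core II7: much more power, tiny die

print(f"quad-core: ~{quad:.1f} W/mm^2, imaginary II7: ~{ii7:.1f} W/mm^2")
# The single hot core has roughly ten times the heat density, so heat piles up
# faster than a conventional cooler can pull it out.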
Hope I have been helpful. I have to confess that, while I grasp the basics of electronics and hardware, I am not a scientist or an engineer, so I cannot give you an exact, 100% accurate answer. Cheers.