Discussion Why have we been stuck at 5 GHz for almost 18 years?

Jacob 51

Notable
Dec 31, 2020
5 GHz on a processor was a big thing in 2003, and it still is to this day.

Tom's Hardware overclocked a CPU to 5 GHz in 2003 with liquid nitrogen, setting the record (which has since been broken).

The latest Core i9-11900K has a turbo boost frequency of 5.3 GHz.

So that is a big number, considering that this Core i9 is the most powerful Intel chip right now. (Maybe I'm wrong; corrections would be appreciated.)

Why aren't we increasing this number to 6 GHz?

I understand that some games need clock speed (GHz), while others need cores.
 
It's about the limits of frequency. The higher you push the frequency, the more power you need, and the increase is far from linear.

For example, pushing a CPU from 4 GHz to 4.1 GHz might take (say) 10 more watts, just as an example, while going from 4.1 to 4.2 might take 15 more. The higher you go, the more inefficient it becomes, and the power draw gets insane. This is one reason you don't see higher clocks.
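This "more power per extra GHz" effect can be sketched with the standard CMOS dynamic-power relation, P ≈ C · V² · f, plus the fact that higher clocks usually need higher voltage to stay stable. All the numbers below (capacitance, voltage slope) are made up purely for illustration, not real chip figures:

```python
# Toy model: why power grows faster than clock speed.
# Dynamic CMOS power is roughly P = C * V^2 * f, and a higher
# frequency usually requires a higher voltage, so power rises
# super-linearly. All constants here are illustrative only.

def dynamic_power(capacitance_nf, voltage, freq_ghz):
    """Dynamic switching power in watts (toy model)."""
    return capacitance_nf * 1e-9 * voltage**2 * freq_ghz * 1e9

def required_voltage(freq_ghz, base_v=1.0, base_f=4.0, slope=0.15):
    """Assumed voltage needed for stability at a given clock."""
    return base_v + slope * (freq_ghz - base_f)

for f in (4.0, 4.5, 5.0, 5.5):
    v = required_voltage(f)
    p = dynamic_power(capacitance_nf=20, voltage=v, freq_ghz=f)
    print(f"{f:.1f} GHz @ {v:.2f} V -> {p:.1f} W")
```

With these toy numbers, a 25% clock bump (4.0 to 5.0 GHz) costs roughly 65% more power, which is exactly the kind of blow-up described above.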

The other is the limit of the CPU itself. Some CPUs just won't run at a higher frequency no matter how much voltage you push through them. It simply isn't possible.

Finally, you said "fastest Intel CPU"; I say it's the 10900K. Two more cores and basically identical (if not superior in places) to the 11900K.
 
Frequency is only one parameter, and not even the most important one. High frequency uses more power and produces more heat for diminishing gains, and only benefits single-threaded applications, which are now in a small minority.
More cores/threads, better IPC, and better memory controllers take precedence over frequency any day.
 
Older CPUs had lower IPC, as someone above said, so increasing CPU performance was only possible by raising the clock frequency. However, higher clock frequencies increase the power draw and the heat generated by the circuit. Circuit designers also ran into a whole bunch of physical limitations related to high signal frequencies: parasitic coupling even between short signal lines, signal skew, unwanted capacitance, etc.

If it is possible to improve CPU instructions or execute them in parallel, CPU designers will take that approach. This is why, in the last few years, CPU clock frequencies have stalled around 5 GHz while the instructions themselves, and the way they are executed, have improved to a great degree. Another big improvement was more parallel computation: more physical cores per CPU, plus hyperthreading. New CPUs also have more cache. Operating system and software developers have started to benefit from the new CPU designs as well, which is why complex software user interfaces, fast video encoding, fast builds of huge software projects, and realistic-looking 3D games have become possible.
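The IPC point can be put in rough numbers: useful throughput is approximately IPC times clock, so a modern core can crush an old one at the very same frequency. The IPC figures below are hypothetical, chosen only to show the shape of the argument:

```python
# Throughput is roughly IPC * clock. With made-up IPC figures,
# a modern core outruns an old one even at a similar clock.

def perf(ipc, clock_ghz):
    """Billions of instructions retired per second (toy model)."""
    return ipc * clock_ghz

old_cpu = perf(ipc=0.8, clock_ghz=3.8)   # mid-2000s-style core, hypothetical IPC
new_cpu = perf(ipc=4.0, clock_ghz=5.0)   # modern core, hypothetical IPC

print(f"old: {old_cpu:.1f} GIPS, new: {new_cpu:.1f} GIPS")
```

With these assumed numbers the new chip is over 6x faster despite only a ~1.3x clock advantage, which is why designers chase IPC instead of raw GHz.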
 
5 GHz on a processor was a big thing in 2003, and it still is to this day.
....
The immutable laws of physics, with a bit of economics mixed in, perhaps?

The way they could keep raising performance was through architectural improvements that required more transistors, and more transistors mean more die space.

That meant they had to shrink the geometry to fit a reasonable package and make enough CPUs per wafer to be economical.

Once the geometry got small enough, they simply could not remove the heat from all those transistors across the tiny surface area of the resulting die.
 
Because clock speed isn't as important as architecture. That's why Athlon X2s were beating Pentium Ds while running at lower clocks. Same for when Core 2 came out and walked all over Intel's previous NetBurst arch at much lower clock speeds. Same for the Core i* series vs. "faildozer": an FX-9590 at 5.0 GHz could lose to a much lower-clocked i3-6100 in games.
 
Because clock speed isn't as important as architecture. That's why Athlon X2s were beating Pentium Ds while running at lower clocks. Same for when Core 2 came out and walked all over Intel's previous NetBurst arch at much lower clock speeds. Same for the Core i* series vs. "faildozer": an FX-9590 at 5.0 GHz could lose to a much lower-clocked i3-6100 in games.

Indeed, CPU performance in real-world applications very much depends on architecture support in the applications themselves. "Faildozer" turned out the way it did to a great degree because, in 2011, most software was still compiled for single-threaded use.
 
Because clock speed isn't as important as architecture. That's why Athlon X2s were beating Pentium Ds while running at lower clocks. Same for when Core 2 came out and walked all over Intel's previous NetBurst arch at much lower clock speeds. Same for the Core i* series vs. "faildozer": an FX-9590 at 5.0 GHz could lose to a much lower-clocked i3-6100 in games.
That's irrelevant to the discussion.
The superior arch at 5 GHz would be 20% "even more superior" at 6 GHz, and the same goes for the less superior arch.
Unless your argument is that they are holding back.
 
...
Unless your argument is that they are holding back.
That's a question I have: are they holding back because they'd rather focus on IPC improvements?

Another way of asking it: if power density is the problem, does optimizing the architecture for IPC at lower clocks produce less waste heat to discard across such small contact patches, while still achieving a net increase in performance?

Whichever way you look at it, whether raising IPC or raising clocks, improving performance requires more transistors. More transistors mean smaller geometries, so everything fits in the package and the wafer economics work out. Either way, the waste heat has to be dealt with.
 
I think the OP is asking why we are hitting a 5 GHz barrier, not about architectural improvements for more CPU performance alone.

Honestly, it's a great question, OP. I don't know exactly why it's so difficult, but hitting higher frequencies gets disproportionately harder the higher you go.

Take memory as an example: to make a motherboard capable of running DRAM at 4000-5000 MHz instead of just 3200-4000 MHz, you need to make the traces in the board as high quality as possible and route them absolutely perfectly to ensure the frequency remains stable at all times.

But we all know that those whopping high frequencies don't actually deliver much extra speed in gaming and most other applications, thanks to timings.

I bet it's similar with CPUs: hitting 5.5 GHz or 6 GHz is probably possible, but it would be so absurdly expensive that it wouldn't be worth it, especially when frequency isn't the main provider of performance anymore these days.
 
So many replies, and not one with anything to do with the question... Now, I'm no electronic engineer, but AFAIK cycles per second depend on:

* The number of pipeline stages, such as Fetch, Decode, Execute, Store. You can have a larger number of stages that each do a simpler job, which can allow a faster clock, but the trade-off is that if you hit data dependencies or branches, the pipeline can drain and be forced to start over. For example, if you have instructions coming down a 20-stage pipe and the latest one asks what X equals before the answer has been computed, the pipeline drains of what it was doing and what it was going to do next, and the instruction to calculate X is sent down the pipe. Back in the mid 2000s, Apple was showing the world that it was faster overall to have fewer pipeline stages running at a slower clock rate than vice versa.

* Parallelism = running concurrent code. Also branch prediction and out-of-order execution.

* Cache design. Not only the architecture (e.g. L4, L3, L2, etc.) but the physical size.
The larger the cache, the farther its farthest part is from the "core". Electricity does not travel instantaneously, so the farther it has to go, the longer it takes and the fewer cycles per second you can run. That is why the last-level cache is the slowest: it is farthest from the "core". Roughly speaking, doubling the size of the L1 makes it noticeably slower. As the process shrinks, the transistors shrink, get closer together, and can run faster.

* A million other things I have no clue about
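The pipeline trade-off in the first bullet can be sketched as a toy model: every mispredicted branch flushes the pipeline and wastes roughly one cycle per stage, so a deep pipeline at a high clock can lose to a shallow one at a lower clock. The stage counts, clocks, and misprediction rate below are all illustrative assumptions:

```python
# Toy model of the pipeline-drain cost: a deep pipeline at a high
# clock vs a shallow one at a lower clock. Each mispredicted branch
# flushes the pipeline, wasting ~1 cycle per stage. Numbers are
# illustrative only, not real CPU figures.

def time_per_instruction_ns(clock_ghz, stages, mispredict_rate):
    cycle_ns = 1 / clock_ghz
    # 1 cycle per instruction, plus a full-pipeline flush on mispredicts
    return cycle_ns * (1 + mispredict_rate * stages)

deep    = time_per_instruction_ns(clock_ghz=5.0, stages=30, mispredict_rate=0.05)
shallow = time_per_instruction_ns(clock_ghz=3.5, stages=14, mispredict_rate=0.05)

print(f"deep pipe:    {deep:.3f} ns/instr")
print(f"shallow pipe: {shallow:.3f} ns/instr")
```

With these assumptions the slower-clocked shallow pipeline finishes instructions faster on branchy code, which is the point attributed to Apple above.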

On a 5950X, latency for L1 is 0.8 ns. Latency through L3 from thread 1 to 2 (a core to its SMT sibling) is 6.5 ns, from one core to another in the same CCX is 20 ns, and from CCX1 to CCX2 takes 84.5 ns. DRAM measures 79 ns (not to be confused with the time it takes to actually send and receive the data). At 5 GHz, a full cycle takes only 0.2 ns. The entire process can only be as fast as its slowest point, and there is a lot going on. Unfortunately, all of the data ultimately lives on a drive, which is basically the farthest physical component in a PC from the CPU.
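Converting those latencies into clock cycles shows why the cache hierarchy, not the clock, dominates. The latency figures are the ones quoted above; the conversion at 5 GHz (0.2 ns per cycle) is simple arithmetic:

```python
# At 5 GHz one cycle is 1 / 5e9 s = 0.2 ns, so the latencies
# quoted above translate into a lot of stalled cycles.

cycle_ns = 1 / 5.0          # 0.2 ns per cycle at 5 GHz

latencies_ns = {
    "L1":                 0.8,
    "L3 (SMT sibling)":   6.5,
    "core-to-core (CCX)": 20.0,
    "CCX-to-CCX":         84.5,
    "DRAM":               79.0,
}

for name, ns in latencies_ns.items():
    print(f"{name}: {ns} ns = {ns / cycle_ns:.0f} cycles")
```

A DRAM access alone costs roughly 395 cycles at this clock, so raising the frequency further mostly means burning more cycles waiting on the same memory.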

My guess is that there is a much higher benefit in enlarging the cache, having fewer cache misses (and not hitting ultra-slow RAM), and accepting a slower operating frequency, versus speeding up the core, keeping the cache tiny, and optimizing for cycles per second only. Hope this helps and gives you something to think about.

PS: Nothing matters more than single-core work-per-second, and clock speed is a great way to get it. The reason is that not many problems can be solved concurrently: you can't work out what comes next without knowing what just happened. Try solving 10,000 decimal places of Pi concurrently. Out-of-order execution exists for some of this, but there is a performance penalty; it is far from a cure-all.
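Amdahl's law makes the "not many problems parallelize" point concrete: if a fraction s of the work is inherently serial, the speedup from n cores is capped at 1 / (s + (1 - s) / n). The 20% serial fraction below is just an example value:

```python
# Amdahl's law: with a serial fraction s, the speedup from n cores
# is capped at 1 / (s + (1 - s) / n). The 20% figure is illustrative.

def amdahl_speedup(serial_fraction, cores):
    return 1 / (serial_fraction + (1 - serial_fraction) / cores)

for cores in (2, 8, 64, 1_000_000):
    print(f"{cores} cores: {amdahl_speedup(0.2, cores):.2f}x")
```

With just 20% serial work the speedup never exceeds 5x no matter how many cores you add, whereas a faster clock speeds up the serial part too, which is exactly why single-core work-per-second still matters.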
 
PS: Nothing matters more than single-core work-per-second, and clock speed is a great way to get it. The reason is that not many problems can be solved concurrently: you can't work out what comes next without knowing what just happened. Try solving 10,000 decimal places of Pi concurrently. Out-of-order execution exists for some of this, but there is a performance penalty; it is far from a cure-all.

You are correct about that. Luckily, however, there are not many real-life uses like calculating Pi to the 10,000th decimal place for common folk. This is why parallel computing works just fine as a performance improvement nowadays.
 
If the sole point of the question is getting over 5 GHz, then the premise is wrong: we do see clocks over 5 GHz, and regularly.
I recall a 5950X breaking 6 GHz, and other CPUs posting upwards of 8 GHz.

Yes, that's with LN2. The point is: modern CPU architectures are capable of it if you can deal with the waste heat!

This discussion seems pointless if you rule out the sole reason we can't (regularly) exceed 5 GHz on conventional cooling.
 
If we can get more work done (more IPC) at 5 GHz, then bumping that up to 6 GHz would just incur even more development expense.

Some of the later Pentium 4s could be overclocked to around 5 GHz back in 2005.
Current chips in 2021 are also at 5 GHz.

Which one gets more work done?

Bumping up the GHz also incurs a power and heat penalty.

If an increase in GHz were the main thing needed to see more performance, then we'd probably see that.
It is not.

5 GHz seems to be the sweet spot, until some major architectural change is brought forth.
Cost, power, heat, performance.


Just like airliners: why does the typical max cruise speed top out around Mach 0.85?
Sweet spot.
We could build airliners that are faster, and we did: Concorde.
Not financially viable.
 
I thought the main reason we haven't gone past 5 GHz was the materials we are using? I thought it was something to do with silicon. We're stuck at 5, so instead we've gone sideways and in other directions to get improvements. Going faster only gets you so far.
 
Everything has its limits. Intel back in the late 1990s was sacrificing IPC just so a CPU could reach 5 GHz; that project (sorry, I forgot the name) was discontinued in light of parallel computing (multi-core systems).

Intel also predicted that we would reach 10 GHz by the 2010s, and well, they would have been correct... if we hadn't run into the limits of silicon.
 
Everything has its limits. Intel back in the late 1990s was sacrificing IPC just so a CPU could reach 5 GHz; that project (sorry, I forgot the name) was discontinued in light of parallel computing (multi-core systems).

Intel also predicted that we would reach 10 GHz by the 2010s, and well, they would have been correct... if we hadn't run into the limits of silicon.
What is the problem with silicon?
 
It's not only silicon (Si); semiconductors also rely on other materials, including impurities added to the silicon as dopants (and alternative semiconductors like gallium arsenide, germanium, etc.) that change the electrical properties of the crystal. Those are actually the ones that do all the work.
Another problem is the closeness of the conductors connecting the transistors in a chip, as well as the closeness of the transistors, diodes, etc. to each other, which can cause electromagnetic interference (crosstalk) at higher frequencies. There are also other effects, like parasitic (mostly capacitive) losses, which rise steeply with frequency.