I see what you mean, but that is way too theoretical. Before you even get to the speed of the electrons, there are plenty of other limiting factors, such as the heat dissipated by their movement. Also, Hertz just means 1/sec (per second), as in cycles per second, and the number of cycles per second depends on what you call a cycle. There is a big difference between swinging from 0 V to 5 V and back (a common logic-level cycle) and swinging from 0 V to 1.3 V (a common CPU clock swing).
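To see why the voltage swing matters for heat, you can plug numbers into the classic CMOS dynamic-power estimate P = C·V²·f. The capacitance and frequency below are made-up illustrative values, not figures for any real chip:

```python
def dynamic_power(c_switched, v_swing, freq_hz):
    # Classic CMOS dynamic-power estimate: P = C * V^2 * f
    # (energy C*V^2 charged/discharged once per cycle, f cycles per second)
    return c_switched * v_swing ** 2 * freq_hz

C = 1e-9  # illustrative total switched capacitance: 1 nF
F = 3e9   # illustrative clock: 3 GHz

print(dynamic_power(C, 1.3, F))  # ~5 W at a 1.3 V swing
print(dynamic_power(C, 5.0, F))  # ~75 W at a 5 V swing
```

Same capacitance, same frequency, but the 5 V swing burns roughly 15x the power, because the voltage term is squared. That is one reason core voltages keep shrinking.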
As for the limitation, maybe the most important factor is the switching time of the transistor. When you work at very high frequencies (GHz), parasitic capacitance starts to hurt your system's performance: every electronic component stores a little bit of electric energy in the form of an electric field, acting like a tiny capacitor. That stored energy needs to be discharged for the transistor to return to its ideal LOW state, and that takes a little bit of time. If you clock it too fast, it won't be able to get back down at all, and the system fails.
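You can sketch this with the standard RC discharge law, v(t) = v0·e^(-t/RC). The resistance, capacitance, and threshold below are arbitrary illustrative values, just to show how a parasitic RC constant translates into a frequency ceiling:

```python
import math

def settle_time(r_ohms, c_farads, v_start, v_threshold):
    # Time for an RC node to decay from v_start down past v_threshold:
    # v(t) = v_start * exp(-t / (R*C))  =>  t = R*C * ln(v_start / v_threshold)
    return r_ohms * c_farads * math.log(v_start / v_threshold)

# Illustrative values: 10 kOhm effective resistance, 10 fF parasitic capacitance,
# node must fall from 1.3 V below a 0.1 V "LOW" threshold
t = settle_time(10e3, 10e-15, 1.3, 0.1)

# The node has to settle within half a clock period, so roughly:
f_max = 1 / (2 * t)

print(f"settle time ~{t * 1e12:.0f} ps, rough ceiling ~{f_max / 1e9:.1f} GHz")
```

With these made-up numbers the node needs about 256 ps to discharge, which caps the clock somewhere under 2 GHz. Clock it faster and the node never reaches LOW before the next cycle begins.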
Also, the electrical rails on a CPU are tiny (I mean nanometrically tiny) and cannot transfer a huge amount of charge at once. Too much current will damage them.
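The damage mechanism here is essentially current density, J = I / A: the same current through a smaller cross-section means a higher density, and interconnects fail (e.g. by electromigration) above some density limit. The wire dimensions and current below are invented for illustration only:

```python
def current_density(current_amps, width_m, height_m):
    # J = I / A for a rectangular wire cross-section
    return current_amps / (width_m * height_m)

# Illustrative interconnect: 20 nm wide, 50 nm tall, carrying 100 microamps
j = current_density(100e-6, 20e-9, 50e-9)

print(f"{j:.1e} A/m^2")  # a very large density for such a small wire
```

Even a tiny 100 µA through a wire that narrow gives a density on the order of 10^11 A/m², which is why the currents each rail is allowed to carry are so tightly budgeted.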
Intel and AMD engineers have to consider all of this, and a whole lot more, when building a chip, defining the ideal envelope for the unit to operate in (electrical parameters, thermal conditions, frequency), just so we can go and screw it all up later.