This is what happens when "marketing" departments take over technical information. 32/64 "bit" on a CPU just refers to the width of its internal address registers, which determines the largest amount of memory it can directly address. A CPU with 32-bit registers is incapable of understanding a 33-bit or larger memory address and must instead use some sort of page mapping technique to reach more memory (PAE, anyone?). A 64-bit register is capable of addressing an incredibly large number of bytes.
18,446,744,073,709,551,616 to be precise (2^64), which works out to 16,777,216 terabytes, or about 16 exabytes.
Considering the size of this number, we won't be moving to anything bigger than "64-bit" anytime in the next few decades, if not the next century.
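If you want to see this on your own machine, here's a minimal C sketch (my own illustration, not from any particular source) that prints the pointer width the compiler targets and the highest byte address that width can express:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Pointer width in bits on the target this program was compiled for. */
    unsigned bits = (unsigned)(sizeof(void *) * 8);

    /* Highest representable address: 2^bits - 1.
       Computed carefully so the 64-bit case doesn't overflow the shift. */
    uintmax_t max_addr = (bits >= 64) ? UINT64_MAX
                                      : (((uintmax_t)1 << bits) - 1);

    printf("pointer width  : %u bits\n", bits);
    printf("highest address: %ju\n", max_addr);
    return 0;
}
```

Built as a 64-bit binary this prints 18446744073709551615; built as a 32-bit binary it prints 4294967295, which is exactly why a 32-bit CPU tops out at 4 GB without paging tricks.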
That being said, CPUs also use different register widths for different calculations. There is an 80-bit extended-precision floating-point format often used in mathematics or physics. More recently it has been supplemented by a 128-bit floating-point format where even greater precision is required. These large formats are needed because you're not using them to address memory; you're using them to count molecules, or to calculate the speed of an electron relative to the speed of light and derive its mass-energy. That type of work requires very large, very precise numbers, and thus you have special formats just for them.
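As a rough sketch of the precision differences (assuming a typical x86 compiler where long double maps to the 80-bit x87 format; on other targets it may be 64 or 128 bits, so treat the output as illustrative):

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Decimal digits each format can reliably represent, per <float.h>. */
    printf("float       : %2d significant decimal digits\n", FLT_DIG);
    printf("double      : %2d significant decimal digits\n", DBL_DIG);
    printf("long double : %2d significant decimal digits\n", LDBL_DIG);

    /* Storage size of long double varies by platform (10, 12, or 16 bytes). */
    printf("long double size: %zu bytes\n", sizeof(long double));
    return 0;
}
```

On a typical x86 Linux box this shows 6, 15, and 18 digits, which is the jump from 32-bit to 64-bit to 80-bit floating point in practice.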
Then you have the memory interface bandwidth. A single SDR/DDR/DDR2/DDR3 DIMM has a 64-bit memory interface; each transfer moves 64 bits to the CPU. If a CPU supports a dual-channel configuration, you can have two 64-bit channels feeding a 128-bit memory controller on the CPU. Internally, the CPU may have 256- or 512-bit interfaces to its L2 cache.
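To put numbers on that, here's the back-of-the-envelope math as a tiny C program. I'm assuming DDR3-1600 (1600 megatransfers per second) purely as an example figure; the 64-bit bus width is the part that matters:

```c
#include <stdio.h>

int main(void)
{
    /* Example: DDR3-1600 = 1600 million transfers/second on a 64-bit DIMM bus. */
    double transfers_per_sec  = 1600e6;
    double bytes_per_transfer = 64.0 / 8.0;   /* 64-bit bus = 8 bytes per transfer */

    double single_channel = transfers_per_sec * bytes_per_transfer; /* bytes/s */
    double dual_channel   = single_channel * 2.0;  /* two 64-bit channels */

    printf("single channel: %.1f GB/s\n", single_channel / 1e9);
    printf("dual channel  : %.1f GB/s\n", dual_channel / 1e9);
    return 0;
}
```

That gives 12.8 GB/s per channel and 25.6 GB/s in dual channel, which is where those familiar spec-sheet numbers come from.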
As you can see, there are many different "bits" in reference to a CPU; the industry standard is to quote the one describing the largest amount of directly addressable memory. By that measure, "64-bit" CPUs will be around a very, very ~very~ long time. There won't be "128-bit" CPUs produced, as that would be nothing but a marketing term.