Some people don't understand how big these numbers can get.
We have binary computers, so binary data is the only meaningful thing to discuss. Optical computers and so on don't really matter here, since they'd be designed from the ground up to support whatever they're capable of.
Anyway, let's assume for a moment we could figure out a way to extend our binary processors/computers down to the atomic level: 1 atom = 1 bit of storage. An atom is roughly 50 to 600 picometers wide, and the crystal lattice spacing of silicon is about 192 picometers. Current process tech is just moving to 32 nanometers, or 32,000 picometers per side, so we are just now making the smallest components on chips span about 27,000 silicon atoms, not counting the dimension of height, and not counting the fact that the transistor and capacitor in each DRAM cell are bigger than that as well. Call it 1,000,000+ atoms per DRAM cell. So this assumption is FAR beyond our current level of technology.
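If you want to sanity-check those figures, here's a quick back-of-the-envelope sketch in Python. The 192 pm spacing and 32 nm feature size are the rough values quoted above, not precise physical constants:

```python
# Back-of-the-envelope check of the feature-size figures above.
# Both constants are the rough values quoted in this post.
FEATURE_SIZE_PM = 32_000     # 32 nm process feature, in picometers
LATTICE_SPACING_PM = 192     # assumed silicon atom spacing, in picometers

atoms_per_side = FEATURE_SIZE_PM / LATTICE_SPACING_PM
atoms_per_face = atoms_per_side ** 2          # ignoring height, as above

print(f"atoms across one side:       ~{atoms_per_side:.0f}")      # ~167
print(f"atoms in a 32nm x 32nm face: ~{atoms_per_face:,.0f}")     # ~27,800
```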
Anyway, consider a computer operating at the atomic level...
Number of atoms in the known visible universe ≈ 10^80 (it's likely much larger). How many bits of address space would it take if you used every atom in the known universe as memory storage? Answer: 266. Number of atoms in our solar system ≈ 10^57, which is less than 2^190, so a 190-bit address space could account for every atom in the solar system.
Number of atoms in the earth ≈ 10^50, or about 2^166; to address every atom in the earth would take a 167-bit address space.
Number of atoms in an Olympic swimming pool ≈ 10^32 < 2^107, so 107 bits for that.
Or 128 bits could address every atom in about 2 million Olympic swimming pools.
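Here's a small Python sketch that redoes that address-space arithmetic; the atom counts are the same rough order-of-magnitude estimates used above:

```python
import math

# Bits of address space needed to give every atom its own address,
# using the rough order-of-magnitude atom counts quoted above.
atom_counts = {
    "known visible universe": 10**80,
    "our solar system":       10**57,
    "the earth":              10**50,
    "olympic swimming pool":  10**32,
}

for name, atoms in atom_counts.items():
    bits = math.ceil(math.log2(atoms))
    print(f"{name:>24}: ~10^{len(str(atoms)) - 1} atoms -> {bits}-bit address space")

# And the 128-bit claim: one pool needs about 2^107 addresses, so
# 2^128 / 2^107 = 2^21, about 2 million pools per 128-bit space.
print(f"pools addressable with 128 bits: {2**128 // 2**107:,}")    # 2,097,152
```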
Do some people understand how big 2^128 is now?
Sure, for some things we need more bits; for floating point, 128 bits isn't all that precise. But for integers... 128-bit integers are WAY bigger than we will need 100 years out.
A real-world example is the move from 16-bit computing to 32-bit to 64-bit. 16 bits is nothing; I'm sure scientists would have loved something much larger when computers were invented, but we couldn't physically build anything bigger. Even 32 bits doesn't meet the needs of scientists, since 4 billion isn't that big a number. 64-bit integers do meet the needs of science for the foreseeable future. INTEGERS, that is; for floating point, 128 bits is not precise enough for current science and 256 is much better, and we use double-precision floats for science now, so having 256-bit floating-point hardware still makes lots of sense. Not for your average gamer, but for scientists absolutely; they could likely use a higher order than that.
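To put concrete numbers on those widths, here's a quick Python sketch. The decimal-digit figures come straight from the IEEE 754 significand sizes (53 bits for double, 113 bits for quad); the rest is just printing integer ranges:

```python
import math

# Rough sense of scale for the integer widths discussed above.
for bits in (16, 32, 64, 128):
    print(f"unsigned {bits:>3}-bit max: {2**bits - 1:,}")

# For floating point it's precision that matters, not just range:
# IEEE 754 binary64 ("double") has a 53-bit significand, binary128
# ("quad") has 113 bits. Converted to decimal digits:
for name, sig_bits in (("binary64 (double)", 53), ("binary128 (quad)", 113)):
    print(f"{name}: ~{sig_bits * math.log10(2):.0f} decimal digits")
```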
The only real wall that desktop users run into with 32 bits is memory address space: we have more RAM than we can address with 32 bits now, and processes need more than 32-bit address spaces now. But we won't run into that limitation with 64 bits for a long time yet.
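As a concrete illustration of that wall, here's a tiny Python sketch of how much byte-addressable memory a flat 32-bit versus 64-bit pointer can reach (assuming plain byte addressing, no segmentation or paging tricks):

```python
# Concrete version of the address-space wall: how much byte-addressable
# memory a flat 32-bit vs 64-bit pointer can reach.
def addressable(bits: int) -> str:
    size = 2 ** bits
    units = ["bytes", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"]
    i = 0
    while size >= 1024 and i < len(units) - 1:
        size //= 1024
        i += 1
    return f"{size} {units[i]}"

print("32-bit pointers:", addressable(32))   # 4 GiB
print("64-bit pointers:", addressable(64))   # 16 EiB
```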
I'm not saying that no one will ever need more than 64 bits; I'm saying we don't need more than that NOW, or in the next 10 years, or in the next 20 years. After that it gets fuzzier.
The difference between 64-bit Windows and 32-bit is pretty much strictly addressing: how much memory a process can have. 32-bit pointers were too small. We don't need 128-bit pointers though, so what's the point of 128-bit Windows? There is none. Not now anyway; maybe in 20 years for supercomputers, at the earliest.