Microsoft Working on 128-bit Windows

Dudes, IT'S A JOKE.

32 bits address 4 GB of storage
33 bits address 8 GB
34 bits address 16 GB
64 bits address so much storage that they had to invent funny names for it, like exabytes: 64 bits address 16 exabytes of memory. It'll be a while before we need to address more than that. Even if Windows 8 or 9 comes REALLY slowly, it'll be a while before you need to worry about more than 64-bit addresses.
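The arithmetic is easy to check for yourself; here's a minimal C sketch (just powers of two, nothing more):

[code]
#include <stdio.h>

int main(void) {
    /* Bytes addressable with n address bits = 2^n. */
    printf("32 bits: %llu GiB\n", (1ULL << 32) >> 30);  /* 4 GiB  */
    printf("33 bits: %llu GiB\n", (1ULL << 33) >> 30);  /* 8 GiB  */
    printf("34 bits: %llu GiB\n", (1ULL << 34) >> 30);  /* 16 GiB */
    /* 2^64 overflows a 64-bit integer, so count in EiB (2^60):
       2^64 / 2^60 = 2^4 = 16 exbibytes. */
    printf("64 bits: %llu EiB\n", 1ULL << (64 - 60));   /* 16 EiB */
    return 0;
}
[/code]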
 
[citation][nom]tayb[/nom]Oh man, 64-bit was considered the "holy grail" and that was all we would ever need. No more limitations. 128-bit is crazy. I don't think it will be necessary or relevant until Windows 9 or 10. 64-bit on the Windows side still doesn't see mass adoption because of all the older computers that aren't 64-bit capable. I wonder how long until we see 128-bit processors? AMD64 was a hit, and they've been underwhelming since then; maybe they'll make a comeback with AMD128 lol.[/citation]

Really depends. Intel might do it first. But if they are saying IA-128, I would think it will be Intel, since IA-64 is their true 64-bit architecture in Itanium.

Besides, right now AMD needs to think about survival more than anything.
 
Hmm... I can see how there would be a need for apps to access more than 16 million TB of RAM in the foreseeable future...

I don't wanna have a "you'll never need more than 640k" moment, but this does seem a little excessive at the moment, unless there is some major technological breakthrough we are not aware of.

On the processing side, I think most things that need that much accuracy would use floating point. Also, you don't need a 128-bit app or OS to do 128-bit operations; you just need 128-bit instructions in the x64/x86 instruction set, which I think they have, even though they can't be done in one physical operation on 32-bit or 64-bit processors.
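For what it's worth, you can already do 128-bit integer math from an ordinary 64-bit program. A rough sketch, assuming GCC or Clang on x86-64 (the __int128 type is a compiler extension, and each operation compiles down to a few 64-bit instructions):

[code]
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* GCC/Clang expose a 128-bit integer type on x86-64; the compiler
       lowers each operation to a handful of 64-bit instructions. */
    unsigned __int128 a = (unsigned __int128)UINT64_MAX + 1;  /* 2^64 */
    unsigned __int128 b = a * 4;                              /* 2^66 */
    /* printf has no 128-bit conversion, so print the two 64-bit halves. */
    printf("high: %llu low: %llu\n",
           (unsigned long long)(b >> 64),
           (unsigned long long)(b & UINT64_MAX));
    return 0;
}
[/code]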
 
[citation][nom]matt87_50[/nom]Also, you don't need a 128-bit app or OS to do 128-bit operations; you just need 128-bit instructions in the x64/x86 instruction set, which I think they have, even though they can't be done in one physical operation on 32-bit or 64-bit processors.[/citation]

You're halfway right, then you mess up and crash. "32-bit" or "64-bit" has nothing to do with it. Core 2, Phenom, Phenom II, Athlon II, Core i7 and Core i5 all have multiple 128-bit-wide hardware execution units in each core, feeding multiple pipes at the same time. And soon x86-64 will get 512-bit-wide registers and vector instructions (LRBni & Fusion).

We've been 128 bits wide since the Pentium III, and with efforts at parallelism even wider. So for all those so keen to compare bits with game consoles and graphics cards: the processing width of a Core 2 Quad is four cores * four pipes * 128 bits = 2048 bits. Chew on that, all you "why are we still on 32-bit" folks! And it runs just fine on 64-bit Windows.

"32-bit" and "64-bit" has nothing to do with width of computing. It's the length of the address pointer used by a machine instruction to refer to data.
And while 32-bit addressing have been good for more than a decade, 64-bit addressing will be good for at least 50 years, - EVEN IF MOORE'S LAW CONTINUE AT THE SAME BREAKNECK PACE AS IN THE PAST! Which, btw, for pure physical reasons, is highly doubtful.
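If you want to see the distinction on your own machine, here's a tiny sketch (assuming an SSE2-capable x86 compiler): the pointer size changes between a 32-bit and a 64-bit build, while the SSE register type stays 128 bits either way.

[code]
#include <stdio.h>
#include <emmintrin.h>  /* SSE2: the 128-bit __m128i type */

int main(void) {
    /* "Bitness" tracks the pointer, not the registers. */
    printf("pointer size:       %zu bits\n", sizeof(void *) * 8);   /* 32 or 64 */
    printf("SSE2 register size: %zu bits\n", sizeof(__m128i) * 8);  /* 128 either way */
    return 0;
}
[/code]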

There's one thing that is usually overlooked in posts about the span of 64-bit addressing. 64-bit pointers span a space of 16 exabytes, yes, but that is not at all the limit of RAM a 64-bit OS or processor could conceivably address. It is merely the size of the flat address space available to one single software process. It is theoretically possible to build 64-bit processors and 64-bit OSes that address even more ridiculous amounts of RAM (that is not the way technology is heading, but it is perfectly possible).
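This isn't hypothetical, either; 32-bit x86 already did exactly this with PAE, where each process kept its 4 GiB flat space but the OS could map pages anywhere in a 36-bit physical space. A quick back-of-the-envelope in C:

[code]
#include <stdio.h>

int main(void) {
    /* Precedent: 32-bit x86 with PAE. Each process still sees a 2^32-byte
       flat virtual space, but the OS can place those pages anywhere in a
       2^36-byte physical space. */
    printf("flat per-process space: %llu GiB\n", (1ULL << 32) >> 30); /*  4 GiB */
    printf("PAE physical space:     %llu GiB\n", (1ULL << 36) >> 30); /* 64 GiB */
    return 0;
}
[/code]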
 
Going to 128-bit will not really improve performance. If you compare many 32-bit apps to 64-bit apps, the 64-bit one generally runs just as fast, with the exception of a few professional applications that seem to be maybe 1-2% faster.

 
I think the best post on this article is the one that applies Moore's Law to this. At the current rate of RAM increases, we will hit the limits around the year 2054. There, you got it.
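For the curious, here's one back-of-the-envelope that lands on 2054. The inputs are my guesses, not gospel: 16 GiB (2^34 bytes) as today's high end, and a doubling every 18 months.

[code]
#include <stdio.h>

int main(void) {
    /* Start from 16 GiB (2^34 bytes) in 2009 and double capacity
       every 18 months until the 2^64-byte limit is exhausted. */
    int doublings = 64 - 34;          /* 30 doublings to go      */
    double years  = doublings * 1.5;  /* 18 months per doubling  */
    printf("~%.0f years -> around %.0f\n", years, 2009 + years);  /* 2054 */
    return 0;
}
[/code]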

In 2054 I will be an old man upgrading my laptop from my old 16 billion GB of RAM to 32 billion GB. And for that I'll need a 128-bit OS.

lol
 
It is more for security encryption than anything else. It is now official: government black projects must be at the 256-, 512- and 1024-bit level if we are that close to 128-bit commercial use.
 
This has been bouncing around arstechnica all of last week. Read the comment section; this is entirely bogus.
 
Like someone pointed out, unless we have computers as huge as a mountain, we will not exhaust the potential of 64-bit. The current manufacturing process has almost reached the limits of physics.
 
128-bit is crazy for a PC. I could really see the need for 128-bit in a future supercomputer, but in a PC? No way. Not for a long, long time. I have 64-bit with 8 GB of RAM and no problems at all. 16 GB should be more than enough for the next few years at least. A 128-bit supercomputer is great to think about, but how many supercomputers run Windows?
 
What people don't know is that 128-bit processing has already been around for many years. You can't Google it or search for it, as it is censored by the US government. 128-bit is computing at the server level. (I will venture to say that there's also 256-bit.)

Again, you have to have government security access to get more information. A former professor of mine questioned, in one of my projects about 5 years ago, why we weren't recommending 128-bit servers. The entire class replied that there was no information out there. He said to check the IBM site, and we did, but there was nothing there... at that point he remembered that he has government security clearance and that the information was readily available to him and others like him.

The technology we get is always about 5-10 years behind what the government has at the higher security clearance levels.
 
[citation][nom]thaisport[/nom]What people don't know is that 128-bit processing has already been around for many years. You can't Google it or search for it, as it is censored by the US government. 128-bit is computing at the server level. (I will venture to say that there's also 256-bit.) Again, you have to have government security access to get more information. A former professor of mine questioned, in one of my projects about 5 years ago, why we weren't recommending 128-bit servers. The entire class replied that there was no information out there. He said to check the IBM site, and we did, but there was nothing there... at that point he remembered that he has government security clearance and that the information was readily available to him and others like him. The technology we get is always about 5-10 years behind what the government has at the higher security clearance levels.[/citation]
- Of course "128-bit processing" has been with us for years!
All kinds. But that is not what this bogus bullcrap is about.
First of all, a "128-bit server" usually just means a server (an FTPS box, say) that uses 128-bit SSL encryption. And guess what? In ISA terms that can even run on a "32-bit computer", and 5 years ago it definitely did.
And in whatever ways the government has different processors than the rest of us, it's obsolete stuff, in no way comparable.
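A "128-bit key" is just 16 bytes of data that any CPU can chew through in whatever word size it has. A toy sketch (XOR only, not real cryptography) of how a 32-bit machine handles a 128-bit key:

[code]
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* A "128-bit" key is just 16 bytes; a 32-bit CPU processes it as
       four 32-bit words. (Toy XOR only -- not real cryptography.) */
    uint32_t key[4]  = {0xDEADBEEF, 0x01234567, 0x89ABCDEF, 0xCAFEBABE};
    uint32_t data[4] = {0, 0, 0, 0};
    for (int i = 0; i < 4; i++)
        data[i] ^= key[i];              /* one 32-bit op per word */
    printf("%08X %08X %08X %08X\n", data[0], data[1], data[2], data[3]);
    return 0;
}
[/code]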
 
64-bit is an improvement because there are data types in use that are larger than 32 bits. The largest common data type is 64-bit, though, so I really don't see 128-bit providing nearly as big a performance increase.
 
[citation][nom]daneren2005[/nom]64-bit is an improvement because there are data types in use that are larger than 32 bits. The largest common data type is 64-bit, though, so I really don't see 128-bit providing nearly as big a performance increase.[/citation]
Read the thread. Long data types have been crunched in 64- and 128-bit segments on "32-bit" CPUs for a rather long time. Not only that, but we also crunch multiple shorter data types (8, 16, 32, 64 bits) packed into 128-bit fields, in 128-bit vector registers and 128-bit-wide hardware units. And we even do it in multiple parallel pipes as well. And soon, some CPUs might feature 512-bit vector registers. Those CPUs will still be "64-bit".
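If you want to see that in actual code, here's a minimal sketch using SSE2 intrinsics (available with any x86-64 compiler): one 128-bit register treated as four packed 32-bit lanes, all added in a single instruction.

[code]
#include <stdio.h>
#include <emmintrin.h>  /* SSE2 intrinsics */

int main(void) {
    /* One 128-bit register holds four packed 32-bit integers;
       a single instruction adds all four lanes at once. */
    __m128i a   = _mm_set_epi32(40, 30, 20, 10);
    __m128i b   = _mm_set_epi32( 4,  3,  2,  1);
    __m128i sum = _mm_add_epi32(a, b);

    int out[4];
    _mm_storeu_si128((__m128i *)out, sum);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  /* 11 22 33 44 */
    return 0;
}
[/code]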
When we talk today about 16-, 32- and 64-bit computing, OSes or CPUs, it has nothing to do with the width of computing, the width of registers or the width of data types. The width of computing increases all the time, with each generation of CPUs and greater transistor counts. That is what all these SSE, SIMD and MMX things (which you may have heard about) are all about.
Articles, dictionaries or other books that tell you 16/32/64 is about the length of registers are simply WRONG.
The bitness of a CPU, OS or piece of software is about how long the address in the machine-code instructions is, i.e. the virtual address that refers to the data. The OS and CPU use this address to map to a hardware location, usually a physical address, which in a 16-bit computer is always longer than the virtual address, in a 32-bit computer is typically longer, and in a 64-bit computer is always shorter.
And this "128-bit" Windows nonsense is either total bogus or a severe misunderstanding. OS'es will remain 64-bit for decades.
 