Microsoft Working on 128-bit Windows

Status
Not open for further replies.
[citation][nom]izliecies[/nom]What is the point of going higher than 64 bit?[/citation]

I assume it's the same as going from 32 to 64: you can have more memory. We seem to be stuck at 2 GB per stick right now though, so I don't see the rush.
 
Hey!!! 128-bit would be the coolest of all... but reverse compatibility will be overkill, even with today's best processors. I guess the hardware companies should get onto 256-bit computing, and one thing's for sure: we all know it's just as simple as changing from 32 to 64 was for them; it's just the R&D costs that need to be equated now. Then it's all a matter of time. The next time you hear about OCing, it's going to be OCing a 64-bit to a 256-bit processor: opening laser-cut pipes, building new pipes and all the LOLs...
But really , 128 bit sounds cool
 
A lot of you people would have fit right in with all these other idiots (at least Bill Gates saw the error of his ways):

"Computers in the future may weigh no more than 1.5 tons."
* Popular Mechanics, forecasting the relentless march of science, 1949

"I think there is a world market for maybe five computers."
* Thomas Watson, chairman of IBM, 1943

"I have traveled the length and breadth of this country and talked with
the best people, and I can assure you that data processing is a fad that
won't last out the year."
* The editor in charge of business books for Prentice Hall, 1957

"But what ... is it good for?"
* Engineer at the Advanced Computing Systems Division of IBM, 1968,
commenting on the microchip.

"There is no reason anyone would want a computer in their home."
* Ken Olson, president, chairman and founder of Digital Equipment
Corp.,1977

"640K ought to be enough for anybody."
* Bill Gates, 1981, commenting on size of RAM in computers
 
[citation][nom]zerapio[/nom]I was going to explain why but JasonAkkerman did a fine job at it.[/citation]

No he didn't. He spewed out a lot of assumptions that he carries around rather than real knowledge, and for that he got a lot of thumbs-up. I already pointed out the errors once before, and for that I got a lot of thumbs-down.
I suppose that well reflects the general level of clue exhibited all throughout this entire thread.

Way back in the stone age, when the microprocessor had just been invented, and before that, terms like 8-bit and 40-bit computing really did concern the width of registers and the width of data. But that was a long time ago. Width of data is very flexible these days: it only requires a simple instruction-set extension to go wider, and that extension remains 100% compatible with your current 32-bit architecture. We've had 128-bit registers and 128-bit instructions since the Pentium III. Remember SSE? And each Core 2 core can crunch four 128-bit fields at a time; a quad-core, sixteen 128-bit fields.

16-32-64-bit computing concerns something else: The width of an address pointer!
This is an architecturally very fundamental attribute. If you change this, you completely change your computer architecture, an entirely new and different system!

Now there are a few things that are important to understand. It is tempting to assume that address pointer width determines how much RAM we can have. But no, not directly.
Strictly, it is only the limit on the size of the flat address space that ONE SINGLE SOFTWARE PROCESS can have. The actual rules beyond that are up to the OS to decide.
But the address pointer width DOES NOT LIMIT RAM!
Since the 80286 and 16-bit Windows, the software's address space has been decoupled from RAM by a mapping process, and every other modern computer and OS does this too.

A 32-bit address pointer's space is 4 GB. But that doesn't stop 32-bit Windows server OSes from coping just fine with 8, 16 or 64 GB of RAM. The ~3.2 GB limit of Windows XP is just an implementation issue. But there is not much point in doing anything about it, because we have a much more serious problem with 32-bit: the flat space of each individual software process is limited (2 GB by default)! On a server, a collective of many processes can run fine together on 64 GB. But Battlefield 2142 will crap out no matter how much RAM you have or whatever OS you're running, because the process runs out of its 32-bit address space! (And for various reasons, it will do that at about ~1.7 GB of allocated memory.)
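The arithmetic behind those limits is easy to check; a minimal sketch in Python, assuming the default 2 GB user/kernel split of 32-bit Windows:

```python
# Flat address space implied by pointer width (illustrative arithmetic only).
pointer_bits = 32
flat_space = 2 ** pointer_bits     # bytes one process can address at all
user_space = flat_space // 2       # default 32-bit Windows user/kernel split

print(flat_space // 2 ** 30)       # 4 (GiB of total address space)
print(user_space // 2 ** 30)       # 2 (GiB usable by the process by default)
```

Boot options can shift the split, but no tuning can move the 4 GiB ceiling imposed by the pointer itself.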

THAT is why we need 64-bit software, processors and OSes!

Question: Do we need 128-bit software?
Answer: No we bloody well don't!

Let's try to get some perspective on this. Say the total amount of memory available to all the concurrently running processes on the world's largest supercomputer is 8 terabytes (actually I think it's under 6 TB, but never mind).

8 TeraByte = 8 * 1024 *1024 *1024 *1024 Bytes.
Address space of 64-bit: 16 ExaByte = 16 * 1024 * 1024 * 1024 * 1024 * 1024 * 1024 Bytes.

Finally, please note that this is for one single software process. It doesn't matter what fantasies you have about cloud computing or gargantuan servers; it doesn't matter if the entire world population is 10 billion and everyone logs on at the same time. Until the moment a single process requires a wider address space than 16 EB, we're just fine with 64-bit.
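The gap between those two figures is easy to put a number on; a quick sketch, using the 8 TiB supercomputer figure assumed above:

```python
# Compare the assumed supercomputer memory pool to the 64-bit flat space.
supercomputer_ram = 8 * 1024 ** 4        # 8 TiB, as assumed above
flat_space_64 = 2 ** 64                  # 16 EiB

print(flat_space_64 // supercomputer_ram)   # 2097152: ~2 million times larger
```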

Conclusion: IA-128 is either just a hoax and total bullcrap, or it doesn't mean anything like the meaning everyone here seems to give it; i.e. IA-128 is NOT a 128-bit ISA.
 
[citation][nom]demonhorde665[/nom]actually video cards and game systems (excluding the xbox and the 360 the xbox was a 32 bit chip and the 360 has a dual core 64 bit chip) have been ahead on the "bit" ammount for ages , 128 bit video cards are like long ago yesterdays news , nvida's current top of the line of video chips are 486 bit (not sure the actual number but i think they are 486 i could be off a bit though) hell 256 bit video card chips are even old news (the first GF and radeon's were 256 bit)[/citation]

...And that is just the width of data paths, which has nothing to do with the bits we talk about when the subject is OSes or CPUs.
 
[citation][nom]demonhorde665[/nom]this is stupid and a waste of stock owner's money ... i mean seriously there isn't even any 128 bit processors yet , hell 32 bit processors lasted from about 1990-2004 before you really saw 64 bit processors take over the market (though of note amd had there 64 bit in 03 i think) i'dsay it's waaaay to early to plan a 128 bit OS. i only have one question , why keep doing incremental menial steps ? i think the next "bit change: should NOT be to 128 but should instead look at a 256 or 512 target these small steps are a waste of time , not to mention research money given that we could just as easyily spend that money research adn developing 256 bit or 512 bit for the next ten years and come off with far more gains computer wise[/citation]
Some estimates put the number of atoms in the universe at:
40 * 1000^26
The flat address space of a 256-bit ISA would be
64 * 1024^25
I'm sorry, but I can't really see a major part of all the mass in the universe being turned into computer RAM within a ten-year period.
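Both figures can be checked directly; a quick verification in Python (the atom count is the rough estimate quoted above, not a precise value):

```python
atoms_in_universe = 40 * 1000 ** 26   # ~4e79, the estimate quoted above
flat_space_256 = 64 * 1024 ** 25      # flat address space of a 256-bit pointer

assert flat_space_256 == 2 ** 256     # the two notations agree exactly

# Even at an absurd one atom per addressable byte, the universe holds only
# a few hundred full 256-bit address spaces' worth of atoms:
print(atoms_in_universe // flat_space_256)   # 345
```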
 
I agree with Vermil. This thread is turning into an idiocracy with the blind leading the blind and giving each other thumbs up for their mutual misunderstanding. Sure, lots of processors and GPUs can handle wide DATA sets, but that is NOT the same as having a wide address bus.

Even the x86_64 processor doesn't yet have a true 64-bit address bus. It has 64-bit registers for virtual address pointers, but even then the current implementations don't allow the full address range to be used, because it would be a waste of silicon to provide address mapping resources for the full 16EB range. Likewise, the physical address bus width is restricted because it would be a waste of pins, a waste of power to drive them, and would add unnecessary complexity to the motherboard circuit track layout. These restrictions are at the hardware abstraction layer, so any software written for today's 64-bit systems will not need to be changed if and when newer implementations of x86_64 come out with slightly wider addressing capability.
Anyway, interesting to see how much comment was driven by a hoax story.
 
What a joke. All machines capable of running Windows Vista or 7 are 64-bit compatible, but MS still decided they needed a 32-bit version. I think they need to change that policy before bothering to investigate 128-, 256- or whatever-bit software. C'mon, even Apple dropped 32-bit years ago.
 
Let's apply Moore's law to this. Or rather to the amount of RAM (strictly, address width only concerns one process).
Moore says a doubling every 18 months, so that's a quadrupling every 3 years. In 2009 I'm on 8GB RAM. I should have been on 2GB three years ago, in 2006. Yup, fits! And back in 2003 I should have been on 512MB; I think that fits too, though I can't remember too well. 2000 - 128MB, can't remember. 1997 - 32MB; well, I think I upgraded to 80MB about then, but I'm quite sure I ran 16MB back in 1996, so it still fits pretty well.
1994 - 8MB. 1991 - 2MB. 1988 - 512KB. Still fits reasonably well, doesn't it? Weren't most PCs on 640KB in those days?
Now take it into the future:
2012 - 32GB. 2015 - 128GB. 2018 - 512GB. 2021 - 2TB. 2024 - 8TB. 2027 - 32TB. 2030 - 128TB. 2033 - 512TB. 2036 - 2PB.
The current x86-64 ISA is good for 4 petabytes, I think. So we should be fine until 2037-2038. But 64-bit addressing itself we could keep until:
2039 - 8PB. 2042 - 32PB. 2045 - 128PB. 2048 - 512PB. 2051 - 2EB. 2054 - 8EB... Ok, somewhere then we have to start thinking about a 128-bit ISA.
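The projection above can be reproduced in a few lines, assuming the same quadrupling every 3 years from 8 GiB in 2009:

```python
# Project RAM growth at Moore's-law pace: x4 every 3 years, starting 8 GiB in 2009.
ram, year = 8 * 2 ** 30, 2009
while ram < 2 ** 64:        # run until one process could fill the 64-bit space
    ram *= 4
    year += 3

print(year)   # 2057: the first triennial step at or past 16 EiB, just after the 8 EB of 2054
```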
 
You guys are all retarded except for Vermil and nforce4max. x64 has an address limit of 17.2 billion GB, but it's artificially limited in operating systems. BTW, Windows 7 Ultimate has a physical memory limit of 192GB. So this is clearly a hoax, as the most consumer motherboards can support right now is 24GB. For us to reach the max Windows supports, they would have to develop 32GB RAM sticks for triple-channel memory, or 48GB sticks for dual... Umm, yeah, I don't think that's gonna happen for a long while. And all of you who say this is for the long term and Itanium: how long term, 50 years from now? Maybe by then ALL the bugs and security vulnerabilities will be straightened out and Windows will actually be perfect.
 
[citation][nom]dirtykid[/nom]I am sure some of the largest supercomputers are already nearing the limits of 64-bit addressable RAM, or will soon also.[/citation]

I doubt this - a quick back-of-the-envelope calculation suggests that the 18,446,744,073,709,551,616 bytes of memory addressable by 64-bit processors would weigh in around 18,000 metric tonnes :).
 
Yes the new 128 bit OS will use the web to tie all the RAM and processing power together into a virtual AI on a cloud computing platform called Skynet.
 
Or another way to illustrate how much memory 64-bit can address: at 1 dollar per gigabyte (it's a lot more), it would cost you 17 billion dollars to buy enough RAM, and if each motherboard in your supercomputer had 128 slots for RAM, you would need to buy 134 million motherboards and house and wire all of that together.

So yeah, current supercomputers are NOWHERE near the address limit, and won't be for AT LEAST 20 years. And that's assuming we get some major breakthroughs during that time frame. It will likely be much longer.
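Those two figures check out, assuming $1 per GiB and hypothetical 1 GiB sticks filling all 128 slots per board:

```python
flat_space_64 = 2 ** 64

dollars = flat_space_64 // 2 ** 30          # at $1 per GiB of RAM
boards = flat_space_64 // (128 * 2 ** 30)   # 128 slots x 1 GiB sticks per board

print(dollars)   # 17179869184: ~17 billion dollars
print(boards)    # 134217728: ~134 million motherboards
```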

---------

So many people are saying: why fight new tech, more is better. Well, more isn't better in this case.

If they changed all personal computers to 128-bit, or even just supercomputers, here is the downside today.

Every program you run now would require more RAM for the exact same performance, i.e. your computer would run slower on the same memory, or you would have to buy more memory to do the same job.

CPU complexity would go up drastically, which means they would cost more, OR they would have to reduce the number of cores to handle it.

CPU clock speeds would go down; the 'wider' a processor is, the slower you can clock it.

In the end you would have reduced performance just so you could say you have a fully 128-bit-wide CPU.

-----------

In the future will we need it? Sure. But they shouldn't start production on it for at least 10-20 years, and likely, especially for personal computers, much longer than that, say 30-40 years.
 
I say bring it on, the faster the better. While you're at it, work long and hard on boot times. Yes, they will be backward compatible; did you not read the article????
 
I find this all very funny. I don't think we should stop development on software just because we can't think of what we might use it for. Things are always changing. I am sure they thought the same thing when they had 16 instead of 32. How will I scan my brain into a computer and have onboard super-security? You never know what they might build; it could help with developing chips. I know I would like 64-bit Windows and more RAM now. I just think we could do a lot better. What about fiber optics and no RAM? Just think: going all the way is better than waiting until it's too late. Just go for it and find out what happens; they have the money.
 
20 years ago Windows 3.0 would see up to 16 MB of RAM; now Windows 7 can address ~192 GB of RAM, with the upper echelons of consumer motherboards supporting 16, 24 and even 32 GB. My point is that in all the arguing over the usefulness of 32-, 64- or 128-bit operating systems, the silliest argument is that it will be 20 years before we see the benefits of the new architecture. Not 5 years ago, memory amounts over 4 GB were seen as server-only; 5 years from now, home computers will be able to do what servers today are capable of, and 20 years from now we will look back on 128-bit processing as a relic of the past. 20 years of development brought us exponentially higher RAM totals; do not underestimate the R&D departments of Intel, AMD, IBM, MS and all the others, unless you like sounding like a jackass.
 
What I really wish is for Microsoft to research how to get the prices of Windows and Office down. Way, way down. I don't care much about 128-bit now; the limiting factors are my typing speed, my reading speed, my brain-processing speed. What am I going to do with all that RAM if I can only read one screen at a time?
 
Some people don't understand how big these numbers can get.

We have binary computers, so talking about binary data is the only meaningful discussion. Light-based computers, etc.? Who cares, since they would be designed from the ground up to support whatever they are capable of.

Anyway, let's assume for a moment that we could figure out a way to extend our binary processors/computers down to the atomic level: 1 atom = storage of 1 bit. An atom is 50 to 600-ish picometers wide, and the crystal lattice spacing of silicon is 192 picometers. Current process tech is just moving to 32 nanometers, or 32,000 picometers per side, so we are just now making the smallest components on chips spanning about 27,000 silicon atoms in cross-section, not counting the dimension of height, and not counting the fact that the transistor and capacitor in each DRAM cell are bigger than that as well. Say 1,000,000+ atoms per DRAM cell. So this assumption is FAR beyond our current level of technology.

Anyway, consider a computer operating on the atomic level...

Number of atoms in the known visible universe: ~10^80 (it's likely much larger). How many bits of address space would it take to use every atom in the known universe as memory storage? Answer: 266. Number of atoms in our solar system: about 10^57, or 2^190; a 190-bit address space could account for every atom in the solar system.

Number of atoms in the Earth: about 10^50, or about 2^166. To address every atom in the Earth would take a 166-bit address space.

Number of atoms in an Olympic swimming pool: ~10^32 < 2^107. So 107 bits for that.

Put another way, 128 bits could address every atom in 2 million Olympic swimming pools.

Do some people understand how big 2^128 is now?
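The bit widths quoted above follow from a base-2 logarithm of each atom count; a quick check (the atom counts are the rough orders of magnitude used above, so the bit counts land within a bit of the quoted figures):

```python
from math import ceil, log2

estimates = {
    "known universe": 10 ** 80,
    "solar system": 10 ** 57,
    "Earth": 10 ** 50,
    "Olympic pool": 10 ** 32,
}
for name, atoms in estimates.items():
    # bits needed to give every atom its own address
    print(name, ceil(log2(atoms)))

# 128 bits vs. the ~107-bit pool: 2^(128-107) = ~2 million pools, as claimed above.
print(2 ** (128 - 107))   # 2097152
```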


Sure, for some things we need more bits. For floating point, 128 bits isn't all that precise. But for integers... 128-bit integers are WAY bigger than anything we will need 100 years out.

A real-world example is 16-bit computing to 32-bit to 64-bit. 16 bits is nothing; I'm sure scientists would have loved much larger when computers were invented, but we couldn't physically build anything bigger. Even 32 bits doesn't meet the needs of scientists; 4 billion isn't that big a number. 64-bit integers do meet the needs of science for the foreseeable future. INTEGERS, that is. For floating-point numbers, 128 bits is not precise enough for some current science and 256 is much better, and we already use double-precision floats for science, so having 256-bit hardware for floating point still makes lots of sense. Not for your average gamer, but for scientists, absolutely; they could likely use a higher order than that.

The only real wall that desktop users run into with 32 bits is memory space: we have more RAM than we can address with 32 bits now, and processes need more than 32-bit memory spaces now. But we won't run into that limitation again with 64 bits for a long time yet.

I'm not saying that no one will ever need more than 64 bits; I'm saying we don't need more than that NOW, or in the next 10 years, or in the next 20 years. After that it gets more fuzzy.

The difference between 64-bit Windows and 32-bit is pretty much strictly addressing: how much memory a process can have. 32-bit pointers were too small. We don't need 128-bit pointers though, so what's the point of 128-bit Windows? There is none. Not now, anyway; maybe in 20 years for supercomputers, at the earliest.
 
I think what they're implying by 128-bit Windows is a completely x64, SSE-optimized (which is 128-bit) kernel. This is an enormous task. There are already a huge number of SIMD instructions today, and replacing all the non-SIMD legacy calls would be a good performance increase.

Having said that, there's no word of any graphical improvements over Windows 7, such as improved compute shaders (>1 teraflop throughput on the graphics card with very little branching, vs. ~10 gigaflops per core on the CPU with lots of branching). Also no mention of any new SIMD improvements such as 256-bit SIMD.

So in sum, we are talking about a SIMD-optimized Windows kernel.
 