IA32 and 64-bit precision

Mephistopheles

This might be a stupid question, but I've been asked and I didn't have the appropriate answer...

This site (http://www.llnl.gov/computing/tutorials/linux_clusters/) seems to indicate that you can use 64-bit FP precision on IA32 processors, or even 80-bit precision. Is that correct? What is it that can't be done in 64-bit? Are 64-bit control words emulated, or what? What is going on?

Or am I just saying nonsense here? Ahhh! 😱
 
64-bit refers to the size of the general purpose registers (GPR's). The GPR's in modern x86 MPU's (and most MPU's) store integer and memory address information. Floating point data have their own set of registers (the x87 register stack in the case of x86 MPU's), which are 80 bits wide.
The move to "64-bit" processors means that the GPR's are 64 bits wide, which means you can natively do logical and arithmetic operations on 64-bit integers. You can also (theoretically) have 64-bit memory address pointers (although most modern 64-bit MPU's only implement 48-bit virtual addresses).
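(A minimal C sketch of what "native 64-bit integer math" buys you; my illustration, not from the post above. Assumes a C99 compiler with <stdint.h>.)

```c
/* Arithmetic and logic on 64-bit integers.  On a 64-bit x86 CPU each
 * operation below compiles to a single instruction on a 64-bit GPR;
 * a 32-bit build has to synthesize it from several 32-bit instructions. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t a = 0x123456789ABCDEF0ULL;  /* doesn't fit in a 32-bit register */
    uint64_t b = 0x0FEDCBA987654321ULL;

    uint64_t sum  = a + b;   /* one 64-bit ADD on a 64-bit machine */
    uint64_t mask = a & b;   /* one 64-bit AND */

    printf("sum  = %016llx\n", (unsigned long long)sum);
    printf("mask = %016llx\n", (unsigned long long)mask);
    return 0;
}
```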
FP is not affected by the definition of being "64-bit". In extensions such as x86-64, however, there can be some changes that affect FP performance as well (well, in the case of x86-64, SIMD performance). Like I've explained in other threads, x86-64 brings a lot more to the table than simply "64-bit".

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
 
Ah, so you mean that 64-bit floating point operations are natively supported... Interesting. Just out of curiosity, if logical and arithmetic ops on 64-bit FP are supported, then who needs ops on 64-bit integers? Address pointers, of course, are another thing.

But seriously, doesn't that really just mean that the main advantage of moving to what is called "64-bit processors" is the added address pointer freedom, i.e. more memory capacity? Is that really such a big advantage?... For the average user, it's not really relevant... right? Maybe overall bandwidth increases due to doubled control word size or something...

Architectural differences can affect FP performance, of course, but the extended registers don't have anything to do with that, if I understood you correctly.

Just trying to understand everything here, sorry.
And thanks a lot, imgod2u.
 
Ah, so you mean that 64-bit floating point operations are natively supported... Interesting. Just out of curiosity, if logical and arithmetic ops on 64-bit FP are supported, then who needs ops on 64-bit integers? Address pointers, of course, are another thing.

Logical operations on FP data are usually a bad idea. FP is great for math, but for logical operations, the high latency of FP operations makes it ill-suited. Programs such as encryption algorithms or scientific calculators rely on low-latency logical operations that only 64-bit integer operations can provide.
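(To make that concrete, here's a toy C sketch of the kind of 64-bit XOR/rotate/add mixing that ciphers lean on. It's purely illustrative and NOT a real encryption algorithm; the function names and constants are made up.)

```c
/* Toy mixing round over a 64-bit word.  Each XOR, rotate and add is one
 * cheap, low-latency integer instruction on a 64-bit CPU. */
#include <stdint.h>
#include <stdio.h>

static uint64_t rotl64(uint64_t x, unsigned r)   /* r must be 1..63 */
{
    return (x << r) | (x >> (64 - r));
}

static uint64_t mix_round(uint64_t block, uint64_t key)
{
    block ^= key;                        /* 64-bit XOR */
    block  = rotl64(block, 13);          /* 64-bit rotate */
    block += 0x9E3779B97F4A7C15ULL;      /* 64-bit add (arbitrary odd constant) */
    return block;
}

int main(void)
{
    uint64_t out = mix_round(0x0123456789ABCDEFULL, 0xDEADBEEFCAFEBABEULL);
    printf("%016llx\n", (unsigned long long)out);
    return 0;
}
```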

But seriously, doesn't that really just mean that the main advantage of moving to what is called "64-bit processors" is the added address pointer freedom, i.e. more memory capacity? Is that really such a big advantage?... For the average user, it's not really relevant... right? Maybe overall bandwidth increases due to doubled control word size or something...

Ding ding ding. What have people been saying about the whole 64-bit hype since day 1?

Architectural differences can affect FP performance, of course, but the extended registers don't have anything to do with that, if I understood you correctly.

No, but x86-64, again, doesn't just extend the GPR's to 64-bit; it also adds 8 more SSE/SSE2 (XMM) registers and 8 more GPR's. This can potentially speed up SSE/SSE2 applications by 10-15% and normal general purpose operations that use the GPR's by 10-15%, maybe more.
That is assuming, of course, that your code is recompiled to make use of the extra registers.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
 
Logical operations on FP data are usually a bad idea. FP is great for math, but for logical operations, the high latency of FP operations makes it ill-suited. Programs such as encryption algorithms or scientific calculators rely on low-latency logical operations that only 64-bit integer operations can provide.
Not only are IOps faster than FLOps, but integer logic is pretty straightforward and accurate. FP logic can get a bit messy thanks to the way FP rounds near the least significant digit. With an integer, 1 = 1 always. With FP, 1.0 might not always equal 1.0. Code to check for this kind of stuff is always annoying. 🙁
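(A small C illustration of the FP-comparison annoyance being described; my example, standard C only.)

```c
/* Why FP equality checks get messy: 0.1 + 0.2 is not exactly 0.3 in
 * binary floating point, so a naive == comparison fails. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double a = 0.1 + 0.2;                 /* mathematically 0.3 */

    printf("a == 0.3         -> %d\n", a == 0.3);              /* prints 0 */
    printf("|a - 0.3| < 1e-9 -> %d\n", fabs(a - 0.3) < 1e-9);  /* prints 1 */
    printf("1 == 1           -> %d\n", 1 == 1);                /* integers: always 1 */
    return 0;
}
```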

And finally there's just plain old bit manipulations using integers. :) Use the integer as an array of bit flags. Use shift left and shift right operations for encryption. So on and so forth. This kind of stuff doesn't work out so well with FP.
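(Rough C sketch of the bit-flag idea; the flag names are hypothetical, everything else is standard C.)

```c
/* A 64-bit integer used as an array of bit flags. */
#include <stdint.h>
#include <stdio.h>

#define FLAG_LOGGED_IN  (1ULL << 0)
#define FLAG_ADMIN      (1ULL << 1)
#define FLAG_MUTED      (1ULL << 40)   /* one 64-bit word holds 64 independent flags */

int main(void)
{
    uint64_t flags = 0;

    flags |= FLAG_LOGGED_IN | FLAG_MUTED;   /* set two flags */
    flags &= ~FLAG_MUTED;                   /* clear one */

    if (flags & FLAG_LOGGED_IN)             /* test one */
        printf("logged in\n");

    /* Shifts are bit-exact on integers; there's no FP rounding to worry about. */
    printf("flags << 3 = %llx\n", (unsigned long long)(flags << 3));
    return 0;
}
```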

Ding ding ding. What have people been saying about the whole 64-bit hype since day 1?
Not only is it useless to most people, but code that's poorly converted to 64-bit (or pure 64-bit code) is going to suck up tons more memory, because pointers (and, in some data models, long integers) double from 32 to 64 bits, so those variables take up twice as much RAM! :O Better stock up on that PC3200.
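(You can see the size difference with a trivial C check; my example. Under the common LP64 model used by 64-bit Linux, long and pointers double to 8 bytes while plain int stays at 4.)

```c
/* Print the sizes of common types.  Typical results:
 *   32-bit build:       int=4  long=4  void*=4
 *   64-bit LP64 build:  int=4  long=8  void*=8
 * so pointer-heavy data structures roughly double in size when recompiled. */
#include <stdio.h>

int main(void)
{
    printf("sizeof(int)   = %zu\n", sizeof(int));
    printf("sizeof(long)  = %zu\n", sizeof(long));
    printf("sizeof(void*) = %zu\n", sizeof(void *));
    return 0;
}
```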

No, but x86-64, again, doesn't just extend the GPR's to 64-bit; it also adds 8 more SSE/SSE2 (XMM) registers and 8 more GPR's. This can potentially speed up SSE/SSE2 applications by 10-15% and normal general purpose operations that use the GPR's by 10-15%, maybe more.
That is assuming, of course, that your code is recompiled to make use of the extra registers.
You know, I was thinking about this in an earlier thread somewhere... I bet if one tried, they could write an OS that uses the extra registers kind of like Intel's Hyper-Threading when running only 32-bit apps, in some sort of special '32-bit compatibility mode'. It'd make for faster thread switching and better CPU utilization if the OS could mimic the registers of a second processor. Granted, there would be a lot of details to work out, but it'd be kind of cool for people who only run 32-bit apps anyway.

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>