sminlal :

"__q4quality wrote :__ All your pointing out is the inconsistency in the windows display method for binary number display."

There are only two ways to make them consistent: either eliminate the multipliers altogether and just show the size as a long, comma-separated decimal value so that mismatched multipliers can never arise, or use decimal-based multipliers.

I guess I need to clarify this for you.

I will repeat a few things, but here I go:

It is easy to keep things straight:

1) When talking in computer terms, numbers are in "binary" units (with multipliers K = 1024, M = K*K = 1024*1024, etc...). And,

2) When talking in non-computer terms, numbers are in decimal (with multipliers K=1000, M = 1000*1000, etc...).

And,

**3)** when displaying "binary" numbers, to avoid confusing people who are used to decimal number representation, simply display them in this format: # S (Bytes or whatever)

- where # is a number from 0 - 1023 (or 1024, if you like), and

- where S is a multiplier like K=1024 or M=1024*1024, or G, or T, etc... . These are powers of 1024, instead of the powers of 1000 that SI multiplier notation uses. # is basically a base-1024 "digit", and well suited to representing a true binary number.

For example:

- 1023 B or

- 1024 B or

- 1 KB (here #=1 and S=K=1024) or

- 962.89453125 MB or

- 963 MB (here #=963 and S=M=1024*1024) or

- 1000 MB or

- 1023 MB or

- 1.0009765625 GB.

All the above numbers are easy to compare with each other.
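As a sketch, the display rule above can be written down in a few lines of Python (the `format_binary` helper name and the rounding choice are mine, not part of the rule itself):

```python
def format_binary(n_bytes):
    """Display a byte count using binary (power-of-1024) multipliers.

    Keeps the familiar K/M/G letters, but with their "binary" meanings
    (K = 1024, M = 1024**2, ...), so # stays in the range 0 - 1023.
    """
    units = ["B", "KB", "MB", "GB", "TB", "PB"]
    value = float(n_bytes)
    for unit in units:
        if value < 1024 or unit == units[-1]:
            break
        value /= 1024.0
    # Show exact whole numbers plainly; round anything else for readability.
    if value == int(value):
        return f"{int(value)} {unit}"
    return f"{round(value)} {unit}"

print(format_binary(1023))        # 1023 B
print(format_binary(1024))        # 1 KB
print(format_binary(1009668096))  # 963 MB
```

Because each step divides by 1024 rather than 1000, the letters keep their "binary" meanings all the way up the chain.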

You can even convert 962.89453125 MB to decimal notation without multipliers if you want to (it's 1,009,668,096 Bytes), but it's pretty obvious that this number is less than 963 MB when displayed correctly.
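That conversion is easy to check with a quick Python sanity check, using nothing but the binary M = 1024*1024 multiplier from the list above:

```python
mb = 1024 * 1024                 # binary M = 1,048,576 Bytes

exact_bytes = 962.89453125 * mb  # the MB figure from the example
print(int(exact_bytes))          # 1009668096

# And it is indeed just under 963 MB:
print(int(exact_bytes) < 963 * mb)  # True
```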

This is just an appropriate and consistent method for displaying "binary" numbers to people used to a decimal number system.

And

**that** is the whole point.

Well that, and that we should keep a consistent number-representation language along the whole chain from low-level designer to end user (i.e. we should keep KB, MB, GB, ... with their "binary" meanings of 1024 B, 1024^2 B, 1024^3 B, ... instead of using the SI notation of 1000 B, 1000^2 B, 1000^3 B, ...).

What Windows does is an inconsistent display of "binary" numbers,

*and for you this inconsistency is the problem.*

p.s. We are talking about a number-representation "language" for numbers used in computers vs. other areas of life.

And yes I do think we should keep consistent language between designers and end users, in this respect at least.

That does not mean that if a designer decides to represent 35 decimal as "AHR" for other purposes, they should not do so.

But God forbid a low-level software designer who decides to represent 0x0023 as 35 decimal in the situation you described above.