Archived from groups: misc.consumers,comp.sys.ibm.pc.hardware.video
> Ummmm...you know of a display technology that
> inherently speaks floating-point notation? 🙂
Since you fail to quote the specific part of my post, I must extrapolate that
you mean this snip:
"24 bit color has 8 bits per component which means 256 different
levels of intensity. This is adequate (not enough, IMHO; but adequate.. in
near future we'll be using floating-point color everywhere but not just
yet..)."
When you put it into the context of memory organization, you realize I was
talking about the framebuffer, not display technology. Do you want another
lengthy lecture on why a floating-point framebuffer is advantageous in image
generation? I really don't have as much time currently as I had when I made
the previous post.
I will be brief:
- a floating-point framebuffer has more steps for intensity ramps in the
low-intensity range, which is crucial for good-looking gamma correction: when
you "zoom" (luminance-wise) into a limited numeric range with fixed integer
steps from one intensity to the next, banding appears, especially in the
really dark areas of the generated image.
- we can store intensity values above the 1.0 maximum. Even though the DAC
will clamp each component to 1.0, when we do post-processing and multipass
rendering we find that more vibrant and brighter images are possible than
with a color-blending system where everything is saturated to the range from
0.0 to 1.0.
- more precision means less loss of precision. Example: if we blend 4 layers
together, we lose the lowest bit to round-off error every time, so the
maximum accumulated error is 4 units, which is two bits. Out of 8 bits of
precision that leaves only 6 significant bits, so we have only 64 distinct
luminance values whose computation we actually control.
- 8 bits per primary component is too little for any serious work; the next
logical step is 16 bits per component because it adds up to a nice
power-of-two total (64 bits). Silicon Graphics used 12 integer bits per
primary on their workstation chipsets for years. That was a compromise we no
longer have to make, because we are past the point where floating-point
arithmetic is too cost-inefficient to implement at an affordable price. The
reason is that we can now pack more transistors into a smaller area, which
means smaller chips, which means higher yields, which means products are
cheaper to manufacture, so we can finally get this incredible performance and
feature-rich hardware for peanuts.
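The round-off point above can be sketched in a few lines. This is a toy
example with made-up values: four very dim layers blended at 50%, assuming
the integer pipeline truncates on every framebuffer write (real hardware may
round or dither instead).

```python
def blend_u8(dst, src, alpha):
    # 8-bit integer pipeline: result is truncated to a whole step on every pass
    return int(dst * (1.0 - alpha) + src * alpha)

def blend_f(dst, src, alpha):
    # floating-point pipeline: full precision is kept between passes
    return dst * (1.0 - alpha) + src * alpha

layers = [1, 1, 1, 1]          # four very dark source layers (0..255 scale)
acc_u8, acc_f = 0, 0.0
for s in layers:
    acc_u8 = blend_u8(acc_u8, s, 0.5)
    acc_f = blend_f(acc_f, float(s), 0.5)

print(acc_u8)   # 0      -- the dark detail is lost entirely
print(acc_f)    # 0.9375 -- survives, and rounds to 1 at scan-out
```

The integer buffer throws the contribution away on every pass, exactly the
dark-area banding described in the first bullet; the float buffer only
quantizes once, at the very end.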
It really doesn't take a rocket scientist to figure this out. The display
technology doesn't even have to keep up: 8-bit or even 10-bit per component
displays are alright. More would obviously be better, but the more serious
quality bottleneck, the framebuffer, has at least been overcome in higher-end
hardware. Now there is loss only at the end of the pipeline, not in every
single step; add the increased range for color components during image
generation and it's a no-brainer. So the increased precision helps in more
ways than one.
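The increased-range point is easy to show with numbers. A hypothetical bright
intensity of 1.7 gets clamped to 1.0 in a saturating pipeline, so a later
darkening pass (a fade, tone mapping, a lens effect) can never recover the
highlight; a float buffer keeps it:

```python
bright = 1.7    # intensity after additive multipass rendering (made up)
fade = 0.5      # a later multiplicative post-processing pass

clamped = min(bright, 1.0) * fade   # saturating pipeline: 0.5, detail gone
floated = bright * fade             # float pipeline: 0.85, highlight survives

print(clamped, floated)
```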
Did you know that it is very difficult to get a "perfect gray" in RGB565?
Interested in why? Even if not, the point is that 888 is just a hair's
breadth away from 565, barely above the threshold of adequate. I would expect
intelligent beings to aim higher. And we are.
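The gray problem comes from the mismatched channel depths: red and blue get 5
bits (32 levels) while green gets 6 (64 levels), and the two scales rarely
land on the same intensity. A quick count, assuming the common bit-replication
expansion to 8 bits (some hardware just shifts or dithers instead):

```python
def expand5(v):
    # 5-bit channel -> 8-bit by bit replication
    return (v << 3) | (v >> 2)

def expand6(v):
    # 6-bit channel -> 8-bit by bit replication
    return (v << 2) | (v >> 4)

# a perfect gray needs the expanded red/blue level to equal an expanded green
grays = [(r, g) for r in range(32) for g in range(64)
         if expand5(r) == expand6(g)]
print(len(grays))   # 8 -- only 8 exact grays out of 32 red/blue levels
```

Black and white are among the eight, but almost every mid-tone "gray" in
RGB565 carries a slight green or magenta cast.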