What Is 10-Bit (And 12-Bit) Color?


abtocool

I find this article of poor quality and not clear enough; too many words were used to say very little.
 

bit_user

The point about underflow is an unhelpful digression. If we're talking about unsigned, unbiased integers, then no amount of bits will avoid that problem. An example using overflow would've been slightly more relevant.

Why not just use the stair step analogy? It's simple and easy to understand.

The example that uses 8-bit indexed color is misleading. Most people don't know about indexed color and would confuse 8 bits per pixel with 8 bits per channel (i.e. the 24-bit example).
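
For instance, a minimal sketch (not from the article): with plain unsigned 8-bit math, brightening an already-bright channel just wraps around unless you saturate.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>

int main() {
    uint8_t pixel = 250;   // an already-bright 8-bit channel value
    int     boost = 20;    // brightness increase

    // Plain unsigned math wraps: 270 becomes 14, so a bright pixel turns dark.
    uint8_t wrapped = static_cast<uint8_t>(pixel + boost);

    // Saturating (clamped) math pins the result at the maximum code instead.
    uint8_t saturated = static_cast<uint8_t>(std::min(pixel + boost, 255));

    std::printf("wrapped: %d, saturated: %d\n", wrapped, saturated);
    return 0;
}
```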
 

cia1413

I don't think you understand the use of a look-up table (LUT). A LUT is used to correct the color of an image. The reason you use a LUT on an Eizo is that you can easily add color correction, often just to change the feel of an image, and quickly apply it to the monitor's output. It's not used as a "trick" to make the image look better, just a tool to change it.
 

abtocool

Hey, isn't the LUT basically a database of colors (running into billions), the whole idea being that the monitor's processor doesn't need to work out which color to produce each time, and can just look it up (recall it) from the LUT?

A 14-bit LUT (BenQ/ViewSonic) generates 4.39 trillion colors, which are then used to cherry-pick the 1.07 billion colors of the 10-bit output and produce the appropriate result.


 

koga73

Another benefit of 8 bits per channel is that it fits nicely into a 32-bit value: ARGB = 8 bits per channel × 4 channels (A is alpha, for transparency). 10- or 12-bit color values wouldn't work with 32-bit applications. Assuming your applications are 64-bit, you could go up to 16 bits per channel (16 × 4 = 64).
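
For reference, the packing I mean looks roughly like this (just a sketch, not tied to any particular API):

```cpp
#include <cstdint>

// Pack four 8-bit channels into one 32-bit ARGB word.
uint32_t pack_argb8(uint8_t a, uint8_t r, uint8_t g, uint8_t b) {
    return (uint32_t(a) << 24) | (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
}

// Unpack one channel back out (red shown here).
uint8_t red_of(uint32_t argb) {
    return uint8_t((argb >> 16) & 0xFF);
}
```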
 

cia1413



No, it's a way to modify colors so they're displayed in a new way. You make a LUT to correct for an error in a display's light curve, or to add a "look", like making the image blue and darker to emulate night time.
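
Roughly, the idea looks like this (a minimal sketch, not any particular monitor's firmware): each 10-bit input code indexes a precomputed table of higher-precision correction values, say 14-bit like the panels mentioned above, so applying the curve at display time is just a memory read rather than a per-pixel calculation.

```cpp
#include <array>
#include <cmath>
#include <cstdint>

// Build a per-channel 1D correction LUT: 10-bit input index -> 14-bit internal value.
// The gamma adjustment here is only an example of a stored correction curve.
std::array<uint16_t, 1024> build_lut(double gamma) {
    std::array<uint16_t, 1024> lut{};
    for (int code = 0; code < 1024; ++code) {
        double normalized = code / 1023.0;                            // 10-bit code -> [0, 1]
        double corrected  = std::pow(normalized, gamma);              // apply the correction curve
        lut[code] = static_cast<uint16_t>(corrected * 16383.0 + 0.5); // store at 14-bit precision
    }
    return lut;
}

// Applying the LUT is just an indexed read - no math per pixel at display time.
uint16_t correct_channel(const std::array<uint16_t, 1024>& lut, uint16_t code10) {
    return lut[code10 & 0x3FF];
}
```

The extra internal precision is there so rounding in the correction step doesn't reintroduce banding in the 10-bit output.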
 

bit_user


That's a separate matter from what the display actually supports and how the data is transmitted to it.


Of course it can work. It just doesn't pack as densely or easily as 8 bits per channel.

Moreover, 3D rendering typically uses floating-point formats to represent colors. It's only at the end that you'd map it to the gamut and luminance range supported by the display.
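
As a rough sketch of that last step (assuming a 10:10:10:2 layout along the lines of the R10G10B10A2 formats that graphics APIs expose): normalized float channels get quantized to 10 bits and still fit in a 32-bit word, just with only 2 bits left over for alpha.

```cpp
#include <algorithm>
#include <cstdint>

// Quantize a normalized float channel to a 10-bit code.
uint32_t to_10bit(float v) {
    v = std::clamp(v, 0.0f, 1.0f);
    return static_cast<uint32_t>(v * 1023.0f + 0.5f);
}

// Pack three 10-bit color channels plus a 2-bit alpha into one 32-bit word.
uint32_t pack_rgb10_a2(float r, float g, float b, uint32_t a2 = 3) {
    return to_10bit(r) | (to_10bit(g) << 10) | (to_10bit(b) << 20) | ((a2 & 0x3u) << 30);
}
```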
 

TripleHeinz



I was about to answer something similar but you nailed it first and perfectly.

I understand that all of this is about how conveniently a color packs into a 32-bit value.

D3D9 worked extensively with 32-bit unsigned integer formats for color representation. But current APIs use normalized (between 0 and 1) 32-bit floating-point values to represent a color; internally, the GPU then converts and packs the floating-point data into the output format. I think this is an awesome way to do it, and it's compatible with virtually any format.

A 32-bit application can perfectly well represent and work with 10- and 12-bit color values. A 32-bit system can also work with 64-bit values. What's more, the x86 architecture supports 80-bit extended-precision floating point in hardware.
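
A quick way to see that extended precision (a sketch; with GCC or Clang on x86, long double maps to the 80-bit x87 format, though other compilers and platforms differ):

```cpp
#include <cstdio>
#include <limits>

int main() {
    // On x86 with GCC/Clang, long double is the 80-bit x87 extended format:
    // 64 mantissa bits versus 53 for a 64-bit double.
    std::printf("double mantissa bits:      %d\n", std::numeric_limits<double>::digits);
    std::printf("long double mantissa bits: %d\n", std::numeric_limits<long double>::digits);
    std::printf("sizeof(long double):       %zu bytes\n", sizeof(long double));
    return 0;
}
```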
 

TripleHeinz

There is an option in the Nvidia control panel to output 8 bpc or 12 bpc to the display. The 12 bpc option is available when using a fairly new panel, but not with an old one. Does that mean my monitor's panel supports 12-bit color, or is it something else?

I honestly doubt that my panel is a super-duper 12-bit one. It came from a questionable OEM and brand, but color and contrast have always looked really good, though.
 

bit_user


Ah, sad but true. Support for this exists in the x87 FPU, but not in SSE or AVX. I thought I ran across 128-bit float or int support, in some iteration of SSE or maybe AVX, but I'm not finding it.

The other thing that x87 had was hardware support for denormals. If you enable denormal support in SSE (and presumably AVX), it uses software emulation and is vastly slower. Also, x87 had instructions for transcendental and power functions.
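
FWIW, the usual workaround on the SSE side is to just flush denormals to zero rather than take the slow path, roughly like this (a sketch using the standard FTZ/DAZ control modes; SSE and SSE3 respectively):

```cpp
#include <pmmintrin.h>  // _MM_SET_DENORMALS_ZERO_MODE (SSE3)
#include <xmmintrin.h>  // _MM_SET_FLUSH_ZERO_MODE (SSE)

void disable_denormals() {
    // Results that would be denormal are flushed to zero...
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    // ...and denormal inputs are treated as zero, avoiding the slow assist path.
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
}
```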

Lastly, one sort-of-cool thing about x87 is that it's stack-based, like an RPN calculator. I think that's one of the reasons it got pushed aside by SSE: a stack architecture probably isn't good for pipelined, superscalar, out-of-order execution. But still kinda cool.

If you're fond of over-designed, antique machinery, the nearly 4 decade-old 8087 is probably a good example.

https://en.wikipedia.org/wiki/Intel_8087
 

bit_user


My guess is that they just used a commodity interface chip that supports newer versions of the HDMI and DP standards which include that bit depth. Whether it does anything interesting with > 8-bpc content is another matter.

One thing to try would be to find a game or something that supports HDR monitors. Then, try to find a good scene that demonstrates banding at 8-bit and compare it with the output mode set at 12-bit.

Based on my TV watching, I find banding is most frequently visible in skies. However, I'm not sure whether that's down to bad authoring (poor-quality conversions between the 0-255 and 16-235 ranges) rather than genuinely hitting the limit of the signal's precision.
 

TripleHeinz



That's a good idea; I was having a hard time trying to find 16-bit JPEG images online. Out of desperation, I was thinking of making a D3D program to display gradients in 8 bpc and then in "deep color" to see if there was any difference.

So the Nvidia control panel option was indeed for real 12 bpc support. I find it very interesting that GeForce cards support it, and my display does too. I'll get to the bottom of this and see how deep the rabbit hole goes.
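
Along those lines, a gradient test doesn't even need D3D to show the math (a sketch): quantize the same smooth ramp to 8 and 10 bits and count the distinct levels. Across a 3840-pixel-wide ramp, 8 bits gives only 256 levels, so each band is about 15 pixels wide.

```cpp
#include <cstdint>
#include <cstdio>
#include <set>

int main() {
    const int width = 3840;              // pixels across the ramp
    std::set<uint16_t> steps8, steps10;

    for (int x = 0; x < width; ++x) {
        float v = static_cast<float>(x) / (width - 1);              // smooth 0..1 ramp
        steps8.insert(static_cast<uint16_t>(v * 255.0f + 0.5f));    // 8-bit quantization
        steps10.insert(static_cast<uint16_t>(v * 1023.0f + 0.5f));  // 10-bit quantization
    }

    // 8-bit: 256 distinct levels (~15-pixel-wide bands); 10-bit: 1024 levels (~4 pixels).
    std::printf("8-bit levels: %zu, 10-bit levels: %zu\n", steps8.size(), steps10.size());
    return 0;
}
```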
 

bit_user


Do let us know what you find.

BTW, I'm annoyed that my 2013-era TV supports 12-bpc "deep color", yet nothing ever utilized it. Even a much-vaunted firmware update that enabled deep color on the PS3 wasn't much use, as I'm not aware of any games that ever took advantage of it. As for gamut, a handful of Sony Blu-rays supported xvYCC, but (original) Blu-ray is limited to 8-bit. xvYCC is a cool hack, but you really want to use it with 10-bit or 12-bit, or else you get banding.

Of course, we now have UHD Blu-ray, but that goes straight for the Rec. 2020 color space, and I would have to upgrade all my equipment for it. I was waiting for OLED to get cheap, but then we learned it has burn-in issues...
 