Re: [csswg-drafts] [css-color] Discussion of Conflicts & Resolutions: D50/D65, LAB/LUV, ICC/OCIO (#6061)

This is a response to a portion of the thread in #5883 that Chris noted was drifting far off topic, but is on topic here.
-----

Hi Chris @svgeesus, thank you for the comments; they stimulated additional thought (and memories) that helped me clarify some things below.

> > I do rely a bit on R.W.G. Hunt's The Reproduction of Colour where he says:
> 
> Hunt's work is well regarded and was seminal in its day, but the first edition dates to 1957. The fifth edition, which I have, is from 1995 and is a somewhat light re-warming of the 1987 fourth edition. So in terms of evaluating the impact of the 1976 color models, it necessarily misses a lot of later work.

I'm referencing the 6th (2004) edition, which includes CIECAM02, CIECAM97s, and CIEDE2000. I happen to like Hunt's writing style, as it fits well with how I assimilate information.

Nevertheless, I did not cite Hunt "in support of LUV" so much as to point to an independent statement regarding its usefulness on self-illuminated displays that was, uh, less handwavy.

But I am also thinking about some work I did nearly two years ago on color vision deficiency that relates directly to the question of "what is the best space" for color mixing or prediction on a tristimulus monitor.

### CVD Simulators

I created two CVD simulators. [The first was based on the Brettel model,](https://www.myndex.com/CVD/) which goes into LMS space to determine the deficiency. [The second is based on my theory](https://www.myndex.com/CVD/sRGBCVD) that on a tristimulus display, it is only necessary to adjust the luminance of the color primary (or primaries) affected by the given deficiency (and recombine) to predict the perceived outcome.
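As a heavily simplified illustration of that second idea (my reading of it, not the simulator's actual math), a protan simulation would linearize the sRGB channels, scale down the luminance contribution of the red primary, and re-encode. The `red_scale` default below is a placeholder chosen for the example, not a fitted value:

```python
def srgb_to_linear(c):
    """8-bit sRGB channel -> linear-light value (0-1), per IEC 61966-2-1."""
    c = c / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Linear-light value (0-1) -> 8-bit sRGB channel."""
    c = max(0.0, min(1.0, c))
    v = c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
    return round(v * 255)

def simulate_protan(rgb, red_scale=0.11):
    """Rough sketch: darken the (linear) red primary, the one a protanope
    perceives at reduced luminance, then recombine. red_scale=0.11 is an
    illustrative placeholder, not the model's real parameter."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return (linear_to_srgb(r * red_scale), linear_to_srgb(g), linear_to_srgb(b))
```

With `red_scale=1.0` the function is an identity (round-trip through the transfer curve), which is a handy sanity check.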

Here they are for protanopia, side by side. Which is which?

<img width="200" alt="Screen Shot 2021-03-03 at 4 21 05 PM" src="https://user-images.githubusercontent.com/42009457/109890726-a3a03680-7c3c-11eb-9bd7-fe14e619700a.png"> <img width="200" alt="Screen Shot 2021-03-03 at 4 20 35 PM" src="https://user-images.githubusercontent.com/42009457/109890732-a4d16380-7c3c-11eb-9979-53eae7552249.png">


### Eye stream cones

For reflected surface colors, where factors such as metamerism are unavoidable, an appearance model pretty much needs to operate in LMS space. Tristimulus monitors don't present such wide-spectrum images: a tristimulus monitor only ever emits three narrow-band (sometimes even monochromatic) colors.

For surface colors in a viewing booth, the color will always be of lower luminance than the light source (unless the object is itself luminous), meaning the colors viewed fall within the eye's current light adaptation (to the booth). The same goes for chromatic adaptation, and therefore color constancy. This requires the model to be in LMS space with the appropriate light and chromatic references for the adaptation.

The "standard" environment for a computer monitor is an ambient at about 20% luminance relative to the monitor's peak white, meaning peak white will be higher than the overall light adaptation. Most monitors today use the D65 whitepoint, and while the general guideline is that the ambient illumination be ~5500K, in fact this has very little effect on display perception: self-illuminated displays do not follow the same laws of color constancy relative to the environment as the physical objects in that environment do.


So we don't need a complete appearance model to account for phenomena such as metamerism. We need to model three narrow-band or monochromatic lights and their mixtures, and we need to model perceptual lightness/darkness.

...which...

is really not "traditional" L\*, but differs due to the position of light adaptation relative to the stimulus, not to mention "system gamma gains" and other factors.

A couple of years ago I created a dynamic list of the 140 named colors, putting the text for the name and color metrics inside each patch. For darker colors I wanted the text white, for lighter colors black, and I wanted to set that programmatically. My first instinct was to switch based on whether the named color patch's luminance was above or below 18.4 Y: under 18.4, white text; black otherwise.

As it happens, that was much too dark a point at which to switch to black text. The switch point ended up being about 36 Y.
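The switch logic can be sketched as follows, using the standard sRGB relative-luminance computation; the 0.36 flip point is the empirical value described above (the function name and structure are mine, for illustration):

```python
def relative_luminance(rgb):
    """Relative luminance Y (0-1) of an 8-bit sRGB color, per IEC 61966-2-1."""
    def lin(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def text_color(bg, flip_point=0.36):
    """White text below the flip point, black above. Note the point is
    ~0.36 Y (found empirically), not the 'middle gray' value of 0.184."""
    return "white" if relative_luminance(bg) < flip_point else "black"
```

Notably, even mid-gray `(128, 128, 128)` has a relative luminance of only ~0.22, so it still gets white text under this rule, which is exactly the counterintuitive result described above.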

The live example is at the bottom of this page; click "HTML Names sort by Luminance": https://www.myndex.com/SAPC/DEV98LABLUV


> _I mentioned the (limited) value of chromaticity diagrams as one use case. For stage lighting design, in particular, they were fairly useful. This was in the days before personal computers or even programmable calculators; it was a benefit to plot light sources on a large chromaticity diagram drawn on graph paper, connect them with a straight line, and be able to directly read off the resulting mixture chromaticities to two or perhaps three decimal places._

Funny about the size of human memory, and the context dependence of memory retrieval. I did theatrical lighting design in the 80s as well. Circa 1980, the computer club I was in had just built a Sol-20 (8080-based) with D/A converters, and the Little Theater had just bought an analog lighting controller with VCA inputs, so an early coding project was a rudimentary scene-preset controller (the lights were still positioned/focused manually). Ten years later, the last theatrical project I ever worked on was at the TPA at the former Aladdin in Vegas: "VariLights" had just been introduced, servo-motor controlled (including changing gel colors, etc.) and driven by an embedded microcontroller console...

I only mention it because you jogged my memory, and it reminds me of my lifelong fascination with light, color, and vision. LOL.


> _...The MacAdam ellipses are still very elliptical and vary greatly in size._

Yes, and for LAB as well:

<img width="400" alt="Screen Shot 2021-03-03 at 6 56 34 PM" src="https://user-images.githubusercontent.com/42009457/109906543-8c217780-7c55-11eb-9582-83ec18c66b65.png">

The best example of uniformity with the MacAdam ellipses is (I think) one of the DIN systems; I will have to dig it up.


> _The flaws of the model are not suddenly cured because the light from a colored patch is directly generated rather than being the modified, ...._

Only three narrow-band or monochromatic illuminants are being used, not a continuous spectrum, so if Grassmann's laws are valid, then a simple model should be capable of predicting the mixtures. One thing I am "working on" is whether better accuracy can be achieved in simple Luv by correcting the lightness axis, which in isolation does not align with certain data I collected, as I alluded to in a previous post. But I am nowhere near committing to anything there.
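For reference, the standard CIE 1976 L\*u\*v\* forward transform being started from looks like this (a straightforward sketch; any "corrected" lightness axis of the kind mentioned above would replace only the `L` computation):

```python
def xyz_to_luv(X, Y, Z, white=(95.047, 100.0, 108.883)):
    """CIE 1976 L*u*v* from XYZ (Y scaled 0-100). Default white is D65, 2deg."""
    def uv_prime(X, Y, Z):
        # u', v' chromaticity coordinates (CIE 1976 UCS)
        d = X + 15 * Y + 3 * Z
        return (4 * X / d, 9 * Y / d) if d else (0.0, 0.0)
    Xn, Yn, Zn = white
    u, v = uv_prime(X, Y, Z)
    un, vn = uv_prime(Xn, Yn, Zn)
    yr = Y / Yn
    # L* with the standard two-piece curve (cube root above (6/29)^3)
    L = 116 * yr ** (1 / 3) - 16 if yr > (6 / 29) ** 3 else (29 / 3) ** 3 * yr
    return L, 13 * L * (u - un), 13 * L * (v - vn)
```

The white point maps to (100, 0, 0) by construction, which makes a convenient sanity check.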

> > " HIGH DYNAMIC RANGE IMAGING (2010)" 

> (I don't have that one yet, it is eye-wateringly expensive and I fund all my book purchases; W3C doesn't cover them.)

OpAmp Technical Books used to be a few blocks from where I live. I had to give myself a rule to leave my wallet at home before going in there.

> > CalMAN™ uses CIELuv

> No, CalMan used to use Lab, with deltaE2000 for all color differences, and now has [moved to using ICtCp, with deltaITP](https://kb.portrait.com/help/ictcp-color-difference-metric).

Well, I was citing my manual, but it is an older version (V2), and I haven't used it since I replaced my CRT projector with a DLP projector... which was... UGH, a decade ago... It did lead to some interesting threads and arguments, largely that the newer deltaE functions are intended for Lab and don't work with Luv.


> > I am not trying to dismiss LAB in any way, but I did add LUV to the models I'm working with, and I've found several aspects that make it favorable over LAB on self illuminated displays.
> 
> Thanks for the clarification. Some of your comments, and in particular the personal mails you sent to me earlier, gave a very different impression where you seemed to suggest that any use of Lab was entirely erroneous and purely due to commercial pressure from one or two companies. I'm glad to hear you state more clearly your position - thanks!

NOT AT ALL, and sorry if I wasn't clear... What I am claiming is "erroneous" is the forced use of D50 when the standard is D65 (for all industries except print, and even print is discussing a change, though it's doubtful).

### Here is the very short clarification of my position:

- Web content display space is D65, and an RGB model (sRGB by default, but others are possible).
    - Best practice for working spaces is *usually* to work in the same space as delivery, though often at a much higher bit depth. There is also the useful case of linearizing the working space, at least for some operations, and applying the TRC on output.
    - I have had some very bad experiences following some of A Dough Bee's very bad workflow advice, and learned the hard way many years ago that a company that knows print and pre-press should not automatically be trusted for film/television post.
    - There is no justification to go into D50 when source and destination are D65, with the minor exception perhaps of using a working space like ProPhoto, which is not a display space, requires gamut mapping for display, and, as I said two years ago, is a mistake to implement for web-based content when its function is pre-press.
    - And I am fairly certain the error will become more apparent with HDR material. **DANGER WILL ROBINSON DANGER** 

- For **choosing** colors that will be rendered into an RGB model space, at the moment I much prefer Luv or one of the Luv variants over Lab or HSL. I have demonstrated why at other links.
    - Pairwise mixing, gradients, and color adjustments do not suffer the same problems as in Lab;
    - the hue values are better spaced and more consistent;
    - saturation can be used in addition to chroma;
    - there is no blue-purple shift (at least not that I've found yet);
    - everything is a straight-line vector to the other color or white point, and for a tristimulus space that only has three narrow-band colors, that seems to obviate the need for LMS space. On this last point, though, I am still working on some studies and modified models.
    - None of this means that Lab should not be available, only that it is trivial to add Luv or a variant, and there are advantages for the use case. Lab is useful as a PCS to transit from the RGB model to the CMYK model, no question. But how is that core to web content?
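The saturation point is worth a quick sketch: in Luv, a saturation correlate falls out of the polar form as chroma relative to lightness, which Lab does not offer. A minimal illustration (function name and structure are mine):

```python
import math

def luv_polar(L, u_star, v_star):
    """Polar correlates from CIE L*u*v*: chroma C*uv, hue angle h_uv (deg),
    and saturation s_uv = C*uv / L*, the extra correlate Luv provides."""
    C = math.hypot(u_star, v_star)                       # chroma
    h = math.degrees(math.atan2(v_star, u_star)) % 360   # hue angle, 0-360
    s = C / L if L else 0.0                              # saturation
    return C, h, s
```

Two colors with the same chroma but different lightness then get different saturation values, matching the intuition that the darker one looks more "saturated".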

- The final "big" issue is ICC profile support, and again I am NOT saying that ICC profiles should not be supported, only that they should not be supported _to the exclusion of other open-source methods like OCIO_.
    - ICC CMS is a processor hog: useless for streaming, harmful on mobile, and of limited utility for the general use case.
    - Device manufacturers love things that force people to upgrade — IMO we should avoid promoting that.
    - Plain 1D or 3D LUTs without a computationally expensive CMS are the best practice when source and destination share the same color model and the same white point, especially with provision for a simple 1D fallback LUT for lower-power mobile devices. The world is not rich, after all.
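To show how cheap the 1D-LUT path really is, here is a minimal sketch that applies a 1D LUT with linear interpolation between entries (the evenly-spaced-samples layout is an assumption for illustration):

```python
def apply_lut_1d(value, lut):
    """Apply a 1D LUT (list of output samples, evenly spaced over a 0-1
    input domain) with linear interpolation between the nearest entries."""
    x = max(0.0, min(1.0, value)) * (len(lut) - 1)  # clamp, scale to indices
    i = int(x)
    if i >= len(lut) - 1:        # exactly at (or past) the last entry
        return lut[-1]
    frac = x - i                 # distance into the current segment
    return lut[i] * (1 - frac) + lut[i + 1] * frac
```

Per channel, that is one multiply-scale, one index, and one lerp, a far cry from a full CMS pipeline, which is the point.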


### THAT'S IT
**My objections are to baking in a workflow that is closed and will have negative consequences in the future.**


> Lastly, while this is interesting and useful discussion, it is entirely unrelated to the topic of this particular issue "Don't force non-legacy colors to interpolate in a gamma-encoded space".

Moved here to the correct issue for continuance.


> I do hope, though, that your more wide-ranging views will be brought to bear during the [Color Workshop](https://www.w3.org/Graphics/Color/Workshop/). I would also encourage you to submit your research to a Color Science journal.

Yes, I am working mainly on the accessibility side for the workshop, but I intend to participate in the other discussions where appropriate.

And as to publishing: yes. However, I am shocked and dismayed by the utter scam that is most of these journals. I'm supposed to pay $2000+ to publish? WUT? For work I am doing pro bono? So I definitely will not be publishing in any pay-to-publish open-access journal; that's insane.

In Hollywood, I often could not publish due to NDAs. And what I did publish were industry articles in publications that *paid me* to write them. How wrong has our society gone that we force researchers to pay to play?? UGH!

I will be publishing after certain IP filings are made....

### Parting Shot:
Try using Lab but STAY in D65, then see if the blue-purple shift is reduced or eliminated.  😎
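For anyone who wants to try that experiment, a sketch of the standard Lab transform parameterized on the reference white; pass the D65 white (the default below) rather than D50 (≈ 96.422, 100.0, 82.521) and compare:

```python
def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):
    """CIE 1976 L*a*b* from XYZ (Y scaled 0-100), parameterized on the
    reference white. Default is D65, 2 deg observer; pass a D50 white
    instead to reproduce the usual ICC-style Lab."""
    def f(t):
        # standard two-piece function: cube root above (6/29)^3
        if t > (6 / 29) ** 3:
            return t ** (1 / 3)
        return t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(c / n) for c, n in zip((X, Y, Z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Whichever white is passed maps to (100, 0, 0), so the comparison between D65-referenced and D50-referenced results is apples to apples.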


Thank you!

Andy




-- 
GitHub Notification of comment by Myndex
Please view or discuss this issue at https://github.com/w3c/csswg-drafts/issues/6061#issuecomment-790304661 using your GitHub account


-- 
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config

Received on Thursday, 4 March 2021 05:31:14 UTC