Re: [csswg-drafts] [css-color-4] Conversion precision for hue in LCH model creates false results (#5309)

> _One thing I have noticed with sRGB to Lab/LCH conversion as it produces chaotic and very incorrect hue value in LCH color model. Converting any shade of gray to Lab/LCH will result in components `a` and `b` very close to zero, but still **not zero**. It is absolutely fine for calculating chroma, as square root of those numbers will still result in number very close to zero, however calculating hue with `Math.atan()` gives very high range of (falsy) values whenever there is any difference between `a` and `b`_

Hi Igor @snigo 

**Here's the solution I'm using in SeeLab:**

    // Send either a*b* of Lab or u*v* of Luv to create LCh

    function processLCh(au, bv) {
        var Cabuv = Math.pow(au * au + bv * bv, 0.5);
        // If Cabuv is less than 0.01, set hue to 360, 180, NaN, or whatever you need.
        // Here it's set to 0 because I wanted to return a number that is also falsy.
        var habuv = (Cabuv < 0.01) ? 0.0 : 180.0 * Math.atan2(bv, au) * piDiv;
        habuv = (habuv < 0.0) ? habuv + 360.0 : habuv;
        return [Cabuv, habuv];
    }
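A quick sanity check of the gate (a hypothetical standalone snippet, not part of SeeLab — it inlines `piDiv` and repeats the function so it runs on its own):

```javascript
// 1/pi, as in the constants block further down
const piDiv = 0.31830988618379067154;

// Same logic as processLCh above, repeated so the snippet is self-contained
function processLCh(au, bv) {
  const Cabuv = Math.pow(au * au + bv * bv, 0.5);
  let habuv = (Cabuv < 0.01) ? 0.0 : 180.0 * Math.atan2(bv, au) * piDiv;
  habuv = (habuv < 0.0) ? habuv + 360.0 : habuv;
  return [Cabuv, habuv];
}

// Near-gray: a*/b* are rounding noise, so hue is clamped to the falsy 0
const [cGray, hGray] = processLCh(0.0004, -0.0003); // hGray === 0

// A clearly chromatic color: hue computed normally
const [cRed, hRed] = processLCh(60.0, 50.0); // hRed ≈ 39.8
```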

I'm using the _Chroma_ value to determine if the _hue_ should be clamped, and `C < 0.01` is well below the 8-bit quantization level. I don't have to round a*, b*, or C at all this way, though of course when sending C to a string for display I'll add a `.toPrecision(4)` or `.toFixed()`, etc.
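The swing the gate protects against is easy to reproduce — when a* and b* are down at floating-point noise level, `atan2` still returns a full-range angle that flips wildly with the sign of the noise (a hypothetical snippet, not SeeLab code):

```javascript
// Convert radians to degrees
const deg = (r) => r * 180 / Math.PI;

// a* and b* here are pure rounding noise from a gray conversion, yet the
// "hue" swings across the whole circle depending on the sign bits alone:
const h1 = deg(Math.atan2(1e-16, 1e-16));   // ≈ 45
const h2 = deg(Math.atan2(1e-16, -1e-16));  // ≈ 135
const h3 = deg(Math.atan2(-1e-16, 1e-16));  // ≈ -45
```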

ALSO: I stay in D65 because I'm not doing anything related to print, CMYK, ProPhoto, or comparing to D50. I'm only working with RGB image, color, or display spaces that are D65, so that's all that's needed, which helps reduce noise/errors. In addition, I've pre-calculated all the constants and rounded them to 20 places, which had a great effect on reducing the noise for grey sRGB colors; the pre-calcs improve performance too.

    // Lab/Luv constant pre-calcs to 20 places:
    const CIEe = 0.0088564516790356308172;      // 216.0 / 24389.0
    const CIEk = 903.2962962962962963;          // 24389.0 / 27.0
    const CIEkdiv = 0.0011070564598794538521;   // 1.0 / (24389.0 / 27.0)
    const CIEke = 8.0;                          // CIEk * CIEe
    const CIE116 = 116.0;
    const CIE116div = 0.0086206896551724137931; // 1.0 / CIE116
    const pi180 = 0.017453292519943295769;      // Math.PI / 180 (pi divided by 180)
    const piDiv = 0.31830988618379067154;       // 1 / Math.PI, to use n * piDiv instead of n / Math.PI
    const cubeRoot = 0.33333333333333333333;    // Math.pow(n, cubeRoot) instead of Math.cbrt(n)

You can see that `CIEk * CIEe` equals exactly 8.0, as it should, but if `CIEk` were rounded incorrectly or to fewer places, an error would be introduced (especially if `CIEke` were calculated at runtime instead of being a static constant of 8.0).
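That check is cheap to automate — a hypothetical snippet using the constants above, plus the standard forward Lab transfer function they feed into:

```javascript
const CIEe = 0.0088564516790356308172;   // 216 / 24389
const CIEk = 903.2962962962962963;       // 24389 / 27
const CIE116 = 116.0;
const cubeRoot = 0.33333333333333333333; // 1/3

// (24389/27) * (216/24389) = 216/27 = 8 exactly in the rationals; with
// well-rounded constants the double product lands on (or within one ulp of) 8.0
const product = CIEk * CIEe;

// Forward Lab function using the pre-calcs: cube root above the CIEe
// threshold, linear segment below it
function fwdF(t) {
  return (t > CIEe) ? Math.pow(t, cubeRoot) : (CIEk * t + 16.0) / CIE116;
}

// Sanity check: relative luminance Y = 0.18 (18% gray) should give L* ≈ 49.5
const L = CIE116 * fwdF(0.18) - 16.0;
```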

I think I could get even lower noise in C if I also recalculated the sRGB → XYZ matrix to higher precision.
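To illustrate why the matrix matters (a hypothetical example using the commonly published 4-decimal sRGB → XYZ rows, not SeeLab's actual values): for a gray input, X/Xn, Y/Yn, and Z/Zn should all be identical, but the truncated rows don't sum exactly to the D65 white point, so a* and b* come out slightly non-zero:

```javascript
// Truncated 4-decimal sRGB -> XYZ (D65) matrix rows, summed for a gray
// input where R = G = B = v (linear):
//   X = (0.4124 + 0.3576 + 0.1805) * v = 0.9505 * v
//   Y = (0.2126 + 0.7152 + 0.0722) * v = 1.0000 * v
//   Z = (0.0193 + 0.1192 + 0.9505) * v = 1.0890 * v
const v = 0.5;
const X = 0.9505 * v, Y = 1.0 * v, Z = 1.0890 * v;

// D65 reference white at 5-decimal precision
const Xn = 0.95047, Yn = 1.0, Zn = 1.08883;

const CIEe = 0.0088564516790356308172; // 216 / 24389
const CIEk = 903.2962962962962963;     // 24389 / 27
const f = (t) => (t > CIEe) ? Math.pow(t, 1 / 3) : (CIEk * t + 16) / 116;

// Because 0.9505 !== 0.95047 and 1.0890 !== 1.08883, these differ slightly:
const a = 500 * (f(X / Xn) - f(Y / Yn)); // small positive, non-zero
const b = 200 * (f(Y / Yn) - f(Z / Zn)); // small negative, non-zero
const C = Math.sqrt(a * a + b * b);      // the noise floor in chroma
```

With these rounded numbers, gray at v = 0.5 lands at a chroma on the order of 0.01 — right around the gate's threshold — whereas a higher-precision matrix and white point push that noise floor far lower.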

GitHub Notification of comment by Myndex

Received on Wednesday, 25 November 2020 21:41:29 UTC