Re: [csswg-drafts] [css-color-4] Serialization/normalization of color() (and other advanced color functions) (#4826)

> Going from 8 bits to 12 bits: 1 / 16 = 0.0625 which is much bigger than 0.001. So maybe 0.05?

If they're using fixed bit-depth channels like that, then we don't have to worry about rounding: you get an integer exactly when the value is an integer (the last four bits are zero), and not otherwise.
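To illustrate (a hedged sketch, assuming the simple shift-based 8→12-bit mapping implied by the 1/16 figure above, not bit replication): a 12-bit channel value maps back to an exact 8-bit integer precisely when its low four bits are zero.

```python
def is_exact_8bit(v12: int) -> bool:
    """True when the 12-bit channel value v12 corresponds exactly to an
    8-bit integer under the shift mapping v12 = v8 << 4 (an assumption,
    for illustration only)."""
    return (v12 & 0x0F) == 0

# 0x10 is 16, i.e. 16/16 = 1 exactly; 0x11 is 17, i.e. 17/16, not an integer.
```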

What we have to worry about is things like using float-channel extended sRGB, where it's not guaranteed that we'll get back to an integer value. (Chrome's current plan is to just upgrade its color handling to always be 32-bit float channels in extended sRGB. Our textures will be f16, to reduce memory usage, but CSS colors can afford to pay for the convenience of f32.)

A .001 epsilon is probably fine for a 32-bit float, but .01 is likely fine as well.
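As a rough sketch of what that epsilon would do (the function name, the .001 default, and snapping to 8-bit steps are assumptions for illustration; the spec text would define the real rule): a float channel gets snapped to an exact 8-bit value when it's within epsilon of one, measured in 8-bit units, matching how the 1/16 figure above was compared against .001.

```python
def snap_channel(value: float, eps: float = 0.001) -> float:
    """Snap a [0, 1] float channel to an exact 8-bit step when it is
    within eps (measured in units of 8-bit steps); otherwise leave it
    unchanged. Hypothetical helper, not spec text."""
    scaled = value * 255.0
    nearest = round(scaled)
    if abs(scaled - nearest) < eps:
        return nearest / 255.0
    return value
```

A value that round-tripped through a float conversion with a tiny error (say, 100/255 plus 1e-7) snaps back to the exact integer step, while a value genuinely between steps is left alone.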

> No, I'm asking what is the minimum bit depth that we need to preserve. Since Rec 2020 allows 10 and recommends 12, we need to preserve at least 12 bits.

Still a little confused. Are you thinking that browsers would round to a particular bit depth, *then* write it out in decimal?

They'll just do the conversion to sRGB with floats and serialize the result, using the normal serialization rules.
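A simplified sketch of that last step (this is illustrative only, not the spec's actual serialization rules; the legacy integer `rgb()` form and plain nearest-integer rounding are assumptions):

```python
def serialize_srgb(r: float, g: float, b: float) -> str:
    """Serialize float sRGB channels in the legacy rgb() form by
    rounding each channel to the nearest 8-bit integer. Simplified
    stand-in for the real CSS serialization rules."""
    return "rgb({}, {}, {})".format(*(round(c * 255) for c in (r, g, b)))
```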

> That seems to be comparing two floats, alpha and 1.0, which is typically seen as poor practice. Shouldn't that be updated to say that if 1.0 - α > ε for some suitably defined ε?

Yeah, probably. It's lower priority, since 1 is exactly representable and alpha doesn't usually have math done on it, but we might as well make all these changes together.
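In code, the suggested check would look something like this (the function name and the particular ε are assumptions; the spec would pick its own):

```python
def alpha_is_opaque(alpha: float, eps: float = 1e-6) -> bool:
    """Treat alpha as fully opaque when 1.0 - alpha <= eps, instead of
    comparing two floats for exact equality. Illustrative helper only."""
    return 1.0 - alpha <= eps
```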

-- 
GitHub Notification of comment by tabatkins
Please view or discuss this issue at https://github.com/w3c/csswg-drafts/issues/4826#issuecomment-600724751 using your GitHub account

Received on Wednesday, 18 March 2020 16:16:21 UTC