Re: [w3ctag/design-reviews] Canvas 2D color management (#646)

> Relative Colorimetric is essentially a set of rules for how gamut mapping should happen, not a gamut mapping algorithm. The per-component clamping you describe does conform to RC, but is a very poor implementation of it. E.g. consider the sRGB color `rgb(100% 200% 400%)`. Using per-component clamping, it would just be converted to achromatic white.

Yes, good point. And yes, particularly when extended into HDR, per-component clamping can create pretty poor-looking results.
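To make the failure mode concrete, here's a minimal sketch of naive per-component clamping (an illustration, not any browser's actual implementation). Components are expressed as fractions of 1, so `rgb(100% 200% 400%)` is `[1.0, 2.0, 4.0]`:

```javascript
// Naive per-component clamping: each channel is clamped to [0, 1]
// independently, with no attempt to preserve hue or chroma.
function clampPerComponent([r, g, b]) {
  const clamp = (v) => Math.min(1, Math.max(0, v));
  return [clamp(r), clamp(g), clamp(b)];
}

// rgb(100% 200% 400%) collapses to achromatic white -- all hue
// information is discarded.
console.log(clampPerComponent([1.0, 2.0, 4.0])); // [1, 1, 1]
```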

> That said, Canvas is not the place to define how gamut mapping happens in the Web platform, and there are plans to flesh this out more in CSS Color 4. Meanwhile, please avoid prose that renders implementations non-conformant if they don’t use naïve clamping in the spec (in case there was any).

Thanks for the heads-up. We can soften the language around the particular gamut mapping algorithm in the canvas section (I had been trying to nail that variable down, but if it's being taken care of in a more central effort, that would be better).


FYI, a related topic -- HDR tone mapping, i.e. mapping from a larger luminance+chrominance range down to a narrower one -- comes up periodically in the ColorWeb CG HDR discussions.
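As a rough illustration of what's meant by compressing a larger luminance range into a narrower one, here is a simple Reinhard-style curve on scene luminance. This is just one of many possible operators, chosen for brevity; the ColorWeb CG discussions cover far more sophisticated approaches:

```javascript
// Reinhard tone mapping: compress luminance from [0, ∞) into [0, 1).
// Dark values pass through nearly unchanged; bright highlights are
// compressed asymptotically toward (but never reach) 1.
function reinhard(luminance) {
  return luminance / (1 + luminance);
}

reinhard(0);   // 0 -- black stays black
reinhard(1);   // 0.5 -- mid-range values are noticeably compressed
reinhard(100); // ~0.99 -- extreme highlights stay just below 1
```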

> But beyond _how_ gamut mapping happens, there's also the question of _whether_ it happens. The current behavior of restricting everything on a canvas to the gamut of the color space it's defined on is reasonable. Using the `colorSpace` argument to just specify a working color space, and allowing both in-gamut and out-of-gamut colors on the canvas also seems reasonable. What was the rationale of going with the first, rather than the second, option? Did you find it satisfies more use cases?

With respect to Display P3, most (perhaps all?) users and use cases we encountered wanted the gamut capability of Display P3, rather than Display P3 as a working space. They didn't mind having Display P3 as the working space -- it's "sRGB-like" enough that it comes with no surprises compared to the default behavior -- but that wasn't the part of the feature they were most after.

Allowing in-gamut and out-of-gamut colors requires having >8 bits per pixel of storage. That isn't much for a moderately-powerful desktop or laptop, but it is quite a burden (especially with respect to power consumption) for small battery-powered devices, and so most (I'm again tempted to say all?) users that I've encountered wanted Display P3 with 8 bits per pixel.

(The rest of this might get a bit rambly, but it may be useful background on how we ended up where we did.)

In some of the very early versions of the canvas work we tried to separate the working color space from the storage color space. That ended up becoming unwieldy, and we discarded it -- it ended up being much more straightforward to have the storage and working space be the same. In practice, having a separate working space meant having an additional pass using that working space as a storage space, and so having the two not match ended up being downside-only. (There was one sort-of-exception, sRGB framebuffer encoding, which is useful for physically based rendering engines, but is very tightly tied to hardware texture/renderbuffer formats, and so we ended up moving it to a [separate WebGL change](https://github.com/KhronosGroup/WebGL/pull/3222), and those formats will also eventually find their way to WebGPU's GPUSwapChainDescriptor).

We also discussed having some way to automatically allow arbitrary-gamut content that "just works", without having to specify any additional parameters, and without any performance penalties. One of the ideas was to automatically detect out-of-gamut inputs and upgrade the canvas. This one was discarded because it would add performance cliffs, would have a complicated implementation, and might not be what an application wants (e.g., if just one pixel is one bit outside of the gamut, they may prefer it to be clipped rather than pay a cost). Another idea was to use the output display device's color space, but that would then become a fingerprinting vector (and would also have the issue that the output display device is a moving target).
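To sketch why the auto-detect idea creates a performance cliff (a hypothetical illustration, not an implementation from the proposal): the check itself is a full scan of the pixel data, and a single marginally out-of-gamut component flips the result and would force the expensive canvas upgrade.

```javascript
// Hypothetical out-of-gamut scan: returns true if any component
// falls outside [0, 1] by more than epsilon. One stray pixel is
// enough to trigger a canvas upgrade under the discarded design.
function hasOutOfGamutPixel(pixels, epsilon = 0) {
  return pixels.some((v) => v < -epsilon || v > 1 + epsilon);
}

// A single component barely over 1.0 flips the result:
hasOutOfGamutPixel([0.2, 0.5, 1.0, 0.9]);    // false
hasOutOfGamutPixel([0.2, 0.5, 1.0001, 0.9]); // true
```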


-- 
Reply to this email directly or view it on GitHub:
https://github.com/w3ctag/design-reviews/issues/646#issuecomment-864436206

Received on Monday, 21 June 2021 03:57:58 UTC