Re: [whatwg] Canvas and color colorspaces (was: WebGL and ImageBitmaps)

On May 1, 2016 1:08 PM, "Rik Cabanier" <cabanier@gmail.com> wrote:
>
> Great to hear!
> Are there minutes posted?

As far as I know, the minutes and mail archives are visible to members
only. I'm working to capture what we have so far and to move the discussion
to a broader and more visible forum.
>
>
> On Sunday, May 1, 2016, Justin Novosad <junov@google.com> wrote:
>>
>> There is currently an ongoing discussion with the Khronos Web3D group to
develop a proposal that solves these problems in canvas. Over the past few
weeks we have converged on a solution that I think is pretty solid. I am in
the process of writing up the HTML (non-WebGL) part of the proposal and I
intend to post it to the WICG shortly so that we can incubate it further
with a broader audience.  When that happens, I will update this thread.
>>
>> On Sat, Apr 30, 2016 at 2:07 PM, Rik Cabanier <cabanier@gmail.com> wrote:
>>>
>>> [Sorry to revive this old thread]
>>> All,
>>>
>>> With the advent of DCI-P3 compliant monitors and Apple's Safari
color-managing to the output device, we're seeing some issues in this area.
>>>
>>> - Currently, WebKit sets the profile of the canvas backing store to
sRGB regardless of the output device. Because of this, wide-gamut images
are always clipped to sRGB. [1]
>>> It would be ideal if we could specify that the canvas backing store is
in the device profile.
>>> Alternatively, we could add an API to attach a color profile to the
canvas (a rough sketch follows below).
>>> - The spec currently states that toDataURL should not include a
profile. However, if the backing store is in the device's color space, the
generated image should include the correct profile. Otherwise, if you draw
that bitmap in a browser that color-manages to the device (i.e. Safari),
the colors will look too saturated.
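>>>
>>> As a rough sketch of the "attach a color profile" idea above (purely
hypothetical; the colorSpace creation option below does not exist today
and is only meant to illustrate the shape such an API could take):
>>>
>>>   var canvas = document.createElement('canvas');
>>>   // Hypothetical creation option: ask for a backing store in the
>>>   // display's wide-gamut space instead of the unspecified default.
>>>   var ctx = canvas.getContext('2d', { colorSpace: 'display-p3' });
>>>   // Drawing, toDataURL and getImageData would then all operate in
>>>   // that space.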
>>>
>>> If we agree that canvas is in the device space, I'd like to see the
spec [2] clarified to state that compositing on the canvas should match
compositing on the HTML surface.
>>> Specifically:
>>>>
>>>> The canvas APIs must perform colour correction at only two points:
when rendering images with their own gamma correction and colour space
information onto a bitmap, to convert the image to the colour space used by
the bitmaps (e.g. using the 2D Context's drawImage() method with
an HTMLOrSVGImageElement object), and when rendering the actual canvas
bitmap to the output device.
>>>
>>> Becomes:
>>>>
>>>> The canvas APIs must perform colour correction at only one point: when
rendering content with its own gamma correction and colour space
information onto a bitmap, to convert the content to the colour space used
by the bitmaps (e.g. using the 2D Context's drawImage() method with
an HTMLOrSVGImageElement object).
>>>
>>>
>>> toDataURL and toBlob [3] should also be enhanced so that they include
the device profile when it differs from sRGB.
>>>
>>> It would also be great if the browser could let us know what profile
(if any) it was using.
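>>>
>>> As a rough sketch of the kind of query I mean (the CSS color-gamut
media feature is one possible mechanism; exposing the actual profile
would need new API):
>>>
>>>   // True when the output device covers roughly the P3 gamut or wider.
>>>   var wideGamut = window.matchMedia('(color-gamut: p3)').matches;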
>>>
>>> 1:
https://github.com/WebKit/webkit/blob/112c663463807e8676765cb7a006d415c372f447/Source/WebCore/platform/graphics/ImageBuffer.h#L73
>>> 2:
https://html.spec.whatwg.org/multipage/scripting.html#colour-spaces-and-colour-correction
>>> 3:
https://html.spec.whatwg.org/multipage/scripting.html#dom-canvas-todataurl
>>>
>>>
>>>
>>> On Thu, May 22, 2014 at 12:21 PM, Justin Novosad <junov@google.com>
wrote:
>>>>
>>>> tl;dr: The color space of canvas backing stores is undefined, which
causes problems for many web devs, but also has non-negligible advantages.
So be careful what you wish for.
>>>>
>>>> I saw some confusion and questions needing answers in the "WebGL and
ImageBitmaps" thread regarding color management. I will attempt to clarify
to the best of my abilities. Though I am knowledgeable on the subject, I am
not an absolute authority, so others are welcome to correct me if I am
wrong about anything.
>>>>
>>>> Color management... To make a long story short, there are two types of
color profiles: input profiles and output profiles, for characterizing
input devices (cameras, scanners) and output devices (displays, printers)
respectively.
>>>> Image files will usually encode their color information in a standard
color space or in an input-device-dependent space. If colors are encoded
in a color space that is different from the format's default, then a color
profile or a color space identifier must be encoded into the image
resource's metadata.
>>>>
>>>> To present color-managed image content on screen, the image first
needs to be converted from its encoded color space into a standard
"connection space", using the color profile or color space metadata from
the image resource. Then the colors need to be converted from the profile
connection space to the output space, which is provided by the OS/display
driver. Depending on the OS and hardware configuration, the output space
may be a standard color space (like sRGB) or a device-specific color
profile.
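>>>>
>>>> To make that concrete, here is a small sketch of the first hop, image
space to connection space, assuming the source happens to be sRGB (the
transfer function and matrix below are the standard sRGB/D65 ones; a real
CMM does this using the embedded profile and then applies the output
profile to leave XYZ):
>>>>
>>>>   function srgbToLinear(c) {  // c in [0, 1]
>>>>     return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
>>>>   }
>>>>   function srgbToXYZ(r8, g8, b8) {  // 8-bit sRGB -> CIE XYZ
>>>>     var r = srgbToLinear(r8 / 255),
>>>>         g = srgbToLinear(g8 / 255),
>>>>         b = srgbToLinear(b8 / 255);
>>>>     return [0.4124 * r + 0.3576 * g + 0.1805 * b,   // X
>>>>             0.2126 * r + 0.7152 * g + 0.0722 * b,   // Y
>>>>             0.0193 * r + 0.1192 * g + 0.9503 * b];  // Z
>>>>   }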
>>>>
>>>> Currently, many color-managed software applications rely on the codec
to take care of the entire color-management process for image and video
content, meaning that the decoded image data is in an output-referred
color space (i.e. the display's profile was applied). There are practical
reasons for this, the most important ones being color fidelity and memory
consumption. Let me explain.
>>>>
>>>> The profile connection space is typically CIE XYZ or CIE L*a*b*. I
won't get into the technical details of how these work except to say that
they are device independent and allow for an accurate representation of
the whole spectrum of human-visible colors. This makes it possible to map
colors from a wide gamut camera to a wide gamut display with high color
fidelity for all the colors that are located in the intersection of the
color gamuts of both the input and output devices. If we were forced to
convert the image to an intermediate sRGB representation, the colors in
the image would be clamped to the sRGB gamut (which is narrower than the
gamuts of many devices). Currently, most browsers avoid doing that for
<img>, and therefore provide (more or less) optimal image and video color
fidelity for users of wide gamut devices.
>>>>
>>>> Also, an intermediate representation in 8-bit sRGB means loss of
precision due to rounding errors, whereas the profile connection space
uses higher-precision registers for intermediate color values to avoid
such rounding issues. To avoid perceptible precision issues in an
intermediate sRGB representation, we'd have to increase the bit depth and
therefore use more RAM for storing decoded image data.
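>>>>
>>>> A toy illustration of the precision point, with made-up numbers:
forcing an intermediate result through 8 bits discards information that a
high-precision connection space keeps.
>>>>
>>>>   var value = 0.73125;                      // some intermediate linear value
>>>>   var quantized = Math.round(value * 255);  // 186
>>>>   var recovered = quantized / 255;          // ~0.7294, not 0.73125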
>>>>
>>>> All of this is to say that there are good reasons for the current
situation where we deal with decoded images that have the output device's
color profile pre-applied: color fidelity and memory consumption.
>>>>
>>>> In the case of 2D canvas, the color space for the backing store is
unspecified, and many implementations have chosen to use the output
device's color space, which has many advantages:
>>>> * images and videos are already decoded directly into that space
>>>> * no color conversion is necessary when presenting the canvas on
screen (good for performance)
>>>> * there is no loss of precision due to the use of a limited-precision
intermediate color space.
>>>> * the color gamut is not constrained by an intermediate color space
(like sRGB).
>>>> And disadvantages:
>>>> * Compositing operations produce incorrect results because most of
them (including source-over) are affected by the color space.
>>>> * direct pixel manipulation using put/getImageData exposes data in a
color space that is undefined, making it extremely challenging to perform
many types of image processing and image generation tasks in a
device-independent way.
>>>> * The device-dependent behavior of a drawImage/getImageData round trip
is a known fingerprinting vector (see the sketch after this list).
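>>>>
>>>> As a sketch of the round trip behind those last two points (the image
URL is a placeholder):
>>>>
>>>>   var img = new Image();
>>>>   img.onload = function () {
>>>>     var canvas = document.createElement('canvas');
>>>>     canvas.width = img.width;
>>>>     canvas.height = img.height;
>>>>     var ctx = canvas.getContext('2d');
>>>>     // The image is converted into the (undefined) backing store space.
>>>>     ctx.drawImage(img, 0, 0);
>>>>     var px = ctx.getImageData(0, 0, 1, 1).data;
>>>>     // On implementations that use the output device's space, px now
>>>>     // depends on the display profile, so the same image can yield
>>>>     // different values on different machines.
>>>>     console.log(px[0], px[1], px[2], px[3]);
>>>>   };
>>>>   img.src = 'wide-gamut-photo.jpg';  // placeholder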
>>>>
>>>> Right now, I am hearing a lot of complaints regarding the lack of a
standardized color space for canvases, and in particular the impact this
has on applications that try to do cool things with put/getImageData, or
generate images procedurally.  I want to make sure everyone understands
there is a trade-off to fixing this, so be careful what you wish for.
>>>>
>>>> I am especially concerned about the issue of color gamut clamping,
which will increase in relevance as wide gamut devices become more
widespread.  If we were to decide that canvas backing stores had to be in
sRGB, that would mean that a wide gamut image viewed on a wide gamut
display would look best when displayed in an <img> and would be duller when
drawn through a <canvas>, whether it be 2D or WebGL.  Is that something we
are willing to live with in the name of standardizing the color space of
ImageData?  I must admit I was in favor of moving canvases to sRGB until I
reviewed some of Noel Gordon's recent work which brought the gamut and
precision issues to my attention.
>>>>
>>>> Rik: to answer your question about your experiment: there is no issue
with a put/getImageData round trip. You will get back the same color you
put in (at least for opaque colors, but that is another story).  The issue
is with a drawImage/getImageData round trip.  For the same source image,
getImageData will return different values depending on the display profile
of the system you are running on.
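>>>>
>>>> In other words, a minimal sketch of the safe round trip:
>>>>
>>>>   var ctx = document.createElement('canvas').getContext('2d');
>>>>   var data = ctx.createImageData(1, 1);
>>>>   data.data[0] = 200; data.data[1] = 50;
>>>>   data.data[2] = 50;  data.data[3] = 255;  // opaque pixel
>>>>   ctx.putImageData(data, 0, 0);
>>>>   // Reads back [200, 50, 50, 255] regardless of the display profile.
>>>>   var back = ctx.getImageData(0, 0, 1, 1).data;
>>>>
>>>> Swap the putImageData call for drawImage of a decoded image and the
values you read back become display-profile dependent.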
>>>>
>>>> Ken: You mentioned to me off-thread that built-in support for sRGB
render buffers in OpenGL ES 3 may make it easier to move rendering to sRGB
on next-gen devices. I tend to agree, and I think it will also mitigate the
loss-of-precision issue, but it still implies clamping wide gamut media to
the sRGB range.
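>>>>
>>>> For reference, in WebGL terms that facility would look roughly like
this (a WebGL 2 sketch; SRGB8_ALPHA8 is the sRGB-encoded sized format):
>>>>
>>>>   var gl = document.createElement('canvas').getContext('webgl2');
>>>>   var tex = gl.createTexture();
>>>>   gl.bindTexture(gl.TEXTURE_2D, tex);
>>>>   // Allocate an sRGB-encoded texture to render into; the GPU
>>>>   // linearizes on read and re-encodes on write.
>>>>   gl.texStorage2D(gl.TEXTURE_2D, 1, gl.SRGB8_ALPHA8, 256, 256);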
>>>>
>>>>     -Justin
>>>>
>>>>
>>>
>>

Received on Sunday, 1 May 2016 17:47:55 UTC