Re: [whatwg] Canvas and color colorspaces (was: WebGL and ImageBitmaps)

From: Justin Novosad <junov@google.com>
Date: Fri, 23 May 2014 10:16:04 -0400
Message-ID: <CABpaAqRRD2nmEOQhgKBF-jWm++ZNQw1Qy5hnA75NrrnpCF4zOA@mail.gmail.com>
To: Rik Cabanier <cabanier@gmail.com>
Cc: Noel Gordon <noel.gordon@gmail.com>, Glenn Maynard <glenn@zewt.org>, Katelyn Gadd <kg@luminance.org>, Kenneth Russell <kbr@google.com>, "whatwg@whatwg.org" <whatwg@whatwg.org>, Noel Gordon <noel@google.com>, Ian Hickson <ian@hixie.ch>
On Thu, May 22, 2014 at 4:09 PM, Rik Cabanier <cabanier@gmail.com> wrote:

> Well, the issue with not having a standardized intermediate colorspace is
> that output will look different on different devices.

In theory, that should only be the case when drawing raw values through
putImageData. Everything else (fill styles, stroke styles, images) could be
color-managed and mapped from CSS (sRGB) to the canvas's color space when
drawing to the canvas. Right now that is not the case in Chrome, which is a
bug, IMHO.
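
To sketch what that mapping could look like (a plain gamma-2.2 curve stands
in here for a real display ICC profile, and the function names are mine, not
anything Chrome actually implements):

```javascript
// sRGB electro-optical transfer function: decode an 8-bit code to linear light.
function srgbToLinear(v) {
  const c = v / 255;
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Encode linear light with a plain gamma-2.2 curve (the assumed device space).
function linearToDevice(linear) {
  return Math.round(Math.pow(linear, 1 / 2.2) * 255);
}

// Map one sRGB channel value into the hypothetical device space.
function mapSrgbToDevice(v) {
  return linearToDevice(srgbToLinear(v));
}

console.log(mapSrgbToDevice(0));   // 0
console.log(mapSrgbToDevice(255)); // 255
console.log(mapSrgbToDevice(128)); // close to, but not exactly, 128
```

Black and white map to themselves, but midtones shift slightly because the
two transfer curves disagree; a real pipeline would do this per channel with
the monitor's actual profile.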

> For most cases, people don't really care about that and are happy with
> vibrant colors and deep blacks. Worse, if you limit the gamut to sRGB on
> wide-gamut devices, you lose gray values, which will result in banding
> (unless you have a high-bit monitor) [1]
> However, there are use cases where it is certainly important that what you
> see on screen reflects the actual color. I've certainly tried to order
> clothes, paint, and furniture online where I did care about the exact color.
> There's currently no way to accomplish this.
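
The banding point in the quote can be sketched numerically. Re-encoding
8-bit sRGB grays through a different transfer curve (again a plain gamma-2.2
curve as a stand-in for a wide-gamut profile; the real loss mechanism on an
actual device involves the full ICC conversion) collapses some adjacent
input levels onto the same 8-bit output code:

```javascript
// sRGB electro-optical transfer function: decode an 8-bit code to linear light.
function srgbToLinear(v) {
  const c = v / 255;
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Re-encode linear light with a gamma-2.2 curve, quantized back to 8 bits.
function linearToGamma22(linear) {
  return Math.round(Math.pow(linear, 1 / 2.2) * 255);
}

// Count how many distinct gray levels survive the conversion.
const outputs = new Set();
for (let v = 0; v < 256; v++) {
  outputs.add(linearToGamma22(srgbToLinear(v)));
}
console.log(outputs.size); // fewer than 256 distinct grays remain
```

Since the composite curve expands near black (skipping output codes) while
the endpoints 0 and 255 are pinned, it cannot be one-to-one over 8 bits, so
some grays merge; a higher-bit-depth output buffer avoids the loss.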

It may seem that a proper color-managed pipeline would solve that problem,
but it will not. We cannot solve the problem completely through web
standards, for several reasons:
* Most people do not have calibrated monitors, or have only coarsely
calibrated monitors.
* A textile or paint sample viewed on screen will have been photographed or
rendered under illumination conditions that are most likely very different
from the lighting where the output device is located, making it almost
impossible to match colors even with perfectly calibrated devices.
* There is inter-human variability in color perception. The primary colors
we use were chosen based on the average human's perception of color. Even
with perfectly matched illumination conditions and calibrated devices, many
humans will perceive noticeable differences, because the models used to
represent colors in computers are not a perfect match for each individual.
In particular, tetrachromats and people with certain types of color
blindness have color responses that poorly match the standard primary colors.

The best we can do is minimize differences caused by inter-device
variability. How well we can do that depends on how well the devices are
calibrated.

>> Rik: to answer your question about your experiment: there is no issue
>> with a put/getImageData round trip. You will get back the same color you
>> put in (at least for opaque colors, but that is another story).  The issue
>> is with a drawImage/getImageData round trip.  For the same source image,
>> getImageData will return different values depending on the display profile
>> of the system you are running on.
> I am still not able to reproduce it: http://jsfiddle.net/Dghuh/14/
> Is it just when you have a tagged image that is drawn and is then
> color-mapped to the device profile?

Yes, exactly: you will only see the problem with a tagged image. When
drawing from one canvas to another, there are no color space conversions in
play. Similarly, there are no color space conversions at play in a
getImageData+putImageData round trip.
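
The two round trips can be modeled like this (pure gamma curves stand in for
real display profiles, so the gamma values and function names are
illustrative assumptions, not what any browser computes):

```javascript
// sRGB electro-optical transfer function: decode an 8-bit code to linear light.
function srgbToLinear(v) {
  const c = v / 255;
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Encode linear light with a device gamma curve, quantized to 8 bits.
function encodeWithDeviceGamma(linear, gamma) {
  return Math.round(Math.pow(linear, 1 / gamma) * 255);
}

// putImageData + getImageData: no conversion, so the raw value is returned
// unchanged (for opaque pixels); modeled as the identity.
const putThenGet = (v) => v;
console.log(putThenGet(200)); // 200

// drawImage of a tagged image + getImageData: the pixel is converted through
// the machine's display profile, so the result varies across machines.
const onGamma22 = encodeWithDeviceGamma(srgbToLinear(200), 2.2);
const onGamma18 = encodeWithDeviceGamma(srgbToLinear(200), 1.8);
console.log(onGamma22, onGamma18); // different values on different displays
```

The same source pixel comes back with different numbers depending on the
assumed profile, which is exactly why getImageData after drawImage is not
portable across systems.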
Received on Friday, 23 May 2014 14:16:35 UTC
