- From: Joe Drago <jdrago@netflix.com>
- Date: Wed, 28 Apr 2021 09:45:18 -0700
- To: Lars Borg <borg@adobe.com>
- Cc: Christopher Cameron <ccameron@google.com>, Simon Thompson-NM <Simon.Thompson2@bbc.co.uk>, "public-colorweb@w3.org" <public-colorweb@w3.org>
- Message-ID: <CAL4YVO-J82NTPY0O2trtChCR245_o9gG2OeNzOdTo8YgHcABFA@mail.gmail.com>
(tangent / off-topic) -- I see these use the A2B* tags from the old ones you
provided to me years ago for colorist. It is interesting to see the same
technique applied from those, but with a 203 nits reference white:

    > colorist identify "Colorbars in PQ 203 display.png"
    [ action] Identify: Colorbars in PQ 203 display.png
    [ decode] Reading: Colorbars in PQ 203 display.png (23191 bytes)
    [ identify] Format: png
    [ image] Image: 1920x1080 8-bit
    [ profile] Profile "Rec.2100 PQ W203"
    [ profile] Size: 30780 bytes
    [ profile] Copyright: "Copyright 2019 Adobe Systems Incorporated"
    [ profile] Primaries: BT.2020 (r:0.708,0.292 g:0.17,0.797 b:0.131,0.046 w:0.3127,0.329)
    [ profile] *Max Luminance: 203 - (lumi tag present)*
    [ profile] Curve: complex(-1)
    [ profile] *Implicit matrix curve scale: 49.26*
    [ profile] *Actual max luminance: 9999.78*
    [ profile] CCMM friendly: false
    [ profile] MD5: 5491e070ece4a532bdabbd9ee9b2d0c6

(I'm sure my colorist is making all kinds of awful assumptions here, but this
result seems to make sense to me, which is neat.)

I suppose browsers implement A2B* tags? Is that done via a slow path /
software, or are their chains viable to implement in a shader?

On Wed, Apr 28, 2021 at 9:32 AM Lars Borg <borg@adobe.com> wrote:

> Enclosed is a set of images in various color spaces, including HDR, with
> ICC profiles.
>
> They all look the same in today’s browsers (tested Safari, Chrome, Firefox).
>
> These images can be used for validating Chris Cameron’s concepts.
>
> Color matching without ICC profiles will require display-referred
> conversions for HLG.
>
> Please try.
>
> Lars
>
> *From: *Lars Borg <borg@adobe.com>
> *Date: *Thursday, April 1, 2021 at 11:52 AM
> *To: *Christopher Cameron <ccameron@google.com>, Simon Thompson <Simon.Thompson2@bbc.co.uk>
> *Cc: *"public-colorweb@w3.org" <public-colorweb@w3.org>
> *Subject: *Re: HTML Canvas - Transforms for HDR and WCG
> *Resent-From: *<public-colorweb@w3.org>
> *Resent-Date: *Thu, 01 Apr 2021 21:51:37 +0000
>
> If you are doing an image browser for arbitrary images, and thus arbitrary
> color spaces, it would be awkward to have to locate a proper image-specific
> color transform.
>
> Can’t we just use the info that comes with the image?
>
> If blending with text “Buy Now”, the blending should be in the same color
> space for all images, or else we would need unique text color values for
> each image.
>
> Lars
>
> *From: *Christopher Cameron <ccameron@google.com>
> *Date: *Thursday, April 1, 2021 at 8:13 AM
> *To: *Simon Thompson <Simon.Thompson2@bbc.co.uk>
> *Cc: *Lars Borg <borg@adobe.com>, "public-colorweb@w3.org" <public-colorweb@w3.org>
> *Subject: *Re: HTML Canvas - Transforms for HDR and WCG
>
> On Thu, Apr 1, 2021 at 3:35 AM Simon Thompson-NM <Simon.Thompson2@bbc.co.uk> wrote:
>
> Hi,
>
> One further thought from me: the proposal last night depended on using a
> certain image import function which allowed the user to dictate a target
> colour space and transform set. Does a similar video import function exist?
>
> Yes!
> It's the same function, createImageBitmap
> <https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/createImageBitmap>,
> and it takes as input: images, SVG, video, canvas (so you can draw your
> canvas into your canvas), and blob (not-yet-decoded image). The options
> include a "colorSpaceConversion" option, which is currently "none" or
> "default". This is where I think we should consider adding a well-defined
> perceptual colorimetric intent (and this intent wouldn't be
> path-independent).
>
> When the input is a blob (a not-yet-decoded image), the color space
> conversion can happen simultaneously with image decode.
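
A minimal sketch of the createImageBitmap path described above, in TypeScript.
It assumes a Blob source and uses only the "colorSpaceConversion" values that
exist today ("none" and "default"); the proposed perceptual intent appears only
as a comment, since it is not part of the current API. The function name and
the "Buy Now" overlay are illustrative, not taken from any spec:

    // Sketch: decode a Blob with an explicit colorSpaceConversion choice,
    // then blend UI text over it in the canvas's own color space.
    async function drawWithColorManagement(
      canvas: HTMLCanvasElement,
      blob: Blob // a not-yet-decoded image, e.g. fetched from the network
    ): Promise<void> {
      // For a Blob source, the color space conversion can happen at the
      // same time as the image decode.
      const bitmap = await createImageBitmap(blob, {
        colorSpaceConversion: "default", // or "none"; a "perceptual" intent is only proposed
      });

      const ctx = canvas.getContext("2d");
      if (ctx) {
        // Blending (e.g. the "Buy Now" text) happens in the canvas's color
        // space, independent of each image's source color space.
        ctx.drawImage(bitmap, 0, 0);
        ctx.fillText("Buy Now", 16, 32);
      }
      bitmap.close();
    }

In a page, canvas might come from document.querySelector("canvas") and blob
from (await fetch(url)).blob(), which is also the case where decode and color
conversion can be combined.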
Received on Wednesday, 28 April 2021 16:45:59 UTC