RE: Proposal for HTMLCanvasElement HDR compositing modes

Hi all,

If the definitions are changed from those published by the ITU, then there’s a great risk of confusion. You would have a document that references a video standard and defines how it should be implemented, while using terminology different to the standard itself.

The definitions are agreed at the ITU, and if a change is needed, then a liaison should be made to the relevant ITU group.

Best Regards

Simon


Simon Thompson
Senior R&D Engineer

BBC Research & Development

From: Christopher Cameron <ccameron@google.com>
Sent: 16 March 2021 06:30
To: Lars Borg <borg@adobe.com>
Cc: public-colorweb@w3.org; Pierre-Anthony Lemieux <pal@sandflow.com>
Subject: Re: Proposal for HTMLCanvasElement HDR compositing modes



On Mon, Mar 15, 2021 at 8:26 PM Lars Borg <borg@adobe.com> wrote:
Chris C,

One concern I have is the term scene light.
If this means scene-referred, then that’s a problem, as sRGB is defined only for display light, not for scene light.
So additional conversions would be needed to convert sRGB to the canvas.
And if scene light doesn’t mean scene-referred, then what does it mean? It causes confusion.
Please find a way to stay in a display-referred color space.
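As a minimal sketch of this concern (TypeScript; the function names here are hypothetical illustrations, not proposed API): decoding an sRGB signal with its EOTF yields display-referred linear light, and no standard conversion exists to take that further to scene light.

// sRGB (IEC 61966-2-1) decode: signal in [0, 1] to linear display
// light in [0, 1]. The result is display-referred -- it describes
// light leaving the reference display, not light at a camera.
function srgbEotf(ePrime: number): number {
  return ePrime <= 0.04045
    ? ePrime / 12.92
    : Math.pow((ePrime + 0.055) / 1.055, 2.4);
}

// Hypothetical placeholder: calling the decoded value "scene light"
// would require some further inverse-OOTF step, and none is defined
// for sRGB -- which is the concern above.
function srgbToSceneLight(_ePrime: number): number {
  throw new Error("no scene-referred interpretation is defined for sRGB");
}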

I'm quite happy to change the terminology -- I made a point of staking out definitions for the terms as I intended to use them, precisely so that concerns like this would jump out immediately. Success!

(This is a break with the custom of not defining terms in specifications and hoping that all readers and authors have the same definitions in mind.)

I see the point here. Indeed, "scene light" is a bad term for this.

We cannot work in true display light, because the true display is unknown and unknowable. The term "reference display light" would have been nice, but it's slightly different from (or at least more general than) what I'm aiming at here.

Maybe we can call it "canvas reference display light" or "device-independent display light".

The key properties I want this space to have are (a rough code sketch of these mappings appears at the end of this message):

  *   PQ signal is mapped to the space by applying the PQ inverse-OETF (and maybe a linear scale)

     *   This is true for any linear light space, so it's not an onerous demand

  *   HLG signal is mapped to the space by applying the HLG inverse-OETF (and maybe a linear scale)

     *   If we define our reference display to have a maximum luminance of 334 nits, then this is the case: the HLG system gamma, 1.2 + 0.42 * log10(Lw / 1000), is 1.0 at that luminance, so the reference OOTF reduces to a linear scale
     *   Applying and then un-applying an HLG OOTF in various parts of the pipeline feels gross (maybe it shouldn't?)

  *   sRGB signal is mapped into the space by applying the ordinary piecewise-with-gamma-2.4 transfer function (and maybe a linear scale)

The first and second points (especially for HLG, not wanting any OOTF applied) made this feel like "scene light" to me. With respect to sRGB and scene light, BT.2100 itself is already sloppy with the definition of scene light, so extending that definition to sRGB doesn't feel like doing anything more untoward to the concept.

But I see that "scene light" has a specific meaning (particularly with respect to being "before artistic changes have been made"), which makes it inappropriate as a thing to recover from a signal.
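Here is the rough sketch of the three mappings promised above (TypeScript; constants from BT.2100 and IEC 61966-2-1; the "maybe a linear scale" factors are omitted, and the function names are mine rather than proposed API):

// Invert the BT.2100 PQ non-linearity. Returns normalized linear
// light in [0, 1], where 1.0 corresponds to 10000 cd/m^2.
function pqDecode(ePrime: number): number {
  const m1 = 2610 / 16384;        // 0.1593017578125
  const m2 = (2523 / 4096) * 128; // 78.84375
  const c1 = 3424 / 4096;         // 0.8359375
  const c2 = (2413 / 4096) * 32;  // 18.8515625
  const c3 = (2392 / 4096) * 32;  // 18.6875
  const p = Math.pow(ePrime, 1 / m2);
  return Math.pow(Math.max(p - c1, 0) / (c2 - c3 * p), 1 / m1);
}

// BT.2100 HLG inverse OETF: signal in [0, 1] to normalized scene
// linear light in [0, 1]. Note that no OOTF is applied here.
function hlgInverseOetf(ePrime: number): number {
  const a = 0.17883277;
  const b = 1 - 4 * a;                // 0.28466892
  const c = 0.5 - a * Math.log(4 * a); // 0.55991073
  return ePrime <= 0.5
    ? (ePrime * ePrime) / 3
    : (Math.exp((ePrime - c) / a) + b) / 12;
}

// sRGB (IEC 61966-2-1) decode: the piecewise transfer function whose
// power segment uses exponent 2.4.
function srgbDecode(ePrime: number): number {
  return ePrime <= 0.04045
    ? ePrime / 12.92
    : Math.pow((ePrime + 0.055) / 1.055, 2.4);
}

// BT.2100 HLG reference OOTF system gamma for a display of nominal
// peak luminance lw (cd/m^2). At lw = 334 this is ~1.0, so the OOTF
// reduces to a plain linear scale -- the basis of the 334-nit bullet
// above.
function hlgSystemGamma(lw: number): number {
  return 1.2 + 0.42 * Math.log10(lw / 1000);
}

// hlgSystemGamma(334) is approximately 1.0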

Received on Tuesday, 16 March 2021 12:10:25 UTC