Re: Incorrect Algorithm and mDCv for HLG (From last night's meeting)

Hi

Apologies for the brief reply; I’m on my phone.

BT.709 and HLG are not display referred by the definition that ISO uses, which is the one that almost all other standards bodies reference.

For a signal to be display referred, the signal has to represent the exact expected output of a display.  Neither HLG nor BT.709 does that – they refer to the scene, and the display is free to apply its EOTF (complete with adaptations for screen and ambient luminance) and, in the case of a TV, the manufacturer’s “look”.  So you can feed the same HLG signal into 500, 1000 and 4000 nit monitors and they’ll look perceptually similar, with similar levels of detail in the shadows and mid-tones and similar saturation, but at different brightnesses.
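To make the adaptation concrete: in the BT.2100 HLG system the display applies a system gamma that depends on its nominal peak luminance.  A quick sketch (Python; this is the BT.2100 reference formula, quoted for roughly 400–2000 nits, and BT.2100 also gives an alternative extension beyond that):

    import math

    def hlg_system_gamma(peak_nits):
        # BT.2100 reference system gamma for nominal peak luminance Lw:
        # gamma = 1.2 + 0.42 * log10(Lw / 1000)
        return 1.2 + 0.42 * math.log10(peak_nits / 1000.0)

    for lw in (500, 1000, 4000):
        print("%4d nits -> system gamma %.3f" % (lw, hlg_system_gamma(lw)))
    #  500 nits -> system gamma 1.074
    # 1000 nits -> system gamma 1.200
    # 4000 nits -> system gamma 1.453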

HLG is designed to produce a very natural-looking image – a bit like looking out of the window.  This does not match the look that sports or drama productions want, hence the in-camera artistic controls.  However, the signal is still referred to the scene rather than to a single monitor and can be adapted for any display.  There are use cases where no artistic controls are used and accurate colours are required – such as medical imaging – so it’s important that any software implementation can handle this.  (ISO splits this into something like “scene referred” and “scene referred with scene chromaticities”.)

Simon



On 29/09/2023, 13:51, "Pekka Paalanen" <pekka.paalanen@collabora.com> wrote:
On Thu, 21 Sep 2023 10:14:39 +0000
Simon Thompson - NM <Simon.Thompson2@bbc.co.uk> wrote:

> Apologies for the delay, I've been at the International Broadcasting
> Convention and SMPTE meetings.

Hi Simon,

thank you very much, I think this was really helpful for me.

I'm perhaps getting a bit philosophical below, so I don't mind if
those parts go unanswered. I'm trying to generalise things.

>
> I'll try and deal with a number of emails in one.
>
> To understand the design of the 2 HDR systems it’s worth looking at
> the block diagrams from ITU-R BT.2100.  The PQ system includes the
> OOTF within the encoding module.  The OOTF maps the scene light to a
> display light signal that represents the pixel RGB luminances on the
> display used (the EOTF and Inverse EOTF are to prevent visual
> artefacts such as banding in a quantised signal).  This means that
> the target display OOTF is burnt into the signal and metadata (e.g.
> HDR10, HDR10+, SL-HDR1, DV) is used to help adjust the signal to
> match the target display and ambient lighting conditions.

Right. I haven't really managed to find how the metadata should be
used to guide the adjustment to the actual target display colorimetry
and viewing conditions. All the explanations of the PQ system I've
seen forget to mention that there even is any adjustment at the
display, and some computer PQ-mode monitors didn't get the memo
either.
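
It doesn't help that the ST 2084 EOTF on its own maps code values
straight to absolute luminance, so a naive implementation can just
evaluate it and stop there. A minimal sketch (Python; constants from
BT.2100, illustration only):

    def pq_eotf(signal):
        # BT.2100 / SMPTE ST 2084 PQ EOTF:
        # non-linear signal in [0, 1] -> absolute luminance in cd/m^2.
        m1 = 2610.0 / 16384.0
        m2 = 2523.0 / 4096.0 * 128.0
        c1 = 3424.0 / 4096.0
        c2 = 2413.0 / 4096.0 * 32.0
        c3 = 2392.0 / 4096.0 * 32.0
        e = signal ** (1.0 / m2)
        return 10000.0 * (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1.0 / m1)

    # pq_eotf(1.0) == 10000.0 cd/m^2, regardless of the actual display.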

>
> [Figure: BT.2100 PQ system block diagram]
>
> In the HLG system, this OOTF is present in the decoding module, which
> is usually present in the display.  The OOTF is a function of peak
> screen brightness and ambient illumination (both of which are either
> known by the display or can be set by the end user).  The signal
> represents the light falling on the sensor, not the light from a
> display.  The adaptation for the screen all happens within the
> decoding block with no reference to another display.
>
> [Figure: BT.2100 HLG system block diagram]
>
> This is similar to ITU-R BT.709 (which only defines the OETF for an
> SDR scene-referred signal).

Yet BT.709 also says that the OETF is adjusted in order to create the
intended appearance on a reference display (BT.1886, BT.2035). Does
that not make it a display-referred signal? Or does it mean one can
choose whether to produce a display- or scene-referred signal?
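
(For reference, the camera-side OETF is the only transfer function
BT.709 itself defines; a sketch in Python:)

    def bt709_oetf(l):
        # BT.709 OETF: scene-linear light L in [0, 1] -> non-linear signal V.
        if l < 0.018:
            return 4.5 * l
        return 1.099 * l ** 0.45 - 0.099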

The BT.2100 diagram of HLG forgets to point out where the artistic
adjustments are done. Surely those are done in production and not on
the receiver side?

When artistic adjustments are done to HLG material, how does the
resulting HLG signal not become referred to the grading/mastering
display? The HLG OOTF considers only luminance.
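
Writing out the BT.2100 reference OOTF is what prompts the question:
a single gain derived from the scene luminance scales all three
channels, so the RGB ratios pass through untouched. A sketch in
Python:

    import math

    def hlg_ootf(r, g, b, peak_nits=1000.0):
        # BT.2100 reference HLG OOTF: one gain, derived only from the
        # scene luminance Ys, scales R, G and B equally, preserving the
        # RGB ratios (and hence the chromaticity) of the scene signal.
        ys = 0.2627 * r + 0.6780 * g + 0.0593 * b   # BT.2020 coefficients
        gamma = 1.2 + 0.42 * math.log10(peak_nits / 1000.0)
        gain = peak_nits * ys ** (gamma - 1.0)
        return gain * r, gain * g, gain * b         # display light, cd/m^2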

> We do not limit the signal, in order to prevent filter ringing and
> encoding problems.  Please see: https://tech.ebu.ch/docs/r/r103.pdf
>
> When converting from HLG to BT.709 to create a second stream/channel,
> most conversions on the market will leave the majority of colours (up
> to ~95% luminance) that are within the target colourspace alone, and
> then scale the highlights and out of gamut colours in a hue-linear
> colourspace so they are within the target video range (quite often
> EBU R103 -5% to 105%).  The mappings available are all designed to
> convert a compliant HLG signal to a compliant BT.709 signal (i.e.
> they are designed to deal with any input). See for more detail:
> https://www.color.org/hdr/03-Simon_Thompson.pdf

By hue-linear do you mean a hue-preserving mapping?

If a BT.709 signal can be either display- or scene-referred, does
converting HLG to BT.709 with the methods you refer to always produce
scene-referred BT.709? Where and how does that become
display-referred?
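
(To check my reading of "hue-linear", a toy sketch of such a mapping,
not any particular product's algorithm: colours below a knee pass
through, and anything above it is scaled by one common factor on all
three linear-light channels, which is exactly what keeps the hue
fixed:)

    import math

    def compress_highlights(r, g, b, knee=0.95, limit=1.05):
        # Toy hue-preserving highlight compressor (illustrative only).
        # Pixels whose largest linear component is at or below the knee
        # pass through unchanged; above it, a single scale factor is
        # applied to R, G and B, so the RGB ratios (hue) stay fixed
        # while the peak is eased into the target range (an EBU R103
        # style 105% here).
        m = max(r, g, b)
        if m <= knee:
            return r, g, b
        span = limit - knee
        target = knee + span * (1.0 - math.exp(-(m - knee) / span))
        s = target / m
        return r * s, g * s, b * s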

> For reference monitors, the preferred mode is to display the entire
> luminance range of the signal.  Chrominance is shown only up to the
> device gamut and signals that are outside are hard clipped to the
> display gamut.  However, the signal is not limited to this gamut in
> any way and overshoots are expected, especially with uncontrolled
> lighting.  See: https://tech.ebu.ch/docs/tech/tech3320.pdf

This essentially answered all the remaining questions I had about how
PQ MDCV maps to the signal color volume. I hope these conventions are
so widely used that we can simply adopt them in Wayland compositors.
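
The chrominance hard clip, much simplified, is then: convert to the
display's primaries and clamp per channel. A sketch, where m is an
assumed 3x3 linear-light matrix from the signal primaries to the
display primaries (the luminance handling above is ignored for
brevity):

    def hard_clip_to_display(rgb_signal, m):
        # m: 3x3 matrix, linear signal RGB -> linear display RGB
        # (assumed supplied by the caller). In-gamut colours pass
        # through; out-of-gamut components (negative, or above the
        # display maximum) are clipped per channel.
        rgb = [sum(m[i][j] * rgb_signal[j] for j in range(3))
               for i in range(3)]
        return [min(max(c, 0.0), 1.0) for c in rgb]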

Since the signal encoding and MDCV (and "the default" adapted) white
points are expected to be the same, I can choose what to do if they
are not the same when delivered to a Wayland compositor:
- assume the adapted white point of an achromatic picture equals the MDCV white point, or
- apply chromatic adaptation between the signal encoding and MDCV white points.

Since MDCV and signal encoding white points are expected to be the
same, the chromatic adaptation is a no-operation.
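
That no-op is easy to see with a von Kries-style adaptation (the
Bradford matrix here is just an example): the per-cone gains are
destination over source, so identical white points give gains of
exactly 1.0 and the whole transform collapses to the identity:

    BRADFORD = [[ 0.8951,  0.2664, -0.1614],
                [-0.7502,  1.7135,  0.0367],
                [ 0.0389, -0.0685,  1.0296]]

    def cat_gains(src_white_xyz, dst_white_xyz):
        # Von Kries-style chromatic adaptation scales the cone responses
        # by dst/src. With equal white points the gains are (1, 1, 1),
        # and M^-1 * diag(1, 1, 1) * M is the identity: a true no-op.
        src = [sum(BRADFORD[i][j] * src_white_xyz[j] for j in range(3))
               for i in range(3)]
        dst = [sum(BRADFORD[i][j] * dst_white_xyz[j] for j in range(3))
               for i in range(3)]
        return [d / s for d, s in zip(dst, src)]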

In Wayland, I believe we will generalise the PQ MDCV with the above
conventions to everything:

- If MDCV is not indicated, it is assumed equal to the color
  volume bounded by the signal encoding primaries and luminance
  parameters.

- Otherwise, MDCV provides the color volume the imagery was targeted
  at, which may be bigger (xvYCC, scRGB) or smaller (BT.2100 PQ) than
  the color volume bounded by the encoding primaries and luminance
  parameters.

It would be nice to hear if anyone sees anything wrong with this idea.
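
In code, the proposed convention is just a fallback rule. A sketch
with hypothetical names (this is not an actual Wayland protocol type):

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class ColorVolume:
        primaries: Tuple    # (r, g, b) chromaticity pairs
        white: Tuple        # white point chromaticity
        min_nits: float
        max_nits: float

    def effective_target_volume(mdcv: Optional[ColorVolume],
                                encoding: ColorVolume) -> ColorVolume:
        # If MDCV is not indicated, assume it equals the volume bounded
        # by the signal encoding primaries and luminance parameters;
        # otherwise trust MDCV, which may be bigger (xvYCC, scRGB) or
        # smaller (BT.2100 PQ) than the encoding volume.
        return mdcv if mdcv is not None else encoding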

> For televisions, whilst the luminance performance is defined, the
> mapping from BT.2100 primaries to display gamut is unspecified –
> television manufacturers usually want to differentiate on “look”
> between brands and price points, so implement their own processing,
> have their own colour modes etc.
>
> This is the workflow that has been used in many large productions.


Thanks,
pq
