Re: Incorrect Algorithm and mDCv for HLG (From last night's meeting)

On Wed, 13 Sep 2023 10:36:53 +0000
Simon Thompson - NM <Simon.Thompson2@bbc.co.uk> wrote:

> Hi Sebastien
> 
> >The Static HDR Metadata's mastering display primaries are used to
> >improve the quality of the correction in the panels by limiting the
> >distance of the mappings they have to perform. I don't see why this
> >would not equally benefit HLG.  
> 
> >Regarding composition: the metadata has to be either recomputed or
> >discarded. Depending on the target format, the metadata on the
> >elements can be useful to limit the distance of the mappings required
> >to get the element to the target format.  
> 
> I’m not sure that the minimum distance mapping is necessarily the
> best, it would depend on the colour space in which you’re performing
> the process.  In dark colours, the minimum distance may lift the
> noise floor too.  Chris Lilley has an interesting page looking at the
> effect in a few different colour spaces, I’ll see if I can find it.
> As I said in the previous email, a significant proportion of HDR is
> currently being created whilst looking at the SDR downmapping on an
> SDR monitor – the HDR monitor may be used for checking, but not for
> mastering the image. When an HDR monitor is used, there are still a
> number of issues:
> 
>   *   The HLG signal is scene-referred and I would expect that there
> will be colours outside the primaries of this display as it’s not
> usual to clip signals in broadcast workflows. (Clipping leads to
> ringing in scaling filters and issues with compression – this is why
> the signal has headroom and footroom for overshoots).  The signal
> does not describe the colours of pixels on a screen.
>   *   In a large, live production, the signal may traverse a number
> of production centres and encode/decode hops before distribution.
> Each of these will need to have a method for conveying this metadata
> for it to arrive at the distribution point. As the transform from
> Scene-referred HLG signal to a display is standardised, I don’t think
> additional metadata is needed.  If included, the software rendering
> to the screen must not expect it to be present.

Hi Simon, everyone,

I feel I should clarify that my original question was about everything
orthogonal to dynamic range or luminance: the color gamut, or the
chromaticity space. I understand that the size of the chromaticity plane in
a color volume depends on the point along the luminance axis, but let's
try to forget the luminance axis for a moment. Let's pick a single
luminance point, perhaps somewhere in the nice mid-tones.

We could imagine comparing e.g. BT.709 SDR and BT.2020 SDR.

If your input signal is encoded as BT.2020 SDR, and the display
conforms to BT.709 SDR, how do you do the gamut mapping in the display
to retain the intended appearance of the image?

Let's say the input imagery contains pixels that fill the P3 D65 color
gamut.

If you assume that the full BT.2020 gamut is used and statically scale
it all down to BT.709, the result looks de-saturated, right?
Or if you clip chromaticity, color gradients become flat at their
saturated ends, losing detail.

But if you have no metadata about the input signal, how could you
assume anything else?
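
To make the contrast concrete, here is a rough Python sketch of those
two naive strategies in linear light. The 3x3 coefficients are the
commonly published BT.2020-to-BT.709 conversion matrix (treat them as
approximate); the function names, the use of the BT.2020 luma weights
as the grey axis, and the factor k are purely illustrative assumptions
on my part, not anything taken from a standard.

    BT2020_TO_BT709 = [
        [ 1.6605, -0.5876, -0.0728],
        [-0.1246,  1.1329, -0.0083],
        [-0.0182, -0.1006,  1.1187],
    ]

    LUMA_2020 = (0.2627, 0.6780, 0.0593)

    def mat3_vec3(m, v):
        return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

    def to_709_clip(rgb2020):
        # Convert, then clamp each channel: everything outside BT.709
        # collapses onto the gamut boundary, flattening saturated
        # gradients.
        out = mat3_vec3(BT2020_TO_BT709, rgb2020)
        return [min(1.0, max(0.0, c)) for c in out]

    def to_709_static_desaturate(rgb2020, k):
        # Mix every pixel towards its own grey level by a fixed factor
        # k before converting.  Making k small enough that the whole
        # BT.2020 cube fits inside BT.709 costs saturation on every
        # pixel, whether it needed compression or not.
        y = sum(w * c for w, c in zip(LUMA_2020, rgb2020))
        desat = [y + (c - y) * k for c in rgb2020]
        return mat3_vec3(BT2020_TO_BT709, desat)

    # A saturated BT.2020 green: clipping flattens it, global scaling
    # dulls it.
    print(to_709_clip([0.0, 1.0, 0.0]))
    print(to_709_static_desaturate([0.0, 1.0, 0.0], k=0.4))

With these coefficients it takes something around k = 0.4 to pull all
of BT.2020 inside BT.709, which is why the statically scaled image ends
up looking washed out even where the content never left BT.709.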

If there were metadata to tell us what portion of the BT.2020 color
gamut is actually useful, we could gamut map only that portion, and end
up making use of the full display capabilities without reserving space
for colors that will not appear in the input, while mapping instead of
clipping all colors that do appear. The image would not be de-saturated
any more than necessary, and details would remain.
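
As a sketch of what using such metadata could look like (everything
below is my own illustration: the names, the knee shape, and the idea
of expressing the bound as a maximum "relative saturation" in display
space are assumptions, not a standardised algorithm), the idea is to
compress only the saturation range the metadata says is occupied, and
leave everything below a protection threshold untouched:

    # Pixel values are linear-light BT.709 RGB, possibly out of range
    # after conversion from BT.2020.
    LUMA_709 = (0.2126, 0.7152, 0.0722)   # used here as the grey axis

    def relative_saturation(rgb):
        # Distance of the pixel from its own grey level, as a fraction
        # of the distance to the [0,1] RGB cube boundary along that
        # line: <= 1.0 means in gamut, > 1.0 means out of gamut.
        y = sum(w * c for w, c in zip(LUMA_709, rgb))
        t_exit = float("inf")
        for c in rgb:
            if c > y:
                t_exit = min(t_exit, (1.0 - y) / (c - y))
            elif c < y:
                t_exit = min(t_exit, y / (y - c))
        if t_exit == float("inf"):
            return 0.0                    # achromatic pixel
        return 1.0 / t_exit if t_exit > 0.0 else float("inf")

    def compress_to_display(rgb, content_max_sat, protect=0.85):
        # Knee compression: saturation up to `protect` is preserved,
        # saturation in (protect, content_max_sat] is squeezed into
        # (protect, 1.0].  `content_max_sat` is what the metadata
        # (e.g. mastering display primaries) would give us; without it
        # we must assume the worst case, the full BT.2020 container.
        y = sum(w * c for w, c in zip(LUMA_709, rgb))
        s = relative_saturation(rgb)
        if s <= protect or content_max_sat <= protect:
            return list(rgb)
        s_new = protect + ((1.0 - protect) * (s - protect)
                           / (content_max_sat - protect))
        return [y + (c - y) * (s_new / s) for c in rgb]

    # An out-of-gamut pixel (negative red after conversion) is pulled
    # back inside; pixels below the knee are left alone.
    print(compress_to_display([-0.05, 1.0, 0.1], content_max_sat=1.3))

The smaller the bound the metadata gives, the less headroom has to be
reserved for colors that never appear in the content.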

If we move this to the actual context of my question: an HLG signal
encodes BT.2020 chromaticity, and an HLG display emits something smaller
than the full BT.2020 color gamut. Would it not be useful to know which
part of the input signal color gamut is useful to map to the display?

The mastering display color gamut would at least set the outer bound,
because the author cannot have seen colors outside of it.

Is color so unimportant that no-one bothers to adjust it during mastering?

Or is chromaticity clipping the processing that producers expect, so
that logos etc. always come out in exactly the right color (intentionally
kept inside, say, BT.709)? That would mean preferring absolute
colorimetric (in ICC terms) rendering for chromaticity, and perceptual
rendering only for luminance.

If so, why does ST 2086 bother carrying primaries and white point?
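
For reference, ST 2086 carries precisely those outer bounds and nothing
else about chromaticity: the xy chromaticities of the mastering
display's primaries and white point, plus its luminance range. To my
understanding this is also the payload that the HEVC
mastering_display_colour_volume SEI and the PNG mDCv chunk convey. A
rough sketch with made-up but plausible values for a P3-D65 mastering
display with a 1000 cd/m2 peak (the field names are mine, not from the
spec):

    from dataclasses import dataclass

    @dataclass
    class MasteringDisplayColourVolume:
        # CIE 1931 xy chromaticities of the mastering display
        red:   tuple          # (x, y)
        green: tuple
        blue:  tuple
        white: tuple
        max_luminance_cd_m2: float
        min_luminance_cd_m2: float

    # Hypothetical example: P3-D65 primaries, 1000 cd/m2 peak.
    example = MasteringDisplayColourVolume(
        red=(0.680, 0.320),
        green=(0.265, 0.690),
        blue=(0.150, 0.060),
        white=(0.3127, 0.3290),
        max_luminance_cd_m2=1000.0,
        min_luminance_cd_m2=0.0001,
    )

If the primaries and white point were not meant to be used for limiting
the mapping, carrying the luminance range alone would have sufficed.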


Thanks,
pq

Received on Friday, 15 September 2023 15:48:51 UTC