Re: Help with HLG profile

Hi Kevin and all,

Whilst not wishing to disagree with much of your sage wisdom, I think 
there is a very strong case to make that HLG is scene referred, at 
least for live TV production. For live TV there are multiple monitors 
with varying characteristics at varying brightness. There is no single 
monitor. For example, some feeds will be shaded in one location on 
displays at one brightness, and other feeds will be shaded elsewhere. 
It is in the final mixer that feeds are combined and have to be 
mutually consistent.

With HLG, for production purposes, there is a defined rendering for 
different display brightnesses which allows this to happen. That 
rendering has been tested and found to produce consistent pictures 
over a wide range of display luminance. It can also be extended 
(details are in the ITU recommendations and reports) to include the 
surround luminance (though this is a second-order effect). So with HLG 
you cannot say that a representation on one particular monitor is the 
absolute reference, because the signal may be (and is designed to be) 
displayed at a range of display luminances. You cannot, for example, 
say that an HLG signal is implicitly display referred because the 
producer viewed and approved it on a 1000 nit display.

One might argue that the defined HLG display rendering is not perfect. 
That is true, but no truer than observing that three-primary (CIE 
1931) colour is not a perfect colour representation. Both CIE 1931 
colour and the HLG display rendering are good enough for practical 
purposes.
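
To make that defined rendering concrete, here is a minimal sketch (in 
Python) of the luminance-only form of the HLG OOTF from ITU-R BT.2100. 
The function names are mine, the nominal gamma formula covers roughly 
400-2000 cd/m^2, and the surround-luminance extension described in 
BT.2390 is deliberately omitted:

import math

def hlg_system_gamma(peak_nits):
    # Nominal HLG system gamma for a display peaking at peak_nits
    # (cd/m^2), per ITU-R BT.2100; valid roughly 400-2000 cd/m^2.
    return 1.2 + 0.42 * math.log10(peak_nits / 1000.0)

def hlg_ootf_luminance(scene_y, peak_nits, black_nits=0.0):
    # Map normalised (dimensionless) scene luminance in [0, 1] to
    # display luminance in cd/m^2; the luminance-only simplification
    # of the BT.2100 OOTF F_D = alpha * Y_S^(gamma - 1) * E_S.
    gamma = hlg_system_gamma(peak_nits)
    return (peak_nits - black_nits) * scene_y ** gamma + black_nits

# One relative signal, rendered consistently at several peaks:
for lw in (500.0, 1000.0, 2000.0):
    print(lw, round(hlg_ootf_luminance(0.5, lw), 1))

The point is visible in the signatures alone: the input is 
dimensionless, and display luminance appears only as a parameter of 
the rendering.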

Best regards, :-)
Tim

On 18/06/2018 10:46, Kevin Wheatley wrote:
>
>
> On 18 June 2018 at 09:46, Tim Borer <tim.borer@bbc.co.uk> wrote:
>
>     I have often heard it said that there is no standard Rec.709
>     production (because camera operators, it is claimed, universally
>     adjust their cameras). The claim is that by tweaking the camera
>     the picture somehow becomes display referred. Even if it were true
>     that cameras are always adjusted (not so) this would not make the
>     signal display referred. If you doubt this simply look at the
>     dimensions of the signal. HLG is dimensionless (a relative signal)
>     and PQ has dimensions of candelas per square metre (nits). All
>     that adjusting a 709/HLG camera does is to produce an “artistic”
>     modification to the signal. The signal still represents relative
>     scene referred light, just not the actual scene (but, rather, one
>     that the producer wished had existed). Adjusting the camera does
>     not convert a dimensionless signal into a dimensional one.
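
To put the dimensional point in concrete terms, here is a sketch of 
mine, with the constants taken from ITU-R BT.2100. Decoding a PQ code 
value yields an absolute luminance in cd/m^2, whereas the HLG OETF 
maps relative scene light to an equally relative signal; no nits 
appear anywhere:

import math

# PQ EOTF constants (ITU-R BT.2100).
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(signal):
    # PQ signal in [0, 1] -> absolute display luminance in cd/m^2.
    p = signal ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

# HLG OETF constants (ITU-R BT.2100).
A = 0.17883277
B = 1 - 4 * A
C = 0.5 - A * math.log(4 * A)

def hlg_oetf(scene_linear):
    # Relative scene light in [0, 1] -> relative signal in [0, 1].
    # Dimensionless in, dimensionless out.
    if scene_linear <= 1 / 12:
        return math.sqrt(3 * scene_linear)
    return A * math.log(12 * scene_linear - B) + C

print(pq_eotf(1.0))    # 10000.0 -- an absolute quantity in cd/m^2
print(hlg_oetf(1.0))   # ~1.0 -- a unitless signal value
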
>
>
> However, you can turn a dimensionless representation into an 
> absolute display referred one simply by showing an image on a screen 
> to somebody important to the content.
>
> I think the above is often where the assumption comes from. In some 
> parts of the media industry the 'look' of an image is effectively 
> defined by a particular display system's interpretation of it (TV 
> commercial advertisements are an example, though the same can be 
> said for many other parts). So it doesn't matter how one might like 
> to treat the image; what matters is how the image looks on this 
> device under these conditions. As you suggest, it is sometimes 
> better to transform images to a scene referred representation to 
> maintain this look; in other cases people prefer to lock the display 
> referred image as being the essence of the image and proceed from 
> there, usually after lots of time and money has been spent 
> crafting/mastering an image to be "just what the director wants".
>
> To further what Tim is saying, I'd like to give my equivalent 
> statement from a Film VFX point of view. It is quite common for 
> people to assume that "Films are display referred", perhaps because 
> PQ is display referred (or maybe because traditional film scanning a 
> la Cineon used printing density as a representation, or any number 
> of other context dependent reasons).
>
> That said, the key to modern CGI is having a good scene based 
> representation for all the rendering algorithms to 'work'. Ideally 
> we'd have an absolute scene referred model to make lighting 
> calculations work cleanly, but we settle for something close to 
> sensor plane relative exposure, along with various approximations to 
> compute an HDR representation of the world.
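
As a toy illustration of what "something close to sensor plane 
relative exposure" tends to mean in practice (my sketch; anchoring a 
measured grey card at 0.18 is one common convention, not necessarily 
what any particular facility does):

def to_relative_exposure(sensor_linear, grey_sample):
    # Rescale linear sensor values so a measured 18% grey card lands
    # at 0.18. The result is relative scene exposure: dimensionless,
    # unbounded above, and usable in physically based lighting maths.
    return [v * (0.18 / grey_sample) for v in sensor_linear]

# e.g. the grey card read 0.22 for this camera/exposure combination:
print(to_relative_exposure([0.05, 0.22, 1.8], grey_sample=0.22))
# -> [~0.041, 0.18, ~1.47]; values above 1.0 are legitimate HDR
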
>
> At the same time, in the common case, to actually display the images 
> we have to transform to a display referred representation, 
> simulating what might happen downstream of our work in a Digital 
> Intermediate. In the DI step it is possible to interpret the images 
> in a way which maintains some degree of scene reference, though it 
> has been more traditional to be display biased. The ACES workflow 
> attempts to maintain some degree of scene reference, though one 
> could argue that people only ever really see the images via the 
> intermediate representation defined by the RRT output. I think that 
> is a weaker argument, as all images are, in some sense, defined by 
> what they look like when output on some device as far as a viewer is 
> concerned. A given production's use of ACES in this case doesn't 
> require heavy CGI.
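
To ground the "transform to a display referred representation" step, 
here is a deliberately crude stand-in for a display rendering 
transform (nothing like the actual ACES RRT, just the shape of the 
operation):

def simple_display_render(scene_rgb, peak_nits=100.0, exposure=1.0):
    # Map open-ended scene-linear values to bounded display light.
    # A Reinhard-style curve stands in for a real rendering transform:
    # scene referred in (dimensionless), display referred out (cd/m^2).
    out = []
    for v in scene_rgb:
        v = v * exposure
        out.append(peak_nits * v / (1.0 + v))  # [0, inf) -> [0, peak)
    return out

# Mid-grey, diffuse white and a strong highlight all land in range:
print(simple_display_render([0.18, 1.0, 16.0]))
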
>
> I think the key takeaway is that modern productions have to account 
> for both philosophies, some being more on one side, some on the 
> other. If I am a brand wanting to present my logo in the right 
> colour, I (should) care a lot about the display colourimetry and how 
> the signal will be interpreted. Sometimes that might mean ensuring 
> all the source cameras reproduce similar values for the players' 
> jerseys; other times it means the graphic ident overlay should have 
> some specific output, and thus be display referred, or at least a 
> well defined interpretation should exist. (I'm ignoring the fact 
> that people often can't actually recall the colour of logos exactly.)
>
> I'm also ignoring lots of complex issues about what should happen when 
> the two are displayed at the same time, if only I had the solution for 
> that!
>
> In my opinion, the lack of downstream capability in display systems 
> is part of what leads to display referred representations becoming 
> the preferred interpretation at some point in the chain. That, plus 
> our incomplete understanding of the HVS, means we have gaps in 
> knowing how to correctly reproduce the differences caused by other 
> factors such as viewing conditions, display gamut and luminance 
> differences, etc.
>
> Unfortunately, none of this is suggesting a solution, just that 
> reality can often get in the way!
>
> Kevin
> -- 
> FRAMESTORE
>
> Kevin Wheatley · Head of Imaging
>
> [ London ]  · New York · Los Angeles · Chicago · Montréal
> T  +44 (0)20 7344 8000
> 28 Chancery Lane, London, WC2A 1LB
> twitter.com/framestore · facebook.com/framestore · framestore.com

Received on Monday, 18 June 2018 11:11:03 UTC