Re: Help with HLG profile

On 18 June 2018 at 09:46, Tim Borer <tim.borer@bbc.co.uk> wrote:
>
> I have often heard it said that there is no standard Rec.709 production
> (because camera operators, it is claimed, universally adjust their
> cameras). The claim is that by tweaking the camera the picture somehow
> becomes display referred. Even if it were true that cameras are always
> adjusted (not so) this would not make the signal display referred. If you
> doubt this simply look at the dimensions of the signal. HLG is
> dimensionless (a relative signal) and PQ has dimensions of candelas per
> square metre (nits). All that adjusting a 709/HLG camera does is to produce
> an “artistic” modification to the signal. The signal still represents
> relative scene referred light, just not the actual scene (but, rather, one
> that the producer wished had existed). Adjusting the camera does not
> convert a dimensionless signal into a dimensional one.
>

However, you can turn a dimensionless representation into an absolute,
display referred one simply by showing an image on a screen to somebody
important to the content.
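
To make the distinction concrete, here is a rough Python sketch using the
published BT.2100 HLG and SMPTE ST 2084 PQ constants (the function names
and the single channel simplification are mine). The same HLG code value
yields different absolute light depending on which display you pick,
because the HLG OOTF gamma depends on the display's nominal peak; a PQ
code value decodes to cd/m^2 with no display in sight.

    import math

    def hlg_inverse_oetf(v):
        # BT.2100 HLG inverse OETF: code value -> relative scene light [0, 1].
        a, b, c = 0.17883277, 0.28466892, 0.55991073
        if v <= 0.5:
            return (v * v) / 3.0
        return (math.exp((v - c) / a) + b) / 12.0

    def hlg_display_nits(v, peak_nits):
        # Only once a display is chosen does the signal acquire units:
        # the HLG OOTF gamma is a function of the display's nominal peak.
        # (Single channel sketch; BT.2100 applies the OOTF via luminance.)
        gamma = 1.2 + 0.42 * math.log10(peak_nits / 1000.0)
        return peak_nits * hlg_inverse_oetf(v) ** gamma  # cd/m^2

    def pq_eotf_nits(v):
        # SMPTE ST 2084 PQ EOTF: code value -> absolute cd/m^2,
        # defined without reference to any particular display.
        m1, m2 = 2610.0 / 16384.0, 2523.0 / 4096.0 * 128.0
        c1 = 3424.0 / 4096.0
        c2, c3 = 2413.0 / 4096.0 * 32.0, 2392.0 / 4096.0 * 32.0
        e = v ** (1.0 / m2)
        return 10000.0 * (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1.0 / m1)

    for peak in (1000.0, 2000.0):
        print(peak, hlg_display_nits(0.75, peak))  # ~203 then ~343 cd/m^2
    print(pq_eotf_nits(0.75))                      # ~1000 cd/m^2, regardless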

I think the above is often where the assumption comes from. In some parts
of the media industry the 'look' of an image is effectively defined by one
display system's interpretation of it (TV commercial advertisements are an
example, though the same can be said for many other parts). So what
matters is not how one might like to treat the image, but how the image
looks on this device under these conditions. As you suggest, it is
sometimes better to handle such images by transforming them to a scene
referred representation to maintain this look; in other cases people
prefer to lock the display referred image in as the essence of the image
and proceed from there, usually after lots of time and money has been
spent crafting/mastering an image to be "just what the director wants".

To further what Tim is saying, I'd like to give my equivalent statement
from a Film VFX point of view. It is quite common for people to assume
that "Films are display referred", perhaps because PQ is display referred
(or maybe because traditional film scanning a la Cineon used printing
density as a representation, or for any number of other context dependent
reasons).
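
As an aside, what "printing density as a representation" meant in practice
can be sketched in a couple of lines (the classic 10-bit Cineon conversion
with the soft clip omitted; treat the constants as the customary ones
rather than a definitive implementation):

    def cineon_to_relative_linear(code10):
        # 10-bit Cineon: 0.002 printing density per code step,
        # reference white at 685, black at 95, negative gamma 0.6.
        black = 10.0 ** ((95 - 685) * 0.002 / 0.6)
        lin = 10.0 ** ((code10 - 685) * 0.002 / 0.6)
        return (lin - black) / (1.0 - black)

Neither scene referred nor display referred by itself, which is part of
why people reach for whichever interpretation suits their context.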

That said, the key to modern CGI is having a good scene based
representation for all the rendering algorithms to 'work'. Ideally we'd
have an absolute scene referred model to make lighting calculations work
cleanly, but we settle for something close to sensor plane relative
exposure, along with various approximations to compute an HDR
representation of the world.
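
To illustrate what I mean by relative exposure, a minimal sketch (the
thresholds and the flat weighting are made up for brevity): linearise each
bracketed sample, divide by its relative exposure to bring everything onto
a common, dimensionless scale, and skip values near the noise floor or
clipping.

    def merge_brackets(samples):
        # samples: (linear_value, relative_exposure) pairs for one pixel.
        # Returns a scene relative exposure estimate, dimensionless.
        total, count = 0.0, 0
        for value, exposure in samples:
            if 0.01 < value < 0.98:        # skip noise floor and clipping
                total += value / exposure  # normalise to a common exposure
                count += 1
        return total / count if count else 0.0

    # e.g. one pixel seen at 1x, 4x and 16x relative exposure:
    print(merge_brackets([(0.05, 1.0), (0.21, 4.0), (0.83, 16.0)]))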

At the same time, in the common case, to actually display the images we
have to transform to a display referred representation, simulating what
might happen downstream of our work in a Digital Intermediate. In the DI
step it is possible to interpret the images in a way which maintains some
degree of scene reference, though it has been more traditional to be
display biased. The ACES workflow attempts to maintain some degree of
scene reference, though one could argue that people really only ever see
the images via the intermediate representation defined by the RRT output.
I think that is a weaker argument, as all images, in some sense, are
defined by what they look like when output on some device, as far as a
viewer is concerned. A given production's use of ACES in this case doesn't
require heavy CGI.
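
The sort of transform I mean can be caricatured in a few lines. This is a
toy Reinhard-style curve, emphatically not the ACES RRT: relative scene
exposure in, a bounded display referred value out, with the 'look' living
in the choice of curve and gain rather than in the scene data.

    def toy_display_transform(scene_linear, exposure_gain=2.0):
        # Toy scene-to-display mapping: an s-shaped rolloff compressing
        # unbounded scene exposure into a bounded display range [0, 1).
        x = max(0.0, scene_linear * exposure_gain)
        return x / (1.0 + x)

    print(toy_display_transform(0.18))  # mid grey lands around 0.26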

I think the key takeaway is that modern productions have to account for
both philosophies, some being more on one side, some on the other. If I am
a brand wanting to present my logo in the right colour, I (should) care a
lot about the display colourimetry and how the signal will be interpreted.
Sometimes that might mean ensuring all the source cameras reproduce
similar values for the players' jerseys; other times it means the graphic
ident overlay should have some specific output, and thus be display
referred, or at least that a well defined interpretation should exist.
(I'm ignoring the fact that people often can't actually recall the colour
of logos exactly.)
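
A sketch of what "a well defined interpretation" might look like for that
graphic ident (the target colourimetry is hypothetical and the zero black
simplification is mine): pin the logo by its intended display colourimetry
and derive the Rec.709 signal a BT.1886 reference display needs to
reproduce it.

    # CIE XYZ (D65, relative to display white) -> linear Rec.709 RGB.
    XYZ_TO_709 = [( 3.2404542, -1.5371385, -0.4985314),
                  (-0.9692660,  1.8760108,  0.0415560),
                  ( 0.0556434, -0.2040259,  1.0572252)]

    def bt1886_inverse(lum):
        # Inverse of the BT.1886 reference EOTF, simplified to zero black.
        return max(0.0, min(1.0, lum)) ** (1.0 / 2.4)

    def signal_for_colourimetry(X, Y, Z):
        rgb = [sum(m * c for m, c in zip(row, (X, Y, Z)))
               for row in XYZ_TO_709]
        return [bt1886_inverse(ch) for ch in rgb]

    # Hypothetical brand colour, expressed as display XYZ:
    print(signal_for_colourimetry(0.25, 0.18, 0.06))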

I'm also ignoring lots of complex issues about what should happen when the
two are displayed at the same time, if only I had the solution for that!

In my opinion, the lack of downstream capability in display systems is
part of what leads to display referred representations becoming the
preferred interpretation at some point in the chain. That, plus our
incomplete understanding of the HVS, means we have gaps in knowing how to
correctly reproduce the differences caused by other factors, such as
viewing conditions, display gamut and luminance differences, etc.

Unfortunately, none of this is suggesting a solution, just that reality can
often get in the way!

Kevin
-- 
FRAMESTORE

Kevin Wheatley · Head of Imaging

[ London ]  · New York · Los Angeles · Chicago · Montréal
T  +44 (0)20 7344 8000
28 Chancery Lane, London, WC2A 1LB
twitter.com/framestore · facebook.com/framestore · framestore.com

Received on Monday, 18 June 2018 09:47:34 UTC