RE: [EXTERNAL] Re: Proposal for HTMLCanvasElement HDR compositing modes


Neither ITU-R BT.709 nor ITU-R BT.2100 is defined by how an image looks on a reference monitor, as there are many occasions where the video is never checked on a reference monitor.

As an example, when covering a live event in ITU-R BT.709, say a demonstration or an interview, the viewfinder would have Zebras set to a known signal level, say 75% for Caucasian skin tone or 45% for grass. The cameraman would then adjust the iris so that the relevant area of the image had Zebras illuminated. The camera would have been set up previously – the only reference is the signal level that the Zebras are set to. Quite often cameramen still use B&W viewfinders, so there is no colour reference.
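For illustration, the zebra check amounts to flagging pixels whose encoded signal sits near the chosen level. A minimal sketch, in which the ±5% band width and the example values are my own assumptions, not broadcast settings:

```python
# Illustrative sketch only: flag pixels whose encoded signal level falls
# inside a zebra band. The +/-5% band width and the sample values are
# assumptions for this example, not normative settings.
def zebra_mask(row, level=0.75, band=0.05):
    """row: non-linear (OETF-encoded) signal values in [0, 1],
    so 0.75 corresponds to a 75% signal level."""
    return [abs(v - level) <= band for v in row]

# Closing the iris lowers the signal until the target area
# (here the two middle values) sits inside the zebra band.
mask = zebra_mask([0.45, 0.74, 0.76, 0.90], level=0.75)
```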

This camera signal will be transmitted to a broadcast centre and then directly on to a distribution network without any form of colour correction, shading or grading.

Best Regards


Simon Thompson
Senior R&D Engineer

BBC Research & Development

From: Kevin Wheatley <>
Sent: 16 March 2021 17:45
To: Todd, Craig <>
Cc: Simon Thompson-NM <>; Seeger, Chris (NBCUniversal) <>; Jim Helman <>; Lars Borg <>; Christopher Cameron <>; Pierre-Anthony Lemieux <>
Subject: Re: [EXTERNAL] Re: Proposal for HTMLCanvasElement HDR compositing modes

On Tue, 16 Mar 2021 at 15:44, Todd, Craig <<>> wrote:
BT.2100 is a more complete specification, as it includes specs for the reference display and the reference environment.
The BT.709 display was de facto a CRT. When new display technologies became available, it was necessary to explicitly define the HDTV display. This was done in two separate Recs: BT.1886 specifies gamma = 2.4, and BT.2035 specifies white at 100 nits and a viewing environment with a 10-nit background. I consider HDTV a display-referred system, i.e. you create a pixel to produce a known color/luminance on the reference display in the reference environment.
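For concreteness, a minimal sketch of the BT.1886 reference EOTF at the BT.2035 white level; the general a/b form reduces to a pure 2.4 power law when reference black is zero:

```python
def bt1886_eotf(v, l_white=100.0, l_black=0.0):
    """BT.1886 reference EOTF: screen luminance in cd/m^2 from signal V in [0, 1].

    L = a * max(V + b, 0) ** 2.4, with a and b set by the display's white
    and black luminance (100-nit white per BT.2035).
    """
    gamma = 2.4
    lw, lb = l_white ** (1 / gamma), l_black ** (1 / gamma)
    a = (lw - lb) ** gamma
    b = lb / (lw - lb)
    return a * max(v + b, 0.0) ** gamma

peak = bt1886_eotf(1.0)   # 100 nits at full signal
mid = bt1886_eotf(0.5)    # roughly 19 nits: gamma 2.4 renders darker than 2.2
```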

Yes, I'd actually consider 709/1886 display-referred too, because the effective rendering transform from scene to display is defined in terms of the viewing conditions, most edits made to the signal are made on a monitor without transforming back to scene linear, and most outputs from cameras today do not encode using the Rec 709 OETF in the first place, at least not for any of the content we see.

Certainly, when it comes to mapping content mastered under one set of reference conditions to a specific use case, we always map to the reference display colourimetry and then back through the new display/viewing conditions, on the assumption that the mastering environment needs to be preserved; it is often unknown what the actual scene colorimetry might have been and what transformations have been applied as a rendering.
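A hypothetical sketch of that mapping (the function names are mine): decode through the mastering display's EOTF to display light, then re-encode through the inverse EOTF of the target display. Any adaptation for the new viewing environment would sit between the two steps and is omitted here:

```python
def remap_display_referred(v_src, src_eotf, dst_eotf_inverse):
    # Decode the source signal to reference-display light, then re-encode
    # for the target display. Viewing-environment adaptation is omitted.
    light = src_eotf(v_src)           # signal -> display luminance
    return dst_eotf_inverse(light)    # luminance -> target signal

# Toy power-law displays: gamma-2.4 mastering display, gamma-2.2 target.
src = lambda v: 100.0 * v ** 2.4
dst_inv = lambda L: (L / 100.0) ** (1 / 2.2)
v_out = remap_display_referred(0.5, src, dst_inv)   # about 0.47
```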

I'd take pretty much the same approach for HLG. Even though it is possible to obtain HLG unedited from a camera, and thus get a focal-plane-referred, relative scene-linear signal, people want images to "look the same" as they saw on some monitor, rather than matching some vague recollection of the scene. This effectively makes images display-referred the moment you edit them while viewing on a monitor.
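For reference, the unedited camera signal mentioned above would come from the HLG OETF. A minimal sketch, with the constants as published in BT.2100, mapping relative scene-linear E in [0, 1] to signal E' in [0, 1]:

```python
import math

# BT.2100 HLG OETF constants.
A = 0.17883277
B = 1 - 4 * A                   # 0.28466892
C = 0.5 - A * math.log(4 * A)   # 0.55991073

def hlg_oetf(e):
    """Relative scene-linear E in [0, 1] -> HLG signal E' in [0, 1]."""
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)
    return A * math.log(12.0 * e - B) + C

knee = hlg_oetf(1.0 / 12.0)   # 0.5: the square-root/log crossover point
```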


Kevin Wheatley · Head of Imaging

[ London ] · New York · Los Angeles · Chicago · Montréal · Mumbai
T  +44 (0)20 7344 8000
28 Chancery Lane, London WC2A 1LB

Received on Tuesday, 16 March 2021 18:09:16 UTC