- From: Lars Borg <borg@adobe.com>
- Date: Fri, 14 May 2021 03:02:06 +0000
- To: Michael Smith <miksmith@attglobal.net>, Simon Thompson - NM <Simon.Thompson2@bbc.co.uk>, "public-colorweb@w3.org" <public-colorweb@w3.org>
- Message-ID: <478E06CA-B98D-4225-A96F-39D8B15D165A@adobe.com>
Mike,

Thanks for elaborating on the issues. As is common with any gamut mapping, all of the discussed methods perform poorly for some content. Most are tuned for diffuse white at 200 or 300 nits, while the iPhone 12 in some tests puts it at 800 nits. And maxRGB mapping seems to amplify blue noise more than luminance mapping does.

Given that there is no perfect method, what to do? Should we search for a good method over some set of test cases? (This has frustrated me for some time.) Or should we provide a simple, low-cost, basic method (how about none?) and advise developers that they can do better?

Lars

From: Mike Smith <miksmith@attglobal.net>
Date: Thursday, May 13, 2021 at 2:12 PM
To: Lars Borg <borg@adobe.com>, Simon Thompson <Simon.Thompson2@bbc.co.uk>, "public-colorweb@w3.org" <public-colorweb@w3.org>
Subject: Re: Tone Mapping Mentioned in Monday's Meeting

Unfortunately I missed the last call, but it sounds like interesting topics were discussed. Perhaps now is a good time to mention that our SMPTE paper "Color Volume and Hue-preservation in HDR Tone Mapping" examined the performance of the different BT.2390 EETF tonemapping methods and proposed an additional variant based on a MaxRGB norm. I think the MaxRGB variant will be included in a future ITU-R BT report. SMPTE has just granted open access to our paper; here is the link: https://ieeexplore.ieee.org/document/9086648

My understanding is that the BT.2390 EETF rolloff is a Hermite spline, not a Bézier. I've attached my Excel sheets that perform the basic BT.2390 EETF and ST 2094-10 tonemapping curve computations using the equations from those documents. The BT.2390 EETF takes luminance parameters of the mastering display (Lw, Lb) and the target display (Lmax, Lmin). If the MaxCLL value is known to be good and reliable, Lw should be set equal to MaxCLL instead of the ST 2086 maximum luminance value. This point about using MaxCLL is missing from the BT.2390 report.
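To make the curve shape concrete, here is a minimal Python/NumPy sketch of a BT.2390-style EETF: my reading of the Report BT.2390 equations (Hermite-spline roll-off in normalized PQ space, using the ST 2084 PQ constants), not a validated implementation. The `apply_maxrgb` helper is likewise only an illustrative hue-preserving application in the spirit of the MaxRGB-norm variant, not code from the paper.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Absolute luminance (cd/m^2) -> non-linear PQ signal in [0, 1]."""
    y = np.asarray(nits, dtype=float) / 10000.0
    return ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2

def pq_decode(e):
    """Non-linear PQ signal in [0, 1] -> absolute luminance (cd/m^2)."""
    ep = np.asarray(e, dtype=float) ** (1.0 / M2)
    return 10000.0 * (np.maximum(ep - C1, 0.0) / (C2 - C3 * ep)) ** (1.0 / M1)

def bt2390_eetf(nits, lw, lb, lmax, lmin):
    """Hermite-spline roll-off in normalized PQ space, per Report BT.2390.
    lw/lb: mastering display white/black; lmax/lmin: target display."""
    src_lo, src_hi = pq_encode(lb), pq_encode(lw)
    norm = lambda v: (v - src_lo) / (src_hi - src_lo)
    e1 = norm(pq_encode(nits))
    max_lum, min_lum = norm(pq_encode(lmax)), norm(pq_encode(lmin))
    ks = 1.5 * max_lum - 0.5                      # knee start
    t = (e1 - ks) / (1.0 - ks)
    p = ((2*t**3 - 3*t**2 + 1) * ks               # cubic Hermite basis
         + (t**3 - 2*t**2 + t) * (1.0 - ks)
         + (-2*t**3 + 3*t**2) * max_lum)
    e2 = np.where(e1 < ks, e1, p)                 # roll off highlights only
    e3 = e2 + min_lum * (1.0 - e2)**4             # lift toward target black
    return pq_decode(e3 * (src_hi - src_lo) + src_lo)

def apply_maxrgb(rgb_nits, curve):
    """Hue-preserving application: scale R, G, B by curve(maxRGB)/maxRGB,
    so channel ratios (and thus hue) are unchanged."""
    m = np.max(rgb_nits, axis=-1, keepdims=True)
    m_safe = np.maximum(m, 1e-6)
    return rgb_nits * np.where(m > 0, curve(m_safe) / m_safe, 0.0)
```

For example, `apply_maxrgb(rgb, lambda x: bt2390_eetf(x, 1000.0, 0.005, 500.0, 0.005))` maps a 1000-nit-mastered pixel into a 500-nit display volume while leaving the channel ratios unchanged; applying the same curve per channel instead would compress the channels by different amounts and shift the hue.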
The EETF also does not alter the midtone slope or gain of the output, which could be an important basic adjustment when viewing the content outside of a reference viewing environment. Other types of spline curves could be designed to also take a simple slope (contrast) or gain adjustment as input. ST 2094-10 tonemapping includes content-based parameters (min/average/max) in addition to the target luminance min/max. The ST 2094-10 framework also includes additional "potentially creative" adjustments corresponding to min/average/max offset values, tonemapping offset/gain/gamma, and midtone slope/contrast. A reference [1] to a display-referred format conversion algorithm describes SDR BT.709 to HDR10 conversion steps. Scene-to-display-referred conversions have been defined as the OOTFs in BT.2100, in addition to the advanced process defined in ACES CTL code [2].

I also want to provide some background on the topic of recommendation vs. report vs. example vs. standard for tonemapping. There is perhaps an unspoken tension between content creators, product designers, and engineers that may not be obvious to all. Below is a quick summary of my understanding of it, from my point of view working with episodic television and movie content creators and the consumer electronics industry:

a) Content creators want their content shown as it appeared to them when it was created and finished, usually in a dark room on a great display. This is the "display-referred" paradigm, or the older what-you-see-is-what-you-get (WYSIWYG) paradigm. Acknowledging that the content will be shown on worse-performing displays and in less ideal viewing environments is an obvious commercial reality, but it is difficult for content creators to address unless different versions are specifically created (dark room with great display, dark room with bad display, bright room with small display, etc.).
There can be a lot of variation across different types of content, because content creators don't always want to use the same range of colors and luminance for every show. For example, there is a creative desire to visually differentiate a depressing movie set in a low-contrast desert from an energetic children's cartoon. This variation also exists in SDR, but it is smaller because the palette and range are much reduced.

b) Product designers want some flexibility to differentiate the way their products look from competitors'. This is especially true for traditional television products and seems to be becoming more important for the computer/mobile display markets. To differentiate their products, the designers often want to change the way the image looks on the television to make it "look better" or differentiated versus their competition. If products followed a standard tonemapping approach, the product designers fear their products would look identical to the competitors', which leads to a fear of being marginalized by cheaper but otherwise identical-looking products. During the introduction of HDR video several years ago, television companies embraced the "scene-referred" paradigm because it was being promoted as "display-independent". The television companies thought the format was designed to give the display flexibility in how the image is displayed, which played into their desire to differentiate and control their products.
I believe the original goal of introducing the "scene-referred" paradigm in HDR video was to standardize on a high-dynamic-range, log-like camera output format that could be mixed, matched, and inter-cut across different camera products and brands within a broadcast facility, rather than dealing with the multitude of proprietary log formats (LogC, SLog, SLog2, SLog3, RedLog, CLog, VLog, FLog, etc.). That seemed like a good goal to me, but this message may have been lost somehow, and instead HLG promotion turned into a "display independent" message. I don't know whether today's cameras that output HLG can be mixed, matched, and inter-cut with other brands and products that also output scene-referred HLG.

c) Engineers who aren't attempting to differentiate their products basically want to be told what to do, do it, and then move on to the next engineering project. But agreement between a) and b) about tonemapping was never reached, so here we are. Display or format logo-licensing programs often have certification requirements that must be followed within some tolerance, but those are usually private programs that don't openly share implementation details and performance tolerances. Companies offering logo/format licensing programs also have flexibility in what they require on a case-by-case basis and in whom they choose to do business with. In the logo/format licensing case, implementors are generally guided by what the certification program requires. The openly available reports and examples can also help guide an implementor toward doing something that makes sense in situations where logo/format licensing programs are not in play.
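As a footnote to the trim parameters mentioned earlier, a toy offset/gain/gamma adjustment of the general kind ST 2094-10 parameterizes might look like the following. This is illustrative only: the actual ST 2094-10 curve, parameter names, and semantics are defined in the standard, not here.

```python
import numpy as np

def trim(x, offset=0.0, gain=1.0, gamma=1.0):
    """Toy offset/gain/gamma trim on a normalized [0, 1] tone-mapped signal.
    Not the ST 2094-10 equations; just the general shape of such an adjustment."""
    return np.clip(gain * x + offset, 0.0, 1.0) ** gamma
```

With all defaults the trim is an identity pass-through; raising `gain` brightens midtones and highlights (clipping at 1.0), while `gamma` above 1.0 deepens shadows and below 1.0 lifts them.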
Thanks,
Mike

[1] https://movielabs.com/ngvideo/MovieLabs_Mapping_BT.709_to_HDR10_v1.0.pdf
[2] https://github.com/ampas/aces-dev/blob/v1.3/transforms/ctl/outputTransform/rec2020/RRTODT.Academy.Rec2020_1000nits_15nits_HLG.ctl
    https://github.com/ampas/aces-dev/blob/v1.3/transforms/ctl/outputTransform/rec2020/RRTODT.Academy.Rec2020_1000nits_15nits_ST2084.ctl

On 5/12/2021 1:02 PM, Lars Borg wrote:

Indeed. Note that BT.2390 is not an ITU recommendation, but a report. The mapping in 5.4.1 is stated as an example. This example does not desaturate colors toward the display primaries, so it is not practically usable. I've implemented the BT.2390 5.4.1 example and found that the Bézier was not easily adjustable for different roll-offs. BT.2408 is also a report, not a recommendation. The SMPTE ST 2094 standards (parts 10, 20, and 40) provide other tone-mapping functions for PQ content. Yet again, these are only examples, but they do include desaturation as well. It seems the industry is still searching for a best practice for tone mapping. Let's hope for success.

Lars

From: Simon Thompson <Simon.Thompson2@bbc.co.uk>
Date: Wednesday, May 12, 2021 at 3:02 AM
To: "public-colorweb@w3.org" <public-colorweb@w3.org>
Subject: Tone Mapping Mentioned in Monday's Meeting
Resent-From: <public-colorweb@w3.org>
Resent-Date: Wednesday, May 12, 2021 at 2:58 AM

Hi Lars, all,

In Monday's meeting, a tone mapping for PQ was discussed. It is available in https://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-BT.2390-8-2020-PDF-E.pdf section 5.4.1. Note that the ITU updates from the February meetings are appearing, and I think that in the newest version this may have moved to BT.2408, but that is not available on the website yet.

Best Regards

Simon

--
Simon Thompson MEng CEng MIET
Senior R&D Engineer
BBC Research and Development South Laboratory
Attachments
- application/pkcs7-signature attachment: smime.p7s
Received on Friday, 14 May 2021 03:02:23 UTC