- From: Seeger, Chris (NBCUniversal) <Chris.Seeger@nbcuni.com>
- Date: Sat, 4 Jan 2025 23:58:08 +0000
- To: "jbowler@acm.org" <jbowler@acm.org>
- CC: "Portable Network Graphics (PNG) Working Group" <public-png@w3.org>
- Message-ID: <BL0PR14MB37955EAEF183CD423FC6DA3FE6162@BL0PR14MB3795.namprd14.prod.outlook.com>
Hi John and all,

Most importantly, I will emphasize first that cICP is not at all dependent on mDCV or cLLI. mDCV and cLLI are informative to a secondary target display AFTER the content has actually been created; they are not meant to identify the content's characteristics in any way, and they are often discarded by many consumer displays. cICP is very different because it is required for proper display of video content.

Content creators should use reference displays which support the content's format. HDR reference displays are defined as any display that supports PQ/HLG at ≥1,000 cd/m2 (see ITU-R BT.2100) with BT.2020 color primaries. What that means is that mDCV and cLLI will affect HDR content mastered above 1,000 cd/m2 when a secondary content creator has an HDR display at or BELOW 1,000 cd/m2. Unfortunately, many of these reference displays don't support mDCV or cLLI (only consumer displays currently support tone mapping).

Bottom line: cICP is essential. mDCV and cLLI will not always be used. Other answers are inline below (quoted text is John's; my replies follow each quote).

From: John Bowler <john.cunningham.bowler@gmail.com>
Date: Saturday, January 4, 2025 at 5:05 PM
To: Seeger, Chris (NBCUniversal) <Chris.Seeger@nbcuni.com>
Cc: Portable Network Graphics (PNG) Working Group <public-png@w3.org>
Subject: Re: [EXTERNAL] Re: [PNG] Meeting topics - Jan 6, 2025

> Ok, I misunderstood what you were implying by this:
>
> > Could you elaborate on why cICP is not independent of mDCV and cLLI? Most video has existed without both mDCV and cLLI for many years. What would the adverse effects of their absence be?

cICP provides the essential information to display an image if the display supports it (the content's color primaries, transfer function, matrix coefficients, and signal range). mDCV and cLLI provide informative metadata for optimizing tone mapping on the target consumer display. Normal content's focal range is between 0 and 400 cd/m2 (near SDR ranges). Above that in luminance are highlights, where there are fewer and fewer pixels.
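To make those luminance ranges concrete, here is a small sketch of the PQ EOTF, which maps a normalized PQ code value to absolute display luminance. This is my illustration, not text from the PNG or ITU documents, although the constants are the published SMPTE ST 2084 / BT.2100 PQ values. It shows why most code values serve the 0-400 cd/m2 focal range: the lower half of the PQ code range covers only about 0-92 cd/m2, leaving the entire upper half for highlights up to 10,000 cd/m2.

```python
# Sketch of the BT.2100 / SMPTE ST 2084 PQ EOTF:
# normalized code value in [0, 1] -> absolute luminance in cd/m2.

M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(code: float) -> float:
    """Map a PQ code value in [0, 1] to display luminance in cd/m2."""
    e = code ** (1.0 / M2)
    num = max(e - C1, 0.0)
    den = C2 - C3 * e
    return 10000.0 * (num / den) ** (1.0 / M1)
```

For instance, pq_eotf(1.0) is PQ's absolute peak of 10,000 cd/m2, while pq_eotf(0.5) is roughly 92 cd/m2, which is the perceptual weighting Chris's 0-400 cd/m2 observation relies on.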
Optimal tone mapping improves the preservation of detail in those ranges but isn't essential.

> I've always assumed that the broadcast industry would want to ensure that original broadcast data was reproduced consistently across all devices, so that two devices with the same capabilities would display the same picture. "Most video has existed without both mDCV and cLLI for many years." And inconsistent results existed even though the output device capabilities (television receiver and, indeed, monitor) were substantially the same. So I jumped to the conclusion that the ITU had, in fact, formalised tone mapping, because everyone can see that all current and, maybe, all practical RGB devices can't handle the colour gamut of BT.2020 (colour primaries 9 in cICP).

Unfortunately, HDR displays vary greatly in their luminance capabilities (color ranges close to P3 are more and more common on HDR displays). This means that tone mapping can help prevent clipping of luminance. This brings me back to the definition of an "HDR Reference Display". A broadcast HDR reference display is defined in BT.2100 as one that is capable of ≥1,000 cd/m2 with BT.2020 color primaries (where the display is at least following the correct direction of the specified primary coordinates 😊). BT.2446, while referencing examples of tone mapping curves, is a REPORT and not a RECOMMENDATION. ITU Recommendations are similar to their standards. BT.2446 is very old and lacks many newer concepts in tone mapping.

> Ok, my bad. In fact that puts it back to exactly what I assumed at the start. cICP interpretation for those primaries that are wide gamut or don't exist in H.273 Table 2 can benefit from mDCV. The same applies to cLLI for dynamic range. I don't believe this differs from anything you said, before my misinterpretations.
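For illustration of the kind of highlight rolloff being discussed, here is a generic soft-knee curve. This is my own sketch, not BT.2446, not any standardized operator, and not anything NBCUniversal uses; the knee and peak values are arbitrary examples. It leaves the 0-400 cd/m2 focal range untouched and compresses highlights smoothly so that, for example, 4,000 cd/m2 content fits a 1,000 cd/m2 display without hard clipping.

```python
import math

def tone_map(lum: float, knee: float = 400.0, display_peak: float = 1000.0) -> float:
    """Illustrative soft-knee rolloff (not from any standard): identity
    below the knee, smooth exponential compression above it, asymptotic
    to display_peak so highlight detail is compressed rather than clipped."""
    if lum <= knee:
        return lum
    headroom = display_peak - knee
    return knee + headroom * (1.0 - math.exp(-(lum - knee) / headroom))
```

The curve is continuous with slope 1 at the knee, so SDR-range content passes through unchanged while highlights remain monotonically ordered instead of being discarded, which is the "preservation of detail" that optimal tone mapping buys.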
> If I put sRGB data into a 2020 (colour primary) container and I don't write an mDCV chunk then I expect no detectable colour shift, because the sRGB adapted white point, D65, matches the 2020 cICP(9,) D65 encoding/adopted white point.

Correct.

> If I put wide gamut data with an adapted white point some way away from D65, e.g. D50, into a REC 2020 container I will certainly expect a bad colour shift if I don't include mDCV. In fact I would require the colour shift; the only adapted white the decoder can assume is the container one, D65.

If I'm interpreting what you wrote correctly: if you place any content into a REC.2020 container, you would have to adapt the white point as well (chromatic adaptation?). BT.2020 inherently defines a white point, which is D65. I think that's what you're saying in the last sentence. mDCV has nothing to do with any of this. Content still has to adhere to the standards; otherwise it won't preserve the original content creator's intent.

> Why would I do this? Because I have no choice: we use the tools we are given, not the ones we want. There are only three ways to include higher dynamic range data in PNG using only publicly defined chunks:
>
> 1. Use a gAMA value of around 1/20 (i.e. a screen gamma of "20"). The precise choice depends on what the aim is. The problem is that while doing this produces a completely conformant PNG, decoders (including libpng) typically cannot handle it. My long and detailed explanation with very precise numbers is here: https://github.com/pnggroup/libpng/issues/578#issuecomment-2330282514
>
> 2. Use a cICP chunk with the transfer function set to 16 (PQ) and the original PQ depth (record it in sBIT) chosen carefully.
> If you go too high you gain no perceptual benefit, but the noise zaps the PNG compression (not that 16-bit compression is that good in any case).
>
> 3. Use a cICP chunk with the transfer function set to 18 (HLG) and sBIT set to 10; at least that seems to be the only defined choice at present. Be very careful to follow the mandatory instructions in H.273 about scaling to 16 bits.
>
> So then I have the transfer function in a chunk that is fully approved by the W3C, the broadcast industry and others! However, now I have to choose one of the numbers in Table 2. There is very little choice if the data is not only "HDR" but also wide gamut. I can only see two choices; additions to my list are very welcome, and I haven't checked through the H.273 chromaticities to see whether any others are wide gamut:
>
> 1. 10: CIE XYZ. Completely complete: every colour of the rainbow and all the others too. Horribly inefficient as an encoding in PNG, but that probably improves the 16-bit compression.
>
> 2. 9: REC-2020 again. Not complete. The question is whether the overlap in tristimulus colour space (measured in CIELab or CIELuv) is sufficient. This is a tricky calculation which I haven't done. I'd be tempted more by CIE XYZ despite the maybe-weird adopted white.
>
> I'm not trying to persuade anyone of anything beyond my normal aim, which is to present arguments in the forum. Based on what I have just read and learned, mDCV at least is essential to cICP in the broader context of all possible valid PNG usage.

cICP can also signal (besides BT.2020):

- DCI-P3 (Table 2 #11; SMPTE RP 431-2)
- Display-P3 (Table 2 #12; SMPTE EG 432-1)

mDCV only identifies the characteristics of the mastering display through which the content creator is viewing the content. It does not in any way identify the content itself (although they should match 😊). If we need to add more, we can do that through the ITU process; there are plenty of index values left. I hope this helps, and thank you for the discussion.
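To make the cICP option concrete, here is a sketch of serializing such a chunk, assuming the layout in the PNG specification (Third Edition): a four-byte payload of H.273 code points in the order colour primaries, transfer function, matrix coefficients, video-full-range flag, wrapped in the standard PNG chunk framing. The example signals BT.2020 primaries with PQ.

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: big-endian length, 4-byte type, data,
    then the CRC-32 computed over the type and data fields."""
    crc = zlib.crc32(ctype + data) & 0xFFFFFFFF
    return struct.pack(">I", len(data)) + ctype + data + struct.pack(">I", crc)

# H.273 code points: primaries 9 (BT.2020), transfer 16 (PQ),
# matrix coefficients 0 (identity/RGB, as PNG requires), full range 1.
cicp_chunk = png_chunk(b"cICP", bytes([9, 16, 0, 1]))
```

Swapping transfer 16 for 18 gives the HLG variant from option 3, and primaries 11 or 12 give the DCI-P3 and Display-P3 cases noted above.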
I want this to be right too, but I hope we can get this out soon! We have many shows that want to create native HDR graphics, and we can't do that easily without libpng and PNG cICP. We want our archives to correctly identify the formats of all of their graphics. We have newer NextGen (ATSC 3.0) broadcasts with app infrastructure that depends on correct identification of HDR thumbnails for HTML5 compositing (which we hope to create in the W3C) in an HDR canvas.

This is just my opinion.
Received on Saturday, 4 January 2025 23:58:56 UTC