- From: David Singer <singer@apple.com>
- Date: Thu, 29 Jan 2015 10:43:35 +0100
- To: "www-style@w3.org" <www-style@w3.org>
Oh boy, thanks Mark for opening up this area. Can I give a little background?

Video streams and images can indeed be tagged with data that indicates the dynamic range, and range of colors, that they encode against. Actually, the dynamic range is implied by the transfer function rather than stated directly. Classical television is theoretically in a fairly small color gamut, with up to 100 nits maximum brightness; but actual modern displays are, in fact, brighter than that.

Video and images are often implicitly or explicitly tagged with three values, each from an enumeration: transfer function, color primaries, and matrix coefficients. (Transfer functions that use a gamma value are one class; and indeed we had a lot of fun with different gamma values a few years back.) MPEG recently made a document available that documents these and collects all the values we knew of: ISO/IEC 23001-8:2013, Information technology -- MPEG systems technologies -- Part 8: Coding-independent code points, available free at <http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html>.

Similarly, the system can usually work out, either implicitly or explicitly, or be configured to know, the characteristics of the display. For example, HDMI has the EDID <http://en.wikipedia.org/wiki/Extended_display_identification_data>. More complex management can be handled by systems like ColorSync, monitor profiling, and so on.

Given all this, modern systems can generally adapt content so it looks as close to right as it can on the display it’s using. Bit depth comes into it only peripherally; recent work at the ITU and MPEG seems to indicate that, given a suitable transfer function, 10 bits is enough to convey an HDR signal at approximately the same level of quality that 8 bits carries standard dynamic range. Automated mapping of HDR content to an SDR display is currently an open research question.
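To make the three-value tagging concrete, here is a minimal sketch in Python. The `VideoTag` class and `is_hdr` helper are illustrative inventions, not any real library's API; the numeric code points themselves are taken from the ISO/IEC 23001-8 enumerations (e.g. 1 = BT.709 for all three fields; primaries 9 and matrix 9 = BT.2020; transfer 16 = SMPTE ST 2084 "PQ", 18 = HLG).

```python
from dataclasses import dataclass

# Hypothetical container for the three enumerated values described above.
@dataclass(frozen=True)
class VideoTag:
    colour_primaries: int          # which RGB primaries / white point
    transfer_characteristics: int  # transfer function (implies dynamic range)
    matrix_coefficients: int       # RGB <-> YCbCr conversion matrix

# Classical (SDR) television: BT.709 uses code point 1 for all three fields.
BT709 = VideoTag(colour_primaries=1,
                 transfer_characteristics=1,
                 matrix_coefficients=1)

# Wide-gamut HDR video: BT.2020 primaries (9), SMPTE ST 2084 "PQ"
# transfer function (16), BT.2020 non-constant-luminance matrix (9).
BT2020_PQ = VideoTag(colour_primaries=9,
                     transfer_characteristics=16,
                     matrix_coefficients=9)

def is_hdr(tag: VideoTag) -> bool:
    # As noted above, dynamic range is implied by the transfer function:
    # code point 16 is PQ (ST 2084) and 18 is HLG (ARIB STD-B67).
    return tag.transfer_characteristics in (16, 18)
```

A player or browser could compare such a tag against what EDID (or a color-management system) reports about the display, and pick the closest variant of the content.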
A nice thing about HDR is that scenes don’t wash out to white in their bright areas, or disappear to black in their dark areas; detail and information can be seen in both. In SDR, these can’t both be true simultaneously, and it’s an artistic question which is more important in a given scene, alas.

In color, mapping between different gamuts is theoretically also a problem, though in practice people might not expect to see detail expressed as variations of colors at the edges of the gamut. Cameras and other elements in the pipeline sometimes (often?) map source colors that can’t be represented on the display to the closest available color (essentially clipping the color values, and losing detail).

So, one might want to know, in a presentation environment, whether an SDR or HDR version, and a standard or wide color gamut version, of the content should be accessed and presented; I am guessing this is what started Mark on asking the question.

Hope this helps

David Singer
Manager, Software Standards, Apple Inc.
Received on Thursday, 29 January 2015 09:44:09 UTC