- From: David Ronca <dronca@netflix.com>
- Date: Wed, 4 Nov 2015 12:30:58 -0800
- To: Michael Dolan <mdolan@newtbt.com>, public-tt@w3.org
- Message-ID: <CAMjV-FgUyH4isjMAeVD0ZoN6JDY2vNihLN+cZu28+pQN862+eg@mail.gmail.com>
Mike,

No offense taken and no need for a retraction. Since we have not been very
public about our work, few people really know what we have been doing. We're
taking a much more visible role in IMSC and TTML2 adoption and funding
significant open-source work around these specs. If another article is
written next year, the part about Netflix should read very differently.

David

On Wed, Nov 4, 2015 at 8:50 AM, Michael Dolan <mdolan@newtbt.com> wrote:

> David-
>
> I did not mean to disparage any company, especially Netflix. The author
> picked these two, for what it is worth.
>
> The statement, in the context of an article about a common interoperable
> technology, was meant to illustrate that the commercial silos are not
> interoperable. I stand by that statement. Based on public statements (and
> based on your statements below), your deployment is “based on TTML” (a
> good thing!). But I believe it is not interoperable with other siloed
> services.
>
> If your profile of TTML is any published TTML profile and/or interoperates
> with any other company’s profile, please publish it and I will attempt to
> retract the statement.
>
> Regards,
>
> Mike
>
> *From:* David Ronca [mailto:dronca@netflix.com]
> *Sent:* Tuesday, November 3, 2015 7:54 PM
> *To:* public-tt@w3.org
> *Subject:* Re: Implementing Assistive Technologies
>
> "And for another thing, according to Dolan, major commercial content
> streaming services like Netflix, Amazon Prime, and others were well into
> development of their own proprietary processes"
>
> I have to take issue with this statement. I can't speak for Amazon and
> "others," but we built our subtitling on TTML from day one and have
> evolved from a very simple model to full 608 support. Today, we have 100%
> catalog coverage and are producing assets in 20 languages.
>
> We are out in front on TTML2, as we have 5 (yes, 5) full or partial
> implementations in flight, including two rendering engines.
> Our Japanese subtitle work was done in TTML2. I had planned to discuss
> our work in Sapporo but was unable to make the trip.
>
> David
>
> On Tue, Nov 3, 2015 at 4:26 PM, Glenn Adams <glenn@skynav.com> wrote:
>
> FYI. Nice write-up that includes some coverage on IMSC and WebVTT.
>
> ---------- Forwarded message ----------
> From: *SMPTE Newswatch* <communications@smpte.org>
> Date: Wed, Nov 4, 2015 at 1:28 AM
> Subject: Implementing Assistive Technologies
> To: Glenn Adams <glenn@skynav.com>
>
> You're receiving this email because you are a Member or have expressed an
> interest in SMPTE and/or HPA.
>
> *SMPTE Newswatch*
> *November 2015 #1*
>
> *Hot Button Discussion*
>
> *Implementing Assistive Technologies*
> *By Michael Goldman*
>
> Since *SMPTE Newswatch* last examined the topic of closed captioning and
> other accessibility technologies a couple of years ago, not much has
> changed in terms of governmental regulatory requirements on broadcasters
> to widen access to modern communication technologies. Indeed, the only
> major recent action taken by the FCC regarding accessibility related to
> the expansion of rules regarding how to get critical emergency
> information to consumers with visual impairments by making that
> information accessible on their so-called “second screen” personal
> assistive devices.
> However, since the Twenty-First Century Communications and Video
> Accessibility Act of 2010 was passed, the media industry has steadfastly
> been seeking ways to make captioning, video description, and other
> enhancements more consistently available with their content across all
> platforms. In fact, the action in this space right now appears to be
> focused mainly on how to most efficiently implement the FCC’s
> requirements across an industry that “broadcasts” content just about
> everywhere, to everyone, using both traditional and non-traditional
> methods and delivery and viewing systems.
>
> As discussed previously in *Newswatch*, the traditional television
> broadcast industry has remained stable and efficient in terms of
> providing closed captions by adhering to the established captioning
> standard, CEA-608, and its digital television descendant mandated by the
> FCC, CEA-708. Methodology-wise, television broadcasters continue to
> author captions in the CEA-608 format and put them through a transcoding
> process to convert them into the 708 format as the final step in the
> broadcast chain. This methodology is used because 708 has never been
> “natively” adopted by the caption authoring industry as a wholesale
> replacement, since most archival content, hardware, and software
> infrastructure remains based on 608.
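The author-in-608, transcode-to-708 step described above can be sketched conceptually. This is an illustrative model only: the field names and dictionary structure below are hypothetical, and real CEA-608/708 caption data are compact byte-oriented encodings defined by those standards, not Python objects.

```python
# Conceptual sketch of the 608 -> 708 transcode step. Hypothetical fields;
# real CEA-608/708 streams are byte-level encodings, not dictionaries.

def transcode_608_to_708(cue_608):
    """Map a 608-style caption cue onto a richer 708-style service cue."""
    return {
        "service_number": 1,                # 708 carries multiple caption services
        "window": {                         # 708 positions text in windows rather
            "anchor_row": cue_608["row"],   # than the fixed 15-row 608 grid
            "anchor_column": cue_608["column"],
        },
        "pen_color": cue_608.get("color", "white"),
        "text": cue_608["text"],
    }

cue = {"row": 14, "column": 4, "text": "[door slams]", "color": "yellow"}
print(transcode_608_to_708(cue)["window"]["anchor_row"])  # -> 14
```

A real transcoder also has to map 608's pop-on, roll-up, and paint-on display modes onto 708 window behavior, which this sketch omits.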
> It is, however, “an interesting question” how changes in broadcast
> television picture creation, transmission, processing, and viewing due
> to the industry’s ongoing ultra-high-definition (UHD) transition could
> impact captions for broadcast content, including the integration of
> broadband delivery, suggests Michael Dolan, founder of the Television
> Broadcast Technology Consulting Group, chairman of the ATSC Technology
> and Standards Group 1, chair of SMPTE Working Group 24-TB, and a SMPTE
> Fellow. But Dolan suggests that this evolution to UHD and broadband
> delivery provides an opportunity to introduce new caption technology
> along the way.
>
> “Caption systems today already support at least eight colors—some of
> them more—and there does not seem to be any requirement from the
> authoring community for a broader set of colors than what is available
> today, unlike video, where you are trying to provide very smooth
> transitions between shades of all the different colors, and a wider
> color gamut and higher bit depth make a remarkable difference to the
> viewing experience,” Dolan explains. “When it comes to captions, I’m not
> aware of a requirement where you would need or want to make two subtle
> shades of red, for instance. That simply wouldn’t serve the purpose of
> helping the hard-of-hearing person discriminate text for different
> speakers or sound effects. However, it would complicate the decoder
> mixing to have two color models in play, so as you move to higher
> dynamic range and wider color gamut in video, ultimately the captions
> have to be easily composited into the video plane. And that process can
> get a little more complicated when you are working with one color model
> for the video and another for the text.
> So one would expect enhancements to caption technology to facilitate
> this [in the future], even if more colors are not needed.”
>
> Meanwhile, in the increasingly busy commercial content streaming space,
> the industry has been turning to the SMPTE Timed Text (SMPTE-TT) format
> for broadband distribution of captions. Since the FCC formally declared
> SMPTE-TT a so-called “safe harbor,” meaning commercial broadcasters who
> used it would be considered compliant with the law now and for the
> foreseeable future, the industry “has really taken that to heart, but
> they have had to examine on a technical level what that means exactly,”
> Dolan explains.
>
> By that, Dolan means that after the FCC’s declaration that SMPTE-TT was
> the way to go, the industry had to get to work trying to find ways to
> coalesce around a common profile of SMPTE-TT as the standard choice for
> captioning commercial streaming video content. This is an important step
> since, until recently, captioning had existed across the Web pretty much
> in a hodge-podge of formats and systems. In this regard, getting both
> commercial and Web content to converge around a common profile remains a
> work in progress, Dolan suggests.
>
> “Some time ago, the UltraViolet industry forum created a profile of
> SMPTE Timed Text, because it is a rather large set of technologies, not
> all of which are needed to do a good job on captions and movie subtitles
> specifically,” he says. “That profile did a good job for captions, and
> it formed the basis of a new initiative by the W3C [World Wide Web
> Consortium] with the profile known as IMSC1 [Internet Media Subtitles
> and Captions 1.0].
> That is now close to publication, and more and more folks are looking at
> adopting it as the profile for the safe-harbor version of SMPTE Timed
> Text. Right now, there are reference implementations underway.
>
> “There are a number of commercial media delivery silos on the Internet
> that are using some profile of [SMPTE Timed Text] already, but most of
> them do not disclose what they are doing exactly, so it is a little
> difficult to talk about who is adopting it and who isn’t, other than to
> say that many programmers who deliver content to tablets and other
> ‘second-screen’ devices are using a version of it when they deliver
> their content.”
>
> However, Dolan quickly adds that the volume of programmers and content,
> and the rapidly evolving nature of the Internet, combined with what it
> typically takes to roll out a new technology or standard even under the
> best of circumstances, means it will take a long time to coalesce
> broadcasters around a common profile such as IMSC1 for caption
> formatting. For one thing, some software developers and Web browser
> companies have gravitated toward another option—WebVTT. That format
> relies on a simpler markup language derived from the SubRip Text (SRT)
> subtitle format, and has become popular for captioning some types of
> Web-based videos.
>
> And for another thing, according to Dolan, major commercial content
> streaming services like Netflix, Amazon Prime, and others were well into
> development of their own proprietary processes before the industry got
> around to pushing toward standardizing commercial media delivery on the
> Web.
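To make the contrast between the two families concrete, here is the same cue sketched both as a minimal TTML-style document and as a WebVTT cue. These are illustrative fragments, not complete conformant files: IMSC1 is a profile of TTML, and a conformant IMSC1 document carries additional profile and styling metadata this sketch omits.

```python
import xml.etree.ElementTree as ET

# A minimal TTML-style document (IMSC1 is a profile of TTML; a real IMSC1
# file would declare its profile and carry styling/layout metadata).
ttml = """<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:03.000">Hello, world.</p>
    </div>
  </body>
</tt>"""

# The same cue in WebVTT's simpler, SRT-derived text syntax.
webvtt = """WEBVTT

00:00:01.000 --> 00:00:03.000
Hello, world."""

# TTML is XML and must parse as such; WebVTT is plain text with a header.
root = ET.fromstring(ttml)
print(root.tag)                # -> {http://www.w3.org/ns/ttml}tt
print(webvtt.splitlines()[0])  # -> WEBVTT
```

The difference in machinery is visible even at this size: the TTML cue must live inside a well-formed, namespaced XML document, while the WebVTT cue is just a header line followed by timed text blocks.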
> “They are still converting not only video and audio, but also captions,
> to whatever they have already designed for their silos, and much of that
> pre-dates a lot of the work over the last few years with respect to
> captions, certainly,” he says. “Some of them are moving in the direction
> [of SMPTE Timed Text] and some aren’t—it’s really on a case-by-case
> basis.
>
> “So a lot of progress has been made. But has everyone converted to a
> single format or fully deployed IMSC1? No. But there has been a lot of
> work put forward, and a lot of activities that are starting to adopt
> IMSC1 are going on, both in standards bodies and in commercial silos.
> It’s a process, but we are not even close to a common format, that’s for
> sure.”
>
> Broadcast, of course, is not the only content delivery area where
> assistive technology is required, nor are captions the only area where
> there have been interesting developments in this category. In the world
> of digital cinema, for instance, captions are a relatively stable topic.
> DCI distributions now include closed-caption standards built around an
> Ethernet-based synchronization protocol, an associated resource
> presentation list, and a content essence format that permits content
> creators to distribute DCI versions of their movies with up to six
> languages of interoperable closed captions associated with them. The
> industry also has a standardized protocol for how digital cinema servers
> talk to captioning devices, as well as well-established standards for
> descriptive audio that are carried in DCI packages.
> Further, as Dolan points out, the Interoperable Master Format (IMF) has
> “already embraced IMSC1,” so new studio movies will typically be
> mastered to be optimized for streaming platforms going forward.
>
> At the same time, manufacturers have been making interesting strides in
> making such assistive technologies practical in the cinema space. When
> it comes to descriptive audio—that is, a separate audio track designed
> to describe or narrate what is happening in the picture to assist
> visually impaired viewers—hardware manufacturers have been offering a
> variety of solutions in recent years. For cinema applications, companies
> like Dolby, Sony, and USL, among others, offer a range of technologies
> that provide closed captions to individual consumers on small personal
> devices, or audio signals through small, wireless RF receivers attached
> to standard headphones worn by impaired moviegoers.
>
> And for home viewers, “the methods of carrying descriptive audio have
> been mature for some time,” says Sripal Mehta, principal architect,
> broadcast, for Dolby Laboratories and co-designer, along with Harold
> Hallikainen, of the digital cinema closed-caption communication protocol
> standard described above. “In some cases, a separate audio program with
> descriptive video mixed in is sent as an alternate sound program to the
> main audio program. The issue with this is that, in many cases, the main
> program audio is stereo or 5.1, while the descriptive video track may
> only be mono or stereo. Another method is to send a separate descriptive
> video track, which would be mixed, at playback time, with the main
> audio. The benefit of this approach is that the visually impaired viewer
> gets the full surround experience, as opposed to a compromised stereo or
> mono experience.
> The Dolby encoding/decoding system takes care of ‘ducking,’ or reducing
> the volume of the main audio track when the descriptive video track
> dialogue is presented.”
>
> Mehta adds that descriptive audio has become “a standard part of
> [Dolby’s] offerings, and is being adopted by our consumer electronics
> partners, as well as broadcasters,” and he suggests this trend is
> proliferating across the industry. And that is not the only evolution in
> the assistive technology space in the broadcast world. He adds that
> another paradigm shift is the move of descriptive audio tracks into the
> element-based, or object-based, audio delivery world.
>
> “With object-based audio, music and effects, dialogue, and descriptive
> video are sent as separate elements and are mixed together at playback
> time,” Mehta says. “This method delivers a premium experience to each
> listener of every need, provides the ability to adjust dialogue level
> for increased intelligibility, and reduces the overall bit rate for
> different experiences.”
>
> And related to the notion of “increased intelligibility” is the growing
> push toward what Mehta calls “dialogue enhancement” as another
> application to assist hearing-impaired consumers.
>
> “That’s the ability to pick out dialogue from the ambience of the
> content,” he says. “Next-generation audio codecs, including Dolby AC-4,
> support dialogue enhancement, which involves advanced signal processing
> to improve the audibility and intelligibility of dialogue for both
> pre-mixed stereo and 5.1 audio programs, as well as object-based audio.
> Dialogue enhancement is a valuable feature for those who are
> hard-of-hearing.”
>
> *News Briefs*
>
> *ITU OK's Immersive Audio Standard*
>
> As reported by *TV Technology*, the ITU recently announced approval of
> Recommendation ITU-R BS.2088-0, essentially an open audio standard
> designed to make immersive broadcast sound experiences feasible in
> combination with ultra-high-definition TV (UHDTV) pictures. The
> recommendation is based on the existing Resource Interchange File Format
> (RIFF) and WAVE audio formats, and codifies standards that will allow
> single files to carry entire audio programs and metadata for all
> combinations of channel-based, object-based, and scene-based audio
> available for those programs. When implemented for users who have the
> right technology in their homes, the idea is to permit them “to adjust
> the level of immersive audio” on UHD programs, according to the article.
>
> *Where are the 4K HDMI Switchers?*
>
> A recent column by Rodolfo La Maestra on the HDTV Magazine site takes a
> look at one of the understated problems with the ongoing transition to
> 4K broadcasting: a lack of all the associated components that consumers
> with sophisticated home theaters might need to make efficient 4K viewing
> worth the trouble to begin with.
> In particular, with the arrival of 4K video players, UHD Blu-ray players
> on the horizon, and more, he suggests that manufacturers have not kept
> pace in providing a basic element that home theaters with multiple
> components will need in 4K scenarios: 4K HDMI switchers. “The market
> offered 4K TVs for the past three years and 4K players for at least a
> year, but the industry did not react quickly enough regarding 4K HDMI
> switchers that can comply with their requirements,” La Maestra writes.
> He suggests the industry needs to find a solution, considering that most
> current 4K consumer displays have only one input capable of 4K HDMI 2.0
> that is HDCP 2.2 compliant, while “there will soon be more 4K sources to
> connect to the display, so the need for capable AVRs and HDMI switchers
> to consolidate those connections will soon grow.” In his article, La
> Maestra also published reactions to this concern from several switcher
> manufacturers he spoke to earlier this year at the InfoComm 2015 trade
> show.
>
> *Remote DVR Progress*
>
> Recent cable industry news headlines included a report that progress is
> apparently being made on making the concept of the remote, or cloud, DVR
> a reality. Industry site FierceCable recently covered news that Charter
> Communications was making plans with technology partner Cisco to conduct
> a remote DVR trial for IP video to the home, as well as conducting
> experiments to enable remote content distribution through IP in the
> home. These plans were disclosed in a recent filing Cisco made with the
> FCC, according to the report, which added that Charter and Cisco were
> shortly to begin field trials. The idea of remote DVR technology is to
> permit users to record TV shows and store the recordings on a
> cloud-based server, rather than on an at-home set-top box.
> Conceptually, this would reduce the cost or need for certain types of
> set-top boxes, and allow users to access recordings from different
> devices and locations. The report adds that Comcast and Cablevision are
> also working on similar technologies.
Received on Wednesday, 4 November 2015 20:31:30 UTC