RE: Implementing Assistive Technologies

Unfortunately the requirements of captioning and the requirements of subtitling are frequently conflated.
The references to IMSC within the article draw subtitling into its scope…

The naïve reader (who, in the context of accessibility and subtitling, is unfortunately in the majority) could easily assume that all the comments made in this article are valid for both captioning and subtitling.
Indeed the majority of points made are valid for both, but unfortunately the comments about colour are not.

Best regards,
John

John Birch | Strategic Partnerships Manager | Screen
Main Line : +44 1473 831700 | Ext : 2208 | Direct Dial : +44 1473 834532
Mobile : +44 7919 558380 | Fax : +44 1473 830078
John.Birch@screensystems.tv | www.screensystems.tv | https://twitter.com/screensystems


Visit us at
BVE, London Excel, 23-25 February 2016, stand C20


From: Michael Dolan [mailto:mdolan@newtbt.com]
Sent: 04 November 2015 16:52
To: public-tt@w3.org
Subject: RE: Implementing Assistive Technologies

The article is *only* about accessibility. I stand by that statement, and apparently you do too. Subtitle requirements are irrelevant to the article.

                Mike

From: John Birch [mailto:John.Birch@screensystems.tv]
Sent: Wednesday, November 4, 2015 1:23 AM
To: David Ronca <dronca@netflix.com>; public-tt@w3.org
Subject: RE: Implementing Assistive Technologies

And I would debate this statement….

“and there does not seem to be any requirement from the authoring community for a broader set of colors than what is available today”

In the context of captions (for the hard of hearing) this may be true… and it is qualified by this later statement:
“When it comes to captions, I’m not aware of a requirement where you would need or want to make two subtle shades of red, for instance.”

However, in the context of subtitles (for translation), which IMSC specifically addresses, there is a very real need for a wide range of colours. In the broadcast industry, translation subtitling is numerically a far larger application than captioning (by an order of magnitude). The variation is not between subtitle texts within a single program, but it is certainly there from channel to channel. The colour hue chosen for subtitle text (and the font) is often (rightly or wrongly) influenced by ‘brand’ or genre considerations at the channel, and may in some cases be program specific. Different broadcasters certainly use different shades of colour.
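
Purely as an illustration of that point, here is a minimal TTML/IMSC-style fragment; the style name and hex value are hypothetical, invented for this sketch, to show how a channel might express its own subtitle hue rather than pick from a small fixed palette:

<tt xmlns="http://www.w3.org/ns/ttml"
    xmlns:tts="http://www.w3.org/ns/ttml#styling" xml:lang="fr">
  <head>
    <styling>
      <!-- hypothetical house style: an amber tied to this channel's brand -->
      <style xml:id="houseStyle" tts:color="#E8B44C" tts:fontFamily="proportionalSansSerif"/>
    </styling>
  </head>
  <body>
    <div>
      <p begin="00:00:05.000" end="00:00:07.500" style="houseStyle">Bonsoir à tous.</p>
    </div>
  </body>
</tt>

TTML’s tts:color accepts arbitrary #rrggbb(aa) values, which is exactly the flexibility that channel-branded translation subtitling relies on.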

Best regards,
John


From: David Ronca [mailto:dronca@netflix.com]
Sent: 04 November 2015 03:54
To: public-tt@w3.org
Subject: Re: Implementing Assistive Technologies

"And for another thing, according to Dolan, major commercial content streaming services like Netflix, Amazon Prime, and others were well into development of their own proprietary processes"

I have to take issue with this statement. I can't speak for Amazon and "others", but we built our subtitling on TTML from day one and have evolved from a very simple model to full 608 support. Today, we have 100% catalog coverage and are producing assets in 20 languages.

We are at the forefront on TTML2, as we have 5 (yes, 5) full or partial implementations in flight, including two rendering engines. Our Japanese subtitle work was done in TTML2. I had planned to discuss our work in Sapporo but was unable to make the trip.

David

On Tue, Nov 3, 2015 at 4:26 PM, Glenn Adams <glenn@skynav.com> wrote:
FYI. Nice write-up that includes some coverage on IMSC and WebVTT.

---------- Forwarded message ----------
From: SMPTE Newswatch <communications@smpte.org>
Date: Wed, Nov 4, 2015 at 1:28 AM
Subject: Implementing Assistive Technologies
To: Glenn Adams <glenn@skynav.com>






SMPTE Newswatch











Table of Contents

Implementing Assistive Technologies
ITU OK's Immersive Audio Standard
Where are the 4K HDMI Switchers?
Remote DVR Progress



November 2015 #1












Hot Button Discussion
Implementing Assistive Technologies
By Michael Goldman





Since SMPTE Newswatch last examined the topic of closed captioning and other accessibility technologies a couple of years ago<http://smpte.us9.list-manage.com/track/click?u=afdd4606a7ec4be507008b977&id=22f4b8d91f&e=84f2972d36>, not much has changed in terms of governmental regulatory requirements on broadcasters to widen access to modern communication technologies. Indeed, the only major recent action taken by the FCC regarding accessibility related to the expansion of rules regarding how to get critical emergency information to consumers with visual impairments<http://smpte.us9.list-manage.com/track/click?u=afdd4606a7ec4be507008b977&id=75e26f93d1&e=84f2972d36> by making that information accessible on their so-called “second screen” personal assistive devices. However, since the Twenty-First Century Communications and Video Accessibility Act of 2010<http://smpte.us9.list-manage.com/track/click?u=afdd4606a7ec4be507008b977&id=dc2788fa15&e=84f2972d36> was passed, the media industry has steadfastly been seeking ways to make captioning, video description, and other enhancements more consistently available with their content across all platforms. In fact, the action in this space right now appears to be focused mainly around how to most efficiently implement the FCC’s requirements across an industry that “broadcasts” content just about everywhere, to everyone, using both traditional and non-traditional methods, and delivery and viewing systems.

As discussed previously in Newswatch, the traditional television broadcast industry has remained stable and efficient in terms of providing closed captions by adhering to the established captioning standards, CEA-608, and its digital television descendant mandated by the FCC, CEA-708. Methodology-wise, television broadcasters continue to author captions in the CEA-608 format, and put them through a transcoding process to convert them into the 708 format as the final step in the broadcast chain. This methodology is used because 708 has never been “natively” adopted by the caption authoring industry as a wholesale replacement, since most archival content, hardware, and software infrastructure remains based on 608.












It is, however, “an interesting question” how changes in broadcast television picture creation, transmission, processing, and viewing due to the industry’s ongoing ultra-high-definition (UHD) transition could impact captions for broadcast content, including the integration of broadband delivery, suggests Michael Dolan, founder of the Television Broadcast Technology Consulting Group, chairman of the ATSC Technology and Standards Group 1<http://smpte.us9.list-manage2.com/track/click?u=afdd4606a7ec4be507008b977&id=1800c3f19b&e=84f2972d36>, chair of SMPTE Working Group 24-TB, and a SMPTE Fellow. But Dolan suggests that this evolution to UHD and broadband delivery provides an opportunity to introduce new caption technology along the way.

“Caption systems today already support at least eight colors—some of them more—and there does not seem to be any requirement from the authoring community for a broader set of colors than what is available today, unlike video, where you are trying to provide very smooth transitions between shades of all the different colors, and a wider color gamut and higher bit depth make a remarkable difference to the viewing experience,” Dolan explains. “When it comes to captions, I’m not aware of a requirement where you would need or want to make two subtle shades of red, for instance. That simply wouldn’t serve the purpose of helping the hard-of-hearing person discriminate text for different speakers or sound effects. However, it would complicate the decoder mixing to have two color models in play, so as you move to higher dynamic range, wider color gamut in video, ultimately the captions have to be easily composited into the video plane. And that process can get a little more complicated when you are working with one color model for the video and another for the text. So one would expect enhancements to caption technology to facilitate this [in the future], even if more colors are not needed.”













Meanwhile, in the increasingly busy commercial content streaming space, the industry has been turning to the SMPTE Timed Text (SMPTE-TT) format<http://smpte.us9.list-manage.com/track/click?u=afdd4606a7ec4be507008b977&id=6769ed525a&e=84f2972d36> for broadband distribution of captions. Since the FCC formally declared SMPTE-TT as a so-called “safe harbor,” meaning commercial broadcasters who used it would be considered compliant with the law now and for the foreseeable future, the industry “has really taken that to heart, but they have had to examine on a technical level what that means exactly,” Dolan explains.

By that, Dolan means that after the FCC’s declaration that SMPTE-TT was the way to go, the industry had to get to work trying to find ways to coalesce around a common profile of SMPTE-TT as the standard choice for captioning commercial streaming video content. This is an important step since, until recently, captioning had existed across the Web pretty much in a hodge-podge of formats and systems. In this regard, getting both commercial and Web content to converge around a common profile remains a work in progress, Dolan suggests.

“Some time ago, the UltraViolet industry forum created a profile of SMPTE Timed Text, because it is a rather large set of technologies, not all of which are needed to do a good job on captions and movie subtitles specifically,” he says. “That profile did a good job for captions, and it formed the basis of a new initiative by the W3C [World Wide Web Consortium] with the profile known as IMSC1 [Internet Media Subtitles and Captions 1.0]<http://smpte.us9.list-manage.com/track/click?u=afdd4606a7ec4be507008b977&id=b356a0d869&e=84f2972d36>. That is now close to publication, and more and more folks are looking at adopting it as the profile for the safe harbor version of SMPTE Timed Text. Right now, there are reference implementations underway.











“There are a number of commercial media delivery silos on the Internet that are using some profile of [SMPTE Timed Text] already, but most of them do not disclose what they are doing exactly, so it is a little difficult to talk about who is adopting it and who isn’t, other than to say that many programmers who deliver content to tablets and other ‘second-screen’ devices are using a version of it when they deliver their content.”
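
As a rough sketch of what such a profile looks like in practice (a hypothetical minimal document, not taken from any particular service), an IMSC1 text-profile file is simply a constrained TTML document that declares the profile it targets via the IMSC1 text-profile designator:

<tt xmlns="http://www.w3.org/ns/ttml"
    xmlns:ttp="http://www.w3.org/ns/ttml#parameter"
    ttp:profile="http://www.w3.org/ns/ttml/profile/imsc1/text"
    xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:03.000">Captions and subtitles share this same core syntax.</p>
    </div>
  </body>
</tt>

Timing, text, and styling all travel in one XML document, which is what makes a common profile practical to validate and interchange.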

However, Dolan quickly adds that the volume of programmers and content, the rapidly evolving nature of the Internet, and the effort it typically takes to roll out a new technology or standard even under the best of circumstances mean it will take a long time to coalesce broadcasters around a common caption-formatting profile such as IMSC1. For one thing, some software developers and Web browser companies have gravitated toward another option, WebVTT<http://smpte.us9.list-manage.com/track/click?u=afdd4606a7ec4be507008b977&id=4146100aca&e=84f2972d36>. That format uses a simpler, cue-based markup that grew out of the SubRip (SRT) subtitle format, and it has become popular for captioning some types of Web-based videos.
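
For comparison (the timings and cue text below are invented), a WebVTT file expresses the same kind of timed text as plain cues rather than as an XML document:

WEBVTT

00:00:01.000 --> 00:00:03.000
Captions and subtitles share this same core syntax.

00:00:05.000 --> 00:00:07.500
- Bonsoir à tous.

That simpler syntax is much of WebVTT's appeal to browser and Web-player developers, while TTML-based profiles such as IMSC1 are designed to carry styling and layout within the document itself.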

And for another thing, according to Dolan, major commercial content streaming services like Netflix, Amazon Prime, and others were well into development of their own proprietary processes before the industry got around to pushing toward standardizing commercial media delivery on the Web.

“They are still converting not only video and audio, but also captions to whatever they have already designed for their silos, and much of that pre-dates a lot of the work over the last few years with respect to captions, certainly,” he says. “Some of them are moving in the direction [of SMPTE Timed Text] and some aren’t—it’s really on a case-by-case basis.

“So a lot of progress has been made. But has everyone converted to a single format or fully deployed IMSC1? No. But there has been a lot of work put forward, and a lot of activities are going on that are starting to adopt IMSC1, both in standards bodies and in commercial silos. It’s a process, but we are not even close to a common format, that’s for sure.”












Broadcast, of course, is not the only content delivery area where assistive technology is required, nor are captions the only area where there have been interesting developments in this category. In the world of digital cinema, for instance, captions are a relatively stable topic. DCI distributions now include closed-caption standards built around an Ethernet-based synchronization protocol, an associated resource presentation list, and a content essence format that permits content creators to distribute DCI versions of their movies with up to six languages of interoperable closed captions. The industry also has a standardized protocol for how digital cinema servers talk to captioning devices, as well as well-established standards for descriptive audio carried in DCI packages<http://smpte.us9.list-manage.com/track/click?u=afdd4606a7ec4be507008b977&id=b6b8b13cc3&e=84f2972d36>. Further, as Dolan points out, the Interoperable Master Format (IMF) has “already embraced IMSC1,” so new studio movies will typically be mastered in a form already optimized for streaming platforms going forward.

At the same time, manufacturers have been making interesting strides regarding how to make such assistive technologies practical in the cinema space. When it comes to the issue of descriptive audio—that is, a separate audio track designed to describe or narrate what is happening in the picture to assist visually impaired viewers—hardware manufacturers have been offering a variety of solutions in recent years. For cinema applications, companies like Dolby, Sony, and USL, among others, are offering a range of technologies that provide closed captions to individual consumers on small personal devices, or audio signals through small, wireless RF receivers attached to standard headphones worn by impaired moviegoers.

And for home viewers, “the methods of carrying descriptive audio have been mature for some time,” says Sripal Mehta, principal architect, broadcast, for Dolby Laboratories and co-designer, along with Harold Hallikainen, of the digital cinema closed caption communication protocol standard described above. “In some cases, a separate audio program with descriptive video mixed in is sent as an alternate sound program to the main audio program. The issue with this is that, in many cases, the main program audio is stereo or 5.1, while the descriptive video track may only be mono or stereo. Another method is to send a separate descriptive video track, which would be mixed, at playback time, with the main video. The benefit of this approach is that the visually impaired viewer gets the full surround experience, as opposed to a compromised stereo or mono experience. The Dolby encoding/decoding system takes care of ‘ducking,’ or reducing the volume of the main audio track when the descriptive video track dialogue is presented.”











Mehta adds that descriptive audio has become “a standard part of [Dolby’s] offerings, and is being adopted by our consumer electronics partners, as well as broadcasters,” and he suggests this trend is proliferating across the industry. And that’s not the only evolution in the assistive technology space in the broadcast world. He adds that another paradigm shift is the move of descriptive audio tracks into the element-based, or object-based, audio delivery world.

“With object-based audio, music and effects, dialogue, and descriptive video are sent as separate elements, and are mixed together at playback time,” Mehta says. “This method delivers a premium experience to each listener of every need, provides the ability to adjust dialogue level for increased intelligibility, and reduces the overall bit rate for different experiences.”

And related to the notion of “increased intelligibility” is the growing push toward what Mehta calls “dialogue enhancement” as another application to assist hearing-impaired consumers.

“That’s the ability to pick out dialogue from the ambience of the content,” he says. “Next generation audio codecs, including Dolby AC-4<http://smpte.us9.list-manage.com/track/click?u=afdd4606a7ec4be507008b977&id=1e9b565fca&e=84f2972d36>, support dialogue enhancement, which involves advanced signal processing to improve the audibility and intelligibility of dialogue for both pre-mixed stereo and 5.1 audio programs, as well as object-based audio. Dialogue enhancement is a valuable feature for those who are hard-of-hearing.”











News Briefs

ITU OK's Immersive Audio Standard
As reported recently by TV Technology<http://smpte.us9.list-manage.com/track/click?u=afdd4606a7ec4be507008b977&id=7597e2f6ca&e=84f2972d36>, the ITU recently announced approval of Recommendation ITU-R BS.2088-0, which essentially is an open audio standard designed to make feasible immersive broadcast sound experiences in combination with ultra-high-definition TV (UHDTV) pictures. The recommendation, which you can read here<http://smpte.us9.list-manage1.com/track/click?u=afdd4606a7ec4be507008b977&id=7978f85fb3&e=84f2972d36>, is based on existing Resource Interchange File Format (RIFF) and WAVE audio formats, and codifies standards that will allow single files to carry entire audio programs and metadata for all combinations of channel-based, object-based, and scene-based audio available for those programs. When implemented for users who have the right technology in their homes, the idea is to permit them “to adjust the level of immersive audio” on UHD programs, according to the article.






Where are the 4K HDMI Switchers?
A recent column by Rodolfo La Maestra on the HDTV Magazine site<http://smpte.us9.list-manage.com/track/click?u=afdd4606a7ec4be507008b977&id=33be7eea7a&e=84f2972d36> takes a look at one of the understated problems with the ongoing transition to 4K broadcasting: a lack of all the associated components that consumers with sophisticated home theaters might need to make efficient 4K viewing worth the trouble to begin with. In particular, with 4K video players now arriving, UHD Blu-ray players on the horizon, and more, he suggests that manufacturers have not kept pace in providing a basic element that home theaters with multiple components will need in 4K scenarios: 4K HDMI switchers. “The market offered 4K TVs for the past three years and 4K players for at least a year, but the industry did not react quickly enough regarding 4K HDMI switchers that can comply with their requirements,” La Maestra writes. He suggests the industry needs to find a solution, considering that most current 4K consumer displays have only one HDMI 2.0 input capable of 4K that is HDCP 2.2 compliant, while “there will soon be more 4K sources to connect to the display, so the need for capable AVRs and HDMI switchers to consolidate those connections will soon grow.” In his article, La Maestra also published reactions to this concern from several switcher manufacturers whom he spoke to earlier this year at the InfoComm 2015 tradeshow.

Remote DVR Progress
Recent cable industry news headlines included a report that progress is apparently being made on making the concept of the remote or cloud DVR a reality. Industry site Fierce Cable recently covered news<http://smpte.us9.list-manage.com/track/click?u=afdd4606a7ec4be507008b977&id=a5adf93829&e=84f2972d36> that Charter Communications was making plans with technology partner Cisco to conduct a remote DVR trial for IP video to the home, as well as conducting experiments to enable remote content distribution through IP in the home. These plans were disclosed in a recent filing Cisco made with the FCC, according to the report, which added Charter and Cisco were shortly about to begin field trials. The idea of remote DVR technology is to permit users to record TV shows and store the recordings in a cloud-based server, rather than on an at-home, set-top box. Conceptually, this would reduce the cost or need for certain types of set-top boxes, and allow users to access recordings from different devices and locations. The report adds that Comcast and Cablevision are also working on similar technologies.

















Received on Wednesday, 4 November 2015 17:22:25 UTC