[encrypted-media] Formal objection: Future accessibility and EME

doctorow has just created a new issue for https://github.com/w3c/encrypted-media:

== Formal objection: Future accessibility and EME ==
EFF has repeatedly raised the issue of accessibility and EME on-list, on calls, and during the earlier covenant process. We reiterate these concerns, first published here (https://www.eff.org/deeplinks/2016/03/interoperability-and-w3c-defending-future-present), as a formal objection:

Media companies invest in accessible versions of their products; sometimes they're legally obliged to provide them. But people with disabilities have diverse access needs, and statutory requirements or centrally provided dispensations barely cover the possible ways that content, including video, could be made available to a wider audience. That's one of the reasons the W3C's other work on media standardization is so exciting. HTML5's unencrypted media extensions not only provide built-in accessibility features but also open the door to third-party programs that can transform, re-interpret, or mix original content to make it accessible to an audience that can't use the default presentation.

To give a few examples of what the future of HTML accessibility might include:

*   YouTube attempts to create closed captions on the fly using speech recognition. It's not always perfect, but it's getting better every day. A smart web browser displaying video could hand the audio over to be recognized locally, creating captions for content that doesn't have them. Add auto-translation on top, and your movie gets a global audience, unlimited by language barriers.
 
*   While we wait for better algorithms to improve captioning, many viewers take advantage of large volunteer subbing communities that create subtitles and captions independently of any rightsholder. Synchronizing such content with the original video is sometimes an exercise in frustration for the users of these subtitles. In the future, subbers could publish webpages with JavaScript that scans for audio and video cues in existing media and synchronizes their unofficial subtitles on the fly, much as the commentary-track producers at RiffTrax have had to build their own synchronization workarounds (a minimal alignment sketch follows this list).
 
*   Security researcher Dan Kaminsky has developed a method for transforming the color space of video, in real time, so that red-green colorblind viewers can tell reds and greens apart. His DanKam could be applied to HTML5 video to let colorblind viewers see a fuller range of color (a rough remapping sketch follows this list).

*   Roughly one in four thousand people has photosensitive epilepsy and relies on video passing "the Harding test," a method for determining whether footage contains flashing imagery that may trigger seizures. But the Harding test doesn't catch every trigger for every person with photosensitive epilepsy, and not every video source is checked against it. In the future, we can envisage a website that proactively runs flash and pattern identification on incoming video and warns users or skips dangerous content (a crude detector sketch follows this list).
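
To make the subtitle-synchronization idea concrete, here is a minimal JavaScript sketch of the sort of alignment script imagined above. It cross-correlates coarse loudness envelopes of two clips using the Web Audio API; the function names are illustrative, and it assumes script can fetch and decode both audio sources at all, which is exactly what encryption forecloses.

```js
// Minimal sketch: estimate the offset between a video's audio and the
// reference clip a subtitle file was timed against, by cross-correlating
// coarse loudness envelopes. Names are illustrative; assumes both audio
// sources can be fetched and decoded by script.

async function decodeToMono(url, ctx) {
  const data = await (await fetch(url)).arrayBuffer();
  const buf = await ctx.decodeAudioData(data);
  return buf.getChannelData(0); // first channel as a Float32Array
}

// Downsample to a loudness envelope so the correlation stays cheap.
function envelope(samples, hop = 1024) {
  const out = new Float32Array(Math.floor(samples.length / hop));
  for (let i = 0; i < out.length; i++) {
    let sum = 0;
    for (let j = 0; j < hop; j++) sum += Math.abs(samples[i * hop + j]);
    out[i] = sum / hop;
  }
  return out;
}

// Brute-force cross-correlation: the lag with the highest score is the
// best guess at where the (shorter) reference lines up in the video audio.
function bestLag(videoEnv, refEnv) {
  let best = 0, bestScore = -Infinity;
  for (let lag = 0; lag + refEnv.length <= videoEnv.length; lag++) {
    let score = 0;
    for (let i = 0; i < refEnv.length; i++) score += videoEnv[lag + i] * refEnv[i];
    if (score > bestScore) { bestScore = score; best = lag; }
  }
  return best;
}

async function estimateSubtitleOffset(videoAudioUrl, referenceUrl) {
  const ctx = new AudioContext(), hop = 1024;
  const [v, r] = await Promise.all([
    decodeToMono(videoAudioUrl, ctx),
    decodeToMono(referenceUrl, ctx),
  ]);
  const lag = bestLag(envelope(v, hop), envelope(r, hop));
  return (lag * hop) / ctx.sampleRate; // seconds to shift every cue by
}
```

A subtitle renderer could then shift every cue by the returned offset before displaying it.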

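In the same spirit, a rough sketch of client-side color remapping: each frame is drawn to a canvas and its hues are snapped to a coarse grid so that confusable reds and greens land in visibly different buckets. The quantization below is a toy stand-in, not Kaminsky's actual mapping, which is far more carefully tuned.

```js
// Minimal sketch: re-color video frames on a <canvas> so red/green
// distinctions become easier to see. The hue quantization is a toy
// stand-in for DanKam's real mapping; only the pipeline is the point.

function rgbToHsv(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  const max = Math.max(r, g, b), min = Math.min(r, g, b), d = max - min;
  let h = 0;
  if (d > 0) {
    if (max === r) h = ((g - b) / d) % 6;
    else if (max === g) h = (b - r) / d + 2;
    else h = (r - g) / d + 4;
    h = (h / 6 + 1) % 1; // normalize to [0, 1)
  }
  return [h, max === 0 ? 0 : d / max, max];
}

function hsvToRgb(h, s, v) {
  const f = (n) => {
    const k = (n + h * 6) % 6;
    return v - v * s * Math.max(0, Math.min(k, 4 - k, 1));
  };
  return [f(5), f(3), f(1)].map((x) => Math.round(x * 255));
}

function remapFrame(video, canvas) {
  const ctx = canvas.getContext('2d');
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const px = frame.data; // RGBA bytes
  for (let i = 0; i < px.length; i += 4) {
    const [h, s, v] = rgbToHsv(px[i], px[i + 1], px[i + 2]);
    // Snap hue to a coarse grid so confusable reds and greens
    // fall into clearly separated buckets.
    const [r, g, b] = hsvToRgb(Math.round(h * 6) / 6, s, v);
    px[i] = r; px[i + 1] = g; px[i + 2] = b;
  }
  ctx.putImageData(frame, 0, 0);
  requestAnimationFrame(() => remapFrame(video, canvas));
}

// Usage: size the canvas to the video, hide the video, show the canvas,
// then call remapFrame(videoElement, canvasElement) once playback starts.
```

Browsers already refuse getImageData on frames script isn't allowed to read (cross-origin video, for instance), and EME extends that opacity to all covered content, so nothing like this can be built on top of an encrypted stream.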

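Finally, a crude flash monitor in the spirit of, though far simpler than, the Harding test: it samples each frame's average luminance and warns when large bright/dark swings alternate more than three times within a second. A real compliance tool also weighs flash area, red saturation, and spatial patterns; this sketch shows only the core idea.

```js
// Minimal sketch: sample each frame's average luminance and warn when
// large bright/dark swings alternate more than three times per second.
// A real Harding-style checker also weighs flash area, red saturation
// and spatial patterns; this shows only the core idea.

function startFlashMonitor(video, onWarn) {
  const canvas = document.createElement('canvas');
  canvas.width = 64; canvas.height = 36; // tiny: we only need an average
  const ctx = canvas.getContext('2d', { willReadFrequently: true });
  const swings = []; // timestamps of large luminance reversals
  let prevLuma = null, prevDir = 0;

  function meanLuma() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const px = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
    let sum = 0;
    for (let i = 0; i < px.length; i += 4) {
      sum += 0.2126 * px[i] + 0.7152 * px[i + 1] + 0.0722 * px[i + 2];
    }
    return sum / (px.length / 4); // 0..255
  }

  function tick(now) {
    if (!video.paused && !video.ended) {
      const luma = meanLuma();
      if (prevLuma !== null) {
        const delta = luma - prevLuma;
        const dir = Math.sign(delta);
        // Count a flash when a big swing reverses direction.
        if (Math.abs(delta) > 25 && dir !== 0 && dir !== prevDir) {
          swings.push(now);
          prevDir = dir;
        }
      }
      prevLuma = luma;
      while (swings.length && now - swings[0] > 1000) swings.shift();
      if (swings.length > 3) onWarn(video.currentTime);
    }
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```

Calling startFlashMonitor(videoElement, t => videoElement.pause()) would pause playback near dangerous content; like the other sketches, it depends on script being allowed to read the frames it is protecting the viewer from.
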
Please view or discuss this issue at https://github.com/w3c/encrypted-media/issues/376 using your GitHub account

Received on Thursday, 23 March 2017 21:06:37 UTC