W3C home > Mailing lists > Public > public-html-a11y@w3.org > February 2010

Re: timing model of the media resource in HTML5

From: Eric Carlson <eric.carlson@apple.com>
Date: Mon, 1 Feb 2010 10:54:16 -0800
Cc: Silvia Pfeiffer <silviapfeiffer1@gmail.com>, HTML Accessibility Task Force <public-html-a11y@w3.org>, Ken Harrenstien <klh@google.com>
Message-Id: <603B8F1A-B372-4D66-89BF-2B1C8E7D9D21@apple.com>
To: Philip Jägenstedt <philipj@opera.com>

On Feb 1, 2010, at 9:06 AM, Philip Jägenstedt wrote:

> On Mon, 01 Feb 2010 13:19:59 +0100, Silvia Pfeiffer <silviapfeiffer1@gmail.com> wrote:
> 
>> On Fri, Jan 29, 2010 at 12:39 AM, Philip Jägenstedt <philipj@opera.com> wrote:
>>> 
>>>> Incidentally, we do need to develop the javascript API for exposing
>>>> the video's tracks no matter whether we do it in declarative syntax or
>>>> not. Here's a start at a proposal for this (obviously inspired by the
>>>> markup):
>>>> 
>>>> video.numberTracks(); -> returns number of available tracks
>>>> video.firstTrack(); -> returns first track ("first" to be defined -
>>>> e.g. there is no inherent order in Ogg)
>>>> video.lastTrack(); -> returns last track ("last" to be defined)
>>>> track.next(); -> returns next track in list
>>>> track has the following attributes: type, ref, lang, role, media
>>>> (and the usual contenders, e.g. id, style)
>>> 
>>> Yes, we need something like this.
>> 
>> OK, so if we cannot right now agree to have actual declarative syntax
>> for it, could we for the moment focus on developing that API? While
>> implementing this API, we will at least find out its flaws and we will
>> also be able to exactly measure how much time and bandwidth is used in
>> comparison to having declarative syntax provide this information.
> 
> Yes, let's do that. It's worth taking a look at http://www.w3.org/TR/mediaont-api-1.0/#webidl-for-api, but it crucially lacks the single most important thing we need -- a way to distinguish between different tracks.
> 
  A track has metadata that may be distinct from the movie that contains it. For example, a movie may have alternate language tracks of any type, a visual track's width and height are not necessarily the same as the movie's dimensions, etc. If we expose the Media Ontology API on the Track object as well as on the media element, I think it provides exactly what we need to distinguish between different tracks, because it will allow a script to choose between tracks based on whatever characteristic is most important to the job at hand.
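  To make that concrete, here is a minimal sketch of the kind of script-side selection this would enable. It assumes plain objects standing in for Track instances that expose per-track metadata (type, lang, role) as proposed earlier in the thread; the `selectTrack` helper is hypothetical, not any shipping API:

```javascript
// Hypothetical helper: pick the first track whose metadata matches
// every property in `wanted`. Track objects are assumed to expose
// per-track attributes (type, lang, role) directly, per the proposal.
function selectTrack(tracks, wanted) {
  return tracks.find(function (track) {
    return Object.keys(wanted).every(function (key) {
      return track[key] === wanted[key];
    });
  }) || null;
}

// Plain objects standing in for Track instances:
var tracks = [
  { type: "video", lang: "en", role: "main" },
  { type: "text",  lang: "en", role: "caption" },
  { type: "text",  lang: "de", role: "subtitle" }
];

// A script can choose by whatever characteristic matters to it:
var captions = selectTrack(tracks, { type: "text", role: "caption" });
```

  The point is only that once each Track carries its own metadata, selection logic like this becomes trivial to write in script, whatever the final attribute names turn out to be.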


> 
>> Also: Are subtitles all you can agree with? They are not really an
>> accessibility mechanism, but rather an internationalisation mechanism,
>> that can be covered in the same way as captions. So, could I suggest
>> adding at least the role of "caption"?
> 
> I'm fine with any/all of the roles, as long as it's text. I don't know what a user agent should do with it though, if anything.
> 
> [snip]
> 
>> How would you suggest to solve the problems of in-stream text tracks
>> and those of audio description sound files and sign language videos?
> 
> For audio/video tracks, by exposing them in the browser context menus and providing the DOM APIs to make it possible to do the same with scripted controls.
> 
> For text it's basically the same except we also need to figure out how to render it and how it interacts with CSS (if at all). Because this is a bit messy, I'm much more interested in sorting out how to handle external subtitles right now.
> 
  We also need to define how external captions and subtitles interact with internal tracks. For example, which is chosen if there are both an internal track and an external file with the same characteristics?
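  One way to frame that open question: whatever precedence rule the spec picks, it amounts to a merge over the two track lists keyed on shared characteristics. The sketch below assumes, purely for illustration, that an external file wins over an in-band track with identical type/lang/role; the `mergeTracks` function and the precedence rule itself are hypothetical, which is exactly the point Eric is raising:

```javascript
// Hypothetical sketch of one possible precedence rule: an external
// track shadows an in-band track with the same (type, lang, role).
// The actual rule is the open spec question under discussion.
function mergeTracks(internal, external) {
  function key(t) { return t.type + "|" + t.lang + "|" + t.role; }
  var seen = {};
  var result = [];
  // Listing external tracks first makes them win on key collisions.
  external.concat(internal).forEach(function (t) {
    var k = key(t);
    if (!seen[k]) {
      seen[k] = true;
      result.push(t);
    }
  });
  return result;
}

var internal = [{ type: "text", lang: "en", role: "caption", src: "in-band" }];
var external = [{ type: "text", lang: "en", role: "caption", src: "file.srt" }];
var merged = mergeTracks(internal, external);
```

  An equally defensible rule would let the in-band track win, or surface both and leave the choice to the user; the spec needs to pick one.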

eric
Received on Monday, 1 February 2010 18:54:53 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 27 April 2012 04:42:01 GMT