Re: [media] progress on multitrack api - issue-152

Some further insights from today's media subgroup meeting inline.

On Mon, Apr 18, 2011 at 9:46 PM, Silvia Pfeiffer
<silviapfeiffer1@gmail.com> wrote:
> On Mon, Apr 18, 2011 at 6:59 PM, Philip Jägenstedt <philipj@opera.com> wrote:
>> On Sun, 17 Apr 2011 16:05:13 +0200, Silvia Pfeiffer
>> <silviapfeiffer1@gmail.com> wrote:
>>
>>> (2) interface on TrackList:
>>>
>>> The current interface of TrackList is:
>>>  readonly attribute unsigned long length;
>>>  DOMString getName(in unsigned long index);
>>>  DOMString getLanguage(in unsigned long index);
>>>           attribute Function onchange;
>>>
>>> The proposal is that in addition to exposing name and language
>>> attributes - in analogy to TextTrack it should also expose a label and
>>> a kind.
>>>
>>> The label is necessary to include the track into menus for track
>>> activation/deactivation.
>>> The kind is necessary to classify the track correctly in menus, e.g.
>>> as sign language, audio description, or even a transparent caption
>>> track.
>>
>> Maybe the spec changed since you wrote this, because currently it has
>> getLabel and getLanguage.
>
> Hmm... it looks like getName was renamed to getLabel - that's cool.
> But we still need getKind() and maybe then getId() or getName().


As it turns out, getId() is required to discover a track's uniquely
identifying name, from which we can construct a track media fragment
URI.

The issue here is that a Web page author sometimes does not know which
tracks are available in-band in a loaded multitrack media resource.
They therefore need script to discover the tracks and their
functionality. For example, on discovering a sign language track, they
would want to create a slave video element whose media fragment URI
points at that sign language track. The unique identifier of that
track is given through an ID, which therefore needs to be
discoverable.

Further, getKind() is necessary to identify the functionality of the
track, e.g. to distinguish between a sign language track and a
different camera angle, or to distinguish between an audio description
track and dubbed audio tracks.
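To make the use case concrete, here is a sketch of how an author might
use the proposed additions. Note that getId() and getKind() are only
proposals at this point, the kind value 'sign' and the TrackList shape
follow the draft discussed above, and the helper names are purely
illustrative:

```javascript
// Build a track media fragment URI from a resource URL and a track id
// (per the Media Fragments URI track dimension, e.g. "#track=sign1").
function trackFragmentURI(src, trackId) {
  return src + '#track=' + encodeURIComponent(trackId);
}

// Walk the in-band tracks of a <video> and, when a sign language track
// is discovered, create a slave video element pointing at it.
function attachSignLanguageTrack(video) {
  var tracks = video.tracks; // TrackList as drafted at the time
  for (var i = 0; i < tracks.length; i++) {
    if (tracks.getKind(i) === 'sign') {                  // proposed getKind()
      var slave = document.createElement('video');
      slave.src = trackFragmentURI(video.currentSrc,
                                   tracks.getId(i));     // proposed getId()
      video.parentNode.appendChild(slave);
      return slave;
    }
  }
  return null; // no sign language track in this resource
}
```

Without getId() there is no way to fill in the track name in the
fragment URI, and without getKind() there is no way to pick the right
track to begin with.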


>>> (4) autoplay should be possible on combined multitrack:
>>>
>>> Similar to looping, autoplay could also be defined on a combined
>>> multitrack resource as the union of all the autoplay settings of all
>>> the slaves: if one of them is on autoplay, the whole combined resource
>>> is.
>>
>> I have no strong opinion, but we should have consistency such that changing
>> the paused attribute (e.g. by calling play()) has the exact same effect.
>
> Yes, it should be the same as calling pause() once the metadata is
> loaded on all resources.
>
>> It's not clear to me what the spec thinks should happen when play() is
>> called on a media element with a controller.
>
> I thought it meant that a play() call is dispatched to all the slave
> media elements. However, that is not currently specified I think, so
> might be a good addition, too.


In today's call, we came up with an additional, related issue:

If script adds an already-playing media element to a media controller
that is not yet playing, which one wins? Will the new combined
resource be playing or will it be paused?

If the answer is paused, then the same could apply to autoplay: the
element that creates the controller defines the autoplay state of the
controller - elements added later cannot override that, and their
autoplay attributes are ignored.
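A minimal model of that proposed rule, purely as pseudocode over plain
objects (nothing here is specified behaviour, and the helper names are
made up for illustration):

```javascript
// The element that creates the controller fixes the controller's
// initial paused state; under the proposed rule, nothing added later
// can change it.
function createController(creatingElement) {
  return {
    paused: !creatingElement.autoplay, // creator decides the initial state
    slaves: [creatingElement]
  };
}

function addToController(controller, element) {
  // The added element's autoplay (and current playing state) is
  // ignored: the combined resource keeps the controller's paused state.
  controller.slaves.push(element);
  return controller;
}
```

So adding an element with autoplay set to a paused controller would
leave the combined resource paused, which is the "paused wins" answer
to the question above.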


Cheers,
Silvia.

Received on Tuesday, 19 April 2011 02:56:55 UTC