Re: [media] progress on multitrack api - issue-152

On Mon, Apr 18, 2011 at 6:59 PM, Philip Jägenstedt <philipj@opera.com> wrote:
> Thanks for compiling this, Silvia, I'll do my best to point out where I
> disagree and why.

Thanks for replying! I'll address the points where I have an answer
and leave the rest to others.

> On Sun, 17 Apr 2011 16:05:13 +0200, Silvia Pfeiffer
> <silviapfeiffer1@gmail.com> wrote:
>
>> (2) interface on TrackList:
>>
>> The current interface of TrackList is:
>>  readonly attribute unsigned long length;
>>  DOMString getName(in unsigned long index);
>>  DOMString getLanguage(in unsigned long index);
>>           attribute Function onchange;
>>
>> The proposal is that in addition to exposing name and language
>> attributes - in analogy to TextTrack it should also expose a label and
>> a kind.
>>
>> The label is necessary to include the track into menus for track
>> activation/deactivation.
>> The kind is necessary to classify the track correctly in menus, e.g.
>> as sign language, audio description, or even a transparent caption
>> track.
>
> Maybe the spec changed since you wrote this, because currently it has
> getLabel and getLanguage.

Hmm... it looks like getName was renamed to getLabel - that's cool.
But we still need getKind(), and perhaps also getId() or getName().
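As a sketch of what I mean (getKind() and getId() are my proposal here, not
current spec text), the extended interface could look like:

```webidl
interface TrackList {
  readonly attribute unsigned long length;
  DOMString getLabel(in unsigned long index);
  DOMString getLanguage(in unsigned long index);
  DOMString getKind(in unsigned long index);
  DOMString getId(in unsigned long index);
           attribute Function onchange;
};
```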


> What would the kind reflect? There's no attribute in the DOM for out-of-band
> tracks for this currently.

That was indeed another issue that I didn't enumerate because Ian
proposed to use data-* attributes for this.

The kind would be served from metadata that is stored within the media
resource. It could be "sign language", "audio description", "dubbing"
or a number of other things. The main point is that JavaScript can
identify what the purpose of that track actually is.
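For illustration, here is a minimal sketch of what a script could do with such
a kind value when building a track menu (the getKind()/getLabel() descriptors
and the kind strings are assumptions, not spec text):

```javascript
// Group track labels by their kind so a menu can list, e.g., all
// sign language tracks together. The plain descriptors stand in for
// what a script would read off a TrackList via getKind()/getLabel().
function groupTracksByKind(tracks) {
  var groups = {};
  tracks.forEach(function (track) {
    var kind = track.kind || 'unknown';
    (groups[kind] = groups[kind] || []).push(track.label);
  });
  return groups;
}

// Example usage with hypothetical descriptors:
var menu = groupTracksByKind([
  { kind: 'sign language', label: 'ASL' },
  { kind: 'audio description', label: 'English AD' }
]);
// menu['sign language'] is ['ASL']
```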


> Apart from this, I think that TrackList really should be a list of Track
> objects or similar, so that getName/getLanguage are pushed onto that object
> as .name and .language, just like on TextTrack.

I think that would be OK as another approach, though I wonder if there
was a reason for them being accessor functions rather than attributes -
maybe because they may change half-way through loading the resource?
I'm happy with either solution.
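Either way, a script can bridge the two styles today; a sketch of wrapping the
getter-style interface into attribute-style Track-like objects (hypothetical
helper, and note it snapshots the values at call time):

```javascript
// Turn a getter-style TrackList into an array of plain objects with
// .label and .language attributes, like the proposed Track objects.
// Caveat: this snapshots the values; if they can change half-way
// through loading, the snapshot goes stale - which would be one
// argument for keeping accessor functions.
function toTrackObjects(trackList) {
  var tracks = [];
  for (var i = 0; i < trackList.length; i++) {
    tracks.push({
      label: trackList.getLabel(i),
      language: trackList.getLanguage(i)
    });
  }
  return tracks;
}
```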


>> (4) autoplay should be possible on combined multitrack:
>>
>> Similar to looping, autoplay could also be defined on a combined
>> multitrack resource as the union of all the autoplay settings of all
>> the slaves: if one of them is on autoplay, the whole combined resource
>> is.
>
> I have no strong opinion, but we should have consistency such that changing
> the paused attribute (e.g. by calling play()) has the exact same effect.

Yes, it should be the same as calling play() once the metadata is
loaded on all resources.

> It's not clear to me what the spec thinks should happen when play() is
> called on a media element with a controller.

I thought it meant that a play() call is dispatched to all the slave
media elements. However, I don't think that is currently specified, so
it might be a good addition, too.
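In code, the behaviour I have in mind is roughly this (a sketch of what I'd
expect, not of spec text):

```javascript
// Expected behaviour sketch: play() on the controller is dispatched
// to every slave media element in the group.
function controllerPlay(slaves) {
  slaves.forEach(function (media) {
    media.play();
  });
}
```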


>> (5) more events should be available for combined multitrack:
>>
>> The following events should be available in the controller:
>>
>> * onloadedmetadata: is raised when all slave media elements have
>> reached at minimum a readyState of HAVE_METADATA
>>
>> * onloadeddata: is raised when all slave media elements have reached
>> at minimum a readyState of HAVE_CURRENT_DATA
>>
>> * canplaythrough: is raised when all slave media elements have reached
>> at minimum a readyState of HAVE_FUTURE_DATA
>
> This should be HAVE_ENOUGH_DATA. (HAVE_FUTURE_DATA is a useless state that
> indicates that two frames (including the current frame) are available. The
> associated event, canplay, doesn't mean that one can actually play in any
> meaningful sense.)

Oops, well spotted. Thanks.


>> * onended: is raised when all slave media elements are in ended state
>>
>> or said differently: these events are raised when the last slave in a
>> group reaches that state.
>>
>> These are convenience events that will for example help write combined
>> transport bars. It is easier to attach just a single event handler to
>> the controller than to attach one to each individual slave and make
>> sure they all fire. Also, they help to maintain the logic of when a
>> combined resource is loaded. Since these are very commonly used
>> events, their introduction makes sense.
>>
>> Alternatively or in addition, readyState could be added to the controller.
>
> I would like to see more discussion of the actual use cases. When would it
> be useful to know that all slave media elements have reached
> HAVE_CURRENT_DATA, for example? For HAVE_ENOUGH_DATA, the main use case
> seems like it would be to autoplay, but you also suggested adding an
> explicit autoplay.
>
> My reluctance comes from not-so-happy experience with readyState and the
> related events on HTMLMediaElement. Most of the states and events are
> borderline useless and it's also necessary to lie about them in order to
> avoid exposing race conditions to scripts. Without very compelling use
> cases, I'd prefer pushing the burden over on scripts, until we see what it's
> going to be used for.

Onloadedmetadata is a very, very important event. Any time I want to
do something in JavaScript with a video or audio resource, I have to
wait until it reaches that state before I can do anything sensible
with it, such as seek to a particular offset, pause the video and play
a pre-roll ad, or determine the displayed width and height.
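The typical single-element pattern today looks like this (HAVE_METADATA is 1
per the HTMLMediaElement spec); a controller-level loadedmetadata would let
the same pattern cover the whole group with one handler:

```javascript
// Seek once metadata is available; before HAVE_METADATA the duration
// and dimensions are unknown and seeking is not possible.
function seekWhenReady(video, time) {
  if (video.readyState >= 1 /* HAVE_METADATA */) {
    video.currentTime = time;
  } else {
    video.addEventListener('loadedmetadata', function () {
      video.currentTime = time;
    }, false);
  }
}
```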

Onloadeddata is needed before I can feed a frame into a canvas to grab
the pixels and analyse them.

Canplaythrough is useful for putting the video loading into the
background and displaying something else during that time, such as an
ad, and then fading to the poster.

Onended is important to do something once the video or audio resource
is finished playing, such as display related videos, or display a
post-roll ad.
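Without a controller-level event, every page has to do this bookkeeping
itself; e.g. for ended, a sketch of the per-slave wiring that a single
controller event would replace:

```javascript
// Fire a callback once every slave media element has ended.
// This is the counting that a controller-level 'ended' would spare us.
function whenAllEnded(slaves, callback) {
  var remaining = slaves.length;
  slaves.forEach(function (media) {
    media.addEventListener('ended', function () {
      if (--remaining === 0) callback();
    }, false);
  });
}
```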

I'm sure I can come up with more use cases if need be. ;-)


Cheers,
Silvia.

Received on Monday, 18 April 2011 11:47:36 UTC