RE: [media] progress on multitrack api - issue-152

Actually, if you need to implement both a controller and media playback, then it's fairly likely you'd use one codebase for the common parts of both. This would just pass that design on into the JS API, so it actually removes a layer of complexity.

There would be only controller state, whether it is controlling one element or multiple. The combined state is the state of the controller and only the state of the controller; there is nothing else. You don't need to be concerned about the states of the individual elements, because they wouldn't have individual states (except for their network state, which I also suggested last week should be moved out into a separate and sharable object to handle tracks coming from the same network resource). Thus there is less to reconcile, and it's simpler all round.
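
To make that concrete, here is a minimal sketch of the pass-through idea (purely illustrative constructors, not the spec's API): the controller owns the one and only playback state, and an element's methods simply delegate to it.

// Illustrative sketch only - hypothetical names, not spec text.
function Controller() {
  this.paused = true;        // the single source of truth for play state
  this.slaves = [];          // media elements rendering this timeline
}
Controller.prototype.play = function () {
  this.paused = false;       // the shared timeline advances from here
};
function MediaElement() {
  this.controller = new Controller();   // always created, even for singletons
}
MediaElement.prototype.play = function () {
  this.controller.play();    // pure pass-through; no per-element play state
};

Slaving a second element is then just "b.controller = a.controller;", exactly as in the exchange below.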

-----Original Message-----
From: Silvia Pfeiffer [mailto:silviapfeiffer1@gmail.com] 
Sent: 19 April 2011 10:52
To: Sean Hayes
Cc: Philip Jägenstedt; public-html-a11y@w3.org
Subject: Re: [media] progress on multitrack api - issue-152

The suggested approach of always creating a controller and thus moving
the state from the individual element to its controller doesn't work
well for combinations. The problem is that you stop distinguishing
between the state of the individual element and the combined state.
So, you would again need a master controller that shows the combined
state of them all. This would just introduce another layer of
complexity for which I don't see a need.

I do understand your other points, though, and wonder what others think about them.

Cheers,
Silvia.

On Tue, Apr 19, 2011 at 7:35 PM, Sean Hayes <Sean.Hayes@microsoft.com> wrote:
> It occurs to me that what the author really needs is a 'getFragmentUrl()' function that returns a media fragment URL addressing the track in question. Although getId() apparently provides enough information to construct one, it would seem more robust, and possibly more secure, if the UA provided this functionality directly.
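>
> For illustration, assuming the getId() discussed further down and the media fragment 'track' dimension (the property and method names here are from the proposal, not settled spec):
>
> // assembled by hand from getId():
> var url = video.currentSrc + '#track=' + video.videoTracks.getId(i);
> // versus handed back ready-made by the UA:
> var url2 = video.videoTracks.getFragmentUrl(i);   // the proposed function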
>
> As to the second point,
> " If you add through script an already playing media element to a media controller that is not yet playing, which one wins "
>
> The point I was trying to make on the call is that, in my understanding of the model, this question is actually backwards. You add a controller to a media element, not the other way around.
> e.g.
>
> media2.controller = media1.controller;
>
> So if media1 is playing, then adding its controller to media2 will cause media2 to start following its timeline and thus play (unless, apparently, media2 is paused, which then blocks the controller - although I don't find that very intuitive). Conversely, if media1 is not playing, then its timeline will not be advancing, and so neither will media2's.
> Thus, similarly, after the assignment, 'autoplay' and 'loop' on media1 will have indirectly controlled the behavior of media2, regardless of what those attributes were on media2.
>
> I personally think that the controller mechanism has the potential to simplify the model if we were to restructure the chapter so that, rather than being a bolt-on afterthought, a controller is always created - even for singleton media groups - and define that the functionality currently defined on a media element is actually a pass-through to its controller. All the media functionality could then be defined in terms of controllers; there would be no need for an explicit constructor for a controller, and to slave two elements together the code above is all that would be needed.
>
> For example, consider:
> media3.play();                            // media3 playing
> media4.pause();                           // media4 not playing
> media4.controller = media3.controller;    // media3 and media4 now playing
> media4.pause();                           // media3 and media4 now paused
>
> In the controller model, line 4 above would be a pass-through to media4's controller, which now happens to be the controller created for media3; and so the group as a whole stops playing. That seems simple and intuitive to me.
>
> That would, however, require something of a rewrite of the media chapter, which may not be feasible before LC.
>
> -----Original Message-----
> From: public-html-a11y-request@w3.org [mailto:public-html-a11y-request@w3.org] On Behalf Of Silvia Pfeiffer
> Sent: 19 April 2011 03:56
> To: Philip Jägenstedt
> Cc: public-html-a11y@w3.org
> Subject: Re: [media] progress on multitrack api - issue-152
>
> Some further insights from today's media subgroup meeting inline.
>
> On Mon, Apr 18, 2011 at 9:46 PM, Silvia Pfeiffer
> <silviapfeiffer1@gmail.com> wrote:
>> On Mon, Apr 18, 2011 at 6:59 PM, Philip Jägenstedt <philipj@opera.com> wrote:
>>> On Sun, 17 Apr 2011 16:05:13 +0200, Silvia Pfeiffer
>>> <silviapfeiffer1@gmail.com> wrote:
>>>
>>>> (2) interface on TrackList:
>>>>
>>>> The current interface of TrackList is:
>>>>  readonly attribute unsigned long length;
>>>>  DOMString getName(in unsigned long index);
>>>>  DOMString getLanguage(in unsigned long index);
>>>>           attribute Function onchange;
>>>>
>>>> The proposal is that, in addition to exposing name and language
>>>> attributes, it should - in analogy to TextTrack - also expose a label
>>>> and a kind.
>>>>
>>>> The label is necessary to include the track into menus for track
>>>> activation/deactivation.
>>>> The kind is necessary to classify the track correctly in menus, e.g.
>>>> as sign language, audio description, or even a transparent caption
>>>> track.
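>>>>
>>>> As a rough sketch of the intended use (getLabel()/getKind() as proposed
>>>> here, not yet in the draft):
>>>>
>>>> // build one menu entry per audio track, labelled and classified:
>>>> for (var i = 0; i < video.audioTracks.length; i++) {
>>>>   var item = document.createElement('li');
>>>>   item.textContent = video.audioTracks.getLabel(i) +
>>>>                      ' (' + video.audioTracks.getKind(i) + ')';
>>>>   menu.appendChild(item);
>>>> }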
>>>
>>> Maybe the spec changed since you wrote this, because currently it has
>>> getLabel and getLanguage.
>>
>> Hmm... it looks like getName was renamed to getLabel - that's cool.
>> But we still need getKind(), and maybe also getId() or getName().
>
>
> As it actually turns out: getId() is required to discover the uniquely
> identifying name of a track, from which we can create a track media
> fragment URI.
>
> The issue here is that sometimes a Web page author does not actually
> know what tracks are available in-band in a loaded multitrack media
> resource. Thus, they need to use script to discover the tracks and
> their functionality. For example, when they discover a sign language
> track, they would want to create a slave video element with the media
> fragment URI to that sign language track. The unique identifier of
> that track is given through an ID and therefore needs to be
> discoverable.
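>
> Something along these lines, for example (getId()/getKind() as proposed;
> the 'sign' kind value is illustrative):
>
> // find an in-band sign language track and render it in a slave video:
> var tracks = video.videoTracks;
> for (var i = 0; i < tracks.length; i++) {
>   if (tracks.getKind(i) == 'sign') {
>     var slave = document.createElement('video');
>     slave.src = video.currentSrc + '#track=' + tracks.getId(i);
>     slave.controller = video.controller;   // slave it to the main timeline
>     document.body.appendChild(slave);
>   }
> }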
>
> Further, getKind() is necessary to identify the functionality of the
> track, e.g. to distinguish between a sign language track and a
> different camera angle, or to distinguish between an audio description
> track and dubbed audio tracks.
>
>
>>>> (4) autoplay should be possible on combined multitrack:
>>>>
>>>> Similar to looping, autoplay could also be defined on a combined
>>>> multitrack resource as the union of all the autoplay settings of all
>>>> the slaves: if one of them is on autoplay, the whole combined resource
>>>> is.
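>>>>
>>>> In effect, the union rule would be (a sketch; 'slaves' stands for a
>>>> hypothetical array of the group's elements):
>>>>
>>>> // the combined resource autoplays if any slaved element asks to:
>>>> var groupAutoplay = slaves.some(function (e) { return e.autoplay; });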
>>>
>>> I have no strong opinion, but we should have consistency such that changing
>>> the paused attribute (e.g. by calling play()) has the exact same effect.
>>
>> Yes, it should be the same as calling play() once the metadata is
>> loaded on all resources.
>>
>>> It's not clear to me what the spec thinks should happen when play() is
>>> called on a media element with a controller.
>>
>> I thought it meant that a play() call is dispatched to all the slave
>> media elements. However, I don't think that is currently specified, so
>> it might be a good addition, too.
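>>
>> Roughly, that dispatch reading would be (sketch only; 'slaves' is a
>> hypothetical array of the group's elements):
>>
>> function playGroup(slaves) {
>>   slaves.forEach(function (e) { e.play(); });   // fan play() out to each slave
>> }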
>
>
> In today's call, we came up with an additional and related issue:
>
> If you add through script an already playing media element to a media
> controller that is not yet playing, which one wins? Will the new
> combined resource be playing or will it be paused?
>
> If the answer is paused, then the same could apply to autoplay: the
> element that creates the controller defines the autoplay state of the
> controller - any added element cannot override that, and its autoplay
> attribute is ignored.
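>
> Concretely, under that "creating element wins" rule (a sketch of the
> proposed behavior, not current spec text):
>
> var main = document.createElement('video');
> var extra = document.createElement('video');
> main.autoplay = true;
> extra.autoplay = false;
> main.controller = new MediaController();   // main creates the controller,
>                                            // so main's autoplay state sticks
> extra.controller = main.controller;        // extra joins; its autoplay
>                                            // attribute is ignored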
>
>
> Cheers,
> Silvia.
>
>
>

Received on Tuesday, 19 April 2011 11:34:26 UTC