- From: Mark Watson <watsonm@netflix.com>
- Date: Wed, 20 Apr 2011 19:27:44 -0700
- To: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
- CC: Ian Hickson <ian@hixie.ch>, HTML Accessibility Task Force <public-html-a11y@w3.org>
Sent from my iPhone

On Apr 20, 2011, at 7:01 PM, "Silvia Pfeiffer" <silviapfeiffer1@gmail.com> wrote:

> On Thu, Apr 21, 2011 at 9:01 AM, Ian Hickson <ian@hixie.ch> wrote:
>> On Mon, 18 Apr 2011, Silvia Pfeiffer wrote:
>>>
>>> (2) interface on TrackList:
>>>
>>> The current interface of TrackList is:
>>>   readonly attribute unsigned long length;
>>>   DOMString getName(in unsigned long index);
>>>   DOMString getLanguage(in unsigned long index);
>>>   attribute Function onchange;
>>>
>>> The proposal is that, in addition to exposing name and language
>>> attributes, it should - in analogy to TextTrack - also expose a label
>>> and a kind.
>>> The label is necessary to include the track in menus for track
>>> activation/deactivation.
>>
>> Name and label are the same.
>
> Name was supposed to be what now is ID, so I'm happy with the changes.
>
>
>>> The kind is necessary to classify the track correctly in menus, e.g.
>>> as sign language, audio description, or even a transparent caption
>>> track.
>>
>> I'm fine with exposing kind; is there any documentation on what video
>> formats expose for this?
>
>> On Wed, 20 Apr 2011, Silvia Pfeiffer wrote:
>>>
>>> I have thus far come up with the following:
>>>
>>> video:
>>> * sign language video (in different sign languages)
>>> * captions (as in: burnt-in video that may just be overlays)
>>> * different camera angle
>>>
>>> audio:
>>> * audio descriptions
>>> * language dub
>>
>> We should derive these from the kinds that are exposed in media formats;
>> it doesn't make sense for us to come up with them.
>
> http://wiki.xiph.org/Ogg_Skeleton_4 has a specification for what this is
> in Ogg now.
> This includes the roles as specified here:
> http://wiki.xiph.org/SkeletonHeaders#Role
>
> I don't know if MPEG has anything like it.

At least the MPEG DASH group is looking to the W3C HTML Accessibility
Task Force to define kinds for accessibility. The other ones, dubbed
audio and commentary, are kind of obvious if you consider movies as a
use case.

...Mark

>
> WebM has a bunch of metadata on tracks, e.g. TrackType, but not much
> semantics IIUC.
> http://www.webmproject.org/code/specs/container/#track
>
>
>
>>> (3) looping should be possible on combined multitrack:
>>>
>>> In proposal 4 the loop attribute on individual media elements is
>>> disabled on multitrack created through a controller, because it is not
>>> clear what looping means for the individual element.
>>>
>>> However, looping on a multitrack resource with in-band tracks is well
>>> defined and goes over the complete resource.
>>
>> It's not especially well-defined, since there's no concept of "ending"
>> with the controller, given how streaming is handled.
>>
>> But more importantly, what are the use cases?
>>
>> The use case for looping a single track is things like this:
>>
>> http://www.google.com/green/
>>
>> ...but I don't see why you would use a MediaController to do that kind
>> of thing. It's not like you'd want the multiple videos there in sync;
>> they're just background.
>>
>> I'm also skeptical of introducing loop at the MediaController level even
>> in the simple case of finite resources, because it's not clear how to
>> make it work with looping subresources. Say you had two resources, both
>> set to loop, one of which was 5s and one 3s, and that you then further
>> say that the whole thing should loop. What should happen? We don't want
>> to define MediaController looping in a way that precludes that from
>> being possible, IMHO, at least not unless we have a strong use case.
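A sketch of the menu use case behind point (2) above, for concreteness. The getLabel() and getKind() getters are hypothetical names, chosen only to mirror the getName()/getLanguage() pattern of the quoted TrackList interface; the proposal itself only asks that a label and a kind be exposed somehow.

```js
// Sketch only: getLabel()/getKind() are hypothetical getters following the
// getName()/getLanguage() pattern of the TrackList interface quoted above.
var video = document.querySelector('video');
var tracks = video.audioTracks;              // a TrackList
var menu = document.createElement('select');
for (var i = 0; i < tracks.length; i++) {
  var option = document.createElement('option');
  option.value = i;
  // label supplies the human-readable menu entry; kind (e.g. an audio
  // description or sign-language role) lets the page classify the track.
  option.textContent = tracks.getLabel(i) + ' (' + tracks.getKind(i) + ')';
  menu.appendChild(option);
}
video.parentNode.insertBefore(menu, video.nextSibling);
```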
>>
>>
>>> In analogy, it makes sense to interpret loop on a combined multitrack
>>> resource in the same way. Thus, the controller should also have a loop
>>> attribute which is activated when a single loop attribute on a slave
>>> media element is activated, and the effect should be to loop over the
>>> combined resource, i.e. when the duration of the controller is reached,
>>> all slave media elements' currentTime-s are reset to
>>> initialPlaybackPosition.
>>
>> Why would an attribute on any one of the <video>s affect the
>> MediaController as a whole? Why would they jump back to
>> initialPlaybackTime? I don't think this makes sense.
>
> To me, looping individual slaves of a multitrack resource doesn't make
> sense. That would mean that the element is independent of the others.
> Either they all loop or none does.
>
>
>>> (4) autoplay should be possible on combined multitrack:
>>>
>>> Similar to looping, autoplay could also be defined on a combined
>>> multitrack resource as the union of all the autoplay settings of all
>>> the slaves: if one of them is on autoplay, the whole combined resource
>>> is.
>>
>> Actually, currently autoplay is the only behaviour; MediaControllers
>> start off playing and just wait for any autoplaying resources to be
>> ready. If none of the resources are autoplaying, the controller just
>> advances without anything playing. This is probably suboptimal.
>>
>> I guess we could say that if none of the resources have autoplay enabled
>> it doesn't play, but how would you handle dynamic changes to the set of
>> slaved media elements?
>
> When attributes on slaves are changed, the state of the group changes.
> Autoplay changed during playback has no effect anyway. Autoplay is
> only relevant right after the first load.
>
> I also think that having autoplay on individual elements while not on
> others decouples their timelines and is wrong. That should not be
> possible. They either slave to the same timeline or they don't.
>
>
>>> (5) more events should be available for combined multitrack:
>>>
>>> The following events should be available in the controller:
>>>
>>> * onloadedmetadata: is raised when all slave media elements have
>>> reached at minimum a readyState of HAVE_METADATA
>>>
>>> * onloadeddata: is raised when all slave media elements have reached
>>> at minimum a readyState of HAVE_CURRENT_DATA
>>>
>>> * canplaythrough: is raised when all slave media elements have reached
>>> at minimum a readyState of HAVE_ENOUGH_DATA
>>
>> These are supported now.
>
> Cool!
>
>
>>> * onended: is raised when all slave media elements are in the ended
>>> state
>>
>> This isn't supported; a media controller can't be "ended" currently.
>
> Hmm, I guess my arguments were really a bit mixed up between the IDL
> attribute and the event. I really care about the event, not so much
> about the IDL attribute.
>
> (Also, I didn't know that @foo was only for content attributes -
> thanks for clarifying!)
>
>>>> But what's the use case?
>>>
>>> If I reach the end, I want to present something different, such as a
>>> post-roll ad or an overlay with links to other videos that are related.
>>> It is much easier to wait for an onended event on the combined resource
>>> than having to register an event handler with each slave and then try
>>> to combine the results.
>>
>> I'm confused. Are we talking about the event or the attribute? You seem
>> to be arguing for the attribute but giving use cases for the event.
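As a sketch of the workaround Silvia describes: without an "ended" notion on the controller, a page has to register a handler on every slave and combine the results itself. The mediagroup selector and the showPostRollOverlay() callback are placeholders for page-specific code.

```js
// Without an "ended" event on MediaController, a page has to watch every
// slave itself; this is the boilerplate a controller-level event would
// replace. showPostRollOverlay() is a placeholder for the page's own code.
function whenAllEnded(slaves, callback) {
  var remaining = slaves.length;
  slaves.forEach(function (media) {
    media.addEventListener('ended', function handler() {
      media.removeEventListener('ended', handler);
      if (--remaining === 0) callback();   // the last slave has now ended
    });
  });
}

var slaves = Array.prototype.slice.call(
    document.querySelectorAll('[mediagroup="movie"]'));
whenAllEnded(slaves, function () { showPostRollOverlay(); });
```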
>>
>> If you want to display an overlay at the end of a video, it seems like
>> you'd want to do that as soon as the video ended; you wouldn't want to
>> wait until the end of the entire resource, no? So you'd want onended on
>> the <video> element, not the controller.
>
> No, I'd actually want to wait until all elements have ended before
> showing an overlay, in particular if an audio description, a sign
> language video or a commentary continues. The grouped resource is not
> ended before all of the slaves are ended, and I don't want random
> advertising in my face before it's all over. That would be really
> disturbing.
>
>
>>> (6) controls on slaves control the combined multitrack:
>>>
>>> Proposal 4 does not provide any information on what happens with media
>>> elements when the @controls attribute is specified.
>>
>> The user interface section covers this already.
>
> I'm happy with @controls after all your feedback, thanks.
>
>
> Cheers,
> Silvia.
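For reference, a sketch of the grouping pattern the whole thread assumes, using the draft MediaController API from proposal 4: giving several elements the same controller slaves them to one shared timeline (the scripted equivalent of a common mediagroup attribute), and per the user-interface section Ian points to, @controls on a slave then operates the whole group. The element IDs are placeholders.

```js
// Scripted equivalent of giving all three elements the same mediagroup
// attribute: one MediaController shared by the main video, a sign-language
// video, and an audio-description track. Element IDs are placeholders.
var controller = new MediaController();
var main = document.getElementById('main-video');         // has @controls
var sign = document.getElementById('sign-language-video');
var desc = document.getElementById('audio-description');

[main, sign, desc].forEach(function (media) {
  media.controller = controller;   // slave the element to the group
});

controller.play();                 // start all three on the shared timeline
```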
Received on Thursday, 21 April 2011 02:28:51 UTC