Re: [media] issue-152: documents for further discussion

Ah excellent - I was just about to shoot off some of the questions to you! :-)


On Tue, Apr 12, 2011 at 8:17 AM, Ian Hickson <ian@hixie.ch> wrote:
> On Mon, 11 Apr 2011, Silvia Pfeiffer wrote:
>>
>> http://www.w3.org/WAI/PF/HTML/wiki/Media_Multitrack_Change_Proposals_Summary
>
> Here are some notes on your notes. :-)
>
>
> | videoTracks are present both in audio and video elements ... maybe
> | should be reduced to just video element?
>
> You can specify a video file in an <audio> element, so I figured it would
> make sense to expose them. Selecting one doesn't do much good though. I
> could see an argument for not exposing this information, or for exposing
> it using an object that didn't allow you to select a track; if that's
> something people have use cases for or against, I'm happy to change it.
> The current design was just easier to spec.

We discussed this in the meeting and there was general agreement that
it's good the way you have spec-ed it. Even though I can't think of a
use case where you'd want to know about all the video tracks in an
<audio> element, prohibiting it is not worth the added complexity of
making the two elements differ.


> | the TrackList only includes name and language attributes - in analogy to
> | TextTrack it should probably rather include (name, label, language,
> | kind)
>
> I'm fine with exposing more data, but I don't know what data in-band
> tracks typically have. What do in-band tracks in popular video formats
> expose? Is there any documentation on this?

There is a discussion about metadata on the main list right now, and I
have posted a link there to the W3C Media Annotations WG's analysis of
what metadata is typically used on audio and video across media
formats. If you want to understand what is generally available, that
is a good starting point: http://www.w3.org/TR/mediaont-10/ .

I would, however, regard the two attributes we discussed here as a
separate issue. If somebody wants to create custom controls and, for
example, offer all the alternative descriptions in one menu, they
would want the text descriptions and the audio descriptions listed
together. Similarly, a single captions menu should list the text track
captions together with the video tracks that provide captions as
bitmap overlays and the alternative video tracks with burnt-in
captions. So providing a label (for use in the menu) and a kind (for
classification) is very useful, and both can be mapped from fields
within the media formats.

On a side note, we should probably define a list of the typical kinds
that we expect from media elements.
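
For illustration, here is a rough sketch of the kind of menu-building
script I have in mind (the addMenuEntry() helper is made up, and I am
assuming the track lists expose a label, kind and language per track):

  // Illustrative only; addMenuEntry(label, kind, language) is a
  // made-up helper that renders one entry of a custom track menu.
  var video = document.querySelector("video");

  // text tracks already expose label, kind and language
  for (var i = 0; i < video.textTracks.length; i++) {
    var t = video.textTracks[i];
    if (t.kind == "captions" || t.kind == "descriptions")
      addMenuEntry(t.label, t.kind, t.language);
  }

  // the same loop over video.audioTracks and video.videoTracks would
  // fill in the rest of the menu, provided those track lists also
  // expose a label and a kind per track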


> Note that for media-element-level data, you can already use data-*
> attributes to get anything you want, so the out-of-band case is already
> fully handled as far as I can tell.

Interesting. In the audio description case, would a label, kind, and
language be added to the menu of the related video element?
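
To make the question concrete, here is a rough sketch of the
out-of-band case I have in mind (the data-* attribute names and the
mediagroup value are made up; addMenuEntry() is the same made-up
helper as above):

  // Sketch: a separate <audio> element in the same media group
  // carries made-up data-* annotations, e.g.
  //   <audio src="movie-ad.ogg" mediagroup="movie"
  //          data-label="Audio description" data-kind="descriptions"
  //          data-srclang="en"></audio>
  var slaves = document.querySelectorAll("[mediagroup=movie]");
  for (var i = 0; i < slaves.length; i++) {
    var el = slaves[i];
    if (el.dataset.kind)
      addMenuEntry(el.dataset.label, el.dataset.kind, el.dataset.srclang);
  }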


> | a group should be able to loop over the full multitrack rather than a
> | single slave
>
> Not sure what this means.

We discussed the looping behaviour. To make it symmetrical with
in-band multitrack resources, it would make sense to be able to loop
over composed multitrack resources, too. The expected behaviour is
that a loop on the composed resource loops over the composite as a
whole. The question, then, is how to turn such looping on.

The proposal is that when one media element in the group has a @loop
attribute, looping is turned on for the composite resource. This means
that when loop is set and the end of the composite resource (its
duration) is reached, currentTime is reset to the beginning and
playback of the composite resource starts again. Looping on the
individual elements is turned off; only the composite resource can
loop.
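
If the MediaController itself does not get a loop attribute, this is
roughly what a script would have to do by hand (a sketch only; it
assumes the controller fires an "ended" event when the composite
duration is reached and that currentTime and play() work as proposed;
the mediagroup value is made up):

  // Loop the composite resource when any slaved element has @loop set.
  var slaves = document.querySelectorAll("[mediagroup=movie]");
  var wantLoop = false;
  for (var i = 0; i < slaves.length; i++) {
    if (slaves[i].hasAttribute("loop")) wantLoop = true;
  }

  var controller = slaves[0].controller;
  controller.addEventListener("ended", function() {
    if (wantLoop) {
      controller.currentTime = 0;  // seek the composite back to the start
      controller.play();           // and resume playback
    }
  }, false);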


> | some attributes of HTMLMediaElement are missing in the MediaController
> | that might make sense to collect state from the slaves: error,
>
> Errors only occur as part of loading, which is a per-media-element issue,
> so I don't really know what it would mean for the controller to have it.

The MediaController is generally regarded as the state keeper for the
composite resource.

So, what happens when a single slave goes into an error state? Does
the full composite resource go into an error state, too? Or does it
ignore the slave - turn it off, and continue?

If the composite goes into an error state, that should be exposed on
the MediaController, so there is one place for scripts to check.
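
Here is a sketch of what scripts otherwise have to do by hand
(assuming errors stay per-element; the mediagroup value and the
showErrorMessage() helper are made up):

  // Watch every slaved element and decide what an error on one slave
  // means for the composite resource.
  var slaves = document.querySelectorAll("[mediagroup=movie]");
  for (var i = 0; i < slaves.length; i++) {
    slaves[i].addEventListener("error", function(evt) {
      var el = evt.target;
      el.controller.pause();        // e.g. treat the error as fatal
      showErrorMessage(el.error);   // made-up UI helper
    }, false);
  }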


> | networkState
>
> This only makes sense at a per-media-resource level (this is one reason I
> don't think the master/slave media element design works). What would it
> mean at the controller level?

Yes, I think I agree. There is no combined state that makes sense.


> | readyState
>
> I could expose a readyState that returns the lowest value of all the
> readyState values of the slaved media elements, would that be useful? It
> would be helpful to see a sample script that would make use of this; I
> don't really understand why someone would care about doing this at the
> controller level rather than the individual track level.

I think it makes sense, in particular when a script is waiting for all
elements to reach HAVE_METADATA state, which is often the case when
you want to do something with the media resource but have to wait
until it is actually available.

An example would be a script that runs its own controls for the
combined resource and wants to display the combined duration and
volume, e.g.

      video.controller.addEventListener("loadedmetadata", init, false);
      function init(evt) {
        duration.innerHTML = video.controller.duration.toFixed(2);
        vol.innerHTML      = video.controller.volume.toFixed(2);
      }

So, I think a combined readyState makes sense in the way you described.
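
For comparison, here is roughly what a script has to compute by hand
today (a sketch of the lowest readyState across the slaved elements,
as you suggested; the mediagroup value is made up):

  // Compute the lowest readyState across all slaved elements - this is
  // what a readyState attribute on the controller could return directly.
  function groupReadyState(slaves) {
    var state = HTMLMediaElement.HAVE_ENOUGH_DATA;
    for (var i = 0; i < slaves.length; i++) {
      state = Math.min(state, slaves[i].readyState);
    }
    return state;
  }

  var slaves = document.querySelectorAll("[mediagroup=movie]");
  if (groupReadyState(slaves) >= HTMLMediaElement.HAVE_METADATA) {
    init();  // durations and dimensions of all slaves are known
  }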


> | (this one is particularly important for onmetadatavailable events)
>
> The events are independent of the attributes. What events would you want
> on a MediaController, and why? Again, sample code would really help
> clarify the use cases you have in mind.

Maybe an onmetadatavailable event is more useful than a readyState then?

I am not aware of many scripts that use the readyState values directly
for anything, even on the media elements themselves.


> | seeking
>
> How would this work? Return true if any of the slaved media elements are
> seeking, and false otherwise? I don't really understand the use case.

Yeah, I am not aware of many scripts using the seeking attribute of
media elements in general, so I don't think it's necessary.


> | TimeRanges played
>
> Would this return the union or the intersection of the slaves'?

That would probably be the union, because those are the parts of the
timeline that the user has viewed, so they would expect them to be
marked as such in manually created controls.
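
A sketch of the union computation that scripts otherwise have to do
themselves (kept naive; it merges the played ranges of all slaved
elements into an array of [start, end] pairs):

  function playedUnion(slaves) {
    var ranges = [];
    for (var i = 0; i < slaves.length; i++) {
      var played = slaves[i].played;
      for (var j = 0; j < played.length; j++) {
        ranges.push([played.start(j), played.end(j)]);
      }
    }
    ranges.sort(function(a, b) { return a[0] - b[0]; });
    var merged = [];
    for (var k = 0; k < ranges.length; k++) {
      var last = merged[merged.length - 1];
      if (last && ranges[k][0] <= last[1]) {
        last[1] = Math.max(last[1], ranges[k][1]);  // overlap: extend
      } else {
        merged.push(ranges[k].slice());             // new disjoint range
      }
    }
    return merged;
  }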


> | ended
>
> Since tracks can vary in length, this doesn't make much sense at the media
> controller level. You can tell if you're at the end by looking at
> currentTime and duration, but with infinite streams and no buffering the
> underlying slaves might keep moving things along (actively playing) with
> currentTime and duration both equal to zero the whole time. So I'm not
> sure how to really expose 'ended' on the media controller.

"ended" on the individual elements (in the absence of loop) returns true when

Either:

    The current playback position is the end of the media resource, and
    The direction of playback is forwards.

Or:

    The current playback position is the earliest possible position, and
    The direction of playback is backwards.

So, in analogy, the composed resource would report "ended" only when
all of the individual elements are in the ended state (i.e. the
conjunction of their ended values rather than a union).
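
In script terms, roughly (a sketch):

  // The composite resource is "ended" only when every slaved element
  // is ended, i.e. a conjunction rather than a union.
  function groupEnded(slaves) {
    for (var i = 0; i < slaves.length; i++) {
      if (!slaves[i].ended) return false;
    }
    return true;
  }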


> | and autoplay.
>
> How would this work? Autoplay doesn't really make sense as an IDL
> attribute, it's the content attribute that matters. And we already have
> that set up to work with media controllers.

As with @loop, it would be possible to say that when one media element
in the group has @autoplay set, the combined resource is in autoplay
state.
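
In script terms (a sketch; the mediagroup value is made up):

  // The composite resource autoplays if any slaved element has the
  // @autoplay content attribute set, mirroring the @loop proposal.
  var slaves = document.querySelectorAll("[mediagroup=movie]");
  var groupAutoplay = false;
  for (var i = 0; i < slaves.length; i++) {
    if (slaves[i].hasAttribute("autoplay")) groupAutoplay = true;
  }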


> | Multiple video and audio active in parallel: the solution must allow
> | multiple video tracks active in parallel, just like audio, even for
> | in-band
>
> I looked at doing this (with just a single <video> and no explicit
> controller, like for audio) but I couldn't find a good solution to the
> problem of how to also support only one track being enabled.
>
> Take this situation:
>
>   Media resource A has three video tracks A1, A2, and A3.
>   A1's dimensions are 1600x900 and fill the frame.
>   A2's dimensions are 160x90 in the top left.
>   A3's dimensions are 160x90 in the bottom right.
>
>   You play the video, with just A1 enabled. Then you turn on A2 and A3.
>   The A1 video plays at 1600x900 with the A2 and A3 videos playing in
>   their respective corner, all small. The <video> element's intrinsic
>   dimensions are 1600x900.
>
>   Now you turn off A1. What happens to the intrinsic dimensions? The
>   rendering?
>
>   Now you turn off A2. What happens to the intrinsic dimensions? The
>   rendering?
>
>   Now you create another <video> and load A1 into it, and you sync the
>   two <video>s using a controller or the master/slave relationship, so
>   that you can have A1 and A3 side by side. What happens to the intrinsic
>   dimensions? The rendering?
>
> I couldn't find intuitive answers to these questions, which is why I
> decided to only support one video per media element and not support the
> in-band dimensions at all.

Right. As I understand it, the only media resource type we are dealing
with that currently supports displaying more than one video track at
the same time within the confines of the existing viewport is MP4 in
Safari. I don't understand enough about the dimension calculation to
know whether there is a means to determine an intrinsic width and
height from the resource that would be compatible with the way other
browsers would display that same resource. I think Eric will have to
jump in here.


One more question turned up today: is there any way we could create
@controls (with track menu and all) for the combined resource? Maybe
they could be the same controls on all the elements that have
@controls active, but driven by the controller's state rather than the
element's? Or maybe the first video element with a @controls attribute
would get the full controller's state represented in its controls?
Could there be any way to make @controls work?

Cheers,
Silvia.
