
Re: Proposal for Audio and Video Track Selection and Synchronisation for Media Elements

From: Philip Jägenstedt <philipj@opera.com>
Date: Mon, 21 Mar 2011 15:38:08 +0100
To: public-html@w3.org
Message-ID: <op.vso6xuofsr6mfa@localhost.localdomain>
I think that this proposal looks mostly good; some nitpicking:

On Mon, 21 Mar 2011 12:00:46 +0100, Ian Hickson <ian@hixie.ch> wrote:

>  * Allowing authors to use CSS for presentation control, e.g. to
>    control where multiple video channels are to be placed relative to
>    each other.

It's clear how to do this declaratively, but with scripts there seem to  
be a few steps involved:

1. Create/find a second video element and position it with CSS.

2. Bind the two video elements together with the same MediaController,  
either via .mediaGroup or by creating a new controller.

3. Set the src of the second video element to the same as the first.

4. Wait until loadedmetadata and then enable the alternative video track.
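
The steps above might be sketched roughly like this (assuming the  
MediaController and multi-track APIs as proposed; the element lookup  
and the 'sign' kind value are hypothetical):

```javascript
// Sketch of the four steps, assuming the proposed API surface.
var main = document.querySelector('video');

// 1. Create a second video element and position it with CSS.
var sign = document.createElement('video');
sign.className = 'sign-language';        // positioned via a stylesheet
document.body.appendChild(sign);

// 2. Bind the two elements together with the same MediaController.
main.controller = main.controller || new MediaController();
sign.controller = main.controller;

// 3. Point the second element at the same resource.
sign.src = main.currentSrc;

// 4. Wait for loadedmetadata, then enable the alternative video track.
sign.addEventListener('loadedmetadata', function () {
  for (var i = 0; i < sign.videoTracks.length; i++) {
    if (sign.videoTracks[i].kind === 'sign') {  // hypothetical kind
      sign.videoTracks[i].selected = true;
      break;
    }
  }
});
```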

This seems like it would work, but what if steps 2 and 3 happen in the  
other order? Then the media framework has no hint that the two resources  
will be correlated and will have to set up a completely separate decoding  
pipeline for it. This is not very nice, because in some situations (same  
playback rate and offset) it's possible to have a single decoding pipeline  
and just plug in more decoders after the demuxer.

Is this a bug, or do we expect browsers to become clever enough to figure  
out that the same decoding pipeline can be reused without any kind of  
hint? Admittedly, this would help for games creating 10 Audio() elements  
for the same URL.

>     <video src="movie.vid#track=Video&amp;track=English" autoplay  
> controls mediagroup=movie></video>
>     <video src="movie.vid#track=sign" autoplay mediagroup=movie></video>

This uses the Media Fragments URI syntax:  
http://www.w3.org/2008/WebVideo/Fragments/WD-media-fragments-spec/#naming-track

It appears that the proposal assumes that the track dimension can only be  
specified once in a valid Media Fragment, but this is unfortunately not  
the case. The MF spec states that "Multiple track specification is  
allowed, but requires the specification of multiple track parameters."

Perhaps this is not a problem, #track=Alternative&track=Commentary would  
just result in a resource with two tracks, and the first (Alternative)  
would be used. However, how should this be reflected in audioTracks and  
videoTracks? Should only the selected tracks be exposed there, or should  
all tracks be exposed but the targeted ones enabled? I think the latter  
makes more sense, but the former is more in line with MF.
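
To make the difference concrete, a script inspecting the tracks might  
look like this (a sketch, assuming the proposed audioTracks API; the  
fragment track names are the hypothetical ones above):

```javascript
// Sketch: inspect audioTracks after loading a resource addressed with
// "#track=Alternative&track=Commentary" (hypothetical track names).
// Under the "only targeted tracks" reading, audioTracks.length would be 2;
// under the "all tracks exposed" reading it would be the resource's full
// track count, with .enabled true only on the targeted tracks.
var audio = document.querySelector('audio');
audio.addEventListener('loadedmetadata', function () {
  for (var i = 0; i < audio.audioTracks.length; i++) {
    var track = audio.audioTracks[i];
    console.log(track.label + ': ' +
                (track.enabled ? 'enabled' : 'disabled'));
  }
});
```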

> RISKS
>  * It's possible that it is still too early for us to be adding any
>    kind of multi-track feature given the current implementation
>    priorities of user agents.

Indeed, the complexity of implementing this is significant. It requires a  
very capable media framework to do things like gapless looping of one  
track synchronized with another and to determine when decoding pipelines  
can be shared and when they cannot.

-- 
Philip Jägenstedt
Core Developer
Opera Software
Received on Monday, 21 March 2011 14:38:43 GMT
