Re: Addressing media files: the SMIL video element

On 17-nov-04, at 10:15, Cyril Concolato wrote:
> "The video element specifies a video file which is to be rendered to
> provide synchronized video. The usual SMIL animation features are used
> to start and stop the video at the appropriate times. An xlink:href is
> used to link to the video content. It is assumed that the video content
> also includes an audio stream, since this is the usual way that video
> content is produced, and thus the audio is controlled by the video
> element's media attributes."
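
For concreteness, a minimal sketch of the element being described (the
file name and timing values here are made up; begin and dur are the
usual SMIL timing attributes):

    <video xlink:href="clip.mpg" begin="2s" dur="30s"/>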

The reason for having the <video> tag control both audio and video is
that long ago (I think as far back as SMIL 1.0 standardisation) we
recognized that having <video> play only the video part and a parallel
<audio> to play the audio part would either result in losing lip-sync,
or place a heavy burden on implementations to detect this parallelism.
In the case of RTSP streams this would become an even bigger problem.
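
(To illustrate, this is the kind of SMIL fragment we wanted to avoid
forcing on authors; the URL is made up:

    <par>
      <video src="rtsp://example.com/movie"/>
      <audio src="rtsp://example.com/movie"/>
    </par>

The player would have to detect that both elements refer to the same
stream in order to keep them lip-synced.)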

And then there's the added problem that <video> doesn't really have
semantics that differ from <audio>: the only difference is documentary
(the MIME type of the media item governs what it actually is).

We toyed with various ideas along the lines of having tags that merely
define a stream and then use some scheme to address substreams from
within that stream, but it quickly turned out that it would be a lot of
work to come up with an addressing scheme that would be sufficiently
general, so we postponed it. In the meantime, implementations can use
the <param> mechanism (or even url?parameter) to provide this
functionality, as sketched below.
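
For example (the parameter name and value here are implementation-
defined, not something the SMIL spec standardises):

    <video src="rtsp://example.com/movie">
      <param name="tracks" value="video"/>
    </video>

or, along the same lines, something like
src="rtsp://example.com/movie?tracks=video".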

But note that this is only the historical explanation: I fully agree
that some way to have more control over which bits of multiplexed
streams are rendered is needed.
--
Jack Jansen, <Jack.Jansen@cwi.nl>, http://www.cwi.nl/~jack
If I can't dance I don't want to be part of your revolution -- Emma Goldman

Received on Thursday, 18 November 2004 21:54:59 UTC