Re: Survey ready on Media Multitrack API proposal

Hi Silvia,

You wrote:
> If we really wanted to have a possibility of compositing multimedia
> presentations from multiple resources in a flexible manner, we should
> not be using <audio> or <video> for it, but rather introduce SMIL -
> just like we are not compositing image resources in the <img> element,
> but in other elements, such as <canvas> or through JavaScript.

I think you are under-estimating the complexity of what you are trying 
to achieve (at least as far as I can follow the discussions so far).

If 'all' you were trying to do was to turn on one of the two or three 
pre-recorded, pre-packaged, pre-composed optional tracks within an Ogg 
or mp4 container, then I'd say: fine, this is a local media issue that 
can be handled within the context of a single <video> or <audio> 
element. But this is NOT what you are doing: you are referencing 
external text files (srt, smilText, DFXP, whatever) -- these need to be 
composed temporally and spatially with the content in the video or audio 
object. Once you head down this path, you are no longer looking at local 
manipulations WITHIN a media object, but activating objects from 
multiple sources.
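To make the point concrete, here is a minimal sketch of what this kind 
of composition looks like when stated declaratively in SMIL (file names 
and region sizes are hypothetical, just for illustration): the video and 
an external text stream are separate objects, scheduled in parallel and 
placed in separate regions.

```xml
<!-- Hypothetical example: a video composed in parallel with an
     external caption stream. The src values and region geometry
     are illustrative, not from any real presentation. -->
<smil xmlns="http://www.w3.org/ns/SMIL">
  <head>
    <layout>
      <root-layout width="320" height="260"/>
      <region id="v" top="0" height="240"/>
      <region id="captions" top="240" height="20"/>
    </layout>
  </head>
  <body>
    <par>
      <!-- two independent objects, composed temporally (par)
           and spatially (regions) -->
      <video src="talk.ogv" region="v"/>
      <textstream src="talk-captions.srt" region="captions"/>
    </par>
  </body>
</smil>
```

The point is not this particular syntax, but that the temporal and 
spatial relationships between separate objects are made explicit, 
rather than being implicit properties of a single media element.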

This is why I really believe that you need to look at a more scalable 
solution to this problem -- not because I want to impose SMIL on you, 
but because you are imposing temporal and spatial problems on yourself 
in the context of composing separate media objects.

As an aside: one of the benefits of this approach is that you get 
broader selectivity on other objects for free -- which increases 
accessibility options at no extra cost. (Note that whether you express 
the controlling syntax in declarative or scripted terms is an 
orthogonal concern.)

Perhaps I'm missing something key; if so, let me know.

-d.

(I'm off-site today and tomorrow, but I'll try to monitor the list.)

Received on Friday, 12 March 2010 13:40:05 UTC