Re: Survey ready on Media Multitrack API proposal

Dick, all,

On Fri, Mar 12, 2010 at 1:26 AM, Dick Bulterman <Dick.Bulterman@cwi.nl> wrote:
> On the multi-track issue:
>
>> I agree, but think that the "nice feature" is the possibility of having
>> parallel text tracks, while having mutually exclusive tracks is absolutely
>> fundamental. If we can't handle grouping in a nice way we can discard it and
>> require scripts to achieve parallel text tracks. But let's finish this in
>> the HTML WG.
>
> I would continue to argue that two separate concepts are being mixed here:
> 1. a general mechanism for selecting alternative forms of content (or
> structure), and
> 2. a synchronization mechanism that allows 0, 1, or more elements to be
> displayed at the same time.
>
> The first (which SMIL handles as <switch> and which is being reinvented here
> as <trackgroup>) should be totally decoupled from the second (which SMIL
> handles as <par>,<seq> and <excl>). It will make your life -- and that of
> authors -- much easier in the future.

Issue 1 is indeed what is required here.

Issue 2 matters for SMIL because SMIL allows the composition of
multimedia presentations. It is, however, not something that HTML has
to resolve. I thought I had been clear on this in earlier
communications, but apparently not.

HTML has created the <audio> and <video> elements to include a single
audio or video resource with a single timeline. Just like <img>, which
points to a single picture and not a spatial combination of pictures,
<audio> and <video> point to a single audio resource or a single video
resource. It is that resource that defines the timeline.

There is no intention of sequentially compositing media resources (as
SMIL does with <seq>). The only intention here is to provide
*supporting* material for that one media resource. There is no
intention of introducing a new timeline (as <par> does) or of making
the main media resource optional (as <excl> does).

What we are doing is temporally aligning text (and potentially audio
or video) with the main resource. This has to be very clear in our
minds and very clear in the specification.
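
A rough sketch of what that alignment looks like in markup, borrowing
the <track> element from the proposal under discussion (element and
attribute names are still being debated, and the file names are made
up):

  <video src="lecture.ogv" controls>
    <!-- supporting material, aligned to the video's own timeline -->
    <track kind="captions" srclang="en" src="captions-en.srt">
    <track kind="captions" srclang="de" src="captions-de.srt">
  </video>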

"Synchronisation" in the wide meaning of the word as it is being used
in SMIL is therefore not an issue here. There is no need to worry
about synchrnonisation because everything has to synchronise with the
main resource anyway. Therefore there is no need for extra constructs
to deal with this problem: it simply doesn't pose itself in the same
manner as it does in SMIL.
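
To illustrate: even with nothing but script, aligning a piece of text
only ever requires reading the main resource's clock. A sketch (the
ids and cue times are made up):

  <video id="v" src="lecture.ogv" controls></video>
  <div id="cue"></div>
  <script>
    var video = document.getElementById('v');
    var cue = document.getElementById('cue');
    // one hard-coded cue, visible from 5s to 9s on the video's timeline
    video.addEventListener('timeupdate', function () {
      var t = video.currentTime;
      cue.textContent = (t >= 5 && t < 9) ? 'Hello world' : '';
    }, false);
  </script>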

If we really wanted the ability to composite multimedia presentations
from multiple resources in a flexible manner, we should not be using
<audio> or <video> for it, but rather introduce SMIL - just as we do
not composite image resources in the <img> element, but in other
elements such as <canvas>, or through JavaScript.
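
For completeness, this is roughly what such image compositing looks
like with <canvas> (the image URLs are placeholders):

  <canvas id="c" width="400" height="300"></canvas>
  <script>
    var ctx = document.getElementById('c').getContext('2d');
    var background = new Image();
    var overlay = new Image();
    background.onload = function () {
      ctx.drawImage(background, 0, 0);   // base layer
      overlay.onload = function () {
        ctx.drawImage(overlay, 50, 50);  // drawn on top
      };
      overlay.src = 'overlay.png';
    };
    background.src = 'background.png';
  </script>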

Regards,
Silvia.

Received on Thursday, 11 March 2010 21:36:08 UTC