[whatwg] Implementation difficulties for MediaController

From: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
Date: Mon, 28 Mar 2011 21:29:35 -0700
Message-ID: <AANLkTin=hsW3i6EF70ovm5kNv=kPO4SaGbCp2+QpVn4c@mail.gmail.com>
On Sun, Mar 27, 2011 at 8:01 PM, Ian Hickson <ian at hixie.ch> wrote:
> It's been brought to my attention that there are aspects of the
> MediaController design that are hard to implement; in particular around
> the ability to synchronise or desynchronise media while it is playing
> back.
> To help with this, I propose to put in some blocks on the API on the short
> term so that things that are hard to implement today will simply throw
> exceptions or otherwise fail in detectable and predictable ways.
> However, to do that I need a better idea of what exactly is hard to
> implement.
> It would be helpful if you could describe exactly what is easy and what is
> hard (that is, glitchy or simply unsupported by common media frameworks)
> in terms of media synchronisation, in particular along the following axes:
>  * multiple in-band tracks vs multiple independent files
>  * playing tracks synchronised at different offsets
>  * playing tracks at different rates
>  * changing any of the above while media is playing vs when it is stopped
>  * adding or removing tracks while media is playing vs when it is stopped
>  * changing overall playback rate while a synced set of media is playing
> Based on this I can then limit the API accordingly.
> (Any other feedback you may have on this proposed API is of course also
> very welcome.)

Hi Ian,

While I can't give you feedback on how hard it is to tweak existing
media frameworks into changing the playback rate of different tracks
independently, or starting individual tracks with a delay relative to
others, I am fairly sure that none of the existing frameworks
currently supports this for in-band tracks - neither while playing nor
when stopped. I am also finding it hard to find actual use cases for
this that are not somewhat artificially trying to make things parallel
that should not be parallel. The only use cases that I can find are
actually text tracks, namely subtitles or captions, which are not
relevant here.

We haven't allowed caption tracks to start with a different
startTimeOffset than the video, nor do we allow giving them a
playbackRate different from the video's. I don't really see a need for
this functionality for multitrack media either. The beat example that
you use in the spec seems artificial to me and rather a functionality
of a drum machine than something I would expect to be solved through
media synchronisation.

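For what it's worth, here is a sketch of what such per-track settings
would imply: the user agent would have to maintain a separate timeline
mapping for every track. The function below is purely illustrative (the
startTimeOffset and playbackRate names follow your proposal, but the
function itself is not part of any spec or framework):

```javascript
// Hypothetical sketch, not an actual API: map a shared controller time
// onto an individual track's own timeline, given a per-track offset and
// rate as proposed.
function trackTime(controllerTime, startTimeOffset, playbackRate) {
  // The track starts startTimeOffset seconds into the controller's
  // timeline and then advances at its own independent rate.
  return (controllerTime - startTimeOffset) * playbackRate;
}

// A track offset by 2 s and playing at half speed: at controller time
// 10 s it would be 4 s into its own timeline.
console.log(trackTime(10, 2, 0.5)); // 4
```

Every seek, rate change, or track addition would have to re-evaluate
such a mapping per track without glitching - which is exactly the part
I understand the frameworks cannot do.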
I *hear* that the startTimeOffset and change of playbackRate
functionality may be almost impossible to get from current media
frameworks, but I can't tell from my own experience. More
fundamentally, though, I don't really see a need for them. Tracks in a
multitrack resource (no matter if in-band or external files) are
rather tightly authored to cover the exact same timeline, in my
opinion.

Received on Monday, 28 March 2011 21:29:35 UTC
