
Re: [media] handling multitrack audio / video

From: Geoff Freed <geoff_freed@wgbh.org>
Date: Thu, 28 Oct 2010 13:00:22 -0400
To: Philip Jägenstedt <philipj@opera.com>, "public-html-a11y@w3.org" <public-html-a11y@w3.org>
Message-ID: <C8EF2466.125AC%geoff_freed@wgbh.org>

On 10/28/10 11:08 AM, "Philip Jägenstedt" <philipj@opera.com> wrote:

On Thu, 28 Oct 2010 14:46:32 +0200, Geoff Freed <geoff_freed@wgbh.org> wrote:

> On Thu, 28 Oct 2010 13:05:57 +0200, Philip Jägenstedt
> <philipj@opera.com> wrote:
>> It's
>> beyond this most basic case I'd like to understand the actual use cases.
>> To clarify, option 2 would allow things like this, borrowing SMIL syntax
>> as seen in SVG:
>> <video id="v" src="video.webm"></video>
>> <video begin="v.begin+10s" src="video2.webm"></video>
>> <!-- video and video2 should be synchronized with a 10s offset -->
>> or
>> <video id="v" src="video.webm"></video>
>> <video begin="v.end" src="video2.webm"></video>
>> <!-- video and video2 should play gapless back-to-back -->
>> Are there compelling reasons to complicate things to this extent? The
>> last example could be abused to achieve gapless playback between chunks
>> in a HTTP live streaming setup, but I'm not a fan of the solution
>> myself.
> I think there are compelling cases which are likely to occur in
> production environment because they are more efficient than the example
> I outlined above.  For example, an author could store the same three
> descriptions discretely, rather than in a single audio file, and then
> fire each one at the appropriate point in the timeline, in a manner
> similar to the one you've noted above:
> <video id="v" src="video.webm"></video>
> <audio sync="v.begin+15s" src="description1.webm"></audio>
> <audio sync="v.begin+30s" src="description2.webm"></audio>
> <audio sync="v.begin+45s" src="description3.webm"></audio>

Right, it's easy to see how it could be used. If the implementation cost
is worth the benefit, I'd expect that similar implementations already exist
in desktop applications. Are there any implementations of such a system in
widespread use, and do they actually get the sync right down to the sample?
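Absent declarative support, authors today typically approximate this kind of timeline with script: watch the master video's playback position and start each description clip at its offset. A minimal sketch (the cue list and the `dueCues` helper are illustrative, not from any spec, and event-driven polling cannot promise sample-accurate sync):

```javascript
// Cue list: each audio description starts at a fixed offset (in seconds)
// into the master video's timeline.
const cues = [
  { src: 'description1.webm', start: 15 },
  { src: 'description2.webm', start: 30 },
  { src: 'description3.webm', start: 45 },
];

// Pure helper: given the current playback time and the set of cue sources
// already started, return the cues that should start now.
function dueCues(currentTime, fired) {
  return cues.filter(c => currentTime >= c.start && !fired.has(c.src));
}

// In a page, this would be driven by the video's 'timeupdate' event:
//
//   const fired = new Set();
//   video.addEventListener('timeupdate', () => {
//     for (const cue of dueCues(video.currentTime, fired)) {
//       fired.add(cue.src);
//       new Audio(cue.src).play();
//     }
//   });
```

Because 'timeupdate' fires only a few times per second, the descriptions can start tens or hundreds of milliseconds late, which is exactly the precision question raised above.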

You could use SMIL today to do something like this, and it would be playable in RealPlayer and most likely in the Ambulant Player.  It might also work in the QuickTime Player (pre-QTX versions), but I'd need to test, because QuickTime's support for SMIL is incomplete.
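For reference, the description example above might look roughly like this in SMIL (a sketch using SMIL 2.0 timing syntax, the same syncbase form seen in SVG; SMIL 1.0 players use a different `begin` syntax, and exact attribute support varies by player):

```xml
<smil>
  <body>
    <par>
      <!-- master video -->
      <video id="v" src="video.webm"/>
      <!-- each description begins at an offset from the video's start -->
      <audio begin="v.begin+15s" src="description1.webm"/>
      <audio begin="v.begin+30s" src="description2.webm"/>
      <audio begin="v.begin+45s" src="description3.webm"/>
    </par>
  </body>
</smil>
```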

However, I'd caution against basing this decision on current widespread implementation.  When it comes to accessible multimedia on the Web, "widespread" is relative.  Relative to the sheer number of media clips on the Web today, those that are accessible make up a small percentage.  Instead, I'd urge you to base the decision on what would be most useful to blind/visually-impaired and deaf/hard-of-hearing users, as well as to the authors who want or need to make multimedia accessible.
Received on Thursday, 28 October 2010 17:02:00 UTC
