Re: [media] handling multitrack audio / video

On Mon, 01 Nov 2010 02:14:25 +0100, Silvia Pfeiffer  
<silviapfeiffer1@gmail.com> wrote:

> On Fri, Oct 29, 2010 at 2:08 AM, Philip Jägenstedt <philipj@opera.com>  
> wrote:
>> On Thu, 28 Oct 2010 14:46:32 +0200, Geoff Freed <geoff_freed@wgbh.org>
>> wrote:
>>
>>> On Thu, 28 Oct 2010 13:05:57 +0200, Philip Jägenstedt  
>>> <philipj@opera.com>
>>> wrote:
>>>>
>>>> It's
>>>> beyond this most basic case that I'd like to understand the actual use
>>>> cases.
>>>> To clarify, option 2 would allow things like this, borrowing SMIL  
>>>> syntax
>>>> as seen in SVG:
>>>>
>>>> <video id="v" src="video.webm"></video>
>>>> <video begin="v.begin+10s" src="video2.webm"></video>
>>>> <!-- video and video2 should be synchronized with a 10s offset -->
>>>>
>>>> or
>>>>
>>>> <video id="v" src="video.webm"></video>
>>>> <video begin="v.end" src="video2.webm"></video>
>>>> <!-- video and video2 should play gapless back-to-back -->
>>>>
>>>> Are there compelling reasons to complicate things to this extent? The
>>>> last example could be abused to achieve gapless playback between  
>>>> chunks in an
>>>> HTTP live streaming setup, but I'm not a fan of the solution myself.
>>>
>>> I think there are compelling cases which are likely to occur in  
>>> production environments because they are more efficient than the example
>>> I outlined
>>> above.  For example, an author could store the same three descriptions
>>> discretely, rather than in a single audio file, and then fire each one  
>>> at
>>> the appropriate point in the timeline, in a manner similar to the one  
>>> you've
>>> noted above:
>>>
>>> <video id="v" src="video.webm"></video>
>>> <audio sync="v.begin+15s" src="description1.webm"></audio>
>>> <audio sync="v.begin+30s" src="description2.webm"></audio>
>>> <audio sync="v.begin+45s" src="description3.webm"></audio>
>>
>> Right, it's easy to see how it could be used. If the implementation
>> cost is
>> worth what you get, I expect that similar implementations already exist  
>> in
>> desktop applications. Are there any implementations of such a system in
>> widespread use and does it actually get the sync right down to the  
>> sample?
>
>
> Jeroen from JWPlayer/Longtail Video has implemented something for
> audio descriptions, where audio descriptions come in separate files
> and are synchronized through markup - I believe the synchronization is
> done by JW Player in Flash, see
> http://www.longtailvideo.com/support/addons/audio-description/15136/audio-description-reference-guide
> . AFAIK this is the most used platform for providing audio
> descriptions on the Web at this point in time - I've seen it used on
> government websites around the globe.
>
> If it can be done in Flash with acceptable quality, I would think
> browsers should be able to do it. I can ask Jeroen for more
> implementation details if necessary - AFAIK he said there was frequent
> re-synchronization of the secondary resource to the main resource,
> which continues playback at its existing speed.

It sounds like perfect sync isn't being achieved and that scripting of  
some sort is being used to get approximately the right sync. That's  
already possible today with <audio> and <video>, as I'm sure you know. If  
that works well enough, it speaks strongly against requiring perfect
sync, given how difficult perfect sync is to implement.
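
To be concrete, the kind of script I have in mind is a rough sketch
along these lines (the element ids, the 15 second offset and the 0.3
second drift threshold are all invented for illustration, not taken
from any existing implementation):

<video id="v" src="video.webm"></video>
<audio id="d" src="description1.webm"></audio>
<script>
// Rough sketch: re-seek the secondary <audio> whenever it drifts too
// far from the main <video>. All numbers here are illustrative.
var video = document.getElementById("v");
var desc = document.getElementById("d");
var OFFSET = 15;     // description is meant to start 15 s into the video
var THRESHOLD = 0.3; // tolerated drift in seconds before re-seeking

video.addEventListener("timeupdate", function () {
  if (desc.readyState === 0) return; // metadata not loaded yet
  var target = video.currentTime - OFFSET;
  if (target < 0 || target >= desc.duration) {
    if (!desc.paused) desc.pause();
    return;
  }
  if (desc.paused) desc.play();
  // Only seek when the drift would be noticeable; seeking on every
  // timeupdate would cause audible glitches.
  if (Math.abs(desc.currentTime - target) > THRESHOLD) {
    desc.currentTime = target;
  }
}, false);

video.addEventListener("pause", function () { desc.pause(); }, false);
</script>

Something like this gets you approximately the right sync, but nothing
sample-accurate, which is rather my point.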

-- 
Philip Jägenstedt
Core Developer
Opera Software

Received on Monday, 1 November 2010 08:52:54 UTC