Re: [media] handling multitrack audio / video

On Thu, 28 Oct 2010 14:46:32 +0200, Geoff Freed <geoff_freed@wgbh.org>  
wrote:

> On Thu, 28 Oct 2010 13:05:57 +0200, Philip Jägenstedt  
> <philipj@opera.com> wrote:
>> It's
>> beyond this most basic case I'd like to understand the actual use cases.
>> To clarify, option 2 would allow things like this, borrowing SMIL syntax
>> as seen in SVG:
>>
>> <video id="v" src="video.webm"></video>
>> <video begin="v.begin+10s" src="video2.webm"></video>
>> <!-- video and video2 should be synchronized with a 10s offset -->
>>
>> or
>>
>> <video id="v" src="video.webm"></video>
>> <video begin="v.end" src="video2.webm"></video>
>> <!-- video and video2 should play gapless back-to-back -->
>>
>> Are there compelling reasons to complicate things to this extent? The  
>> last example could be abused to achieve gapless playback between chunks  
>> in a HTTP live streaming setup, but I'm not a fan of the solution  
>> myself.
>
> I think there are compelling cases which are likely to occur in  
> production environments because they are more efficient than the example  
> I outlined above.  For example, an author could store the same three  
> descriptions discretely, rather than in a single audio file, and then  
> fire each one at the appropriate point in the timeline, in a manner  
> similar to the one you've noted above:
>
> <video id="v" src="video.webm"></video>
> <audio sync="v.begin+15s" src="description1.webm"></audio>
> <audio sync="v.begin+30s" src="description2.webm"></audio>
> <audio sync="v.begin+45s" src="description3.webm"></audio>

Right, it's easy to see how it could be used. If the implementation cost  
is worth what you get, I'd expect similar implementations to already exist  
in desktop applications. Are there any implementations of such a system in  
widespread use, and do they actually get the sync right down to the sample?
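For what it's worth, here's a rough script-based sketch of how an author
might approximate your sync="v.begin+15s" example with the current API.
The ids and the 15-second offset are just placeholders taken from the
example above, and the timeupdate-based approach is my own rough
approximation, not anything that's been proposed:

<video id="v" src="video.webm" controls></video>
<audio id="d1" src="description1.webm"></audio>
<script>
  // Rough approximation of sync="v.begin+15s": start the first
  // description clip once the video's clock passes the 15 s mark.
  // timeupdate only fires a few times per second, so this is nowhere
  // near sample-accurate.
  var v = document.getElementById("v");
  var d1 = document.getElementById("d1");
  var started = false;
  v.addEventListener("timeupdate", function () {
    if (!started && v.currentTime >= 15) {
      started = true;
      // Compensate for event lag (assumes the clip's metadata is loaded).
      d1.currentTime = v.currentTime - 15;
      d1.play();
    }
  }, false);
</script>

Even with the currentTime compensation, the two clocks drift freely after
start, which is exactly why I'm asking whether anyone has demonstrated
sample-accurate sync with this kind of setup.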

-- 
Philip Jägenstedt
Core Developer
Opera Software

Received on Thursday, 28 October 2010 15:09:00 UTC