- From: Olivier Thereaux <olivier.thereaux@bbc.co.uk>
- Date: Mon, 12 Mar 2012 15:42:58 +0000
- To: public-audio@w3.org
- CC: robert@ocallahan.org
- Message-ID: <4F5E1982.1040506@bbc.co.uk>
Following this discussion, and a recent teleconference where we agreed it was a valuable use case, I have drafted text for UC15:
http://www.w3.org/2011/audio/wiki/Use_Cases_and_Requirements#UC-15:_Video_commentary

This should complete ACTION-35: "Add use case for video sync, add requirement to work well with mediacontroller".

Note that I haven't added a direct mention of MediaController in the use case prose itself, as it is an implementation detail and should probably not matter as far as usage scenarios are concerned. I have made a note of it here, however:
http://www.w3.org/2011/audio/wiki/Use_Cases_and_Requirements#UC15_.E2.80.94_Notes

Feedback on the use case text is welcome. I will be populating the related requirements for this use case shortly.

Cheers,
Olivier

On 02/03/2012 13:45, Olivier Thereaux wrote:
> On 29/02/2012 23:41, Robert O'Callahan wrote:
>> The MediaStreams Processing document had a scenario which I think isn't
>> covered by the existing use cases in the wiki: "Play video with
>> processing effects mixing in out-of-band audio tracks (in sync) (e.g.
>> mixing in an audio commentary with audio ducking)"
>
> Thanks for raising this; I believe you are right. We did go through the
> "heap" of all our sources of use cases and were due to review a few open
> questions, including this one.
>
> See: http://www.w3.org/2011/audio/track/actions/28
> Chris wrote: "There was a requirement to 'Seamlessly switch from one
> input stream to another (e.g. adaptive streaming)' which I think is out
> of scope for this group."
>
>> A very common example of this is DVD commentary tracks.
>
> Indeed, and not just commentary tracks. The BBC, for instance, provides
> audio description for a number of its programmes, and the ability to
> start the audio description track in sync with specific video timings,
> mix the two tracks, and ideally duck the main audio track while the
> description track is "speaking" is a realistic scenario.
>
>> A browser-oriented use case could be: "User wants to play a video from a
>> Web site, with a third-party commentary track downloaded from another
>> site."
>
> Likewise for much multilingual content, where you might want to keep
> the original sound of an interview and have the dubbing track on top,
> with an appropriate amount of ducking.
>
> If I may deconstruct the issue here, could we say that this use case
> illustrates the need for:
>
> * Mixing sources
> * Ducking
> * Syncing sources/streams with other timed media and events
>
> Anything I forgot?
>
> I'd love to see demos of implementing this with the Web Audio API, and
> the approaches explained in the spec differences doc.
>
> Olivier

-- 
Olivier Thereaux
BBC Internet Research & Future Services
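As a rough illustration of the mix-and-duck part of this scenario, here is a minimal sketch using the Web Audio API (current API names rather than the 2012 prefixed ones). The element IDs, duck level, and speech-detection threshold are illustrative assumptions, and the polled AnalyserNode is a deliberately crude level detector, not a production ducking algorithm:

```js
// Minimal sketch: mix a commentary track over the main programme audio,
// ducking the programme while the commentary is "speaking".
// Assumes the AudioContext is allowed to run (e.g. created after a
// user gesture). Element IDs and numeric values are assumptions.
const ctx = new AudioContext();

const programme = document.querySelector('#programme');   // <video>
const commentary = document.querySelector('#commentary'); // <audio>

const programmeSrc = ctx.createMediaElementSource(programme);
const commentarySrc = ctx.createMediaElementSource(commentary);

// Gain node on the main programme path so it can be ducked.
const programmeGain = ctx.createGain();
programmeSrc.connect(programmeGain);
programmeGain.connect(ctx.destination);
commentarySrc.connect(ctx.destination);

// Crude level detector: poll an AnalyserNode on the commentary track.
const analyser = ctx.createAnalyser();
analyser.fftSize = 256;
commentarySrc.connect(analyser);
const samples = new Uint8Array(analyser.fftSize);

setInterval(() => {
  analyser.getByteTimeDomainData(samples); // values centred on 128
  let peak = 0;
  for (const s of samples) {
    peak = Math.max(peak, Math.abs(s - 128) / 128);
  }
  const speaking = peak > 0.05;         // assumed speech threshold
  const target = speaking ? 0.25 : 1.0; // assumed duck level
  // Smooth the gain change to avoid audible clicks.
  programmeGain.gain.setTargetAtTime(target, ctx.currentTime, 0.1);
}, 100);
```

Keeping the two media elements in sync is the part the Web Audio API does not cover; that is what HTML's MediaController (e.g. giving both elements the same mediagroup attribute, as specced at the time) was intended to handle, hence the implementation note referenced above.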
Attachments
- application/pkcs7-signature attachment: S/MIME Cryptographic Signature
Received on Monday, 12 March 2012 15:43:46 UTC