- From: Joseph Berkovitz <joe@noteflight.com>
- Date: Mon, 11 Oct 2010 13:40:21 -0400
- To: Chris Grigg <chrisg@chrisgrigg.org>
- Cc: public-xg-audio@w3.org
- Message-Id: <3184ADA9-AD93-4593-BC16-6C341B075E8F@noteflight.com>
I think this is also a very useful feature. I see it as complementary to my proposal -- a media-marker-driven event, as opposed to a programmatically driven event. The Flash Video API has a comparable feature and event type which seems worth researching (I'm personally not familiar with it). I would imagine such an event could simply be dispatched from an active Audio or Video element, independently of the web audio framework.

...joe

On Oct 9, 2010, at 4:05 PM, Chris Grigg wrote:

> There's more than one way to do this sort of thing.
>
> In interactive audio worlds, it's common to use markers embedded in the playable media as the timing source for this sort of event. This makes it easy for media creators to precisely place events within any given audio file, using tools they're already familiar with. It also survives further editing of the source material (unlike e.g. a separate list of sample-frame indexes, which would have to be updated manually if the audio file were ever to change).
>
> So a player that's playing a piece of media containing at least one marker would generate some sort of code event at the time when the playhead passes through the marker. Depending on the choice of design, there could be one object for handling all such events, which branches on some sort of type ID in the marker contents; or, alternatively, there could be multiple handler objects, and the marker contents could indicate the desired handler for that particular event. In either case, the marker could also, if desired, include parameters to tell the handler what to do with greater specificity.
>
> -- Chris G.
>
>
> On Oct 8, 2010, at 4:48 AM, Joseph Berkovitz wrote:
>
>>> Hi Joe, this sounds like a useful idea. But I think I wouldn't implement it as an AudioNode, since it's not processing audio in any way, but instead as a method on AudioContext, something like:
>>>
>>> context.scheduleTimer(time, callbackFunction);
>>>
>>> The "time" parameter would be on the same timescale as the "currentTime" attribute of the context. Special care would need to be taken to ensure that excessively large numbers of event listeners don't get fired. Some kind of throttling mechanism would need to be implemented. This all has to be balanced with the throttling mechanism for other timers such as setTimeout().
>>
>> I was thinking it should be an AudioNode because this permits self-contained subgraphs to encapsulate their own event generation code as internal nodes. This is analogous to the other issue I've raised about encapsulation of scheduling concerns. In any case it's not an absolute requirement, and a method on the context works fine.
>>
>> I don't see that huge numbers of listeners should be needed; I see this as a small point feature that makes common periodic scheduling tasks much more straightforward.
>>
>> . . . Joe
>>
>> Joe Berkovitz
>> President
>> Noteflight LLC
>> 160 Sidney St, Cambridge, MA 02139
>> phone: +1 978 314 6271
>> www.noteflight.com

. . . Joe

Joe Berkovitz
President
Noteflight LLC
160 Sidney St, Cambridge, MA 02139
phone: +1 978 314 6271
www.noteflight.com
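A minimal TypeScript sketch of how the two approaches discussed above might look side by side. Both APIs are hypothetical names taken from this thread: scheduleTimer() was only proposed, and no "marker" event (or its markerType/params fields) exists on media elements; this is an illustration of the ideas, not a shipped interface.

    // --- Programmatic scheduling (the proposed scheduleTimer method) ---
    // A timer keyed to the AudioContext's own clock rather than wall-clock time.
    interface TimedAudioContext extends AudioContext {
      scheduleTimer(time: number, callback: () => void): void;
    }

    // The proposed method does not exist on real AudioContext objects;
    // the cast is only so the sketch type-checks.
    const context = new AudioContext() as TimedAudioContext;

    // Fire a callback one second from now in context time, e.g. to queue
    // the next measure of a score -- the "small point feature" for
    // common periodic scheduling tasks.
    context.scheduleTimer(context.currentTime + 1.0, () => {
      // start the next buffer, schedule the following callback, etc.
    });

    // --- Media-marker-driven events (markers embedded in the media) ---
    // A hypothetical event dispatched by an Audio element as the playhead
    // passes a marker authored into the file.
    interface MarkerEvent extends Event {
      markerType: string;              // type ID stored in the marker contents
      params: Record<string, unknown>; // optional parameters for the handler
    }

    const audio = new Audio("song-with-markers.ogg");

    // One handler for all marker events, branching on the marker's type ID,
    // as in the first design Chris describes.
    audio.addEventListener("marker", (e) => {
      const marker = e as MarkerEvent;
      switch (marker.markerType) {
        case "verse":
          console.log("verse begins", marker.params);
          break;
        case "loopPoint":
          audio.currentTime = Number(marker.params["target"] ?? 0);
          break;
      }
    });

    audio.play();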
Received on Monday, 11 October 2010 17:41:00 UTC