- From: Chris Rogers <crogers@google.com>
- Date: Tue, 19 Oct 2010 13:11:58 -0700
- To: Joseph Berkovitz <joe@noteflight.com>
- Cc: Joe Turner <joe@oampo.co.uk>, public-xg-audio@w3.org
- Message-ID: <AANLkTin=MCA+FF6d0Fz2HDKzTxgQ0_jrAbuoxPZFM5F4@mail.gmail.com>
Hi Joe,

You bring up some interesting points.

On Tue, Oct 19, 2010 at 4:34 AM, Joseph Berkovitz <joe@noteflight.com> wrote:

> Chris, can you explain how a JavaScriptAudioNode can be scheduled and also
> do sample-accurate generation? It seems to me that such a node has no way
> of knowing what exact sample frames it is generating. It can't look at
> context.currentTime because that reflects what is playing at the moment,
> not what the JSAudioNode is generating. The event contains no information
> relating to the time at which generated output will actually be rendered.
>
> Even if we do not adopt a Performance/scheduler-like proposal, I see a few
> different ways to rectify the problem with JSAudioNode:
>
> - introduce a "playbackTime" attribute into AudioProcessingEvent that
> indicates the AudioContext time at which the first sample of the generated
> output will be played back. This is, I think, the minimum necessary to
> allow the node to know what it's supposed to do relative to the time
> dimension.

Yes, I agree that we'll need this attribute. It will be very important for
allowing synchronization between direct JavaScript processing and the rest
of the audio graph.

> - [this adds to the above point] introduce a flavor of JSAudioNode that is
> only a generator, and which has a noteOn()/startAt() function. Time-shift
> the value of playbackTime by the start time of the node. This makes it
> extremely easy to code up such nodes, since they always "think" that they
> are simply generating batches of samples starting from time 0. The
> implementation overhead for this time shift is the management of batches
> of JS output that are out of phase with the main sample-generation cycle.
> That doesn't seem so bad.
>
> - [further adding to the above point] offset the noteOn() time by the
> context's current transform time offset that was in effect when noteOn()
> was called (relative to my previous proposal).
>
> - [further addition] provide a noteOff() that causes the node to stop
> being asked for output after some designated time, or allow a duration to
> be passed.

My thinking has been that JavaScriptAudioNodes would not be created and
destroyed at a fine time granularity. There are extremely complex issues
with buffering and scheduling between the real-time audio thread and the
main thread, where JS execution and page rendering occur. The timing of
event listeners is not 100% reliable, and at the buffer sizes necessary to
avoid audio glitching there are latency factors which come into play.

So, although I agree that we'll need a "playbackTime" attribute for
sample-accurate synchronization of JS processing with the rest of the audio
graph, I'm a bit concerned about the idea of dozens of JavaScriptAudioNodes
being created and destroyed in a short amount of time with the expectation
that it will all happen with perfect timing. Instead, I would propose that
JavaScriptAudioNodes be created up front and remain running as long as
needed. They can do whatever synchronized rendering they want, including
generating silence in between the times when events need to play.

I know you will probably consider this much less elegant than the system
you're proposing, but practical implementation details and reliability are
really important here. And this simpler approach does not limit, in any
respect, the types of applications that could be created.

Chris
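To make the long-running-node approach concrete, here is a minimal sketch.
It assumes the proposed playbackTime attribute is added to
AudioProcessingEvent; the AudioContext constructor and createJavaScriptNode()
follow the current draft (implementations may still require a vendor
prefix), and the event list is purely illustrative:

```javascript
// Sketch: one persistent JavaScriptAudioNode that renders scheduled notes,
// assuming the proposed AudioProcessingEvent.playbackTime attribute exists.
// Names follow the current draft and may change (e.g. webkitAudioContext).
var context = new AudioContext();
var node = context.createJavaScriptNode(1024, 1, 1); // used as a pure generator

// Hypothetical scheduled events: start time (in context time), duration, pitch.
var events = [
  { time: 1.0, duration: 0.5, frequency: 440 },
  { time: 2.0, duration: 0.5, frequency: 660 }
];

node.onaudioprocess = function (e) {
  var output = e.outputBuffer.getChannelData(0);
  var sampleRate = context.sampleRate;
  var t0 = e.playbackTime; // context time of the first sample we generate

  for (var i = 0; i < output.length; i++) {
    var t = t0 + i / sampleRate;
    var sample = 0;
    for (var j = 0; j < events.length; j++) {
      var ev = events[j];
      if (t >= ev.time && t < ev.time + ev.duration) {
        // Phase is computed from (t - ev.time), so each event "thinks" it
        // starts at time 0, as in the generator-flavored proposal above.
        sample += 0.3 * Math.sin(2 * Math.PI * ev.frequency * (t - ev.time));
      }
    }
    output[i] = sample; // stays 0 (silence) in between events
  }
};

node.connect(context.destination);
```

Because the node stays alive for the whole session, nothing depends on
fine-grained creation or destruction timing; scheduling accuracy comes
entirely from comparing playbackTime against the event list inside the
render callback.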
Received on Tuesday, 19 October 2010 20:12:29 UTC