Re: OfflineAudioContext specification gaps

On Sat, Mar 30, 2013 at 11:36 AM, Joseph Berkovitz <joe@noteflight.com> wrote:

> Hi all,
>
> I thought I would offer a few thoughts on OfflineAudioContext (OAC) to
> help identify areas that are not fully specified. My approach here is to
> ask: what are the observable aspects of a regular AudioContext (AC)'s state
> with respect to time, and how are those manifested in the offline case?
>
> I've offered some initial suggestions on how to approach these issues to
> stimulate discussion. They are not exact enough to use in the spec and I
> apologize in advance for their lack of clarity. I hope these are helpful in
> advancing our understanding.
>
> ----
>
> Issue: What is the overall algorithmic description of an OAC's rendering
> process?
>
> Suggestion: Prior to the start of OAC rendering, AudioNodes connected to
> it "do nothing" (that is, they experience no forward motion of performance
> time). Once an OAC begins rendering, the AudioNode graph upstream processes
> audio exactly as if a regular AC's performance time was moving forward
> monotonically, starting from a value of zero.  The performance time value
> of zero (with respect to AudioProcessingEvent.playbackTime, source
> start/stop times and AudioParam value-curve times) is mapped to the first
> sample frame in the audio output emitted by the OAC. Upon reaching the
> limit of the supplied length argument in the constructor, the rendering
> process ends and performance time does not move forward any more.
>

Yes, this is more or less my understanding.
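
For concreteness, here is a minimal sketch of that flow using the draft API
(the three-argument constructor and the oncomplete handler); everything
beyond those two pieces is illustrative:

    // Render 2 seconds of a 440 Hz tone offline. Performance time 0 maps
    // to the first sample frame of the rendered buffer.
    var sampleRate = 44100;
    var ctx = new OfflineAudioContext(1, 2 * sampleRate, sampleRate);

    var osc = ctx.createOscillator();
    osc.frequency.value = 440;
    osc.connect(ctx.destination);
    osc.start(0);  // performance time 0 == first output frame
    osc.stop(2);   // rendering also ends at the length limit (2 s here)

    ctx.oncomplete = function (e) {
      // Exactly `length` frames are delivered; performance time does not
      // advance past the limit.
      console.log('rendered ' + e.renderedBuffer.duration + ' seconds');
    };
    ctx.startRendering();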


>
> ----
>
> Issue: Can an OAC be used to render more than one audio result?
>
> Suggestion: No, it is a one-shot-use object (although it could render and
> deliver a single audio result in discrete chunks).
>

Agreed


>
> ----
>
> Issue: A regular AC's currentTime attribute progresses monotonically in
> lock-step with real time. What value does an OAC's currentTime present
> during the asynchronous rendering process?
>
> Suggestion: Upon calling startRendering(), the currentTime value becomes
> zero.
>

Actually, it should initially be zero even before startRendering() is
called, but will progress forward in time from zero when startRendering()
is called.


> During rendering the currentTime attribute of an OAC MAY increase
> monotonically to approximately reflect the progress of the rendering
> process, whose rate may be faster or slower than real time. But whenever
> any rendering-related event is dispatched (e.g. oncomplete or any future
> incremental rendering event), the currentTime value MUST reflect the exact
> duration of all rendered audio up to that point.
>

Sounds good to me.  It's actually a useful feature to be able to read the
.currentTime attribute in this way, because a progress UI can be displayed.
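
A sketch of that use, assuming the behavior described above (zero before
startRendering(), then approximately monotonic progress); progressBar and
the 120-second total are illustrative, not part of any API:

    // Poll currentTime during rendering to drive a progress display.
    var totalSeconds = 120;  // the duration passed to the constructor

    var timer = setInterval(function () {
      // Only approximate between rendering events, but fine for UI.
      progressBar.value = Math.min(ctx.currentTime / totalSeconds, 1);
    }, 250);

    ctx.oncomplete = function (e) {
      clearInterval(timer);
      progressBar.value = 1;
    };
    ctx.startRendering();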


>
> ----
>
> Issue: It is not clear whether one can modify the node graph feeding an
> OAC. However, synthesis graphs feeding a real-time AC's destination are
> typically constructed in a just-in-time fashion driven by
> window.setInterval(), including only source nodes which are scheduled in a
> reasonably short time window into the future (e.g. 5-10 seconds). Thus,
> graphs feeding a real time AC need never become all that large and the work
> of constructing these graphs can be broken into nice processing slices.
>
> Another way of saying this is that in an OAC there is no way to
> "granulate" the rendering process (at least, as long as we keep the
> approach that a single chunk of data is to be produced at the end). Thus,
> it seems that developers must assemble a single huge graph for the entire
> timespan to be rendered, at once. This is likely to tie up the main thread
> while application JS code constructs this huge graph.
>

I'm not too worried that the graph construction will take very long, even
for large graphs.
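
For reference, the just-in-time pattern Joe describes looks roughly like
this against a real-time context (the lookahead constant, the note spacing,
and noteBuffer are all illustrative):

    // Every 100 ms, schedule only the sources that fall within the next
    // few seconds; finished nodes are simply dropped and collected.
    var LOOKAHEAD = 5;     // seconds scheduled ahead of currentTime
    var nextNoteTime = 0;  // performance time of the next event

    window.setInterval(function () {
      while (nextNoteTime < audioCtx.currentTime + LOOKAHEAD) {
        var src = audioCtx.createBufferSource();
        src.buffer = noteBuffer;  // some pre-decoded AudioBuffer
        src.connect(audioCtx.destination);
        src.start(nextNoteTime);
        nextNoteTime += 0.5;
      }
    }, 100);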


>
> Suggestion: Dispatch periodic "rendering partially complete" events from
> an OAC for reasonably sized chunks of data. Typically these would be much
> larger than 128-frame blocks; they would be in a multi-second timeframe.
> During handling of these events (but at no other times), AudioNodes may be
> removed from or added to the OAC's graph. This not only solves the issue
> detailed above, but also handles arbitrarily long audio output streams.
>  Note that one cannot easily use a sequence of multiple OACs on successive
> time ranges to simulate this outcome because of effect tail times.
>

Especially in the case of rendering very long time periods (for example,
longer than 10 minutes), I think it's very interesting to have these
"partial render" events.  I'd like to make sure we have a good way to add
such an event, without necessarily requiring it in a V1 of the spec.
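
Purely to make the idea concrete, one hypothetical shape for such an event;
neither the handler name nor its fields exist in any draft:

    // HYPOTHETICAL, for illustration only: a periodic partial-render event.
    // Graph modifications would be legal only inside this handler.
    offlineCtx.onrenderprogress = function (e) {
      // e.renderedTime: exact duration rendered so far (per the
      // currentTime guarantee discussed above).
      scheduleNextChunk(offlineCtx, e.renderedTime);  // hypothetical helper
    };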



>
> Corollary: The consequences of modifying the graph of AudioNodes feeding
> the OAC during rendering are not defined EXCEPT when these modifications
> take place during these proposed events.
>

Yes


>
> ----
>
> Issue: The spatialization attributes (location, orientation, etc.)
> of AudioListener and PannerNode cannot be scheduled. In a regular AC these
> can be modified in real time during rendering (I think). However, there is
> no way in an OAC to perform the same modifications at various moments in
> offline performance time.
>

> Suggestion: Introduce events that trigger at given performance time
> offsets in an OAC? Replace these spatialization attributes with
> AudioParams? Simply stipulate that this can't be done?
>

That's a good point, and this is a limitation even in the normal
AudioContext case if very precise scheduling is desired for the
spatialization attributes.  I think we can consider making these be
controllable via AudioParams, but hopefully that's something we can
consider as separate from just getting basic OfflineAudioContext defined.
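
To illustrate the difference: today position is an immediate method call
with no notion of performance time, whereas an AudioParam version could use
the existing automation methods (positionX below is hypothetical, not in
the current spec):

    var panner = ctx.createPanner();

    // Today: an immediate, unschedulable call.
    panner.setPosition(0, 0, -1);

    // Hypothetical AudioParam version: schedulable at offline times.
    panner.positionX.setValueAtTime(-1, 0);
    panner.positionX.linearRampToValueAtTime(1, 5);  // glide over 5 seconds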



>
> ----
>
>
> .            .       .    .  . ...Joe
>
> *Joe Berkovitz*
> President
>
> *Noteflight LLC*
> Boston, Mass.
> phone: +1 978 314 6271
> www.noteflight.com
> "Your music, everywhere"
>
>

Received on Friday, 3 May 2013 22:43:03 UTC