Re: Proposal for fixing race conditions

On Thu, Jun 20, 2013 at 7:32 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> We need to avoid having implementation details (e.g. whether or when data
> is copied internally) affect observable output. This can be an issue when
> JS passes an array to an API (or gets an array from an API) and later
> modifies the array. We need to specify what happens in all such cases.
>
> I believe these are the remaining issues not already addressed in the spec:
>
> 1) AudioContext.createPeriodicWave(Float32Array real, Float32Array imag)
> I propose copying the data during the createPeriodicWave call.
>

The PeriodicWave objects have their own internal representation and don't
need to be affected by any subsequent changes to the Float32Arrays.  The
good news is that these PeriodicWave objects, once created, can be shared
across many different instances of OscillatorNode, so they are
memory-efficient.  In other words, it's not necessary to create a unique
PeriodicWave for each new OscillatorNode.
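As a rough sketch of that sharing pattern (the `makeSawCoefficients` helper, the harmonic count, and the `ctx`/`oscillators` names below are hypothetical, purely for illustration):

```javascript
// Build Fourier coefficients for a band-limited sawtooth once.
// (Partial sawtooth: imag[k] = 1/k for k >= 1; real parts stay zero.)
function makeSawCoefficients(nHarmonics) {
  const real = new Float32Array(nHarmonics + 1);
  const imag = new Float32Array(nHarmonics + 1);
  for (let k = 1; k <= nHarmonics; k++) {
    imag[k] = 1 / k;
  }
  return { real, imag };
}

// In a page with an AudioContext one might then write:
//   const { real, imag } = makeSawCoefficients(8);
//   const wave = ctx.createPeriodicWave(real, imag); // data copied here
//   for (const osc of oscillators) {
//     osc.setPeriodicWave(wave); // one PeriodicWave shared by all oscillators
//   }
```

Since createPeriodicWave copies on creation, the caller is free to reuse or mutate the coefficient arrays afterwards.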



>
> 2) AudioParam.setValueCurveAtTime(Float32Array values, double startTime,
> double duration)
> I propose copying the data during the setValueCurveAtTime call.
>

This would be extremely inefficient, because these curves need to be shared
across many different "instances".  One example is where the curve is used
as a grain envelope, amplitude envelope, or custom filter envelope.  In
granular synthesis, a large number of grain instances can be created per
second, so it's important to be able to share these curves without
incurring the copying cost.  Similarly, with a custom filter envelope there
can be many different synthesis "note" instances playing at once, so I
can't agree with this change because of the large performance cost.
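For instance, a single grain envelope can be computed once and reused by every grain; a minimal sketch (the Hann-window helper and the `gainNode`/`sharedEnvelope` names are hypothetical, not part of the API):

```javascript
// Compute a raised-cosine (Hann) grain envelope once; every grain can then
// reuse this one Float32Array rather than paying a copy per grain.
function makeHannEnvelope(length) {
  const curve = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    curve[i] = 0.5 * (1 - Math.cos((2 * Math.PI * i) / (length - 1)));
  }
  return curve;
}

// Per grain (browser sketch):
//   gainNode.gain.setValueCurveAtTime(sharedEnvelope, startTime, grainDuration);
```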


>
> 3) WaveShaperNode.curve
> I propose copying the data when the attribute is assigned to.
>

Similar to the AudioParam curves, it would also be inefficient to copy
here, although the problem isn't quite as bad (compared with the grain
envelope case).
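The same shape-once, assign-many pattern applies here; a hedged sketch (the soft-clip helper below is illustrative only):

```javascript
// Compute a soft-clipping transfer curve once, then assign the same array
// to many WaveShaperNodes. With copy-on-assignment semantics, every
// `shaper.curve = sharedCurve` would duplicate this (possibly large) table.
function makeSoftClipCurve(samples) {
  const curve = new Float32Array(samples);
  for (let i = 0; i < samples; i++) {
    const x = (2 * i) / (samples - 1) - 1; // map index to [-1, 1]
    curve[i] = Math.tanh(2 * x);
  }
  return curve;
}
```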


>
> 4) AudioBuffer.getChannelData(unsigned long channel)
> This is the tricky one. Proposal:
> Define a spec-level operation "acquire AudioBuffer contents" which
> delivers the current contents of the AudioBuffer to whatever operation
> needs them, replaces the AudioBuffer's internal Float32Arrays with new
> Float32Array objects containing copies of the data, and neuters the
> previous Float32Arrays.
> [IMPORTANT: Implementations can and should optimize this operation so that
> a) multiple "acquire contents" operations on the same AudioBuffer (with no
> intervening calls to getChannelData) return the same shared data; b)
> replacing the internal Float32Arrays with new Float32Arrays happens lazily
> at the next getChannelData (if any); and thus c) no data copying actually
> happens during an "acquire contents" operation. Let me know if this is
> unclear; it's terrifically important.]
> Then:
> -- Every assignment to AudioBufferSourceNode.buffer "acquires the
> contents" of that buffer and the result is what gets used by the
> AudioBufferSourceNode.
>

One of the most important features of the Web Audio API is to be able to
efficiently trigger many overlapping short sounds.  This is very common in
games, interactive applications, and musical applications.  In many cases,
these individual "sound instances" are based on the same underlying audio
sample data.  So it's very important to not require copying PCM data.
These PCM buffers can also be quite large (multiple megabytes), and
portions of a larger buffer can be scheduled as "sound grains" (or "audio
sprites").  So I can't agree to requiring any kind of copying here.
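To make the cost concrete, here is a hypothetical grain scheduler (the helper and its parameter names are illustrative; only the browser-side snippet in the comment touches the Web Audio API):

```javascript
// Compute (when, offset) pairs for grains spread across a shared buffer.
function scheduleGrains(now, bufferDuration, grainDur, grainsPerSec, count) {
  const grains = [];
  const span = Math.max(bufferDuration - grainDur, 0.001);
  for (let i = 0; i < count; i++) {
    grains.push({
      when: now + i / grainsPerSec,   // start time in seconds
      offset: (i * grainDur) % span,  // read position within the buffer
    });
  }
  return grains;
}

// Browser sketch: hundreds of grains per second, all reading one buffer.
//   for (const g of scheduleGrains(ctx.currentTime, buf.duration, 0.05, 100, 200)) {
//     const src = ctx.createBufferSource();
//     src.buffer = buf;               // copying megabytes of PCM here,
//     src.connect(ctx.destination);   // once per grain, would be prohibitive
//     src.start(g.when, g.offset, 0.05);
//   }
```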




> -- Immediately after the dispatch of an AudioProcessingEvent, the UA
> "acquires the contents" of the event's outputBuffer. (This is similar to
> what the spec already says; however, the "acquire contents" operation
> neuters existing arrays (which is observable), which lets the UA avoid a
> copy.)
>

I assume what you mean here is that immediately after the "onaudioprocess"
handler returns, then the UA "acquires the contents" of what was written to
outputBuffer?  This seems like it could be ok.
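A sketch of the handler in question (the sine-fill helper and the `phase`/`node` names are hypothetical):

```javascript
// Fill one output channel with a sine wave; returns the phase so it can be
// carried across successive processing blocks.
function fillWithSine(channel, phase, freq, sampleRate) {
  for (let i = 0; i < channel.length; i++) {
    channel[i] = Math.sin(phase);
    phase += (2 * Math.PI * freq) / sampleRate;
  }
  return phase;
}

// Browser sketch:
//   node.onaudioprocess = (e) => {
//     phase = fillWithSine(e.outputBuffer.getChannelData(0), phase, 440, ctx.sampleRate);
//   };
//   // Under the proposal, the UA "acquires the contents" of outputBuffer
//   // right after the handler returns, neutering the channel arrays.
```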



> -- Every assignment to ConvolverNode.buffer "acquires the contents" of
> that buffer for use by the ConvolverNode.
>

For ConvolverNode, I think I agree with this.  Let me make sure I
understand what you mean: my interpretation is that when
ConvolverNode.buffer is set, an internal representation is maintained.
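In other words, something like the following (the impulse-response generator is a hypothetical example; only the commented lines touch the API):

```javascript
// Generate an exponentially decaying noise burst to use as an impulse
// response. Under "acquire contents" semantics, the ConvolverNode snapshots
// this data when `convolver.buffer` is assigned; later writes to the
// Float32Array would not affect the running convolution.
function makeDecayIR(length, decay) {
  const ir = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    ir[i] = (Math.random() * 2 - 1) * Math.exp((-decay * i) / length);
  }
  return ir;
}

// Browser sketch (`ctx` and `convolver` assumed):
//   const irBuffer = ctx.createBuffer(1, 44100, 44100);
//   irBuffer.getChannelData(0).set(makeDecayIR(44100, 6));
//   convolver.buffer = irBuffer; // contents acquired here
```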


>
> Additional minor comments:
> OfflineAudioCompletionEvent.renderedBuffer should specify that a fresh
> AudioBuffer is used for each event. AudioProcessingEvent.inputBuffer and
> outputBuffer should specify that fresh AudioBuffers are used for each event.
>

I'll have to think about this one a little more.  That one might be
possible...


> The text "This AudioBuffer is only valid while in the scope of the
> onaudioprocess function. Its values will be meaningless outside of this
> scope." is itself meaningless :-). If we specify that inputBuffer is always
> a fresh AudioBuffer object, I think nothing else needs to be said.
>
> All these comments are what we've actually implemented, except for
> createPeriodicWave which isn't fully implemented yet.
>
> Rob
>

Received on Monday, 24 June 2013 22:07:08 UTC