W3C home > Mailing lists > Public > public-audio@w3.org > July to September 2013

Data racing proposals

From: Chris Wilson <cwilso@google.com>
Date: Tue, 30 Jul 2013 17:46:11 -0700
Message-ID: <CAJK2wqW6Aj2Mnr1EWsZ7fTCJ69acGagm9QH_F_KDergOHmQE4w@mail.gmail.com>
To: "public-audio@w3.org" <public-audio@w3.org>

I want to be clear that I still believe (as does Chris) that we should not
change the current API at this time to address memory sharing across the
main thread and audio thread with respect to AudioBuffer contents.
(WaveShaperNode, etc., and worker posting of AudioBuffers are not included
in this statement.)  Although we respect the design constraint of
preventing JS thread interference in memory operations, we believe that the
memory sharing of buffers to the audio thread is not a violation of this
spirit, and is worth the benefit (and common practice in audio APIs).  If
the developer community shows that this is a problem in practice,
particularly as another implementation adds to the corpus of experience, we
would of course be open to reopening this question at a later date.

However, independently I'd like to comment on the various proposals.  In
the event the group believes we should be preventing memory sharing now, I
would like to suggest an alternate proposal (though I want to be clear that
my strong preference is still to not change the API without proof this is a
problem in practice).

I believe this proposal has a fair amount in common with ROC's proposal,
and some with Jer's proposal - however, I've been exclusively looking at
this from the point of view of the web developer and how they would use the
API.  The problem I see with ROC's proposal (and thanks to ROC and Ehsan
for patiently elaborating on that) is that it is not clear to the developer
when a copy operation is performed, and they need to do something different
with the ArrayBuffer returned from getChannelData (i.e. re-assign it to
another AudioBuffer); with Jer's, it is much more obvious, although in my
opinion it's TOO obvious - the developer will have to effectively create
copies, or at least repeatedly be very explicit about handing data across,
in order to do common operations.

This proposal is:
AudioBuffer's contents continue to be exposed and used in the same way
prior to playback.  Implementations continue to decode or create
AudioBuffers in the same way, and continue to read (and write) contents
with the same getChannelData method, which gives the main UI thread
direct access to the underlying memory buffers.
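
Concretely, the unchanged pre-playback flow is the important baseline here.
A minimal stand-in sketch (MockAudioBuffer is an illustrative name, not the
real implementation) of the property this preserves - getChannelData returns
a view onto the buffer's actual storage, not a copy:

```javascript
// Minimal stand-in for today's AudioBuffer behavior (illustrative only,
// not the real Web Audio implementation): getChannelData hands back the
// buffer's actual backing store, so a main-thread write is visible to
// anything else reading the same memory.
class MockAudioBuffer {
  constructor(numberOfChannels, length) {
    this._channels = Array.from({ length: numberOfChannels },
                                () => new Float32Array(length));
  }
  getChannelData(channel) {
    return this._channels[channel];     // same memory, no copy
  }
}

const buf = new MockAudioBuffer(1, 4);
const data = buf.getChannelData(0);
data[0] = 0.5;                          // write through the returned view...
console.log(buf.getChannelData(0)[0]);  // ...0.5: the shared storage changed
```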

However: once playback of the AudioBuffer has been scheduled (i.e., "start"
is called on any BufferSourceNode that has its .buffer assigned to this
AudioBuffer), the AudioBuffer's internal Float32Arrays are neutered and no
longer available for direct access [a][b].  Subsequent attempts to access
the ArrayBuffer contents will throw; subsequent calls to getChannelData
will also throw.  [c]  For the scenarios where this data needs to be
accessed after playback has been scheduled, however (e.g. waveform
display), a new method should be added on AudioBuffer:

    partial interface AudioBuffer {
        // Fulfills with a value of type AudioBuffer, with the same
        // channel layout as the original AudioBuffer.
        Promise clone( optional begin, optional end );
    };

which will create a copy of [a slice of] the data, as a separate
AudioBuffer (which, of course, is not neutered to begin with).  The slicing
capability is critical for keeping memory overhead low when only small
sections of audio are requested while the audio is being played - such as
for live waveform display.
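
The proposed behavior can be sketched end to end.  This is a hedged toy
model, not an implementation: MockAudioBuffer and scheduleStart are
illustrative names, and the transfer trick merely simulates the memory
moving to the audio thread.

```javascript
// Hedged sketch of the proposed semantics (MockAudioBuffer and
// scheduleStart are illustrative names, not part of any real API).
// Scheduling playback detaches ("neuters") the backing ArrayBuffers;
// getChannelData then throws, and clone() asynchronously copies
// [begin, end) into a fresh, un-neutered buffer.
class MockAudioBuffer {
  constructor(channels) {
    this._channels = channels;       // one Float32Array per channel
    this._audioThreadCopy = null;    // where the data "lives" after start
  }
  getChannelData(i) {
    if (this._audioThreadCopy) {
      throw new Error("AudioBuffer contents have been neutered");
    }
    return this._channels[i];
  }
  scheduleStart() {
    // Pretend the audio thread now owns the memory: keep a private copy,
    // then detach the main-thread views by transferring their buffers
    // (structuredClone with a transfer list detaches the source).
    this._audioThreadCopy = this._channels.map(ch => ch.slice());
    for (const ch of this._channels) {
      structuredClone(ch.buffer, { transfer: [ch.buffer] });
    }
  }
  async clone(begin = 0, end = undefined) {
    const src = this._audioThreadCopy ?? this._channels;
    return new MockAudioBuffer(src.map(ch => ch.slice(begin, end)));
  }
}

const buf = new MockAudioBuffer([Float32Array.of(1, 2, 3, 4)]);
buf.scheduleStart();                 // neuters the main-thread views
// buf.getChannelData(0) would now throw; a sliced clone still works:
buf.clone(1, 3).then(copy => {
  console.log(Array.from(copy.getChannelData(0)));  // [2, 3]
});
```

Note how the slice arguments keep the copy small - the waveform-display case
only ever clones the window it is about to draw.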

Additionally: in the case when the AudioBuffer's contents are transferred
to another thread, the AudioBuffer is again neutered and no longer
available for direct access.  In this case, once neutered in this way, the
AudioBuffer cannot be played (but, of course, you could request a clone to
be made).
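
The "neutering" both cases rely on is the same detachment that transferring
an ArrayBuffer already causes today.  A quick sketch of that effect, using
structuredClone's transfer list (which detaches the source just as
postMessage-with-transfer to a worker does):

```javascript
// Transferring an ArrayBuffer detaches it on the sending side - the same
// mechanism this proposal reuses.  structuredClone with a transfer list
// shows the effect without spinning up an actual worker.
const samples = new Float32Array([0.1, 0.2, 0.3]);
const moved = structuredClone(samples.buffer, { transfer: [samples.buffer] });

console.log(samples.buffer.byteLength);       // 0 - the original is detached
console.log(samples.length);                  // 0 - its views are neutered too
console.log(new Float32Array(moved).length);  // 3 - the data lives on elsewhere
```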

In both of these cases, note the use of an asynchronous Promise for the
cloning operation; in some implementations, the memory may have moved
across thread boundaries, and need to be copied back across those
boundaries.

In practice - if the developer has not been accessing buffer contents after
beginning to play, there will be zero effect.  If the developer needs to
get read access to the memory after beginning playback (e.g. the waveform
display case), they can do so in a chunked fashion, minimizing memory
overhead.  If the developer wants to write into a buffer that is being
played back, of course, they can't directly do that.

If only we could selectively make ArrayBufferViews readonly...

-Chris

[a] Optional modifier - once playback has ended, you could un-neuter the
ArrayBuffer contents.  I'll resist the temptation to come up with a
creative term for this.  This seems challenging to define, however (as
poorly as neutering appears to be defined, I could find no precedent for
un-neutering), so it is not part of the main proposal.
[b] Optional modifier - make the neutering occur when the AudioBuffer is
assigned to any BufferSourceNode's .buffer, as in ROC's original proposal.
 I prefer to reserve this as long as possible, but I do not feel this is a
critical point.
[c] It wasn't clear if we need an additional "I've been neutered" attribute
exposed on AudioBuffer.  I opted for simplicity here, but it might be a
good idea.
Received on Wednesday, 31 July 2013 00:46:38 UTC
