
Re: AudioBuffer mutability

From: Marcus Geelnard <mage@opera.com>
Date: Tue, 30 Oct 2012 09:35:55 +0100
To: public-audio@w3.org, "Robert O'Callahan" <robert@ocallahan.org>
Message-ID: <op.wmzgt5j5m77heq@mage-speeddemon>
There are a couple of bugs dealing with this:

* https://www.w3.org/Bugs/Public/show_bug.cgi?id=17342
* https://www.w3.org/Bugs/Public/show_bug.cgi?id=17401

I agree that mutable AudioBuffers that are in use by the audio engine are a  
problem. It seems to me that the current AudioBuffer solution ties the  
audio engine and the client code very tightly together, making it  
difficult to achieve a concurrency-safe separation between the two.
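
To make the hazard concrete, here is a minimal sketch (plain typed arrays,  
not the actual Web Audio API) of the two possible semantics for a buffer  
that script mutates after playback has started:

```javascript
// getChannelData() returns a live Float32Array view of the sample
// storage, so any later script writes are visible through that reference.
const samples = new Float32Array([0.5, 0.25, 0.125]);

// Semantics A: the engine snapshots the samples when playback starts.
const engineCopy = samples.slice();

// Semantics B: the engine keeps reading through the shared reference.
const engineView = samples;

// Script mutates the buffer after playback has (conceptually) started.
samples[0] = -1.0;

// With a copy, the mutation is never heard; with a shared view it may be
// heard, partially heard, or ignored, depending on timing and memory model.
console.log(engineCopy[0]); // 0.5
console.log(engineView[0]); // -1.0
```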

I can't help thinking about OpenGL textures and vertex buffers as an  
analogy: for instance, a texture object can be written to (the data is  
copied from client memory at call time), and the specification is very  
clear about what happens, so that no race conditions can occur.

An alternative design for the Audio API could have been to have two  
interfaces: the current AudioBuffer interface for accessing the data  
programmatically, and another interface for engine-side audio data that  
can be read from and written to an AudioBuffer in a controlled manner.  
Apart from solving the concurrency issues, this would also make it  
possible to use internal, machine-optimal data formats (memory alignment,  
float/integer precision, etc.), and even to transfer the data to special  
memory (e.g. for doing the signal processing on a dedicated DSP, such as  
an SPE in the Cell architecture).
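
A hypothetical sketch of that split (the names `EngineAudioData`, `upload`  
and `download` are invented for illustration, not a concrete proposal),  
with explicit copy-based transfer along the lines of an OpenGL texture  
upload:

```javascript
// Hypothetical two-interface design: script-side data lives in ordinary
// Float32Arrays, while the engine-side resource is opaque and only
// reachable through explicit, well-defined copies.
class EngineAudioData {
  constructor(length) {
    // Engine-side storage: an implementation would be free to pick any
    // internal format (alignment, precision, DSP-local memory) here,
    // because script never sees this array directly.
    this._samples = new Float32Array(length);
  }

  // Copy samples *into* the engine at a well-defined point in time.
  upload(channelData) {
    this._samples.set(channelData); // snapshot; later writes are invisible
  }

  // Copy samples back *out* for script-side inspection.
  download() {
    return this._samples.slice();
  }
}

// Mutating the script-side array after upload() cannot affect the
// engine-side copy, so there is no shared state to race on.
const scriptSide = new Float32Array([0.5, 0.5]);
const engineSide = new EngineAudioData(2);
engineSide.upload(scriptSide);
scriptSide[0] = 0.0;
console.log(engineSide.download()[0]); // 0.5
```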

In any event, I think we must solve the concurrency issues one way or  
another.


On 2012-10-29 11:31:02, Robert O'Callahan <robert@ocallahan.org> wrote:

> If someone attaches an AudioBuffer to an AudioBufferSourceNode, then  
> calls start(T) on the node, and then modifies the contents of the  
> AudioBuffer (by calling getChannelData() and then modifying the returned  
> array), do the changes affect what will be played by the  
> AudioBufferSourceNode?
>
> If they don't, we have to make a copy of the buffer in the start() call,  
> in case the buffer is mutated later.
>
> If they do, then we are exposed to race conditions where, depending on  
> exactly where the playback position is when script modifies the array,  
> how much audio is buffered and how everything is implemented, changes to  
> the buffer may or may not be played. Even worse, on machines with  
> non-sequentially-consistent memory (or very aggressive optimizing JS  
> compilers), some subsets of the modifications may be played while  
> others are mysteriously ignored.
>
> Both options are undesirable. If AudioBuffer didn't provide direct array  
> access to its samples, we'd have better options. Is it too late to  
> change that?
>
> Rob
>
> --  
> Jesus called them together and said, “You know that the rulers of the  
> Gentiles lord it over them, and their high officials exercise authority  
> over them. Not so with you. Instead, whoever wants to become great among  
> you must be your servant, and whoever wants to be first must be your  
> slave — just as the Son of Man did not come to be served, but to serve,  
> and to give his life as a ransom for many.” [Matthew 20:25-28]

Marcus Geelnard
Core graphics developer
Opera Software ASA
Received on Tuesday, 30 October 2012 08:36:30 UTC
