Re: memory footprint of AudioBuffer data copies

Um.  I'm seriously confused now.  If you're saying you can set() into a
range of a currently-playing buffer, then that seems to introduce precisely
the same timing race opportunities that the current model has; how is this
different?

If you're implementing a ring of buffers, and synthesizing (or copying)
prior to them being required, that's fine; but that's no different from what
we have today.  The developer would still have to carefully manage timing
and latency so as not to underrun or overrun their ring buffer.
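For what it's worth, the bookkeeping that careful management implies can be sketched in plain JavaScript.  (RingScheduler and its method names are made up for illustration, not proposed API; the actual fill/schedule calls against the audio graph are omitted.)

```javascript
// Sketch of the timing bookkeeping a ring-buffer scheduler needs.
// The ring holds `ringSize` chunk slots; writing chunk n reuses the
// slot that chunk (n - ringSize) occupied, so that slot may only be
// refilled once the audio clock has passed that earlier chunk's end.
class RingScheduler {
  constructor(sampleRate, chunkFrames, ringSize) {
    this.sampleRate = sampleRate;
    this.chunkFrames = chunkFrames;
    this.ringSize = ringSize;
  }

  // Start time (seconds on the audio clock) of chunk n.
  startTimeOf(n) {
    return (n * this.chunkFrames) / this.sampleRate;
  }

  // True if chunk n's slot is free to refill at `currentTime`.
  // Refilling too early overruns (scribbles on audible data);
  // failing to refill before startTimeOf(n) underruns (glitches).
  canRefill(n, currentTime) {
    const prev = n - this.ringSize;
    return prev < 0 || currentTime >= this.startTimeOf(prev + 1);
  }
}
```

The same arithmetic applies whether the chunks are synthesized or copied; the race window is exactly the interval between when a slot becomes reusable and when its next occupant must be audible.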

This highlights my point, that there are inherent race hazard possibilities
in ANY asynchronous multi-threaded environment.  I've been opposed to
making a change here primarily because I think "shared memory bad,
neutering good" is about as true as "four legs good, two legs bad";
certainly, creating an API that makes race side effects common would be
bad, but I don't think we have that - and the need to avoid glitching in
audio, together with the constraints on the main thread in the environment
we have today, make this API an interesting challenge to design.


On Tue, Jul 30, 2013 at 8:18 AM, Jer Noble <jer.noble@apple.com> wrote:

>
> On Jul 30, 2013, at 6:50 AM, Joseph Berkovitz <joe@noteflight.com> wrote:
>
> Jer,
>
> One of the main issues raised by Chris Wilson with respect to your
> proposal was memory footprint with respect to bulk synthesis of large
> AudioBuffers, and the overhead for copying them.
>
> Let me ask a leading question: isn't one of the side benefits of the
> AudioBufferChannel.set() method the fact that it allows one to populate an
> AudioBuffer in chunks of a manageable size? It seems to me that if bulk
> synthesis can be performed in reasonable-size chunks, each of which is
> passed to the set() method to be copied into the corresponding subrange of
> the buffer, the maximum memory overhead due to copying can be constrained
> to 2x the chunk size.
>
> If true, this doesn't completely remove Chris W's concern but it does mean
> that a chunked approach to buffer synthesis can mitigate overhead in a
> low-memory environment.
>
>
> Yep.
>
> What's more, you probably wouldn't want to synthesize the AudioBuffers
> entirely in advance either.  You'd synthesize a few chunks, schedule them
> for their specific times, and as they finished playing, you would
> synthesize additional chunks.  You might even implement a ringbuffer
> structure, so that later chunks imposed no additional memory or GC costs.
>  In that way, the 2x chunk-size overhead would be mitigated, and the
> overall outstanding buffer size would be limited to the size of your
> ringbuffer.
>
> However, this presumes an advanced developer who is concerned about memory
> use.  A naive developer may still hit the 2x overall buffer cost by
> decoding everything into a Float32Array up front, then copying into an
> AudioBuffer.
>
> -Jer
>
>
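For concreteness, the chunked-population pattern Joe describes might look roughly like this, using a plain Float32Array.set() as a stand-in for the proposed AudioBufferChannel.set() (the sine synthesis is just a placeholder for whatever generates each chunk):

```javascript
// Populate a large channel in fixed-size chunks, so transient memory
// stays at one chunk (plus the copy in flight) rather than a second
// full-length array.
const sampleRate = 48000;
const totalFrames = sampleRate * 10;       // a 10-second buffer
const chunkFrames = 4096;

const channel = new Float32Array(totalFrames); // the big destination
const chunk = new Float32Array(chunkFrames);   // reused scratch buffer

// Placeholder synthesis: a 440 Hz sine, phase-continuous across chunks
// because it is computed from the absolute frame index.
function synthesizeInto(dst, startFrame) {
  for (let i = 0; i < dst.length; i++) {
    dst[i] = Math.sin(2 * Math.PI * 440 * (startFrame + i) / sampleRate);
  }
}

for (let offset = 0; offset < totalFrames; offset += chunkFrames) {
  const n = Math.min(chunkFrames, totalFrames - offset); // last chunk may be short
  const view = chunk.subarray(0, n);
  synthesizeInto(view, offset);
  channel.set(view, offset); // copy the chunk into its subrange
}
```

The peak extra memory here is bounded by the chunk, not the buffer, which is the mitigation being discussed; a naive approach that synthesizes the whole Float32Array first and then copies it pays the full 2x cost.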

Received on Tuesday, 30 July 2013 15:38:53 UTC