- From: Jer Noble <jer.noble@apple.com>
- Date: Tue, 30 Jul 2013 08:18:20 -0700
- To: Joseph Berkovitz <joe@noteflight.com>
- Cc: WG WG <public-audio@w3.org>
- Message-id: <7A310D3D-8B55-440F-94F7-C7DC6722258D@apple.com>
On Jul 30, 2013, at 6:50 AM, Joseph Berkovitz <joe@noteflight.com> wrote:

> Jer,
>
> One of the main issues raised by Chris Wilson with respect to your proposal was the memory footprint of bulk synthesis of large AudioBuffers, and the overhead of copying them.
>
> Let me ask a leading question: isn't one of the side benefits of the AudioBufferChannel.set() method the fact that it allows one to populate an AudioBuffer in chunks of a manageable size? It seems to me that if bulk synthesis can be performed in reasonably sized chunks, each of which is passed to the set() method to be copied into the corresponding subrange of the buffer, the maximum memory overhead due to copying can be constrained to 2x the chunk size.
>
> If true, this doesn't completely remove Chris W's concern, but it does mean that a chunked approach to buffer synthesis can mitigate overhead in a low-memory environment.

Yep. What's more, you probably wouldn't want to synthesize the AudioBuffers entirely in advance either. You'd synthesize a few chunks, schedule them for their specific times, and as they finished playing, you would synthesize additional chunks. You might even implement a ringbuffer structure, so that later chunks imposed no additional memory or GC costs. That way, both the 2x chunk-size copying overhead would be mitigated, and the total outstanding buffer memory would be limited to the size of your ringbuffer.

However, this presumes an advanced developer who is concerned about memory use. A naive developer may still incur the 2x overall buffer cost by decoding everything into a Float32Array up front, then copying it into an AudioBuffer.

-Jer
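A minimal sketch of that kind of chunked, ringbuffer-style scheduling, written against the currently shipping AudioContext / AudioBufferSourceNode / getChannelData() API rather than the proposed AudioBufferChannel.set(); synthesizeChunk, chunkFrames, and ringSize are placeholders for whatever the application would actually use:

// A small ring of AudioBuffers is filled ahead of the playhead and recycled
// as each chunk finishes playing, so peak memory stays near
// (ringSize * chunkFrames) rather than the length of the whole piece.
const ctx = new AudioContext();

const chunkFrames = 16384;                          // frames per chunk (placeholder value)
const ringSize = 4;                                 // chunks kept alive at once (placeholder value)
const chunkDuration = chunkFrames / ctx.sampleRate; // seconds per chunk

// Pre-allocate the ring of reusable AudioBuffers.
const ring: AudioBuffer[] = [];
for (let i = 0; i < ringSize; i++) {
  ring.push(ctx.createBuffer(1, chunkFrames, ctx.sampleRate));
}

// Placeholder synthesis routine: fills `target` with the samples for the
// chunk starting at `startFrame` (here, just a decaying sine tone).
function synthesizeChunk(target: Float32Array, startFrame: number): void {
  for (let i = 0; i < target.length; i++) {
    const t = (startFrame + i) / ctx.sampleRate;
    target[i] = Math.sin(2 * Math.PI * 440 * t) * Math.exp(-0.5 * t);
  }
}

let nextChunkIndex = 0;
const startTime = ctx.currentTime + 0.1;            // small scheduling headroom

function scheduleChunk(buffer: AudioBuffer): void {
  const chunkIndex = nextChunkIndex++;

  // Overwrite the recycled buffer in place: no new sample allocation, no GC churn.
  synthesizeChunk(buffer.getChannelData(0), chunkIndex * chunkFrames);

  // AudioBufferSourceNodes are single-use, so a new one is created per chunk,
  // but it is tiny compared to the sample data it plays.
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(startTime + chunkIndex * chunkDuration);

  // When this chunk finishes playing, its buffer is free again:
  // refill it and schedule it further ahead.
  source.onended = () => scheduleChunk(buffer);
}

// Prime the ring: the first ringSize chunks are synthesized and scheduled up
// front; everything after that is driven by the onended callbacks.
for (const buffer of ring) {
  scheduleChunk(buffer);
}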
Received on Tuesday, 30 July 2013 15:19:04 UTC