On Jul 18, 2013, at 1:07 AM, Jussi Kalliokoski <jussi.kalliokoski@gmail.com> wrote:
> > So how about a JavaScript-based Opus decoder that uses an AudioBuffer-backed ring buffer to do memcopy-free decoding and playback by decoding directly into a mutable AudioBuffer.
>
> I'm not sure I understood what you're saying here correctly, but you seem to be implying that the decoder would run in the main thread, which is quite suboptimal. What we do in aurora.js (the JS audio decoding framework) is that a worker handles the decoding and streaming to avoid clogging the main thread and to lower the risk of glitches, and the worker uses transferable ArrayBuffers to avoid copying when juggling between threads. What the main thread does is feed that data into a ScriptProcessorNode and, IIRC, pass the ArrayBuffer back to the worker. The worker at no point even knows what an AudioBuffer is. So far this seems like the most optimal solution, but it already includes a copy of the data.
Wherever the decoding occurs, whether on the main thread or in a worker, the "immutable ArrayBuffer" option will require two buffers to exist simultaneously: the source ArrayBuffer and the destination AudioBuffer. This can double the decoder's memory usage.
> Jer:
> > but "except when creating an AudioBuffer" is a very large caveat.
>
> I think this should be a case of neutering too. What I have in mind is that creating an AudioBuffer out of a Float32Arrays would be an asynchronous operation where the arrays become de-neutered when the operation is complete, e.g.
> Promise AudioContext.createBuffer(sequence<Float32Array>, sampleRate)
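Roughly, the proposed factory would behave like the following shim. The resolved AudioBuffer-like object is a hypothetical stand-in; it merely aliases the given Float32Arrays, whereas the actual proposal would neuter them for the duration of the operation:

```javascript
// Hypothetical shim for the proposed static, asynchronous factory.
// Resolves with an AudioBuffer-like object whose channel data aliases
// the given Float32Arrays.
function createBuffer(channels, sampleRate) {
  return Promise.resolve({
    numberOfChannels: channels.length,
    length: channels[0].length,
    sampleRate: sampleRate,
    getChannelData(i) { return channels[i]; },
  });
}

createBuffer([new Float32Array(128)], 44100)
  .then((buf) => console.log(buf.numberOfChannels, buf.length)); // 1 128
```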
This is not a reasonable place to add an asynchronous API. This is akin to making ArrayBuffer.slice() an asynchronous operation.
> Note that I also suggest that the method be static. I don't see why AudioBuffers need to be linked to a specific AudioContext; it makes things harder for libraries that need to know the AudioContext instance in order to create an AudioBuffer.
This, however, I agree with.
-Jer