OfflineAudioContext Concerns

The spec is still fairly vague about OfflineAudioContext in general, and I'm not sure whether it's considered "finished", but I currently have two fairly major concerns with it.

First, it really should provide some way to receive rendered data block-by-block rather than in a single "oncomplete" callback.  Otherwise, the memory footprint grows linearly with the rendered length.  I don't think this would be a major burden for implementors, and it would make the API tremendously more useful.  Currently it's just not feasible to mix down even a minute or so of audio without an excessive memory footprint.  If this is ever going to be used for musical applications, this has to change.
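
To illustrate, here is a rough sketch (using the unprefixed spec names).  The second half is entirely made up: the "onblockrendered" callback, its "renderedBlock" field, and the handleBlock function don't exist in any spec or implementation; they're only there to show the shape of the API I have in mind.

    // What we have today: the entire result is buffered in memory until
    // oncomplete fires.  A 10-minute stereo mix at 44.1 kHz is about
    // 44100 * 600 * 2 channels * 4 bytes/sample, roughly 200 MB of floats.
    var ctx = new OfflineAudioContext(2, 44100 * 600, 44100);
    // ... build the graph on ctx ...
    ctx.oncomplete = function (e) {
      var whole = e.renderedBuffer;  // one giant AudioBuffer, delivered at once
    };
    ctx.startRendering();

    // What I'd like instead (hypothetical names): blocks delivered as they
    // are rendered, so each one can be encoded or written out, then freed.
    ctx.onblockrendered = function (e) {
      handleBlock(e.renderedBlock);  // hypothetical; e.g. a few thousand frames
    };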

Second, the behavior of ScriptProcessorNodes as implemented in Chrome is currently not helpful.  Inside "online" audio contexts, the script processors run on the main thread, asynchronously from the audio rendering.  If they do not produce buffers in time, the buffers are dropped.  Currently, the behavior is the same for offline audio contexts.  This is a sensible trade-off for real-time processing, but it is quite frustrating for offline audio contexts, where the native nodes render much faster than real time, making it quite unlikely that the script processors will have enough time to produce data.
Of course, changing this would not be trivial on the implementation side, and the main JavaScript thread would effectively be blocked for the duration of the mix-down by constant onaudioprocess callbacks.  The latter issue could of course be avoided if there is ever a spec or implementation of Web Worker nodes for audio (please!).  Either way, having no way to run JavaScript code in offline audio contexts severely limits their usefulness.
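
For concreteness, here is roughly what the problematic setup looks like: a trivial half-gain script processor on a test oscillator.  With Chrome's current behavior, most of the onaudioprocess callbacks won't run in time, so most of the buffers end up dropped.

    var ctx = new OfflineAudioContext(2, 44100 * 60, 44100);
    var osc = ctx.createOscillator();  // some native source
    var proc = ctx.createScriptProcessor(4096, 2, 2);

    proc.onaudioprocess = function (e) {
      // Trivial half-gain, just to get some JavaScript into the graph.
      for (var ch = 0; ch < e.outputBuffer.numberOfChannels; ch++) {
        var input = e.inputBuffer.getChannelData(ch);
        var output = e.outputBuffer.getChannelData(ch);
        for (var i = 0; i < output.length; i++) {
          output[i] = input[i] * 0.5;
        }
      }
    };

    osc.connect(proc);
    proc.connect(ctx.destination);
    osc.start(0);
    ctx.oncomplete = function (e) {
      // With today's behavior, much of e.renderedBuffer is silence,
      // because the callbacks above rarely ran in time.
    };
    ctx.startRendering();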

Thanks,
-Russell McClellan
