Re: Large AudioBuffers in Web Audio API

On Mon, May 16, 2011 at 2:28 PM, David Lindkvist
<david.lindkvist@shapeshift.se> wrote:

> Hi everyone,
>
> sorry for entering this discussion a bit late. I recently did an experiment
> of scheduling large audio samples as buffered chunks, and would be
> interested in hearing your feedback on the approach:
>
> 1 - It uses the FileSystem API to store a wav file on disk.
> 2 - Uses the File API to read slices of the wav file into memory
> 3 - Schedules these chunks in sequence using AudioBufferSourceNodes (Web
> Audio API)
>
> Demo URL: http://jsbin.com/ffdead-audio-wav-slice/233  (feel free to play
> with the source code and save new revisions)
>
> I have tested this on Mac OS X and the Windows Canary build of Chrome and am
> very happy with the result so far - no audible clicks as long as the tab has
> focus.
>

David, thanks for doing this - very interesting!
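
For anyone who wants to try the same pattern, the core of it looks roughly
like this (a simplified, untested sketch, not David's actual demo code - the
one-second chunk size, the 44-byte WAV header skip, and the 16-bit stereo PCM
assumption are all just for illustration):

    // Sketch of the slice-and-schedule idea.
    var context = new webkitAudioContext();  // or AudioContext, depending on the build
    var CHUNK_FRAMES = 44100;                // one second of audio per chunk
    var BYTES_PER_FRAME = 4;                 // 2 channels x 16-bit samples

    function scheduleChunk(file, chunkIndex, startTime) {
      // Skip the 44-byte WAV header and slice out one chunk of PCM data.
      var begin = 44 + chunkIndex * CHUNK_FRAMES * BYTES_PER_FRAME;
      var slice = file.slice(begin, begin + CHUNK_FRAMES * BYTES_PER_FRAME);  // or webkitSlice
      var reader = new FileReader();

      reader.onload = function () {
        // Convert the interleaved 16-bit samples into an AudioBuffer.
        var pcm = new Int16Array(reader.result);
        var buffer = context.createBuffer(2, CHUNK_FRAMES, 44100);
        var left = buffer.getChannelData(0);
        var right = buffer.getChannelData(1);
        for (var i = 0; i < CHUNK_FRAMES; i++) {
          left[i] = pcm[2 * i] / 32768;
          right[i] = pcm[2 * i + 1] / 32768;
        }
        // Schedule the chunk to start exactly where the previous one ends.
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        source.noteOn(startTime);
      };
      reader.readAsArrayBuffer(slice);
    }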



>
> Topics for discussion:
>  - will this approach scale up to multiple tracks with realtime effects?
>

I believe it should, provided the underlying File API implementation
performs well.  It looks like you're well on your way to actually trying
this experiment out.


>  - Is window.requestAnimationFrame a good solution for scheduling/updating
> the UI without stealing priority from audio playback?
>

With the exception of the JavaScriptAudioNode, all of the audio processing
happens in a separate thread.  The JavaScript and the audio processing can
run simultaneously, so this should not be an issue.  I should mention that
I'm currently working on tweaking the audio thread priority for maximum
performance, but that's an implementation detail...
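
So a loop like the following should be safe: requestAnimationFrame just reads
the context's currentTime to redraw, while the already-scheduled sources keep
playing on the audio thread (updatePlayheadPosition is a made-up UI helper):

    // UI updates run on the main thread; audio rendering runs on its own thread.
    function drawPlayhead() {
      // currentTime advances on the audio clock, so reading it here
      // doesn't interfere with playback.
      updatePlayheadPosition(context.currentTime);
      window.requestAnimationFrame(drawPlayhead);  // webkitRequestAnimationFrame in current Chrome builds
    }
    window.requestAnimationFrame(drawPlayhead);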


>  - Using the Web Audio API, how would I go about bouncing a mix of X tracks
> with effects? Do I route the main mix through a JavaScriptAudioNode and
> create a wav on the fly using something like XAudioJS? I would very much
> like to see a save method provided by the api (as I believe you already have
> discussed in this group).
>

If you were rendering exclusively from in-memory AudioBuffers (and not doing
the streaming as in your experiment), then you could do the mix offline using
an OfflineAudioContext, which I haven't really talked about or documented yet
but which is already implemented.  The main idea is that you set up a
rendering graph in advance, and the actual rendering then happens in another
thread (generally running much faster than real-time).  When rendering
finishes, it fires an event carrying the resulting AudioBuffer, which can
then be written out as an audio file.
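
Since it isn't documented yet, don't hold me to the details, but the shape of
it is roughly as follows (the constructor form and event names may well
change, and someAudioBuffer is just a placeholder for a buffer you already
have in memory):

    // Offline context: channel count, length in sample-frames, sample rate.
    // Here: 60 seconds of stereo at 44.1 kHz.
    var offline = new webkitAudioContext(2, 60 * 44100, 44100);

    // Build the same kind of graph you would for real-time playback.
    var source = offline.createBufferSource();
    source.buffer = someAudioBuffer;
    source.connect(offline.destination);
    source.noteOn(0);

    // Rendering runs in another thread, generally faster than real-time.
    offline.oncomplete = function (event) {
      var rendered = event.renderedBuffer;  // the mixed-down AudioBuffer
      // encode "rendered" as a WAV file here and save it
    };
    offline.startRendering();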

Because you're doing streaming with the File API, you would currently have
to intercept the mixed stream in real time using a JavaScriptAudioNode,
writing the stream data to a buffer until the complete mix is done.
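
Something along these lines (sketch only - "mixBus" stands in for whatever
node carries your final mix, and the WAV-encoding step at the end is left
out):

    // Tap the mixed stream with a JavaScriptAudioNode and keep the samples.
    var capture = context.createJavaScriptNode(4096, 2, 2);
    var recordedLeft = [];
    var recordedRight = [];

    capture.onaudioprocess = function (event) {
      var inL = event.inputBuffer.getChannelData(0);
      var inR = event.inputBuffer.getChannelData(1);
      // Copy this block, since the underlying buffers get reused.
      recordedLeft.push(new Float32Array(inL));
      recordedRight.push(new Float32Array(inR));
      // Pass the audio through unchanged so the mix is still audible.
      event.outputBuffer.getChannelData(0).set(inL);
      event.outputBuffer.getChannelData(1).set(inR);
    };

    mixBus.connect(capture);
    capture.connect(context.destination);
    // When the full mix has played through, concatenate recordedLeft and
    // recordedRight and encode them to a WAV file (e.g. with XAudioJS).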

I agree that, ideally, it would be nice to have nodes for streaming reads
from and writes to files.  The API is certainly able to scale to support
such nodes.  But right now I'm fairly busy working on other parts, so it
would take a fair bit of time to spec that out and implement it.


>
> Thanks for a very promising API, Chris.  I'm happy to see DAWs explicitly
> called out as a use case in the Web Audio spec.
>
> Thanks,
> David Lindkvist
>

Thanks for your example.  I'm excited to see people starting to experiment
with this API alongside others like the File API - very cool!

Chris

Received on Tuesday, 17 May 2011 19:14:29 UTC