RE: Suggestion: Web Audio graphs that do not output sound but can write to an AudioBuffer instead

Currently, the offline context can render audio into a buffer. It is used in the Web Audio layout tests.
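
For example (a minimal sketch, assuming the OfflineAudioContext interface as it currently stands in the draft; WebKit builds may still prefix the name as webkitOfflineAudioContext):

    // Render one second of stereo audio at 44.1 kHz without touching the speakers.
    var sampleRate = 44100;
    var offline = new OfflineAudioContext(2, sampleRate * 1, sampleRate);

    // Build an ordinary graph against the offline context.
    var osc = offline.createOscillator();
    osc.connect(offline.destination);
    osc.start(0);

    // The rendered result arrives as an AudioBuffer in the complete event.
    offline.oncomplete = function (event) {
      var buffer = event.renderedBuffer;
      // buffer can now be saved, inspected, or reused
    };
    offline.startRendering();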

Best Regards

James


From: Per Nyblom [mailto:perny843@hotmail.com]
Sent: Friday, April 13, 2012 4:47 PM
To: public-audio@w3.org
Subject: Suggestion: Web Audio graphs that do not output sound but can write to an AudioBuffer instead

Hello,

It would be great to be able to access the generated sound data without having to send it to the speakers as well.
This feature is very useful for a musical application that needs to render a WAV file or similar (perhaps with the help of the FileSystem API). Most existing music applications support this "offline" rendering mode.

It is also useful to be able to create AudioBuffers as an optimization of a graph. Suppose you build a very large graph that generates a sound effect or an instrument sound and want to reuse it: being able to pre-render it into an AudioBuffer improves performance without resorting to an external program.
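
As a rough sketch of that reuse, assuming an offline rendering context of the kind discussed in this thread (the OfflineAudioContext names below, and the buildExpensiveGraph() helper, are assumptions for illustration):

    // Pre-render an expensive graph once into an AudioBuffer.
    function prerenderEffect(seconds, onReady) {
      var rate = 44100;
      var offline = new OfflineAudioContext(1, rate * seconds, rate);
      buildExpensiveGraph(offline); // hypothetical helper that wires the large graph
      offline.oncomplete = function (e) { onReady(e.renderedBuffer); };
      offline.startRendering();
    }

    // Replaying the cached result in a real-time context is then cheap:
    function playCached(context, buffer) {
      var source = context.createBufferSource();
      source.buffer = buffer;
      source.connect(context.destination);
      source.start(0);
    }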

All this could perhaps be supported by a subclass of AudioContext with a method like renderToBuffer(AudioBuffer, bufferOffset, length) or something similar.
It is important to be able to render into the buffer incrementally, because of the single-threaded nature of JavaScript (Workers could be used for this, but I think incremental rendering matters anyway).
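
To make that concrete, here is a rough sketch of how the suggested API could be used (RenderingContext and renderToBuffer() are hypothetical names for illustration only; they do not exist in the draft):

    // Hypothetical subclass of AudioContext that renders on demand.
    var context = new RenderingContext();
    // ... build the graph against `context` as usual ...

    var buffer = context.createBuffer(2, 44100 * 10, 44100);

    // Incremental rendering: fill the buffer one block at a time so a
    // single-threaded page can stay responsive between calls.
    var offset = 0;
    var block = 4096;
    function step() {
      context.renderToBuffer(buffer, offset, block); // hypothetical method
      offset += block;
      if (offset < buffer.length) setTimeout(step, 0);
    }
    step();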

The operation I am suggesting is also very similar to OpenGL's ability to render to a texture.

Best regards
Per Nyblom

Received on Friday, 13 April 2012 10:01:56 UTC