- From: Per Nyblom <perny843@hotmail.com>
- Date: Fri, 13 Apr 2012 08:47:08 +0000
- To: <public-audio@w3.org>
- Message-ID: <SNT131-W47EE490DE6ACE4E03593AF803B0@phx.gbl>
Hello,

It would be great to be able to access the generated sound data without having to send it to the speakers as well. This feature is very useful for a musical application that should render a WAV file or similar (perhaps with the help of the FileSystem API). Most existing musical applications support this "offline" rendering mode.

It is also useful to be able to create AudioBuffers as an optimization of a graph. Suppose you create a very large graph that generates a sound effect or a musical instrument sound and want to reuse it. It is very convenient to be able to generate AudioBuffers to improve performance without having to do this with an external program.

All of this could perhaps be supported by a subclass of AudioContext with a method like renderToBuffer(AudioBuffer, bufferOffset, length), or something similar. It is important to be able to render into the buffer incrementally because of the single-threaded nature of JS (Workers could be used for this, but I think incremental rendering is important anyway). The operation I am suggesting is very similar to OpenGL's ability to render to a texture.

Best regards,
Per Nyblom
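For illustration only, here is a minimal sketch of how the proposed interface might be used. The OfflineRenderingContext name, the renderToBuffer(buffer, offset, length) method, and the slice-based scheduling are assumptions made purely to visualize the proposal; they are not part of any existing API.

    // Hypothetical offline-rendering subclass of AudioContext.
    // All names and signatures here are assumed for illustration only.
    var sampleRate = 44100;
    var context = new OfflineRenderingContext(2, sampleRate); // assumed constructor: channels, sample rate

    // Build the processing graph as usual.
    var osc = context.createOscillator();
    osc.connect(context.destination);
    osc.start(0);

    // Destination buffer: two seconds of stereo audio.
    var buffer = context.createBuffer(2, 2 * sampleRate, sampleRate);

    // Render incrementally in small slices so a long render does not block
    // the main thread (the suggested renderToBuffer(buffer, offset, length)).
    var sliceLength = 4096;
    var offset = 0;

    function renderNextSlice() {
        var length = Math.min(sliceLength, buffer.length - offset);
        context.renderToBuffer(buffer, offset, length); // assumed method
        offset += length;
        if (offset < buffer.length) {
            setTimeout(renderNextSlice, 0); // yield to the event loop between slices
        }
    }
    renderNextSlice();

Rendering in slices like this would also let an application report progress or cancel a long render on the main thread without needing a Worker.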
Received on Friday, 13 April 2012 08:48:27 UTC