
Re: Suggestion: Web Audio graphs that do not output sound but can write to an AudioBuffer instead

From: Raymond Toy <rtoy@google.com>
Date: Mon, 16 Apr 2012 12:10:34 -0700
Message-ID: <CAE3TgXEeB292028oPfQJ+2p4ScwzNcRo3X0vRC=SH4TBigV4cw@mail.gmail.com>
To: Per Nyblom <perny843@hotmail.com>
Cc: public-audio@w3.org
On Fri, Apr 13, 2012 at 1:47 AM, Per Nyblom <perny843@hotmail.com> wrote:

>  Hello,
>
> It would be great to be able to access the generated sound data without
> also having to send it to the speakers.
> This feature is very useful for a music application that needs to render a
> WAV file or similar (perhaps with the help of the FileSystem API). Most
> existing music applications support this kind of "offline" rendering mode.
>
> It is also useful to be able to create AudioBuffers as an optimization of
> a graph. Suppose you build a very large graph that generates a sound
> effect or an instrument sound and want to reuse it: it would be very
> convenient to pre-render it to an AudioBuffer for performance, without
> having to do this in an external program.
>
> All this could perhaps be supported by a subclass of AudioContext with a
> method like renderToBuffer(AudioBuffer, bufferOffset, length), or
> something similar.
> It is important to be able to render to the buffer incrementally because
> of the single-threaded nature of JS (Workers could be used for this, but I
> think incremental rendering is important anyway).
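>
> Something like this, sketched with made-up names just to illustrate the
> idea (RenderingAudioContext and the chunked driving loop are hypothetical):
>
>   var ctx = new RenderingAudioContext();  // hypothetical AudioContext subclass
>   var target = ctx.createBuffer(2, totalFrames, 44100);
>   // ... build the generating graph on ctx as usual ...
>   var offset = 0, chunk = 4096;
>   function step() {
>     ctx.renderToBuffer(target, offset, chunk);  // the proposed method
>     offset += chunk;
>     if (offset < totalFrames)
>       setTimeout(step, 0);  // yield between chunks so the page stays responsive
>   }
>   step();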
>

Won't a JavaScriptNode work for this, where the node simply saves the data
away in an AudioBuffer?  Or are you saying that won't work because JS is
single-threaded?
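
Something like this (untested sketch; sourceGraph stands in for whatever
node produces the audio you want to capture):

  var ctx = new webkitAudioContext();
  var frames = ctx.sampleRate * 5;  // capture five seconds, for example
  var capture = ctx.createBuffer(2, frames, ctx.sampleRate);
  var written = 0;

  var node = ctx.createJavaScriptNode(4096, 2, 2);  // bufferSize, inputs, outputs
  node.onaudioprocess = function (e) {
    for (var ch = 0; ch < 2; ch++) {
      var input = e.inputBuffer.getChannelData(ch);
      var dest = capture.getChannelData(ch);
      for (var i = 0; i < input.length && written + i < frames; i++)
        dest[written + i] = input[i];
    }
    written += e.inputBuffer.length;
    // outputBuffer is left untouched (silent), so nothing audible reaches
    // the speakers even though the node is connected to the destination.
  };

  sourceGraph.connect(node);      // the graph whose output is being captured
  node.connect(ctx.destination);  // must stay connected so the node keeps pulling data

The captured AudioBuffer can then be replayed cheaply through a
createBufferSource() node, which would also cover the pre-rendered sound
effect case you mention.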

Ray
Received on Monday, 16 April 2012 19:11:04 GMT
