- From: Chris Rogers <crogers@google.com>
- Date: Mon, 16 Apr 2012 12:36:17 -0700
- To: Raymond Toy <rtoy@google.com>
- Cc: Per Nyblom <perny843@hotmail.com>, public-audio@w3.org
- Message-ID: <CA+EzO0nEyppZ6RdyXQWBZYSnLduqc8WmmK2h4n=AvZ+N04V0VA@mail.gmail.com>
On Mon, Apr 16, 2012 at 12:10 PM, Raymond Toy &lt;rtoy@google.com&gt; wrote:

> On Fri, Apr 13, 2012 at 1:47 AM, Per Nyblom &lt;perny843@hotmail.com&gt; wrote:
>
>> Hello,
>>
>> it would be great to be able to access the generated sound data without
>> having to send it to the speakers as well.
>> This feature is very useful for a musical application that should render
>> a wav-file or similar (perhaps with the help of the FileSystem API). Most
>> existing musical applications support this "offline" rendering mode.
>>
>> It is also useful to be able to create AudioBuffers as an optimization of
>> a graph. Suppose that you create a very large graph that generates a sound
>> effect or musical instrument sound and want to reuse it. It is very
>> convenient to be able to generate AudioBuffers to improve performance
>> without having to do this with an external program.
>>
>> All this could perhaps be supported by using a subclass of AudioContext
>> that supports methods like: renderToBuffer(AudioBuffer, bufferOffset,
>> length) or something similar.
>> It is important to be able to incrementally render to the buffer because
>> of the single-threaded nature of JS (you can use Workers for this but I
>> think it is important anyway).
>
> Won't a JavaScriptNode work for this, where the node just saves the data
> away in an audiobuffer? Or are you saying it won't work because JS is
> single-threaded?
>
> Ray

I think the idea of an OfflineAudioContext is what we want, because ideally
the rendering will occur faster than real-time.

Chris
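[Editor's note: a minimal sketch of how such an OfflineAudioContext-style API might be used, based on the idea discussed in this thread. The constructor arguments (channel count, length in sample-frames, sample rate) and the startRendering()/oncomplete names are assumptions for illustration, not a settled spec at the time of this message; someDecodedAudioBuffer is a hypothetical AudioBuffer decoded elsewhere.]

```js
// Hypothetical sketch: render a graph offline instead of to the speakers.
var sampleRate = 44100;
var lengthInSeconds = 4;
var offlineCtx = new OfflineAudioContext(
    2, sampleRate * lengthInSeconds, sampleRate);

// Build the same kind of graph you would build for real-time playback.
var source = offlineCtx.createBufferSource();
source.buffer = someDecodedAudioBuffer;   // assumed: decoded elsewhere
source.connect(offlineCtx.destination);
source.start(0);

// Rendering can run faster than real-time; the result arrives as an
// AudioBuffer that could be reused in a real-time graph or encoded to
// a WAV file (e.g. via the FileSystem API mentioned above).
offlineCtx.oncomplete = function (event) {
  var renderedBuffer = event.renderedBuffer;
  // Save or reuse renderedBuffer here.
};
offlineCtx.startRendering();
```

Compared with Ray's suggestion of capturing output in a JavaScriptNode, this shape avoids tying the capture to real-time playback, which is what makes faster-than-real-time rendering possible.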
Received on Monday, 16 April 2012 19:36:47 UTC