RE: Suggestion: Web Audio graphs that do not output sound but can write to an AudioBuffer instead

Hi Alistair,
the OfflineAudioContext seems to cover the scenario that I described.
I still think it is important to be able to do the rendering incrementally, e.g. with something like offlineContext.renderToBuffer(AudioBuffer or Float32Array, offset, length), which writes whatever reaches the destination node into the given buffer/array.
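To make the idea concrete, here is a rough sketch of how such an incremental call could look from script. renderToBuffer is of course only my suggested name, not anything in the current draft, and I am assuming an OfflineAudioContext constructed with a channel count, length and sample rate:

  // Hypothetical sketch - renderToBuffer is the suggested method, not an existing API.
  var sampleRate = 44100;
  var seconds = 60;
  var context = new OfflineAudioContext(2, sampleRate * seconds, sampleRate);

  buildGraph(context); // hypothetical helper that connects sources/effects to context.destination

  var target = context.createBuffer(2, sampleRate * seconds, sampleRate);
  var blockSize = sampleRate; // render one second per call so the page stays responsive
  for (var offset = 0; offset < target.length; offset += blockSize) {
    // Write the next block of what reaches context.destination into target at the given offset.
    context.renderToBuffer(target, offset, blockSize);
    // ...update a progress bar, yield to the event loop, etc., between blocks
  }

That way a long piece could be bounced to a wav file (or cached as an AudioBuffer for reuse) without blocking the page for the whole duration.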
Thanks to all of you working on this for paying so much attention to the suggestion. The Web Audio spec is very important for the future of the web, and you are doing a great job!
/Per

-

> Date: Tue, 17 Apr 2012 11:35:41 -0400
> Subject: Re: Suggestion: Web Audio graphs that do not output sound but can write to an AudioBuffer instead
> From: al@signedon.com
> To: perny843@hotmail.com
> CC: public-audio@w3.org
> 
> Hi Per,
> 
> Does the OfflineAudioContext Chris R. and James W. are talking about
> cover the scenario you are thinking of?
> 
> Thanks again for the suggestion!
> 
> Alistair
> 
> 
> On Mon, Apr 16, 2012 at 3:36 PM, Chris Rogers <crogers@google.com> wrote:
> >
> >
> > On Mon, Apr 16, 2012 at 12:10 PM, Raymond Toy <rtoy@google.com> wrote:
> >>
> >>
> >>
> >> On Fri, Apr 13, 2012 at 1:47 AM, Per Nyblom <perny843@hotmail.com> wrote:
> >>>
> >>> Hello,
> >>>
> >>> it would be great to be able to access the generated sound data without
> >>> having to send it to the speakers as well.
> >>> This feature is very useful for a musical application that needs to render
> >>> a wav file or similar (perhaps with the help of the FileSystem API). Most
> >>> existing musical applications support this "offline" rendering mode.
> >>>
> >>> It is also useful to be able to create AudioBuffers as an optimization of
> >>> a graph. Suppose that you create a very large graph that generates a sound
> >>> effect or musical instrument sound and want to reuse it. It is very
> >>> convenient to be able to generate AudioBuffers to improve performance
> >>> without having to do this with an external program.
> >>>
> >>> All this could perhaps be supported by using a subclass of AudioContext
> >>> that supports methods like: renderToBuffer(AudioBuffer, bufferOffset,
> >>> length) or something similar.
> >>> It is important to be able to incrementally render to the buffer because
> >>> of the single-threaded nature of JS (you can use Workers for this but I
> >>> think it is important anyway).
> >>
> >>
> >> Won't a JavaScriptNode work for this, where the node just saves the data
> >> away in an AudioBuffer?  Or are you saying it won't work because JS is
> >> single-threaded?
> >>
> >> Ray
> >
> >
> > I think the idea of an OfflineAudioContext is what we want, because ideally
> > the rendering will occur faster than real-time.
> >
> > Chris
> >
> 
> 
> 
> -- 
> Alistair MacDonald
> SignedOn, Inc - W3C Audio WG
> Boston, MA, (707) 701-3730
> al@signedon.com - http://signedon.com

Received on Tuesday, 17 April 2012 17:01:12 UTC