Re: "Bouncing" an audioContext

On Tue, Mar 27, 2012 at 3:04 AM, David Lindkvist <
david.lindkvist@shapeshift.se> wrote:

> Gabriel,
>
> I asked Chris about this last year:
> http://lists.w3.org/Archives/Public/public-audio/2011AprJun/0045.html
>
> He said there was an OfflineAudioContext, but I was unable to find any
> documentation on it, and it wasn't available in Chrome at the time. Has
> it been implemented since then?
>

Internally, WebKit has something called an OfflineAudioContext, which
processes faster than real-time, renders the result into an AudioBuffer,
and then calls a completion callback when finished. We currently use it
extensively for running our automated "layout" tests. With fairly little
trouble, we could put this in the Web Audio spec. I'm sure it would be
useful for rendering down final mixes, bouncing, "pre-baking" audio to be
used as parts of a larger whole...
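As a rough sketch of how such an API might look from script: at the time of this thread OfflineAudioContext was WebKit-internal and unspecified, so the constructor signature and the promise-returning startRendering() below reflect the shape the API eventually took, not anything documented in 2012 — treat them as assumptions.

```javascript
// Hypothetical sketch: "bounce" a graph to an AudioBuffer offline.
// Assumes an OfflineAudioContext(channels, lengthInFrames, sampleRate)
// constructor and a startRendering() that resolves with the rendered buffer.
async function bounce(durationSeconds, sampleRate) {
  const ctx = new OfflineAudioContext(
    2,                              // stereo
    durationSeconds * sampleRate,   // length in sample frames
    sampleRate                      // sample rate in Hz
  );

  // Build the graph exactly as against a real-time context,
  // e.g. a single oscillator feeding the destination:
  const osc = ctx.createOscillator();
  osc.frequency.value = 440;
  osc.connect(ctx.destination);
  osc.start(0);

  // Rendering runs faster than real-time; the result is an AudioBuffer.
  const buffer = await ctx.startRendering();
  return buffer;
}
```

Because the graph-building code is identical to the real-time case, the same setup function could drive either a live AudioContext or an offline bounce.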

Cheers,
Chris


>
> Best,
> David
>
>
> On Tue, Mar 27, 2012 at 11:47 AM, Gabriel Cardoso <
> gabriel.cardoso@inria.fr> wrote:
>
>> Hi all,
>>
>> I was wondering whether it is possible with the current Web Audio API to
>> "export" an AudioContext (a graph), and how?
>>
>> I guess it is possible with a JavaScriptAudioNode, but it does not
>> seem very straightforward to me... could anyone show me the way?
>>
>> Once my graph is built, I would like to be able to export it as easily as
>> I can render it by connecting it to the context's AudioDestinationNode:
>> wouldn't it be possible to have some kind of AudioExportNode to which one
>> could connect a graph, with configurable begin and end times (relative to
>> the context time), outputting an AudioBuffer?
>>
>> I am just throwing out some rough ideas here; I hope they make sense.
>>
>> Thanks !
>>
>> Gabriel
>>
>>
>>
>

Received on Tuesday, 27 March 2012 17:27:55 UTC