
Re: AudioBufferSource usage

From: Raymond Toy <rtoy@google.com>
Date: Mon, 23 Jul 2012 10:04:16 -0700
Message-ID: <CAE3TgXHE17MSEZUBd4s4ANchuaD=A42ZbxT2x-WR7QF1tXaAgQ@mail.gmail.com>
To: Ray Bellis <ray@bellis.me.uk>
Cc: public-audio@w3.org
On Sun, Jul 22, 2012 at 3:36 AM, Ray Bellis <ray@bellis.me.uk> wrote:

> On 22/07/2012 10:48, Peter van der Noord wrote:
>
>> For some reason, I've been reading over this sentence a few times
>> without actually realizing what it meant:
>>
>> "Once an AudioBufferSourceNode has reached the FINISHED state it will
>> no longer emit any sound. Thus noteOn() and noteOff() may not be
>> issued multiple times for a given AudioBufferSourceNode."
>>
>> This strikes me as quite odd; what's the reasoning for this?
>>
>
> The philosophy appears to be that the buffers are "one shot" only, and
> anything that they're connected to is potentially garbage collected as
> soon as the sample is played.
>
> All nodes appear to be considered short-lived, and if they're not
> connected to anything they'll get garbage collected.  If you need to play
> the sample again, it's considered "cheap" to instantiate a new node and
> reconnect it.
>
> Like you, I would prefer to think of nodes as the parts of a modular
> synth, that stay around until they're explicitly disconnected from all of
> their inputs and outputs.
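
In practice the one-shot model means doing something like the following
for every playback, reusing only the decoded buffer. A rough sketch with
the WebKit names of the day; loading and decoding the sample is omitted,
and playSample/buffer are just illustrative names:

  var ctx = new webkitAudioContext();

  function playSample(buffer) {
    // A fresh AudioBufferSourceNode per playback; only the decoded
    // AudioBuffer is reused.
    var source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);
    source.noteOn(0);   // plays once; the node then reaches FINISHED
  }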


There is an implication of leaving such nodes in the audio graph: all
downstream nodes will continue to process the resulting silence, eating up
CPU. Chrome does try to minimize this, but doesn't handle all possible
cases. In particular, a JavaScript node is assumed to produce non-silent
audio until it is removed from the graph.
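
So for a JavaScript node you want to disconnect it yourself once it's no
longer needed. Very roughly, again with the names Chrome shipped at the
time (createJavaScriptNode was later renamed createScriptProcessor, and
the buffer size here is arbitrary):

  // ctx is the same (webkit)AudioContext as above.
  var processor = ctx.createJavaScriptNode(4096, 1, 1);
  processor.onaudioprocess = function (e) {
    // copy/transform e.inputBuffer into e.outputBuffer here
  };
  processor.connect(ctx.destination);

  // When the effect is no longer needed, take it out of the graph;
  // otherwise it (and everything downstream) keeps getting pulled.
  processor.disconnect();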

(Another) Ray
Received on Monday, 23 July 2012 17:04:47 GMT
